Tracking Political Manipulation Through Social Media - Samantha Bradshaw
Key Moments
Bots are increasingly sophisticated, mimicking human behavior to manipulate public opinion by gaming algorithms and creating content, with major implications for democracy and platform accountability.
Key Insights
Bots can mimic human behavior by liking, sharing, and retweeting content to artificially inflate its popularity, or even by using chatbot technology to interact with users in comment sections.
Governments and other state and non-state actors have experimented with social media manipulation for far longer than its post-2016 prominence suggests, with evidence dating back to the early days of social media platforms.
In 2018, US users shared roughly 1.2 to 1.3 pieces of junk news for every piece of professionally produced information, up from a one-to-one ratio in 2016.
Platforms like Twitter are easier to infiltrate with bots due to less stringent identity verification and accessible APIs, while Facebook's stricter identity checks make fake accounts potentially more impactful due to their perceived scarcity.
Calls for platform accountability are growing, driven by advertisers' realization that bot-driven views do not translate to product sales, leading to a re-evaluation of the incentive to maintain large numbers of fake accounts.
Research indicates that US users share significantly more junk news than users in the UK, Germany, France, Sweden, and Mexico, pointing to a more pronounced problem in the American digital landscape.
Understanding the multifaceted nature of bots
Bots, in essence, are scripts or pieces of code designed to perform automated tasks. While not all bots are malicious, with examples like web crawlers (e.g., Google Search) being beneficial for organizing information, a significant concern lies with bots designed to mimic human behavior. These bots are deployed on platforms like Twitter and Facebook to artificially boost the perceived popularity of content through actions like liking, sharing, and retweeting. More sophisticated bots can even employ chatbot technology to engage in conversations with real users, blurring the lines between automated and human interaction. This mimicry is key to their manipulative potential, creating a false sense of consensus or widespread interest around certain political narratives or figures.
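To make the "bots are just scripts" point concrete, here is a toy sketch (not from the episode) of the link-extraction step a benign search-engine crawler performs before following links and indexing pages; the `LinkExtractor` class and the sample HTML are hypothetical illustrations:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag: the core 'crawl' step a
    search-engine bot performs before following links and indexing pages."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# A hard-coded page stands in for a fetched document.
page = '<html><body><a href="/about">About</a> <a href="/news">News</a></body></html>'
parser = LinkExtractor()
parser.feed(page)
print(parser.links)  # the URLs a crawler would visit next
```

The same automation primitives, pointed at liking, sharing, and replying instead of indexing, are what make manipulative bots cheap to run at scale.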
Historical roots and evolution of bot tactics
The public's attention to social media manipulation, particularly concerning foreign interference in elections, surged around the 2016 US election. However, research indicates that governments have been experimenting with such techniques for much longer, dating back to the nascent stages of social media. Initially, these tools were largely used in authoritarian regimes as a means of social control to shape domestic discussions. Over time, particularly since 2016, bot tactics have evolved significantly. While cruder forms still exist, focused on simple amplification, there's a marked increase in 'gaming the algorithms.' This involves strategic use of keywords to influence trending topics, manipulate search engine results (like Google's), and influence recommendation systems (like YouTube's), aiming for organic reach that was previously the domain of search engine optimizers. This evolution shows a shift from mere amplification to more sophisticated forms of content curation and distribution.
The rise of content creation and platform-specific strategies
Beyond automated engagement, sophisticated manipulation operations now involve extensive content creation. This often manifests as 'throwing stuff at the wall and seeing what sticks,' with significant resources dedicated to identifying societal 'buttons' to push and crafting content around those sensitive issues. This is particularly evident in operations linked to state actors. The strategy is adapting to different platforms and regions. While the US sees significant activity on Twitter and Facebook, platforms like WhatsApp are crucial in other contexts, such as India. WhatsApp's encrypted and closed nature makes it challenging to study, but research, such as that in Brazil, revealed the use of memes and disinformation campaigns within groups. This mirrors tactics seen on other platforms, where mobilizing images and emotionally charged content are used to influence public opinion, akin to the 'Pepe the Frog' phenomenon in 2016.
Platform vulnerabilities and the incentive structures
The ease with which bots can infiltrate social media platforms varies. Twitter, with its readily accessible API and less stringent account creation requirements (not requiring real names), is more susceptible to a high volume of automated accounts. While crude metrics like posting over 50 tweets a day can indicate bot activity, the platform's openness facilitates their proliferation. Facebook, conversely, has more robust identity verification processes; creating fake accounts often requires submitting identification, making it harder to scale fake accounts. However, this difficulty can paradoxically make existing fake accounts on Facebook more impactful, as users may not anticipate the same level of inauthentic presence as on Twitter. This disparity highlights different risk profiles across platforms.
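The "over 50 tweets a day" cutoff mentioned above can be sketched as a simple rolling-window check; this is a hypothetical illustration of the heuristic, not any platform's actual detection logic, and the comment notes why it is crude:

```python
from datetime import datetime, timedelta

HIGH_VOLUME_THRESHOLD = 50  # posts per day: the crude cutoff mentioned above

def flag_high_volume(timestamps, threshold=HIGH_VOLUME_THRESHOLD):
    """Flag an account if any rolling 24-hour window contains more than
    `threshold` posts. A heuristic only: heavy human users can trigger it,
    and sophisticated bots simply post below the cutoff to evade it."""
    ts = sorted(timestamps)
    window_start = 0
    for i, t in enumerate(ts):
        # Shrink the window until it spans at most 24 hours.
        while t - ts[window_start] > timedelta(hours=24):
            window_start += 1
        if i - window_start + 1 > threshold:
            return True
    return False

# Example: 60 posts within a single hour clearly exceeds 50 per day.
base = datetime(2019, 1, 1)
bursty = [base + timedelta(minutes=m) for m in range(60)]
print(flag_high_volume(bursty))  # True
```

Heuristics like this explain why raw volume is a weak signal: it catches only the crudest amplification bots, which is one reason research has shifted toward behavioral and network-level detection.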
The debate over platform responsibility and market incentives
Historically, social media platforms have lacked a strong incentive to aggressively remove bot accounts. The market values platforms based on the number of active users, and a large user base, regardless of authenticity, can appear attractive for advertising revenue. Deleting millions of fake accounts could devalue the platform in the eyes of investors and advertisers. However, this dynamic is shifting. Advertisers are increasingly realizing that paying for views from bots does not lead to genuine customer engagement or sales, creating pressure on platforms to address the issue. Government scrutiny, through hearings and potential regulations, also plays a role in compelling platforms to take more responsibility for the authenticity of their user base and the content distributed.
Content moderation vs. transparency: navigating regulation
The approach to regulating online manipulation is complex. While some regulations, like Germany's NetzDG law, mandate the removal of illegal content within strict timelines (e.g., 24 hours) with significant fines for non-compliance, this can lead to 'collateral censorship.' By forcing rapid takedowns, platforms may inadvertently remove legitimate commentary or criticism alongside harmful content. Furthermore, authoritarian regimes can adopt similar laws to silence dissent, posing a significant risk. The speaker argues that focusing on content removal is a mistake, as it doesn't address the underlying mechanisms driving disinformation. Instead, greater transparency around platform algorithms and operational intentions is crucial, without necessarily revealing the exact workings that could be exploited. Understanding the design principles behind these algorithms is key to better governance.
The 'junk news' phenomenon and the US as a case study
Research comparing online news consumption across various countries reveals stark differences. A study analyzing content shared on Twitter and Facebook found that in 2016, Americans shared 'junk news' (defined by criteria like counterfeit mimicry of legitimate sources, lack of journalistic standards, hyperbolic language, etc.) at a one-to-one ratio with professionally produced information. By the 2018 midterm elections, this ratio had increased to approximately 1.2 or 1.3 to 1, indicating a worsening trend. Notably, this level of 'junk news' consumption and sharing is significantly higher in the US compared to countries like the UK, Germany, France, and Mexico, suggesting a more deeply entrenched problem within the American digital ecosystem. While botnets are assumed to play a role, the precise attribution remains difficult.
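The ratio in the study is simply the count of junk-news shares divided by the count of professionally produced shares; a trivial sketch with made-up counts (the figures below are illustrative, not the study's data):

```python
def share_ratio(junk_count, professional_count):
    """Junk-to-professional sharing ratio, normalised so professional = 1."""
    return junk_count / professional_count

# Hypothetical counts: equal volumes give the 2016-style 1:1 ratio;
# 1300 junk shares against 1000 professional give the 2018-style 1.3:1.
print(share_ratio(500, 500))    # 1.0
print(share_ratio(1300, 1000))  # 1.3
```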
Optimism amidst challenges: regulation and digital citizenship
Despite the daunting challenges, there are reasons for cautious optimism. The increasing attention from governments and policymakers, who are educating themselves about the intersection of technology and politics, is a positive sign. Thoughtful regulation that avoids breaking the underlying technology, coupled with a focus on transparency, offers a path forward. The speaker also emphasizes the role of digital citizenship and basic human decency. The alarming rise in societal polarization and online anger underscores the need for individuals to be more conscious of their interactions, to practice digital privacy, and to engage with others respectfully, even amidst differing beliefs. Ultimately, fostering better communication and understanding online, rather than disengaging, is seen as crucial for a healthier digital public sphere.
Junk News vs. Professionally Produced Information Sharing Ratios
Data extracted from this episode
| Election | Junk News : Professional News | Context |
|---|---|---|
| US 2016 | 1:1 | Presidential election |
| US 2018 | ~1.2:1 to 1.3:1 | Midterm elections |
| UK elections | Lower than US | Comparison country |
| Germany elections | Lower than US | Comparison country |
| France elections | Lower than US | Comparison country |
| Sweden elections | Lower than US | Comparison country |
| Mexico elections | Lower than US | Comparison country |
Common Questions
What is the difference between good bots and malicious bots?
Good bots, like Google's search engine bots, crawl and index the internet to provide useful services. Malicious bots are designed to mimic human behavior on social media to artificially inflate popularity, spread disinformation, or manipulate public opinion.
Mentioned in this video
●Google: referenced as an example of a 'good bot' for its web scraping and crawling capabilities, essential for internet search engine indexing.
●Twitter: a platform where bots mimic human behavior by liking, sharing, and retweeting stories to create a sense of popularity; noted as easier for bots to infiltrate due to less stringent identity verification than Facebook.
●Facebook: a platform where bots similarly mimic human behavior; however, Facebook requires real names and identity verification, making fake accounts harder to create but potentially more impactful when successful.
●WhatsApp: a significant platform for disinformation campaigns in regions like India due to its widespread use; its closed nature makes it challenging to study.
●A video platform where increasing amounts of disinformation are spread through videos, producing a more powerful psychological effect than text.
●A platform increasingly used for spreading disinformation through images, which can have a strong impact on psychology and memory.
●A research project affiliated with the Oxford Internet Institute (OII), cited as a source on government experimentation with bot techniques to shape discussions.
●A German political party whose member posted a racist comment that was removed under NetzDG, illustrating the law's application.
●A key entity involved in foreign interference operations in the US, whose activities were detailed in the Mueller report.
●A research agency working on technologies to detect manipulated photos and videos, offering optimism about countering deepfakes.
●A legitimate news source whose colors might be mimicked by 'junk news' to lend false credibility.
●India: a context where WhatsApp is a primary platform for disinformation campaigns affecting public opinion during elections.
●Brazil: where a study of WhatsApp groups revealed the use of memes to spread disinformation.
●United States: the primary focus of the discussion on bot manipulation and disinformation campaigns, particularly concerning elections and the ratio of junk news shared.
●Germany: discussed in relation to its content-moderation law (NetzDG) and as a country studied for its election disinformation levels.
●A country studied for its election disinformation levels, found to be lower than in the US.
●United Kingdom: mentioned in the context of disinformation's impact, specifically regarding Brexit, and as a location with differing surveillance practices compared to private tech companies.
●A country studied for its election disinformation levels, found to be lower than in the US.
●Canada: the speaker, being Canadian, expresses personal interest in studying its 2019 elections, noting concerns about anti-immigration rhetoric and populist narratives similar to those seen in the US.
●A country studied for its election disinformation levels, found to be lower than in the US.