Key Moments

Tracking Political Manipulation Through Social Media - Samantha Bradshaw

Y Combinator
Science & Technology · 6 min read · 34 min video
Jan 23, 2019
TL;DR

Bots are increasingly sophisticated, mimicking human behavior to manipulate public opinion by gaming algorithms and creating content, with major implications for democracy and platform accountability.

Key Insights

1. Bots mimic human behavior by liking, sharing, and retweeting content to artificially inflate its popularity; some use chatbot technology to interact with users in comment sections.

2. Governments and non-state actors have experimented with social media manipulation for a long time, with evidence dating back to the early days of social media platforms.

3. In 2018, the ratio of junk news to professionally produced information shared by US users rose to roughly 1.2 or 1.3 to 1, up from a 1-to-1 ratio in 2016.

4. Platforms like Twitter are easier to infiltrate with bots because of less stringent identity verification and accessible APIs, while Facebook's stricter identity checks make its fake accounts potentially more impactful because users expect fewer of them.

5. Calls for platform accountability are growing as advertisers realize that bot-driven views do not translate into product sales, undermining the incentive to tolerate large numbers of fake accounts.

6. Research shows the US shares markedly more junk news than the UK, Germany, France, Sweden, and Mexico, pointing to a more entrenched problem in the American digital landscape.

Understanding the multifaceted nature of bots

Bots, in essence, are scripts or pieces of code designed to perform automated tasks. Not all bots are malicious; web crawlers like Google's, for example, help organize the web's information. The significant concern lies with bots designed to mimic human behavior. These bots are deployed on platforms like Twitter and Facebook to artificially boost the perceived popularity of content through actions like liking, sharing, and retweeting. More sophisticated bots can even employ chatbot technology to engage in conversations with real users, blurring the lines between automated and human interaction. This mimicry is key to their manipulative potential, creating a false sense of consensus or widespread interest around certain political narratives or figures.
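Since the section contrasts benign crawlers with manipulative bots, a minimal sketch of the benign case may help. This toy link extractor (the class name and HTML snippet are invented for illustration) stands in for the link-discovery step that real search bots perform; an actual crawler would also fetch pages over HTTP, respect robots.txt, and queue the discovered links.

```python
from html.parser import HTMLParser

# Toy illustration of a benign bot: the link-extraction step of a web
# crawler. Parses a static HTML snippet and collects every href it finds.
class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # Anchor tags carry their destination in the href attribute.
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

page = '<p>See <a href="/about">about</a> and <a href="https://example.com">example</a>.</p>'
extractor = LinkExtractor()
extractor.feed(page)
# extractor.links now holds ["/about", "https://example.com"]
```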

Historical roots and evolution of bot tactics

The public's attention to social media manipulation, particularly concerning foreign interference in elections, surged around the 2016 US election. However, research indicates that governments have been experimenting with such techniques for much longer, dating back to the nascent stages of social media. Initially, these tools were largely used in authoritarian regimes as a means of social control to shape domestic discussions. Over time, particularly since 2016, bot tactics have evolved significantly. While cruder forms focused on simple amplification still exist, there is a marked increase in 'gaming the algorithms': strategic use of keywords to push trending topics, manipulate search engine results (like Google's), and steer recommendation systems (like YouTube's), aiming for organic reach that was previously the domain of search engine optimizers. This evolution shows a shift from mere amplification to more sophisticated forms of content curation and distribution.

The rise of content creation and platform-specific strategies

Beyond automated engagement, sophisticated manipulation operations now involve extensive content creation. This often manifests as 'throwing stuff at the wall and seeing what sticks,' with significant resources dedicated to identifying societal 'buttons' to push and crafting content around those sensitive issues. This is particularly evident in operations linked to state actors. The strategy is adapting to different platforms and regions. While the US sees significant activity on Twitter and Facebook, platforms like WhatsApp are crucial in other contexts, such as India. WhatsApp's encrypted and closed nature makes it challenging to study, but research, such as that in Brazil, revealed the use of memes and disinformation campaigns within groups. This mirrors tactics seen on other platforms, where mobilizing images and emotionally charged content are used to influence public opinion, akin to the 'Pepe the Frog' phenomenon in 2016.

Platform vulnerabilities and the incentive structures

The ease with which bots can infiltrate social media platforms varies. Twitter, with its readily accessible API and less stringent account creation requirements (not requiring real names), is more susceptible to a high volume of automated accounts. While crude metrics like posting over 50 tweets a day can indicate bot activity, the platform's openness facilitates their proliferation. Facebook, conversely, has more robust identity verification processes; creating fake accounts often requires submitting identification, making it harder to scale fake accounts. However, this difficulty can paradoxically make existing fake accounts on Facebook more impactful, as users may not anticipate the same level of inauthentic presence as on Twitter. This disparity highlights different risk profiles across platforms.
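The 50-tweets-per-day threshold mentioned above can be expressed as a simple posting-frequency heuristic. A minimal sketch follows; the threshold comes from the talk, but the function, account records, and field names are hypothetical, and real bot detection combines many more signals than raw volume.

```python
# Crude heuristic from the talk: accounts posting more than ~50 tweets
# per day may be automated. Account data here is invented; a real
# pipeline would pull it from the Twitter API.
DAILY_TWEET_THRESHOLD = 50

def likely_bot(tweet_count: int, account_age_days: int) -> bool:
    """Flag an account whose average daily posting volume exceeds the threshold."""
    if account_age_days <= 0:
        return False  # not enough history to judge
    return tweet_count / account_age_days > DAILY_TWEET_THRESHOLD

accounts = [
    {"handle": "@news_fan_42", "tweets": 1200, "age_days": 400},   # ~3 tweets/day
    {"handle": "@amplifier_x", "tweets": 40000, "age_days": 90},   # ~444 tweets/day
]
flagged = [a["handle"] for a in accounts if likely_bot(a["tweets"], a["age_days"])]
# flagged == ["@amplifier_x"]
```

A volume-only rule like this produces false positives (prolific humans) and false negatives (low-volume bots), which is exactly why the talk calls it a crude metric.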

The debate over platform responsibility and market incentives

Historically, social media platforms have lacked a strong incentive to aggressively remove bot accounts. The market values platforms based on the number of active users, and a large user base, regardless of authenticity, can appear attractive for advertising revenue. Deleting millions of fake accounts could devalue the platform in the eyes of investors and advertisers. However, this dynamic is shifting. Advertisers are increasingly realizing that paying for views from bots does not lead to genuine customer engagement or sales, creating pressure on platforms to address the issue. Government scrutiny, through hearings and potential regulations, also plays a role in compelling platforms to take more responsibility for the authenticity of their user base and the content distributed.

Content moderation vs. transparency: navigating regulation

The approach to regulating online manipulation is complex. While some regulations, like Germany's NetzDG law, mandate the removal of illegal content within strict timelines (e.g., 24 hours) with significant fines for non-compliance, this can lead to 'collateral censorship.' By forcing rapid takedowns, platforms may inadvertently remove legitimate commentary or criticism alongside harmful content. Furthermore, authoritarian regimes can adopt similar laws to silence dissent, posing a significant risk. The speaker argues that focusing on content removal is a mistake, as it doesn't address the underlying mechanisms driving disinformation. Instead, greater transparency around platform algorithms and operational intentions is crucial, without necessarily revealing the exact workings that could be exploited. Understanding the design principles behind these algorithms is key to better governance.

The 'junk news' phenomenon and the US as a case study

Research comparing online news consumption across various countries reveals stark differences. A study analyzing content shared on Twitter and Facebook found that in 2016, Americans shared 'junk news' (defined by criteria like counterfeit mimicry of legitimate sources, lack of journalistic standards, hyperbolic language, etc.) at a one-to-one ratio with professionally produced information. By the 2018 midterm elections, this ratio had increased to approximately 1.2 or 1.3 to 1, indicating a worsening trend. Notably, this level of 'junk news' consumption and sharing is significantly higher in the US compared to countries like the UK, Germany, France, and Mexico, suggesting a more deeply entrenched problem within the American digital ecosystem. While botnets are assumed to play a role, the precise attribution remains difficult.
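The ratio the study reports is simply the count of junk-news links shared divided by the count of professionally produced news links. A minimal sketch, with the caveat that the share counts below are illustrative placeholders, not the study's raw data:

```python
def junk_news_ratio(junk_shares: int, professional_shares: int) -> float:
    """Ratio of junk-news shares to professionally produced news shares."""
    if professional_shares == 0:
        raise ValueError("professional share count must be nonzero")
    return junk_shares / professional_shares

# Illustrative counts only; the study reported the ratios, not these numbers.
ratio_2016 = junk_news_ratio(500_000, 500_000)  # 1.0, i.e. the 1:1 ratio
ratio_2018 = junk_news_ratio(650_000, 500_000)  # 1.3, i.e. the 1.3:1 ratio
```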

Optimism amidst challenges: regulation and digital citizenship

Despite the daunting challenges, there are reasons for cautious optimism. The increasing attention from governments and policymakers, who are educating themselves about the intersection of technology and politics, is a positive sign. Thoughtful regulation that avoids breaking the underlying technology, coupled with a focus on transparency, offers a path forward. The speaker also emphasizes the role of digital citizenship and basic human decency. The alarming rise in societal polarization and online anger underscores the need for individuals to be more conscious of their interactions, to practice digital privacy, and to engage with others respectfully, even amidst differing beliefs. Ultimately, fostering better communication and understanding online, rather than disengaging, is seen as crucial for a healthier digital public sphere.

Junk News vs. Professionally Produced Information Sharing Ratios

Data extracted from this episode

Year | Junk News Ratio | Context
2016 | 1:1 | US average
2018 | 1.2:1 to 1.3:1 | US midterms
UK elections | Lower than US | Comparison country
Germany elections | Lower than US | Comparison country
France elections | Lower than US | Comparison country
Sweden elections | Lower than US | Comparison country
Mexico elections | Lower than US | Comparison country

Common Questions

What distinguishes good bots from malicious bots?

Good bots, like Google's search crawlers, index the internet to provide useful services. Malicious bots are designed to mimic human behavior on social media to artificially inflate popularity, spread disinformation, or manipulate public opinion.
