What Is Technology Doing to Us?
Key Moments
Social media has fueled polarization and mental health crises, while AI slop makes discerning truth increasingly difficult, prompting a possible return to reputable sources.
Key Insights
Communication technologies from the last decade have significantly contributed to societal polarization, anomie, and mental health crises.
Anonymity and pseudonymity on online platforms often lead individuals to behave worse, a phenomenon long observed in mobs and now in online trolling.
The proliferation of AI-generated 'slop' creates a significant problem where discerning the authenticity of content becomes a constant challenge.
Nicholas Christakis abandoned Twitter because of its toxicity and influx of "garbage," including far-right conspiracy theories and "left craziness."
The potential for AI to generate content from private user data raises concerns about privacy and the training of algorithms without explicit consent.
There's a potential, and perhaps ironic, future where AI slop accelerates a return to privileging reputable sources, as people may be willing to pay for reliability.
The detrimental impact of recent communication technologies
Nicholas Christakis argues that the communication technologies developed over the past decade have been largely harmful to society. He contends that these technologies have exacerbated societal polarization, contributed to feelings of anomie (a state of normlessness), and worsened mental health crises. Furthermore, these platforms have facilitated the growth of a surveillance state, with technologies being used in ways that verge on totalitarianism. Christakis draws a parallel to past environmental cleanup efforts, suggesting that society may eventually overcome these technological challenges, but only after a significant period of adjustment, perhaps half a generation, during which society yields to and is adversely affected by these technologies before ultimately overcoming them.
Personal disengagement from toxic platforms like Twitter
Christakis shares his personal experience with social media, having become "very disgusted" with Twitter. Initially, he used it as a valuable source of information and a way to access experts. However, over the last few years, the platform became "incredibly toxic," filled with "garbage," trolling, and a significant amount of far-right conspiracy theories and "left craziness." He found the content unusable and eventually stopped using Twitter, migrating to Bluesky to access scientific content and have more reasonable interactions. He notes that his follower count on Bluesky is a tenth of what it was on Twitter, but he finds this acceptable given the improved quality of engagement.
The pervasive problem of AI-generated 'slop'
A significant concern discussed is the rise of "AI slop": AI-generated content that is often fictional or misleading. Christakis recounts how algorithms, which initially fed him genuine content like BBC photos of baby elephants, began to serve fabricated scenarios, such as a crocodile attacking a baby elephant. He admits he was initially taken in by content that turned out to be "all fiction." Christakis deems this slop useless, and it poses a serious problem by eroding the trustworthiness of online information. The constant inundation of such content raises fundamental questions about whether what is presented online is real, making it difficult to distinguish truth from fabrication. The implication is that this content floods and degrades the information ecosystem.
Anonymity and its role in online behavior
The conversation touches upon the well-understood link between anonymity and worse behavior. Historically, people in mobs or those wearing masks have exhibited disinhibited actions. Christakis notes that individuals often behave worse when anonymous or pseudonymous online. While he acknowledges that affording people the opportunity to be non-anonymous could improve behavior on social media, he is hesitant to abolish anonymity entirely, seeing it as a tool against totalitarianism. He suggests that social media companies that allow for and privilege non-anonymous accounts might offer a path forward, potentially a reintroduction of something akin to Twitter's old blue checkmark system.
The debate around Section 230 and platform responsibility
The current legal landscape, including a lawsuit against social media companies in California and discussions around Section 230, is briefly mentioned. Christakis struggles with the concept of Section 230, recognizing its historical importance for the internet's emergence and the argument that social media companies are merely carriers. However, he also acknowledges the problematic nature of platforms washing their hands of content entirely, which enables abuses. He does not possess a definitive answer to this complex issue but suggests that the eventual outcome might involve people becoming more discerning, potentially willing to pay for reliability, and that AI itself might paradoxically accelerate this shift by highlighting the prevalence of unreliable content.
The dual nature of AI's promise and peril
Regarding the broader implications of AI, Christakis expresses a sentiment of "dualism," likening himself to Tevye from Fiddler on the Roof, who agrees with opposing viewpoints. He notes that some expert computer scientists and tech billionaires espouse the extraordinary promise of AI, while others, equally credible, warn of existential risks, including a significant chance of human extinction. Sam Harris recalls figures like Sam Altman suggesting a 2% to 20% extinction risk from AI, odds Christakis finds "psychotic." This dichotomy in expert opinion makes it challenging to form a firm conclusion about AI's ultimate trajectory.
AI's subtle influence on human interaction and social graces
Christakis posits a "toy model" to illustrate how AI can subtly alter human behavior, using Amazon's Alexa as an example. The design of digital assistants necessitates a direct, non-polite interaction style (e.g., "Alexa, weather"), contrasting with human-to-human politeness. He hypothesizes that children interacting with these machines might learn to be rude, which could then extend to their interactions with other humans. His lab is researching how "dumb AI" (AI designed to supplement, not replace, human cognition) can act as a catalyst to optimize human interactions. Early experiments suggest that thoughtful introduction of AI agents can indeed improve collective and individual human performance. This research implies that even non-sentient AI can shape social dynamics and manners, potentially backfiring into less socially appropriate behavior if not carefully designed.
The potential return to valuing reputable sources
Christakis suggests that the sheer volume of unverified and AI-generated content online might paradoxically lead to a "return to privileging reputable sources." Just as people used to trust established news anchors like Dan Rather, there may be a shift back toward valuing publications like The Economist, where information is perceived as more reliable. This re-privileging of trusted sources could mean individuals become less willing to believe whatever they see online and more inclined to seek out, and potentially pay for, content from established authorities. The very "AI slop" that currently makes discerning truth so difficult could accelerate this trend by highlighting the need for verifiable information.
Common Questions
How have communication technologies of the last decade affected society?
Information technology in the last decade has contributed to societal polarization, anomie, mental health crises, and the development of a surveillance state. These technologies have exploited fundamental human desires, leading to adverse effects on society.
Mentioned in this video
●Nicholas Christakis: Director of the Human Nature Lab at Yale, an MD and sociologist who studies the interaction between humans and technology.
●Sam Altman: Mentioned as a tech billionaire who discussed the potential human extinction risk from AI, with estimates ranging from 2% to 20%.
●A psychologist with whom Nicholas Christakis has discussed the implications of Westworld and humanoid robots.
●Alexa: A digital assistant used as an example to illustrate how human-machine interaction can influence human behavior and politeness, especially in children.
●LLMs: Large Language Models, where the speaker notes a tendency to be inappropriately polite when typing instructions, reflecting a return of social graces.
●Fiddler on the Roof: A movie referenced for a scene illustrating conflicting viewpoints, used as a metaphor for the polarized expert debates on AI.
●Westworld: A TV series discussed in the context of humanoid robots and the philosophical implications of interacting with indistinguishable artificial beings, particularly regarding psychopathy and moral contamination.