Key Moments
How Much Does the Future Matter?: A Conversation with William MacAskill (Episode #292)
William MacAskill discusses effective altruism, longtermism, existential risks, and the importance of safeguarding the future.
Key Insights
Effective altruism (EA) is a philosophy and community focused on maximizing good done with resources, emphasizing evidence and impact.
Longtermism posits that making the long-term future of humanity go well is a key ethical priority, considering the vast potential of future generations.
Existential risks (x-risks) are threats that could cause extinction or civilizational collapse, with a growing consensus that these risks are significant.
Expected value reasoning, while useful, faces challenges like the 'Pascal's mugging' problem, suggesting a need for bounded value functions or alternative decision theories.
Psychological misalignment exists between our evolved moral sentiments and the scale of modern challenges, making it hard to care about distant or future people.
The future provides meaning to the present, and the current historical moment is seen as a 'hinge moment' due to accelerating technological growth and its dual-use nature.
Value lock-in is a serious concern, where a narrow or negative ideology could become permanently dominant, hindering future moral progress.
Artificial intelligence (AI) development presents significant existential risks, particularly concerning misaligned AI, and requires proactive safety and governance measures.
Though predicting technological timelines is difficult, a rapid pace of progress in AI is plausible, necessitating urgent attention to safety alongside development.
Political and economic systems often incentivize short-term thinking, highlighting the need for institutional reforms to better consider long-term interests.
DEFINING EFFECTIVE ALTRUISM, LONGTERMISM, AND EXISTENTIAL RISKS
William MacAskill defines Effective Altruism (EA) as a philosophy and community dedicated to maximizing positive impact with available resources, driven by a rational and evidence-based approach to doing good. Longtermism, a core tenet of much EA work, emphasizes prioritizing the long-term future of humanity, recognizing the potentially immense scale of future lives and the significant impact present actions can have on their well-being. Existential risks (x-risks) are identified as threats that could lead to human extinction or civilizational collapse, with recent assessments suggesting these risks are more probable than commonly perceived, comparable to serious risks people guard against in their personal lives.
ADDRESSING CRITICISMS AND THE CHALLENGES OF EXPECTED VALUE REASONING
MacAskill addresses criticisms, including the idea that market-based economic growth is the sole or primary driver of good, arguing that market failures like externalities necessitate intervention. He also tackles the 'Pascal's mugging' problem, where improbable scenarios with massive expected value can lead to absurd conclusions. This suggests that simple expected value calculations might not fully capture our moral intuitions, perhaps due to bounded value functions or the need for more nuanced decision theories, implying that our ethical frameworks need refinement when dealing with extremely large or small probabilities.
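The tension in this paragraph can be made concrete with a toy calculation. The sketch below, with purely illustrative numbers (nothing here comes from the episode), shows how a naive expected-value calculation lets a tiny probability of an astronomically large payoff dominate, while a bounded (saturating) value function — one proposed response to the fanaticism problem — does not:

```python
import math

def expected_value(prob, payoff):
    # Naive expected value: probability times raw payoff, unbounded.
    return prob * payoff

def bounded_expected_value(prob, payoff, bound=100.0):
    # Saturating value function: as payoff grows, value approaches `bound`,
    # so astronomically large payoffs cannot dominate the calculation.
    return prob * bound * (1 - math.exp(-payoff / bound))

# The "mugger": a one-in-a-billion chance of an astronomical payoff.
mugger = (1e-9, 1e12)
# A near-certain, modest good.
charity = (0.9, 100)

# Naive EV: the mugger wins (1e-9 * 1e12 = 1000 vs. 0.9 * 100 = 90).
print(expected_value(*mugger) > expected_value(*charity))          # True

# Bounded EV: the near-certain modest good wins.
print(bounded_expected_value(*mugger) > bounded_expected_value(*charity))  # False
```

A bounded value function is only one of the responses mentioned; alternative decision theories that discount sufficiently small probabilities would block the mugging in a different way.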
PSYCHOLOGICAL MISALIGNMENT AND THE DIFFICULTY OF CARING FOR THE FUTURE
A significant challenge lies in our evolved psychology, which is not well-suited to caring about distant or future people. Our moral sentiments evolved for small, immediate communities, making abstract concepts like future generations feel less real and urgent. This 'psychological misalignment' means that even well-intentioned individuals may find it easier to feel empathy for immediate suffering than for the potential well-being or suffering of billions yet to be born, highlighting a disconnect between our moral intuitions and the reality of global and temporal scale.
THE FUTURE'S MEANING AND OUR HINGE MOMENT IN HISTORY
The future is presented not just as a recipient of our actions but as a source of meaning for our present endeavors. Projects like building cathedrals or pursuing scientific progress gain significance from their potential to contribute to an ongoing human story. MacAskill argues that we are currently at a critical 'hinge moment' in history. Accelerating technological development, a hallmark of the modern era, brings both immense opportunities and profound risks, suggesting that decisions made now could have an outsized and lasting impact on the entirety of humanity's future.
VALUE LOCK-IN AND THE NEED FOR CONTINUED MORAL PROGRESS
A major concern is 'value lock-in,' where a single, potentially flawed ideology or social system could become permanently entrenched, halting future moral and societal progress. This could arise from powerful technologies like advanced AI, enabling a totalitarian regime to maintain control indefinitely. To prevent this, MacAskill suggests that locking in certain foundational principles, like commitment to debate, tolerance, and restrictions on unchecked power, might be necessary to ensure a space for continued reflection, empathy, and moral advancement, even if it means accepting some limitations on immediate ideological freedom.
THE IMPLICATIONS AND RISKS OF ADVANCED ARTIFICIAL INTELLIGENCE
The development of artificial intelligence (AI) is highlighted as a central existential risk. The potential for AI to surpass human intelligence raises concerns about alignment – ensuring AI goals match human values. MacAskill stresses that the precise timeline for advanced AI is uncertain, but the possibility of rapid progress, combined with the substrate independence of intelligence, makes proactive safety measures crucial. He likens the situation to receiving a future warning, urging a serious, integrated approach to AI safety, akin to how bridge engineers prioritize structural integrity, rather than treating it as an afterthought.
POLITICAL SHORT-TERMISM AND THE OPPORTUNITY FOR CHANGE
The inherent short-termism of current political and economic systems poses a significant obstacle to addressing long-term issues. Politicians are incentivized by immediate electoral cycles, making it difficult to prioritize the interests of future generations who cannot vote or contribute financially today. MacAskill explores potential institutional reforms, such as future-generation ombudspersons or citizen assemblies, but emphasizes that a fundamental cultural shift towards valuing the long-term is needed to align political incentives with the well-being of future populations and create conditions for sustained positive change.
Common Questions
What is Effective Altruism?
Effective Altruism is a philosophy and community focused on using evidence and reason to find the most impactful ways to do good. Unlike traditional charity, which often focuses on emotionally salient causes, EA encourages stepping back to consider all global problems and prioritizing those where one's efforts can have the biggest impact.
Topics
Mentioned in this video
Associate Professor in Philosophy at the University of Oxford, TED speaker, co-founder of the Center for Effective Altruism, and author of 'What We Owe The Future.'
Author of the Time Magazine piece on the Effective Altruism movement.
AI researcher whose analogy of receiving advance warning of a superior intelligence's arrival is used to illustrate the urgency of AI safety, and who advocates for integrating safety into AI development.
Former US President whose Mar-a-Lago home was searched by the FBI over misappropriated documents, mentioned in a discussion of political partisanship.
The Trump-appointed head of the FBI, mentioned in the context of allegations of anti-Trump partisanship.
Friend mentioned by Sam Harris as someone who blurbed William MacAskill's book.
Mentioned as an example of someone who, through business, might do a different kind of good than environmental activism, for example by building electric cars.
William MacAskill's colleague and author of 'The Precipice,' who suggests technology is advancing faster than human wisdom.
Host of the Making Sense podcast and author, who interviews William MacAskill and shares his own ethical framework.
Iranian religious leader who pronounced a fatwa against Salman Rushdie.
Philosopher who sketched the implications of 'expectational total utilitarianism' and the 'fanaticism problem' (Pascal's Mugging).
Philosopher and author of 'Death and the Afterlife,' whose work discusses the impact on meaning if humanity's future were foreclosed.
Author attacked on stage, whose plight highlights the issue of religious fanaticism and the reluctance of many secularists to defend free speech.
Businessman whose politics were referenced in a Wall Street Journal article criticizing effective altruism.
Academic institution where William MacAskill is an Associate Professor in Philosophy.
Inventor and futurist known for his claims about accelerating technological change.
19th-century philosopher who argued that posterity gives meaning to present projects.
Philosopher with whom William MacAskill co-authored an article on long-termist political institutions.
A thought experiment illustrating how small probabilities of extremely large gains can lead to counter-intuitive or absurd decisions if expected value theory is applied without bounds.
Risks that could cause human extinction or permanent, drastic collapse of civilization, such as advanced bioweapons, unaligned AI, or totalitarian dystopias.
An ancient Chinese philosophy with intellectual roots resembling consequentialism, mentioned by MacAskill as a historical antecedent to effective altruism.
An award William MacAskill received as a social entrepreneur.
A philosophical perspective emphasizing the importance of shaping the long-term future of humanity, recognizing the vast potential value of future generations.
A philosophy and community focused on using evidence and reason to determine the most effective ways to improve the world.
A philosophical argument for believing in God based on expected value, mentioned in the context of the 'fanaticism problem'.
A test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human, which chatbots are expected to pass within a decade.
A language model that William MacAskill used to mark undergraduate philosophy exams, demonstrating its surprising capability.
Apple's virtual assistant, used as an example of current AI that could become indistinguishable from humans in conversation.
Amazon's virtual assistant, used as an example of current AI that could become indistinguishable from humans in conversation.
A newspaper that published an article critical of effective altruism, which Sam Harris found to lack substance.
Magazine that featured William MacAskill and the Effective Altruism movement on its cover.
French satirical magazine that suffered a massacre, mentioned as another example of atrocities related to free speech and religious extremism.