Key Moments

Will MacAskill of Effective Altruism Fame — The Value of Longtermism, AI, and How to Save the World

Tim Ferriss

How-to & Style · 4 min read · 96 min video
Aug 2, 2022
TL;DR

Effective Altruism and long-termism focus on maximizing future good and addressing existential risks such as AI and engineered pandemics.

Key Insights

1. Long-termism emphasizes the vast scale of the future and the high stakes of current decisions.
2. Effective Altruism prioritizes doing the most good possible with available resources, exemplified by significant donations to effective charities.
3. Existential risks include AI development, engineered pandemics, and global conflict, requiring proactive mitigation.
4. Personal well-being and productivity can be enhanced through structured routines like evening check-ins and exercise.
5. Value lock-in, where a single ideology persists indefinitely, poses a significant threat to future progress.
6. Individuals can contribute by donating, learning more about global issues, and pursuing careers focused on impact.

FOUNDATIONS OF EFFECTIVE ALTRUISM AND LONG-TERMISM

The discussion begins by defining Effective Altruism (EA) as a philosophy and community dedicated to maximizing positive impact through rational analysis and action, focusing on how to do the most good with limited resources. Will MacAskill, a key figure in EA, emphasizes that this involves more than just donating; it extends to career choices, consumption, and civic engagement. Long-termism, a core concept within EA, stresses the profound importance of the future, arguing that decisions made today have implications for potentially billions of years of future sentient experience. This future-oriented perspective necessitates serious consideration of existential risks.

THE IMPORTANCE OF FUTURE GENERATIONS AND EXISTENTIAL RISKS

MacAskill uses the analogy of a young teenager making life-altering decisions to illustrate humanity's current stage in its potential long history. He stresses that the loss of future civilization, spanning potentially millions or billions of years, is astronomically greater than the loss of individual human lives. This frames the urgency of addressing existential risks, which are threats that could permanently curtail humanity's potential. These risks include advanced artificial intelligence, bioengineered pandemics, and large-scale warfare, scenarios that could lead to unrecoverable civilizational collapse or extinction.

INSIGHTS FROM LITERATURE AND PHILOSOPHY

The conversation delves into influential books and philosophical ideas that shape MacAskill's thinking. Dostoevsky's 'Crime and Punishment' is highlighted for its role in sparking his interest in philosophy and the existentialist concept of creating meaning in a world without inherent purpose. He also discusses Joseph Henrich's work on cumulative cultural evolution, emphasizing human cooperation as the source of our species' dominance, and the concept of 'WEIRD' (Western, Educated, Industrialized, Rich, Democratic) societies being outliers in psychological research. These influences provide a framework for understanding human behavior and societal development.

ADDRESSING OVERWHELM AND FOSTERING OPTIMISM

MacAskill acknowledges the potential for despair when confronting significant global threats. He counters this by focusing on the magnitude of *difference* one can make, rather than solely the scale of the problems. The potential for positive impact, whether saving lives through global health initiatives or safeguarding the long-term future, is presented as a powerful motivator. He also adopts a mindset of 'low standards,' viewing any improvement to the world beyond a neutral baseline as a net positive, thereby fostering optimism through the pursuit of progress rather than demanding perfection.

PRACTICAL STRATEGIES FOR PRODUCTIVITY AND WELL-BEING

The interview touches upon personal strategies for productivity and managing challenges. MacAskill shares his experience with a 'trigger action plan' involving daily evening check-ins to set and review goals, significantly boosting his output. He also discusses his journey in overcoming chronic lower back pain through a personalized workout routine focusing on anterior chain strengthening and core work, emphasizing the importance of consistent, targeted physical health maintenance. This highlights the integration of self-care and structured discipline as crucial for sustained impact.

NAVIGATING ARTIFICIAL INTELLIGENCE AND GLOBAL THREATS

A significant portion of the discussion focuses on the risks and potential of advanced AI. MacAskill outlines two primary concerns: misaligned AI goals leading to human disempowerment or extinction, and the concentration of power from AI advancements potentially leading to global totalitarianism or subjugation. He contrasts this with defensive strategies, such as information-controlled AI systems and the development of defensive technologies like far UVC lighting, which could mitigate pandemic risks. The conversation also touches upon the high stakes of potential future world wars and the imperative of developing robust safety measures for emerging technologies.

TAKING ACTION AND MAKING A DIFFERENCE

For individuals wanting to contribute, MacAskill suggests two main pathways: passive donations to effective charities and funds, or more active engagement. Active engagement involves continuous learning through resources like books, podcasts, and organizations such as 80,000 Hours, and ultimately pivoting one's career towards addressing these critical issues. He also stresses the value of community, encouraging involvement in the Effective Altruism community for support, collaboration, and shared progress toward a flourishing long-term future.

THE PERIL OF VALUE LOCK-IN

The concept of 'value lock-in' is explored as a significant long-term risk. This occurs when a single ideology or value system becomes entrenched globally, preventing future moral progress or adaptation. Historical examples ranging from ancient China's dynastic shifts to more recent totalitarian regimes illustrate how dominant ideologies can stifle diversity and dissent. The concern is that advanced AI, if not aligned with human flourishing, could enable a perpetual, inescapable ideological or dictatorial control, making any subsequent moral or societal improvement extremely difficult.

Will MacAskill's Actionable Habits & Mindsets

Practical takeaways from this episode

Do This

Implement evening check-ins (even short ones) to increase productivity and accountability, setting both input (e.g., hours worked) and output goals (e.g., sections drafted).
Adjust expectations realistically, avoiding self-punishment for unproductive days, and prioritize well-being over excessive work during low mood.
Develop a personalized, efficient workout routine combining strength and flexibility, focusing on core and anterior pelvic chain (e.g., Bosu ball squats, planks, custom stretches).
If feeling overwhelmed or low mood, immediately prioritize mood-fixing activities like exercise or meditation, interrupting the spiral of negative thoughts.
Shift focus from daily performance to longer periods (e.g., 3-10 years) to gain perspective and reduce self-criticism during off days.
To engage with long-termism, consider regular donations (e.g., 10% of income) to effective causes via organizations like EA Funds or GiveWell.
Actively learn more about existential risks and effective interventions by reading seminal books and consuming content from trusted sources (e.g., 80,000 Hours, Open Philanthropy, Our World in Data).
Consider leveraging or changing your career to work on high-impact issues related to AI, biosecurity, or global governance.
Get involved with the Effective Altruism community for support, guidance, and collaboration on complex civilizational-scale problems.

Avoid This

Avoid excessive caffeine consumption if sensitive, as it can lead to migraines and hinder productivity.
Do not solely focus on the magnitude of problems without considering actionable solutions; this can lead to pessimism and inaction.
Do not let low mood spiral into self-punishment and overwork; instead, prioritize self-care.
Avoid isolating yourself when feeling overwhelmed; seek community support and collaborate with like-minded individuals.
Do not ignore the long-term implications of current technological advancements like AI; take the risks seriously.
Do not default to a high-information diet filled with manufactured urgency from daily news; seek out big-picture analysis instead.

Common Questions

What books does Will MacAskill recommend?

Will MacAskill highly recommends Toby Ord's 'The Precipice' for its detailed discussion of existential risks. He was also significantly influenced by Joseph Henrich's books, 'The Secret of Our Success' and 'The WEIRDest People in the World', which changed his understanding of human behavior from an anthropological perspective. Additionally, Fyodor Dostoevsky's 'Crime and Punishment' inspired his interest in philosophy and existentialism at a young age.

Topics

Mentioned in this video

Organizations
Harvard University

Where Joseph Henrich is a quantitative anthropologist.

GiveWell

Recommended as the best place for donating to Global Health and Development Charities.

EA Funds

Allows donations within animal welfare, existential risks, and promotion of effective altruism ideas.

Centre for Effective Altruism

A non-profit co-founded by Will MacAskill.

Giving What We Can

A non-profit co-founded by Will MacAskill that encourages people to take a giving pledge, typically 10% of one's income.

University of Oxford

Where Will MacAskill is an associate professor in philosophy; he was the youngest associate professor of philosophy at the time of his appointment.

Forbes 30 Under 30

Will MacAskill was recognized as a social entrepreneur on this list.

Khmer Rouge

A regime led by Pol Pot in Cambodia that systematically executed those who disagreed with party ideology, killing 25% of the population, a modern example of value lock-in.

Future Fund

A foundation that Will MacAskill has been helping to set up, which is investing in technologies like Far-UVC lighting.

Open Philanthropy

A foundation that provides deep research into topics like the timeline for human-level artificial intelligence.

80,000 Hours

A non-profit co-founded by Will MacAskill that provides in-depth career advice and one-on-one coaching to help individuals make the world better with their careers.

Against Malaria Foundation

A charity that Effective Altruism has raised money for, protecting over 400 million people, primarily children, from malaria and saving approximately 100,000 lives.

People
Toby Ord

Colleague of Will MacAskill, co-founder of Giving What We Can, and author of 'The Precipice'.

Joseph Henrich

A quantitative anthropologist at Harvard whose books 'The Secret of Our Success' and 'The WEIRDest People in the World' significantly influenced Will MacAskill's thinking.

Joe Biden

US President, used as an example of a potential target for deepfake technology.

Peter Singer

Author of 'Practical Ethics'.

John Stuart Mill

Philosopher who made an argument in a speech to Parliament about how posterity gives meaning to our present projects.

Andy Warhol

Famous artist whose style was used as an example of DALL-E image generation.

Sam Harris

Neuroscientist and author who blurbed Will MacAskill's book 'What We Owe The Future'.

Laura Puskas

Will MacAskill's employee who functions as a productivity coach, providing evening check-ins that significantly boosted his productivity during book writing.

Boris Johnson

Former UK Prime Minister, used as an example of DALL-E's restriction on generating images of real faces.

William MacAskill

Associate Professor in Philosophy at the University of Oxford, co-founder of Giving What We Can, the Centre for Effective Altruism, and 80,000 Hours. Author of 'What We Owe The Future'.

Melvyn Bragg

The host of the 'In Our Time' podcast.

Nick Bostrom

Philosopher and author of 'Superintelligence', who also coined the phrase 'getting the big picture roughly right'.

Fyodor Dostoevsky

Author of 'Crime and Punishment', whose work explores existentialism, nihilism, and religious positions.

Pol Pot

Leader of the Khmer Rouge, responsible for the Cambodian genocide, an extreme example of ideological control and value lock-in.

Concepts
nihilism

The belief that life is meaningless and there is no reason to do anything, often contrasted with existentialism and religious belief.

Daoism

A somewhat spiritual philosophy advocating spontaneity and honesty, acting in accordance with nature, one of the Four Schools of Thought in ancient China.

Confucianism

One of the Four Schools of Thought in ancient China, which eventually became the official state ideology during the Han Dynasty for 2,000 years, an example of value lock-in.

Existentialism

A philosophical concept where the world has no intrinsic meaning, and individuals create their own meaning through radically free and authentic acts. Discussed in the context of 'Crime and Punishment'.

Trigger Action Plan

An immediate, pre-planned response to a specific trigger, which Will MacAskill uses for managing low mood.

Joseph Henrich's Model of Human Behavior

Contrasts the economic understanding of self-interested agents with humans as cultural beings driven by a vision for the world.

Legalism

A Machiavellian-like political philosophy in ancient China focused on gaining power, which briefly influenced the Qin state.

Pascal's Mugging

A thought experiment related to Pascal's Wager, briefly mentioned.

One Hundred Schools of Thought

A period in ancient China during the Warring States period characterized by profound ideological and philosophical flourishing, preceding value lock-in by Confucianism.

Homo sapiens

The human species, discussed in the context of its lifespan relative to other species and its current stage in the grand narrative of existence.

Pascal's Wager

A philosophical argument for believing in God, mentioned in the context of Dostoevsky's implied religious position.

Mohism

An ancient Chinese philosophy similar to effective altruism, focused on promoting good and impartial outcomes, leading to the creation of a paramilitary group for city defense.

Gene Editing

Advanced biotech capability allowing the creation of new viruses; a key factor in the risk of catastrophic pandemics.
