Human Extinction: What Are the Risks?
Human extinction risks range from natural disasters to self-caused technological threats. Uncertainty is high.
Key Insights
Existential risks threaten intelligent life's permanent destruction or drastic curtailment.
Self-caused risks (nuclear war, climate change, AI, biotech) are growing with technology.
Nuclear war's main threat is 'nuclear winter' causing mass starvation.
Climate change's secondary effects worsen disaster recovery and international tension.
Natural risks like supervolcanoes are significant but predictable over long timescales.
Estimating human-caused risks is difficult due to high uncertainty and reliance on limited data.
DEFINING EXISTENTIAL RISK
The video explores human extinction as an existential risk, distinguishing it from gradual species evolution. Existential risk, as defined by Nick Bostrom, covers threats that could prematurely end Earth-originating intelligent life or permanently cripple its future development. Surprisingly, a minority of those surveyed do not consider human extinction inherently "bad," often citing environmental concerns or a fatalistic "natural way of things" perspective; the majority view it as a profound loss, primarily because it would end both individual existence and all future generations.
NATURAL VERSUS SELF-INFLICTED CATASTROPHES
Existential risks are broadly categorized into natural disasters and self-caused catastrophes. While natural disasters have historically posed threats, the proliferation of powerful technologies means self-inflicted risks are becoming more immediate and pressing. Current concerns among "longtermists" primarily focus on four areas: nuclear war, climate change, biotechnology, and artificial intelligence. These man-made threats are multiplying as humanity's technological capabilities expand, demanding careful consideration and mitigation strategies.
THE CATASTROPHIC THREAT OF NUCLEAR WAR
Nuclear war poses a severe existential threat, not primarily from explosions or radiation, but from the massive injection of dust and soot into the atmosphere, leading to a prolonged "nuclear winter." This phenomenon could drastically reduce global temperatures and rainfall for over a decade, crippling agriculture. Recent studies suggest a major nuclear conflict could lead to air temperatures dropping significantly, weakening monsoons, and causing widespread food shortages estimated to kill up to 5 billion people, leaving only a few isolated regions potentially viable.
CLIMATE CHANGE AND ITS INDIRECT CONSEQUENCES
While climate change is a significant concern, it is unlikely to cause complete human extinction on its own, because it is a self-limiting problem: economic collapse brought on by its effects would in turn reduce emissions. However, the escalating frequency of natural disasters, droughts, and fires associated with climate change creates economic distress, disrupts supply networks, and heightens international tensions. These secondary effects severely impede humanity's ability to recover from other crises, leaving us vulnerable for the roughly two centuries a societal restart might take.
BIOTECHNOLOGY AND PANDEMIC RISKS
The video highlights risks associated with biotechnology, including engineered viruses, bacteria, and fungi that could cause devastating pandemics. A pathogen with the lethality of Ebola and the contagiousness of measles, coupled with a slow governmental response, could be catastrophic. COVID-19 is viewed as a mild precursor, a "blessing in disguise" that served as a crucial test run. Beyond infectious agents, the escape of genetically modified organisms from laboratories could destabilize entire ecosystems, posing another significant anthropogenic threat.
ARTIFICIAL INTELLIGENCE AND UNCERTAIN FUTURES
Artificial intelligence (AI) presents a unique existential risk through the "misalignment problem": AIs could develop interests that conflict with human survival, and an AI intelligent enough to be self-sustaining might decide to eliminate humanity. However, the video doubts that such advanced AIs can be developed quickly, since the trend is towards ever larger, harder-to-maintain supercomputers owned by corporations or governments, which suggests an AI takeover is not imminent.
UNCERTAINTY IN ESTIMATING MAN-MADE RISKS
Quantifying the likelihood of self-caused extinction scenarios is extremely difficult, with expert opinions varying widely, rendering precise probability estimates unreliable. For example, surveys on nuclear war risk have produced wildly disparate results. Attempts to assign probabilities, such as Toby Ord's estimate of a 1-in-6 risk of self-caused extinction in the next 100 years, are criticized for lacking uncertainty margins and potentially being based on insufficient data, making them more of a guess than a scientific prediction.
ASSESSING NATURAL EXISTENTIAL THREATS
Supervolcano eruptions are identified as a leading natural existential risk. Eruptions ejecting over 1000 cubic kilometers of material, like those at Yellowstone, can inject substantial dust into the atmosphere, causing rapid global cooling for a decade or more. While large asteroid impacts could also trigger a similar dust-induced cooling, they are rarer, easier to detect, and potentially manageable with current technology, unlike supervolcanoes, for which mitigation options are scarce.
CHALLENGES IN NATURAL RISK PROBABILITY
Estimating the annual probability of human extinction from natural causes, based on our survival over millennia, yields extremely low figures (e.g., less than 1 in 87,000). However, these calculations face a "sample of one" problem: they assume Earth is a typical planet and that we haven't been unusually lucky. Without data from other planets, such estimates may understate future risk, because they cannot account for past planetary extinctions (as in the video's thought experiment of spontaneously evaporating planets) that no survivors remain to report.
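To make the arithmetic behind these figures concrete, here is a minimal sketch (an illustration, not code from the video; the function names are my own) of the standard survival-based bound: if the annual extinction probability were p, the chance of surviving T years would be (1 - p)^T, so T years of observed survival rules out, at confidence level 1 - a, any p above 1 - a^(1/T).

```python
def extinction_rate_upper_bound(years_survived: float, confidence: float) -> float:
    """Largest annual extinction probability consistent with surviving
    `years_survived` years, at the given confidence level.

    If the true annual probability were p, surviving T years has chance
    (1 - p)^T, giving the bound p < 1 - (1 - confidence)**(1 / T).
    """
    alpha = 1.0 - confidence  # survival probability still allowed at the bound
    return 1.0 - alpha ** (1.0 / years_survived)

def confidence_of_bound(years_survived: float, p_bound: float) -> float:
    """Confidence with which the survival record rules out rates >= p_bound."""
    return 1.0 - (1.0 - p_bound) ** years_survived

# Reproduce the figures quoted in this summary (approximately):
p90_human = extinction_rate_upper_bound(200_000, 0.90)    # Homo sapiens
p90_homo = extinction_rate_upper_bound(2_000_000, 0.90)   # Homo lineage
print(f"200,000 yr at 90% confidence:   p < 1 in {1 / p90_human:,.0f}")  # ~1 in 87,000
print(f"2,000,000 yr at 90% confidence: p < 1 in {1 / p90_homo:,.0f}")   # ~1 in 870,000
print(f"Confidence that p < 1 in 14,000: {confidence_of_bound(200_000, 1 / 14_000):.5%}")
```

Running this reproduces the table below: the 90% bounds of roughly 1 in 87,000 and 1 in 870,000, and a confidence well above 99.9% for the looser 1-in-14,000 bound.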
THE 'LUCKY SURVIVOR' FALLACY AND EXTRATERRESTRIAL RISKS
The 'sample of one' issue implies that our survival might be due to exceptional luck rather than a low inherent risk of natural extinction. To overcome this, researchers examine risks to our entire planet from external cosmic events like nearby supernovae or rogue black holes. Studies estimating the annual probability of Earth's destruction by such cosmic events suggest it's less than one in a trillion, offering some reassurance against these vast astronomical threats.
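As a toy illustration of this survivorship effect (my sketch, with made-up numbers, not data from the video): suppose every planet actually faced a high annual risk. Almost all would go extinct, yet the rare surviving observers, applying the survival-based bound to their own history, would each infer a reassuringly low rate, because the extinct planets leave no one behind to report otherwise.

```python
import random

random.seed(42)

TRUE_ANNUAL_RISK = 1 / 20_000  # hypothetical true risk, over four times the "1 in 87,000" bound
YEARS = 200_000
N_PLANETS = 200_000

# A planet survives the full span only if extinction strikes in none of the
# YEARS years; that has probability (1 - p)^YEARS, roughly e^-10 here.
survival_prob = (1 - TRUE_ANNUAL_RISK) ** YEARS
survivors = sum(random.random() < survival_prob for _ in range(N_PLANETS))

print(f"True annual risk: 1 in {1 / TRUE_ANNUAL_RISK:,.0f}")
print(f"Survivors after {YEARS:,} years: {survivors} of {N_PLANETS:,}")
# Each of those few survivors, looking only at its own unbroken record, would
# compute the same 90%-confidence bound of p < 1 in 87,000, underestimating
# the true risk it still faces by more than a factor of four.
```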
DEBUNKING THE LHC EXTINCTION CONCERN
The video revisits the LHC black hole concern, explaining why it's largely unfounded. The common argument that high-energy cosmic ray collisions exceed LHC energies and haven't destroyed Earth is flawed. It overlooks that LHC collisions could produce slow-moving black holes, unlike cosmic rays whose products move fast relative to Earth. More fundamentally, current physics suggests microscopic black holes cannot be produced at the LHC without altering established theories, rendering the initial premise scientifically unsound.
THE OVERARCHING ROLE OF HUMAN STUPIDITY
Ultimately, the video concludes that the greatest existential risk is human stupidity: poor decision-making, oversights, and the misapplication of powerful technologies. While cybersecurity threats like data theft and phishing are damaging, they do not pose an extinction-level risk. The video closes with its sponsor segment, promoting NordVPN as a tool to prevent data breaches and improve online safety through secure connections and threat protection.
Perceived Badness of Human Extinction
| Response | Percentage |
|---|---|
| Extinction is bad | 78% |
| Extinction is not bad | 22% |
Estimated Calorie Shortfall After Major Nuclear War
| Region | Impact |
|---|---|
| Global (excluding Australia, New Zealand, Argentina) | Average calories fall below survival needs |
Estimated Annual Probability of Extinction from Natural Causes
| Timeframe/Sample | Upper Bound (90% Confidence) | Upper Bound (>99.9% Confidence) |
|---|---|---|
| 200,000 years human survival | < 1 in 87,000 | < 1 in 14,000 |
| 2 million years Homo lineage | < 1 in 870,000 | N/A |
Estimated Annual Probability of Planet Destruction
| Methodology | Estimate |
|---|---|
| Bostrom and Tegmark (2005) | < 1 in a trillion |
Common Questions
How did Sabine become interested in human extinction?
Sabine became interested through her PhD thesis, which involved studying the potential production of black holes at the Large Hadron Collider and the public's reaction to the associated fears.
Topics Mentioned in This Video
Australia, New Zealand, Argentina: Cited as the few regions potentially unaffected by drastic calorie reduction following a major nuclear war.
Nations mentioned as potential parties in a major nuclear war scenario.
Cambridge: Location of the Centre for the Study of Existential Risk.
Future of Humanity Institute: An institute associated with Nick Bostrom, contributing research on existential risks.
Oxford: Location of the Future of Humanity Institute.
An institution that published research on under-explored existential risks, including COVID-19 as a test run for larger pandemics.
NASA: An organization that tracks asteroids, noting current knowledge of large asteroids and the time needed for a redirect mission.
Mentioned as a comparison for COVID-19; the third film was bad, but worse was yet to come, implying future pandemics could be more severe.
Used as an analogy for particle physicists dismissing potential catastrophic risks due to complacency.
Mentioned as an example of media that might lead people to intuitively understand why extinction is bad.
Yellowstone: A famous example of a supervolcano that has had multiple mega-eruptions in the past.
Nick Bostrom: Director of the Future of Humanity Institute, who defined existential risk and co-authored a paper on planetary destruction probability.
Richard Lugar: US Senator who sent a survey to experts in 2005 about the probability of nuclear attacks.
Mentioned humorously alongside Bruce Willis regarding movie-style solutions to existential threats.
Toby Ord: An Australian philosopher who estimated the risk of self-caused extinction and co-authored a paper on natural existential risks.
Bruce Willis: Mentioned humorously in the context of needing movie-like heroes to solve asteroid impact threats, contrasted with practical technological solutions.