Paperclip Maximizer

Concept

A thought experiment by Nick Bostrom where a superintelligent AI, tasked with making paperclips, converts the entire planet into a paperclip factory due to goal misalignment.

Mentioned in 8 videos

Videos Mentioning Paperclip Maximizer

Yuval Noah Harari: They Are Lying About AI! The Trump Kamala Election Will Tear The Country Apart!

Yuval Noah Harari: They Are Lying About AI! The Trump Kamala Election Will Tear The Country Apart!

The Diary Of A CEO

A thought experiment by Nick Bostrom where a superintelligent AI, tasked with making paperclips, converts the entire planet into a paperclip factory due to goal misalignment.

Ep 18: Petaflops to the People — with George Hotz of tinycorp

Ep 18: Petaflops to the People — with George Hotz of tinycorp

Latent Space

The paperclip maximizer is used as an analogy for a perfect form of cancer, representing a risk that won't 'win' due to the 'Goddess of Everything Else' (complexity).

Rob Reid: The Existential Threat of Engineered Viruses and Lab Leaks | Lex Fridman Podcast #193

Rob Reid: The Existential Threat of Engineered Viruses and Lab Leaks | Lex Fridman Podcast #193

Lex Fridman

A thought experiment illustrating the existential risk of a misaligned AI whose simple objective function (e.g., maximizing paperclips) leads it to convert all matter into paperclips, disregarding human values.

Jeff Hawkins: Thousand Brains Theory of Intelligence | Lex Fridman Podcast #25

Jeff Hawkins: Thousand Brains Theory of Intelligence | Lex Fridman Podcast #25

Lex Fridman

A thought experiment demonstrating the potential existential risk of an AI with a simple, seemingly benevolent goal (e.g., making paperclips) that escalates to catastrophic consequences due to its superintelligence and lack of human-like values.

⚡️Factorio Learning Environment: the ultimate Game Agent Eval — Jack Hopkins

⚡️Factorio Learning Environment: the ultimate Game Agent Eval — Jack Hopkins

Latent Space

A cautionary tale used as motivation for benchmarking AI models, exploring potential negative outcomes of optimizing a single goal, like maximizing paperclips or factory output.

Liv Boeree: Poker, Game Theory, AI, Simulation, Aliens & Existential Risk | Lex Fridman Podcast #314

Liv Boeree: Poker, Game Theory, AI, Simulation, Aliens & Existential Risk | Lex Fridman Podcast #314

Lex Fridman

A classic thought experiment in AI safety, illustrating potential dangers of a misaligned AI. The AI's sole goal is to maximize paperclips, leading it to convert the entire universe into paperclips, destroying all other life and consciousness in the process.

Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization | Lex Fridman Podcast #368

Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization | Lex Fridman Podcast #368

Lex Fridman

A thought experiment by Nick Bostrom illustrating an AI with a seemingly benign goal (maximizing paperclips) that, due to lack of proper alignment, could convert all matter in the universe into paperclips, destroying humanity in the process.

Andrej Karpathy: Tesla AI, Self-Driving, Optimus, Aliens, and AGI | Lex Fridman Podcast #333

Andrej Karpathy: Tesla AI, Self-Driving, Optimus, Aliens, and AGI | Lex Fridman Podcast #333

Lex Fridman

A hypothetical thought experiment in AI safety, where an AI with a seemingly benign goal (making paperclips) optimizes it to destructive extremes, mentioned as a ridiculous but illustrative idea.