Melanie Mitchell: Concepts, Analogies, Common Sense & Future of AI | Lex Fridman Podcast #61
Key Moments
AI expert Melanie Mitchell discusses concepts, common sense, analogy, and the future of AI.
Key Insights
The term 'Artificial Intelligence' is problematic due to its ambiguity and the ill-defined nature of 'intelligence' itself.
Analogy-making is considered a fundamental aspect of cognition, crucial for forming concepts and understanding the world.
Current AI approaches, like deep learning, are powerful but may have limitations in achieving human-level intelligence without deeper understanding of cognition.
Building AI that truly understands the world may require integrating symbolic approaches, causality, developmental learning, and embodiment.
Common sense is a critical, yet poorly understood, component of intelligence that current AI largely lacks.
While superintelligence poses potential long-term risks, more immediate concerns involve the misuse of AI by powerful entities and societal impact.
THE AMBIGUITY OF 'ARTIFICIAL INTELLIGENCE'
Melanie Mitchell finds the term 'Artificial Intelligence' problematic because of its broad and varied interpretations. She points out that 'intelligence' itself lacks a clear, universally accepted definition, which leads to confusion. The term was coined by John McCarthy to distinguish the new field from cybernetics, and McCarthy himself later regretted the choice. Herbert Simon's alternative, 'complex information processing,' was also considered, underscoring the field's ongoing struggle to define its scope and goals.
THE CENTRAL ROLE OF ANALOGY AND CONCEPTS
Mitchell emphasizes Douglas Hofstadter's view that analogy-making is core to human cognition, positing that 'without concepts there can be no thought, and without analogies there can be no concepts.' This perspective suggests that recognizing similarities between different situations—making analogies—is not just a reasoning technique but the very foundation of forming concepts. The Copycat program, developed by Mitchell and Hofstadter, aimed to simulate this process in an idealized domain of letter strings, illustrating how analogies shape perception and understanding.
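Copycat itself is a stochastic architecture with a network of interacting concepts, far beyond a few lines of code. Still, a toy sketch of its letter-string domain can make the idea concrete. The sketch below, a deliberately minimal assumption of my own, handles only one rule family (a shift applied to the final letter) rather than Copycat's flexible concept slippage:

```python
# Toy letter-string analogy solver, loosely inspired by the Copycat domain.
# This is NOT the Copycat algorithm: it models only a single hypothetical
# rule family ("the last letter was shifted in the alphabet") to show how
# an analogy transfers a relationship from one string to another.

def infer_rule(source: str, target: str):
    """Infer how `source` was changed into `target` (trailing-letter shifts only)."""
    if len(source) != len(target) or source[:-1] != target[:-1]:
        return None  # anything fancier is outside this toy model
    return ord(target[-1]) - ord(source[-1])

def apply_rule(shift: int, s: str) -> str:
    """Apply the inferred trailing-letter shift to a new string."""
    return s[:-1] + chr(ord(s[-1]) + shift)

# "abc" changes to "abd"; what does "ijk" change to?
shift = infer_rule("abc", "abd")
print(apply_rule(shift, "ijk"))  # ijl
```

The interesting part of Copycat is precisely what this sketch omits: when the rigid rule fails (e.g. "xyz", where 'z' has no successor), the program must let concepts "slip" to find a more abstract mapping.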
LIMITATIONS OF CURRENT AI AND THE NEED FOR BROADER APPROACHES
While acknowledging the impressive progress of current AI, particularly deep learning and large data-driven models, Mitchell expresses skepticism about their ability to achieve human-level intelligence solely through scaling. She draws parallels to a DeepMind Atari game-playing program that excelled at a specific task but failed to transfer learning to a slightly modified scenario, indicating a lack of conceptual understanding. This suggests that AI may need to incorporate elements beyond brute-force pattern recognition, such as symbolic reasoning, causality, and a deeper understanding of intuitive physics and psychology.
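The transfer failure can be caricatured with a small example of my own (not DeepMind's actual system): a policy that has effectively memorized its training observations performs perfectly on them but has nothing to fall back on when the input shifts slightly, because no concept like "move toward the ball" was ever formed:

```python
# Hypothetical paddle-game observations mapped to memorized actions.
# Each tuple marks where the ball is relative to the paddle.
policy = {
    (0, 0, 1, 0, 0): "stay",   # ball directly above the paddle
    (0, 1, 0, 0, 0): "left",   # ball one cell to the left
    (0, 0, 0, 1, 0): "right",  # ball one cell to the right
}

def act(obs):
    """Return the memorized action, or fail on any unseen observation."""
    return policy.get(tuple(obs), "no idea")

print(act([0, 1, 0, 0, 0]))  # seen in training -> "left"
print(act([1, 0, 0, 0, 0]))  # ball two cells left, never seen -> "no idea"
```

A human player generalizes "ball is to the left, so move left" immediately; the lookup policy cannot, which is the brittleness Mitchell uses to question whether scaling pattern recognition alone reaches understanding.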
THE CHALLENGE OF COMMON SENSE AND EMBODIMENT
A significant hurdle for AI, according to Mitchell, is the acquisition of common sense – the vast, often implicit knowledge humans possess about the world. Examples like the long-tail problem in autonomous driving highlight how AI struggles with situations not encountered in training data. Mitchell also leans into the idea of 'embodied intelligence,' suggesting that interaction with the physical world through a body might be crucial for developing a comprehensive understanding and intelligence akin to humans. This contrasts with purely digital or disembodied AI approaches. Intelligence, she posits, is deeply intertwined with our physical existence and social interactions.
REFRAMING THE 'SUPERINTELLIGENCE' DEBATE
Mitchell critiques the common narrative around superintelligent AI, particularly the idea that intelligence can be a single-dimensional property, leading to scenarios like an AI solving climate change by eliminating humans. She argues that intelligence is more holistic, integrating values, emotions, and social understanding. While acknowledging potential long-term risks, she believes more immediate concerns lie with the misuse of powerful AI by corporations and governments, and that the concept of agency in AI needs careful consideration. The inherent limitations and value systems of human-created algorithms pose present-day challenges, regardless of hypothetical superintelligence.
THE SANTA FE INSTITUTE AND THE STUDY OF COMPLEX SYSTEMS
Mitchell discusses the Santa Fe Institute, a hub for interdisciplinary research on complex systems. Founded to bridge disciplinary silos, it fosters collaboration among scientists from diverse fields to tackle big questions. The institute studies emergent properties arising from simple interactions in systems ranging from ant colonies to brains. Mitchell highlights cellular automata as a beautiful example of how simple rules can generate complex behavior, a concept that deeply influences her view on the potential to engineer complexity and intelligence, emphasizing a humble yet awe-inspiring perspective on the mystery of emergent phenomena.
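The cellular automata Mitchell describes are easy to demonstrate. In an elementary one-dimensional automaton, each cell's next state depends only on itself and its two neighbors, yet some rules (Rule 110 is a standard example) produce endlessly intricate global patterns. A minimal sketch, assuming periodic (wraparound) boundaries:

```python
# Elementary 1-D cellular automaton: each cell updates from the 3-cell
# neighborhood (left, center, right). The 8 possible neighborhoods index
# into the bits of the rule number -- e.g. rule 110 = 0b01101110.

def step(cells, rule=110):
    """One synchronous update of a binary CA with wraparound edges."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right  # value 0..7
        out.append((rule >> neighborhood) & 1)  # look up that bit of the rule
    return out

# Start from a single live cell and watch structure emerge.
cells = [0] * 31
cells[15] = 1
for _ in range(8):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```

The rule table fits in a single byte, yet Rule 110 is known to be computationally universal, which is exactly the kind of simple-rules-to-complex-behavior emergence that shaped Mitchell's outlook.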
Common Questions
Why is the term 'Artificial Intelligence' problematic?
Melanie Mitchell notes that 'Artificial Intelligence' is problematic because 'intelligence' itself is not clearly defined and can refer to many different things. John McCarthy, who coined the term, later regretted it. Herbert Simon proposed 'complex information processing,' which was also vague. The term often leads to confusion between narrow AI applications and broader, human-level intelligence.
Mentioned in this video
A researcher at Tesla who focuses on building actual AI systems that operate in the real world, rather than just philosophical discussions.
A futurist and AI enthusiast who made a bet with Mitchell Kapor that a machine will pass an expert-judged Turing test by 2029.
Professor of computer science at Portland State University and external professor at Santa Fe Institute, author of 'Artificial Intelligence: A Guide for Thinking Humans'. She has worked on adaptive complex systems, genetic algorithms, and the Copycat cognitive architecture.
One of Melanie Mitchell's PhD advisors, a pioneer in genetic algorithms and complex adaptive systems.
A software entrepreneur who made a bet with Ray Kurzweil that a machine will not pass an expert-judged Turing test by 2029.
A mathematician and physicist who was one of the scientists that started the Santa Fe Institute.
A pioneer in AI known for his checker-playing program, which demonstrated the power of self-play in machine learning.
A physicist and mathematician who argued that Turing machines cannot produce intelligence, suggesting that intelligence requires continuous valued numbers and quantum mechanics.
A pioneer in AI who proposed the term 'complex information processing' instead of 'artificial intelligence'.
An AI researcher who believes that fundamental breakthroughs for AI, like unsupervised learning, will be built on top of deep learning.
An AI researcher who agrees with Melanie Mitchell that human cognitive biases are linked to learning, but emphasizes that value alignment is a problem even before hypothetical superintelligence, citing powerful companies.
A physicist mentioned as an external faculty member at the Santa Fe Institute.
The computer scientist who coined the term 'artificial intelligence' but later regretted it, initially to distinguish it from cybernetics.
A philosopher known for his work on existential risk from superintelligent AI, particularly his concept of the 'orthogonality hypothesis' and the 'paperclip maximizer' thought experiment.
Melanie Mitchell's PhD advisor, a physicist, computer scientist, and author known for his work on analogy-making, concepts, and complex systems, particularly for his book 'Gödel, Escher, Bach'.
Philosopher known for his distinction between strong AI (machines actually thinking) and weak AI (machines simulating thinking).
An AI researcher who advocates for a hybrid view of AI, combining deep learning with symbolic approaches.
A pioneering computer scientist who conceived of the Turing test, a measure of machine intelligence, and the theoretical concept of the Turing machine.
CEO of Tesla, who fundamentally believes that LiDAR is a 'crutch' and advocates for a vision-only approach to autonomous driving.
A highly intelligent and sophisticated thinker in AI, who famously underestimated the difficulty of computer vision, assigning it as a summer project.
The creator of the Cyc project, who dedicated his academic career to encoding common-sense knowledge, an approach Melanie Mitchell critiques as potentially flawed.
An AI researcher whose book 'Human Compatible' and associated op-ed argue for aligning AI values with human values to prevent existential threats.
A chemist from the Manhattan Project who was one of the scientists that started the Santa Fe Institute.
A physicist who was one of the scientists that started the Santa Fe Institute.
A Nobel Prize-winning economist who was one of the scientists that started the Santa Fe Institute.
An automotive company known for its electric vehicles and its ambition to achieve fully autonomous driving using a 'vision only' approach, which Melanie Mitchell discusses in the context of AI challenges.
A company developing autonomous driving technology, categorized as providing Level 4 vehicles with safety drivers, known for their cautious and conservative policy.
A cognitive architecture developed by Douglas Hofstadter and Melanie Mitchell that places analogy-making at the core of human cognition, simulating flexible concept application in letter strings.
An AI company known for its Atari game-playing program and AlphaGo, which achieved superhuman performance through data-driven learning and self-play.
A long-running AI project led by Douglas Lenat, aiming to encode all of common-sense knowledge in a logical representation, which Melanie Mitchell believes is the wrong approach.
A newspaper that published Melanie Mitchell's op-ed critiquing fears of superintelligent AI, and an earlier op-ed by Stuart Russell.
The institution where Melanie Mitchell is a professor of computer science.
An interdisciplinary research institution where Melanie Mitchell is an external professor, focused on complex systems research beyond traditional academic silos.
A research institution whose scientists, primarily physicists and chemists, were instrumental in founding the Santa Fe Institute.
A theoretical model of computation, discussed in relation to whether current hardware, which is in principle a Turing machine, is sufficient for creating intelligence or if new computational paradigms are needed.
A machine learning approach based on large neural networks and big data, which Melanie Mitchell believes has fundamental limits but has achieved surprising success.
A test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human; Melanie Mitchell still considers the original idea a good test for intelligence.
Mathematical models that Melanie Mitchell found beautiful and captivating, demonstrating how simple rules can lead to seemingly unlimited complexity through emergent behavior.
A related movement to AI at the time John McCarthy coined 'artificial intelligence', focusing on communication and control systems in living organisms and machines.
A specific type of reinforcement learning algorithm that allowed DeepMind's AI to achieve superhuman performance in Atari games.
A primary fallback sensor in Tesla vehicles, described as a crude version of LiDAR that is a good detector of obstacles but has problems detecting stopped vehicles.
A finance app that serves as a sponsor for the podcast, allowing users to send money, buy/sell Bitcoin, and invest in stocks.
A common-sense knowledge graph project at MIT, mentioned as an example of efforts to build common-sense networks.
Melanie Mitchell's book, which is a version of her PhD thesis on the Copycat project.
A book by Douglas Hofstadter and Emmanuel Sander about the pervasive role of analogy in human cognition.
Douglas Hofstadter's book that describes the Copycat project in great detail.
Stuart Russell's book summarizing his concerns about superintelligent AI and the need for value alignment, which Melanie Mitchell critiqued.
Douglas Hofstadter's book, whose author was Melanie Mitchell's PhD advisor and collaborator.
Melanie Mitchell's recent book, which explores the field of AI from various perspectives, discussing concepts, analogies, common sense, and the future of AI.
A brand of video game consoles; its games were used as a benchmark for DeepMind's AI, which learned to play them at superhuman levels, demonstrating the power of deep learning.
A remote sensing method used in autonomous vehicles for obstacle detection, but considered too expensive and a 'crutch' by Elon Musk, who favors vision-only systems.