Key Moments

Melanie Mitchell: Concepts, Analogies, Common Sense & Future of AI | Lex Fridman Podcast #61

Lex Fridman
Science & Technology · 3 min read · 113 min video
Dec 28, 2019 | 98,790 views | 2,248 | 205
TL;DR

AI expert Melanie Mitchell discusses concepts, common sense, analogy, and the future of AI.

Key Insights

1. The term 'Artificial Intelligence' is problematic due to its ambiguity and the ill-defined nature of 'intelligence' itself.

2. Analogy-making is considered a fundamental aspect of cognition, crucial for forming concepts and understanding the world.

3. Current AI approaches, like deep learning, are powerful but may be unable to reach human-level intelligence without a deeper understanding of cognition.

4. Building AI that truly understands the world may require integrating symbolic approaches, causality, developmental learning, and embodiment.

5. Common sense is a critical, yet poorly understood, component of intelligence that current AI largely lacks.

6. While superintelligence poses potential long-term risks, more immediate concerns involve the misuse of AI by powerful entities and its societal impact.

THE AMBIGUITY OF 'ARTIFICIAL INTELLIGENCE'

Melanie Mitchell finds the term 'Artificial Intelligence' problematic because of its broad and varied interpretations. She highlights that 'intelligence' itself lacks a clear, universally accepted definition, which leads to confusion. John McCarthy coined the term to distinguish the field from cybernetics, and he himself later came to regret the choice. Herbert Simon's alternative, 'complex information processing,' was also considered, underscoring the ongoing struggle to define the field's scope and goals.

THE CENTRAL ROLE OF ANALOGY AND CONCEPTS

Mitchell emphasizes Douglas Hofstadter's view that analogy-making is core to human cognition, positing that 'without concepts there can be no thought, and without analogies there can be no concepts.' This perspective suggests that recognizing similarities between different situations—making analogies—is not just a reasoning technique but the very foundation of forming concepts. The Copycat program, developed by Mitchell and Hofstadter, aimed to simulate this process in an idealized domain of letter strings, illustrating how analogies shape perception and understanding.
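Copycat itself is a stochastic architecture of 'codelets' operating over a semantic network (the Slipnet); the snippet below is only a toy sketch of its letter-string puzzle domain, not the program. It illustrates the kind of problem Copycat tackles: given that "abc" changes to "abd", apply the "same" change to "ijk". The hard-coded rule (advance the final letter) is an assumption for illustration; Copycat's whole point is to *discover* such a rule fluidly.

```python
def advance(ch):
    """Return the successor of a letter in the alphabet."""
    return chr(ord(ch) + 1)

def apply_rule(target):
    # Rule abstracted from the example "abc" -> "abd":
    # advance the final letter of the string.
    return target[:-1] + advance(target[-1])

print(apply_rule("ijk"))  # ijl
```

The interesting cases are the ones this rigid rule mishandles: for "xyz" a human might answer "wyz" or "xya" rather than crash past 'z', which is exactly the kind of conceptual 'slippage' Copycat was built to model.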

LIMITATIONS OF CURRENT AI AND THE NEED FOR BROADER APPROACHES

While acknowledging the impressive progress of current AI, particularly deep learning and large data-driven models, Mitchell expresses skepticism about their ability to achieve human-level intelligence solely through scaling. She draws parallels to a DeepMind Atari game-playing program that excelled at a specific task but failed to transfer learning to a slightly modified scenario, indicating a lack of conceptual understanding. This suggests that AI may need to incorporate elements beyond brute-force pattern recognition, such as symbolic reasoning, causality, and a deeper understanding of intuitive physics and psychology.

THE CHALLENGE OF COMMON SENSE AND EMBODIMENT

A significant hurdle for AI, according to Mitchell, is the acquisition of common sense – the vast, often implicit knowledge humans possess about the world. Examples like the long-tail problem in autonomous driving highlight how AI struggles with situations not encountered in training data. Mitchell is also sympathetic to the idea of 'embodied intelligence,' suggesting that interaction with the physical world through a body might be crucial for developing comprehensive, human-like understanding and intelligence. This contrasts with purely digital or disembodied AI approaches. Intelligence, she posits, is deeply intertwined with our physical existence and social interactions.

REFRAMING THE 'SUPERINTELLIGENCE' DEBATE

Mitchell critiques the common narrative around superintelligent AI, particularly the idea that intelligence can be a single-dimensional property, leading to scenarios like an AI solving climate change by eliminating humans. She argues that intelligence is more holistic, integrating values, emotions, and social understanding. While acknowledging potential long-term risks, she believes more immediate concerns lie with the misuse of powerful AI by corporations and governments, and that the concept of agency in AI needs careful consideration. The inherent limitations and value systems of human-created algorithms pose present-day challenges, regardless of hypothetical superintelligence.

THE SANTA FE INSTITUTE AND THE STUDY OF COMPLEX SYSTEMS

Mitchell discusses the Santa Fe Institute, a hub for interdisciplinary research on complex systems. Founded to bridge disciplinary silos, it fosters collaboration among scientists from diverse fields to tackle big questions. The institute studies emergent properties arising from simple interactions in systems ranging from ant colonies to brains. Mitchell highlights cellular automata as a beautiful example of how simple rules can generate complex behavior, a concept that deeply influences her view on the potential to engineer complexity and intelligence, emphasizing a humble yet awe-inspiring perspective on the mystery of emergent phenomena.
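The cellular-automaton point can be made concrete with an elementary one-dimensional CA, where each cell's next state depends only on itself and its two neighbors. This sketch uses Wolfram's standard rule-number encoding (the rule's binary digits index the eight possible neighborhoods); the specific rule and grid size are illustrative choices, not anything from the conversation.

```python
def step(cells, rule=110):
    """Advance an elementary cellular automaton one step (wraparound edges)."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        # The 3-cell neighborhood, read as a 3-bit number, selects a bit of the rule.
        idx = (left << 2) | (center << 1) | right
        out.append((rule >> idx) & 1)
    return out

cells = [0] * 31
cells[15] = 1  # start from a single live cell
for _ in range(15):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

Despite the rule table fitting in a single byte, rules like 110 produce intricate, non-repeating patterns – a compact demonstration of the emergence-from-simple-rules theme Mitchell describes.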

Common Questions

Why is the term 'Artificial Intelligence' problematic?

Melanie Mitchell notes that 'Artificial Intelligence' is problematic because 'intelligence' itself is not clearly defined and can refer to many different things. John McCarthy, who coined the term, later regretted it. Herbert Simon proposed 'complex information processing,' which was also vague. The term often leads to confusion between narrow AI applications and broader, human-level intelligence.

Topics

Mentioned in this video

People
Andrej Karpathy

A researcher at Tesla who focuses on building actual AI systems that operate in the real world, rather than just philosophical discussions.

Ray Kurzweil

A futurist and AI enthusiast who made a bet with Mitchell Kapor that a machine will pass an expert-judged Turing test by 2029.

Melanie Mitchell

Professor of computer science at Portland State University and external professor at Santa Fe Institute, author of 'Artificial Intelligence: A Guide for Thinking Humans'. She has worked on adaptive complex systems, genetic algorithms, and the Copycat cognitive architecture.

John Holland

One of Melanie Mitchell's PhD advisors, a pioneer in genetic algorithms and complex adaptive systems.

Mitchell Kapor

A software entrepreneur who made a bet with Ray Kurzweil that a machine will not pass an expert-judged Turing test by 2029.

Nicholas Metropolis

A mathematician and physicist who was one of the scientists that started the Santa Fe Institute.

Arthur Samuel

A pioneer in AI known for his checker-playing program, which demonstrated the power of self-play in machine learning.

Roger Penrose

A physicist and mathematician who argued that Turing machines cannot produce intelligence, suggesting that intelligence requires continuous-valued quantities and quantum mechanics.

Herbert Simon

A pioneer in AI who proposed the term 'complex information processing' instead of 'artificial intelligence'.

Yann LeCun

An AI researcher who believes that fundamental breakthroughs for AI, like unsupervised learning, will be built on top of deep learning.

Yoshua Bengio

An AI researcher who agrees with Melanie Mitchell that human cognitive biases are linked to learning, but emphasizes that value alignment is a problem even before hypothetical superintelligence, citing powerful companies.

Sean Carroll

A physicist mentioned as an external faculty member at the Santa Fe Institute.

John McCarthy

The computer scientist who coined the term 'artificial intelligence' but later regretted it, initially to distinguish it from cybernetics.

Nick Bostrom

A philosopher known for his work on existential risk from superintelligent AI, particularly his concept of the 'orthogonality hypothesis' and the 'paperclip maximizer' thought experiment.

Douglas Hofstadter

Melanie Mitchell's PhD advisor, a physicist, computer scientist, and author known for his work on analogy-making, concepts, and complex systems, particularly for his book 'Gödel, Escher, Bach'.

John Searle

Philosopher known for his distinction between strong AI (machines actually thinking) and weak AI (machines simulating thinking).

Gary Marcus

An AI researcher who advocates for a hybrid view of AI, combining deep learning with symbolic approaches.

Alan Turing

A pioneering computer scientist who conceived of the Turing test, a measure of machine intelligence, and the theoretical concept of the Turing machine.

Elon Musk

CEO of Tesla, who fundamentally believes that LiDAR is a 'crutch' and advocates for a vision-only approach to autonomous driving.

Marvin Minsky

A highly intelligent and sophisticated thinker in AI, who famously underestimated the difficulty of computer vision, assigning it as a summer project.

Douglas Lenat

The creator of the Cyc project, who dedicated his academic career to encoding common-sense knowledge, an approach Melanie Mitchell critiques as potentially flawed.

Stuart Russell

An AI researcher whose book 'Human Compatible' and associated op-ed argue for aligning AI values with human values to prevent existential threats.

George Cowan

A chemist from the Manhattan Project who was one of the scientists that started the Santa Fe Institute.

Murray Gell-Mann

A physicist who was one of the scientists that started the Santa Fe Institute.

Kenneth Arrow

A Nobel Prize-winning economist who was one of the scientists that started the Santa Fe Institute.
