Tomaso Poggio: Brains, Minds, and Machines | Lex Fridman Podcast #13
Key Moments
Tomaso Poggio discusses AI, intelligence, neuroscience, and the future of machines. Human intelligence remains far more complex than anything current AI can replicate.
Key Insights
Einstein's genius stemmed from anti-conformist thinking and thought experiments, not just top-tier academic performance.
The problem of intelligence is considered the greatest scientific challenge, exceeding the origins of life or the universe.
Recent AI breakthroughs are heavily inspired by neuroscience, suggesting continued importance of biological brain research.
Biological brains and deep learning networks share compositional architectures, but biological systems learn with far less labeled data.
The exact relationship between hardware, software, and learning in the brain's cortex remains an open and fascinating question.
Understanding intelligence requires a multi-level approach, from chemical and biological to mathematical and algorithmic.
Worrying about AI safety is valuable, but current concerns about existential threats may be premature compared to immediate risks like nuclear weapons.
True understanding of a scene or language in AI is still a significant leap beyond current capabilities in low-level vision or speech recognition.
Ethics and consciousness in AI are complex philosophical and scientific challenges, with potential links to neuroscience and self-awareness.
Success in science and engineering hinges on insatiable curiosity, fun, collaboration, and embracing diverse perspectives, even critical ones.
INSIGHTS FROM INTELLECTUAL HEROES
Tomaso Poggio reflects on his childhood fascination with physics, particularly Einstein's theory of relativity. He highlights that Einstein's genius wasn't solely his academic prowess but his ability to think differently, utilizing thought experiments and anti-conformist perspectives. This suggests that genuine breakthroughs often come from challenging conventional wisdom, a lesson applicable not just in science but in many fields, including finance and technological innovation.
THE GRAND CHALLENGE OF INTELLIGENCE
Poggio posits that understanding intelligence is the paramount scientific problem. His initial motivation was to find a key to intelligence that could unlock solutions to all other complex problems, including those in physics. However, his focus shifted to human intelligence itself, driven by a deep curiosity about how our brains work, their limitations, and the potential for enhancement. This problem is seen as more fundamental than understanding the origins of life or the universe.
NEUROSCIENCE AS A CATALYST FOR AI PROGRESS
The conversation emphasizes the crucial role of neuroscience in recent AI advancements. Breakthroughs in areas like reinforcement learning (key to AlphaGo) and deep learning architectures have roots in biological brain research, including the work of Hubel and Wiesel on visual processing. Poggio believes this inspiration from neuroscience is likely to continue, even if not all future AI progress is directly tied to it, pushing for a deeper understanding of biological intelligence.
BIOLOGICAL VS. ARTIFICIAL NEURAL NETWORKS
While artificial neural networks simplify biological neurons, their layered, interconnected architecture is more brain-like than traditional AI models. A key difference lies in learning efficiency: deep learning requires vast amounts of labeled data, whereas humans, especially children, learn from very few examples. Poggio highlights this challenge, suggesting that future AI must bridge this gap, moving from n labeled examples toward the few-example, even one-example, learning observed in nature.
THE NUANCES OF BRAIN ARCHITECTURE AND LEARNING
The brain exhibits both modularity, with specialized regions for functions like vision or language, and flexibility. While specific modules exist, the cortex appears surprisingly uniform in its fundamental hardware across different modalities. This raises questions about how much is hardwired by evolution versus learned through experience. Poggio suggests that evolution provides weak, plastic priors, allowing structures like face recognition areas to be imprinted by experience rather than being entirely predetermined.
COMPOSITIONALITY AND THE LIMITATIONS OF AI
Deep neural networks excel at problems with a compositional structure, where complex functions are built from simpler ones, analogous to how language is composed of syllables and words. This hierarchical organization, mirrored in both physics' local interactions and the brain's architecture, allows for efficient processing. However, Poggio notes that AI, particularly supervised methods, still faces the 'curse of dimensionality' and is far from true 'understanding' of scenes or language, a significant gap beyond current capabilities.
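The 'curse of dimensionality' Poggio mentions can be made concrete with a back-of-the-envelope calculation (a minimal sketch, not from the episode, with an illustrative resolution chosen here): covering the unit cube [0,1]^d at resolution ε requires on the order of (1/ε)^d samples, which is why generic approximation becomes hopeless in high dimensions unless the target function has exploitable structure, such as compositionality.

```python
# Back-of-the-envelope illustration of the curse of dimensionality:
# covering the unit cube [0, 1]^d with a grid of spacing eps takes
# roughly (1/eps)**d points -- exponential in the dimension d.
eps = 0.1  # resolution along each axis (illustrative choice)

for d in (1, 2, 10, 100):
    grid_points = (1 / eps) ** d
    print(f"d = {d:3d}: ~{grid_points:.0e} grid points")
```

Even at this coarse resolution, a 100-dimensional input would require on the order of 10^100 grid points, which is the sense in which deep, compositional architectures sidestep a cost that generic shallow approximators cannot.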
THE QUEST FOR UNSUPERVISED AND ETHICAL AI
While GANs offer novel ways to estimate probability densities and generate realistic images, Poggio is less enthusiastic than some in the field about their role in general intelligence. He stresses that 'no free lunch' applies: current methods still rely heavily on data. Developing ethical AI, he argues, requires understanding the neuroscience of ethics, which involves specific brain areas and can even be altered by targeted stimulation, suggesting that ethics itself is a learnable, and potentially designable, aspect of intelligence.
CONSCIOUSNESS, MORTALITY, AND THE FUTURE OF AI DEVELOPMENT
The nature of consciousness in AI remains a profound mystery, with ongoing debate about whether it is necessary for intelligence. Poggio believes consciousness might be required to truly pass a Turing test, a view that contrasts with some of his colleagues'. He also touches on mortality's potential role in driving achievement, while noting it is not strictly necessary for consciousness. The next AI breakthroughs might emerge from understanding visual intelligence and self-awareness, potentially requiring solutions to aspects of the consciousness problem.
SUCCESS IN SCIENCE: CURIOSITY, FUN, AND COLLABORATION
Poggio identifies curiosity and having fun as essential for success in science and engineering. He stresses the importance of collaborating with like-minded, intelligent, and fun individuals, creating an environment that encourages enthusiasm and critical inquiry. The process of discovery is more enjoyable and potentially more fruitful when shared, highlighting that while individual ambition is important, collective exploration in a supportive setting is key to groundbreaking achievements.
Common Questions
What first drew Tomaso Poggio to the problem of intelligence?
Tomaso Poggio was initially inspired by physics, particularly Einstein's theory of relativity, and came to see the problem of intelligence as a grand challenge that, if solved, could help solve many other problems, effectively acting as a tool to expand human capabilities.
Topics
Mentioned in this video
A possibility Tomaso Poggio explored in his youth through physics, which he now considers unlikely, especially traveling back in time.
The problem where the number of units required for approximation grows exponentially with the dimensionality of the function, which deep networks aim to overcome.
A test proposed by Alan Turing to determine if a machine can exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.
The field of creating intelligent machines, which Tomaso Poggio believes could eventually surpass human intelligence.
A thought experiment, exemplified by Einstein's work on the theory of relativity, which involved imagining observers communicating with light signals.
An application area where GANs can be very useful for generating realistic images.
A physics theory that fascinated Tomaso Poggio in his childhood and was developed by Einstein.
A part of the brain that has a different anatomy and connectivity compared to the cortex, suggesting specialized functions.
Intelligence exhibited by machines that is comparable to human intelligence, a focus of MIT's courses and research discussed in the podcast.
A point of comparison for the existential threat of AI, with Poggio suggesting they should remain a higher priority concern.
The most developed part of the human brain, responsible for functions like vision, audition, motor control, and language, and whose underlying mechanism might be uniform across modalities.
A brain structure with distinct anatomy and connectivity, contributing to intelligence.
A structure where a function is made up of other functions, which makes deep neural networks more powerful than shallow networks in approximating complex mappings.
A theorem stating that a neural network with a single hidden layer can approximate any continuous function on a compact domain, though the number of neurons required can be prohibitively large.
A traditional programming language for symbolic computation, contrasted with neural network models of thinking.
A logic programming language contrasted with the architecture of neural networks.
An AI system that defeated a Go champion, powered by reinforcement learning and deep learning, and inspired by neuroscience.
Functional Magnetic Resonance Imaging, a technique used to identify which parts of the brain are active during different tasks.
Generative Adversarial Networks, a technique for estimating probability densities, useful for realistic image generation but less enthusiastically viewed by Poggio for core intelligence problems.
A simple algorithm used to optimize artificial neural networks, which works surprisingly well despite over-parameterization.
A voice assistant, representing advancements in low-level speech recognition.
A dataset of over a million labeled images used for training deep learning models, highlighting the need for large amounts of labeled data.
A company involved in autonomous driving systems, with connections to researchers advised by Tomaso Poggio.
An institution where Einstein was one of five PhD students, and where Tomaso Poggio also studied.
National Science Foundation, which fully funds the Center for Brains, Minds, and Machines moonshot project on visual intelligence.
A center at MIT directed by Tomaso Poggio, focusing on understanding intelligence in biological and artificial neural networks.
An institution Christof Koch is associated with, relevant to brain science research.
Institution where Tomaso Poggio is a professor and directs the Center for Brains, Minds, and Machines.
A co-founder of computational vision, with whom Tomaso Poggio co-authored a paper on levels of understanding.
A researcher whose work on consciousness is mentioned in the context of developing theories about its degrees.
Host of the Lex Fridman Podcast and the Artificial Intelligence podcast, conducting the interview with Tomaso Poggio.
His neuroscience research with Torsten Wiesel in the 1960s inspired the architecture of artificial neural networks.
Professor at MIT and director of the Center for Brains, Minds, and Machines, with over 100,000 citations for his work on intelligence in biological and artificial neural networks.
An influential AI researcher and entrepreneur, co-founder of DeepMind, whom Tomaso Poggio has advised.
A childhood hero of Tomaso Poggio, whose genius in physics and the theory of relativity is discussed.
A researcher at Harvard who conducted experiments on baby monkeys, showing that face recognition areas develop based on early visual experience.
Author of 'The Denial of Death,' his ideas on mortality are brought up in relation to consciousness and intelligence.
Associated with the Allen Institute for Brain Science, and a former graduate student of Tomaso Poggio.
His work in the 1960s, along with David Hubel, laid the foundation for the architecture of layered artificial neural networks.
A physicist at MIT who discusses compositionality in physical systems and its relation to brain wiring.
An individual who has raised concerns about the existential threat of AI, compared to nuclear weapons.
Mentioned as an example of music preference that children might develop, illustrating the learning process that parents may not fully understand.
A colleague whose work demonstrates how stimulating specific brain areas with magnetic fields can alter ethical decisions.
A figure in early AI research whose work in the 1960s is linked to the development of reinforcement learning concepts.
Mentioned for his concerns about AI being more dangerous than nuclear weapons, a comparison Poggio finds misleading.
Mentioned for his Stanford commencement speech where he argued that having a finite life stimulates achievement.
A robotics expert whose estimate for AGI development is around 200 years, contrasting with Tomaso Poggio's earlier mention of a century.