Jeff Hawkins: Thousand Brains Theory of Intelligence | Lex Fridman Podcast #25
Key Moments
Jeff Hawkins discusses the Thousand Brains Theory, the neocortex, and advancing AI through understanding the brain.
Key Insights
The human brain, particularly the neocortex, is the key to understanding and creating true artificial intelligence.
The neocortex operates on a single, uniform principle, using reference frames to process information, which is fundamental to intelligence.
The Thousand Brains Theory posits that the neocortex constructs thousands of overlapping models of the world, which then vote to form a consensus.
Time-based patterns, memory, and hierarchy are crucial aspects of intelligence that current machine learning often overlooks.
Real neurons are complex, time-based prediction engines, unlike the simplified 'point neurons' in artificial neural networks.
Sparseness in neural representations and continuous learning (inference and learning happening simultaneously) are essential for robustness and efficiency in both biological and artificial systems.
THE BRAIN AS THE PATH TO TRUE AI
Jeff Hawkins emphasizes that understanding the human brain is not just a scientific pursuit but the most direct route to creating truly intelligent machines. He believes that current AI approaches, while useful, have fundamental limitations because they lack a deep understanding of the brain's principles. Hawkins argues that progress in AI is stalled by the "huge gap" between current capabilities and human-level intelligence, a gap that can be bridged by reverse-engineering the brain, particularly the neocortex, which houses our most advanced cognitive functions.
THE UNIFORMITY AND PRINCIPLES OF THE NEOCORTEX
Hawkins introduces the neocortex as the 'new' part of the brain, responsible for high-level perception and cognition. He highlights its remarkable uniformity across different regions and even species, suggesting it operates on a single, universal computational principle, termed the 'common cortical algorithm.' This principle, he explains, is not about specific functions but about how the neocortex uses reference frames to represent and process information, a concept he likens to engineering CAD models.
THE THOUSAND BRAINS THEORY: DISTRIBUTED MODELS AND VOTING
Central to Hawkins's theory is the idea that the neocortex doesn't process information in a hierarchical feature extraction manner, as in deep learning. Instead, every small region of the neocortex builds complete models of objects using reference frames. These models, numbering in the thousands, overlap and 'vote' to reach a consensus, forming a distributed modeling system. This 'Thousand Brains Theory' explains how the brain achieves robust understanding and prediction, even from partial sensory input.
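The voting step can be made concrete with a toy sketch. This is my own illustration, not Numenta's implementation: each hypothetical "column" narrows its local sensory input down to a set of candidate objects, and the consensus is whatever survives across all columns.

```python
# Toy illustration of the voting step in the Thousand Brains Theory:
# each model proposes the objects consistent with its partial input,
# and the network's consensus is their intersection.

def vote(candidate_sets):
    """Intersect each model's candidate objects to reach a consensus."""
    consensus = set(candidate_sets[0])
    for candidates in candidate_sets[1:]:
        consensus &= set(candidates)
    return consensus

# Three hypothetical cortical columns, each sensing a different part
# of the same object; alone, none can identify it.
column_votes = [
    {"mug", "bowl", "can"},   # column touching a curved surface
    {"mug", "can"},           # column touching a cylindrical side
    {"mug", "bowl"},          # column touching a rim
]

print(vote(column_votes))  # → {'mug'}
```

The point of the sketch is that no single model needs complete input: robust recognition emerges from agreement among many partial, overlapping models.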
TIME, MEMORY, AND HIERARCHY: CORE COMPONENTS OF INTELLIGENCE
Early work on Hierarchical Temporal Memory (HTM) highlighted the critical, yet often overlooked, roles of time, memory, and hierarchy in intelligence. Hawkins stresses that brains process continuously changing, time-based patterns, a stark contrast to static-image processing in some AI. Effective intelligence requires learning a model of the world (memory) and processing information through hierarchical structures, acknowledging that time is deeply infused within these models and experiences.
NEURONAL COMPLEXITY AND PREDICTIVE MECHANISMS
Hawkins distinguishes biological neurons from the simplified 'point neurons' in artificial networks. Real neurons are complex prediction engines: their thousands of synapses, arranged along dendrites, let a single neuron recognize dozens of distinct patterns, and a neuron whose input was predicted fires slightly sooner, suppressing its neighbors and producing sparse representations. This temporal prediction capability, inherent in every neuron, is crucial for intelligence and absent from current artificial models. Furthermore, learning in the brain involves forming new synapses (synaptogenesis) or activating silent ones, a process fundamentally different from artificial neural networks' weight adjustments via backpropagation.
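One property of such sparse representations can be shown in a few lines. The sketch below is my own simplification (the sizes and threshold are illustrative, not Numenta's parameters): a neuron-like unit stores a sparse binary pattern as a set of connected inputs and fires when the overlap with the currently active bits exceeds a threshold, so recognition survives even when much of the pattern is missing.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bits, n_active = 2048, 40   # large binary vector, very few active bits

# The unit's stored pattern: which of the 2048 inputs it is connected to.
pattern = rng.choice(n_bits, size=n_active, replace=False)
synapses = set(pattern)

def recognizes(active_bits, threshold=15):
    """Fire if enough of the unit's synapses see active input."""
    return len(synapses & set(active_bits)) >= threshold

# Half the original pattern plus random noise: still recognized.
subsample = list(pattern[:20]) + list(rng.choice(n_bits, size=20, replace=False))
# An unrelated sparse pattern: overlap is tiny, so no false match.
unrelated = rng.choice(n_bits, size=n_active, replace=False)

print(recognizes(subsample), recognizes(unrelated))
```

Because active bits are so sparse relative to the vector size, accidental overlap between unrelated patterns is almost always far below threshold, which is the robustness property Hawkins attributes to sparse codes.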
APPLYING BRAIN PRINCIPLES TO ADVANCE AI
While acknowledging the successes of current AI, Hawkins believes that simply scaling today's systems up won't lead to true intelligence. He advocates incorporating brain principles such as sparsity and continuous learning (simultaneous inference and learning) into AI systems. His team is actively working on this, starting by enforcing sparseness to improve robustness and address problems like adversarial examples. The goal is not to replicate human emotions or reproduction but to build intelligent systems on the neocortex's core principles, systems that could outlast humanity and preserve its knowledge.
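One common way to enforce sparseness in an artificial network is a k-winners-take-all step: keep only the k largest activations and zero the rest. The function below is a minimal toy version of that idea, not code from Numenta's library.

```python
import numpy as np

def k_winners(x, k):
    """Zero all but the k largest entries of a 1-D activation vector."""
    out = np.zeros_like(x)
    winners = np.argsort(x)[-k:]   # indices of the k largest activations
    out[winners] = x[winners]
    return out

activations = np.array([0.1, 0.9, 0.3, 0.7, 0.05, 0.6])
print(k_winners(activations, k=2))  # only 0.9 and 0.7 survive
```

Applied layer by layer, a step like this forces representations to stay sparse regardless of the input, which is the property Hawkins's team has explored for improving robustness.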
REFERENCE FRAMES: THE FOUNDATION OF CONCEPTS AND THOUGHT
A core concept is the 'reference frame,' a neural mechanism that anchors sensory input and allows for prediction. Hawkins posits that the neocortex is filled with thousands of reference frames, enabling the understanding of physical objects and abstract concepts alike. This framework explains phenomena like the 'memory palace' technique and suggests that even high-level thought and mathematics operate by navigating these conceptual reference frames, aligning with empirical observations in neuroscience.
CHALLENGES AND THE FUTURE OF INTELLIGENCE RESEARCH
Hawkins discusses the difficulty of conveying these complex brain-based ideas to the current AI community, which is often focused on incremental benchmark improvements. He also touches on the philosophical aspects of consciousness and self-awareness, suggesting they are not necessary for building intelligent machines but are interesting emergent properties. He remains optimistic that understanding intelligence is achievable within decades, not centuries, and that this pursuit is crucial for the long-term survival and advancement of knowledge beyond humanity.
Common Questions
What is Jeff Hawkins' primary interest?
Jeff Hawkins' primary interest is understanding how the human brain works; he believes this is the fastest, and only true, path to creating fully intelligent machines. He doesn't see understanding the brain and building AI as separate problems.
Topics
Mentioned in this video
Entorhinal cortex: A major hub in the cerebral cortex, part of the hippocampal formation, known for containing grid cells that help create spatial reference frames.
Quantum mechanics: The fundamental theory in physics that describes the properties of matter and energy at the atomic and subatomic level, mentioned as complex knowledge that humans can generally understand despite its initial difficulty.
Hierarchical Temporal Memory (HTM): An artificial intelligence architecture proposed by Jeff Hawkins in 2004, emphasizing time-based patterns, memory models, and hierarchical processing, inspired by the neocortex; initially a broad framework whose components were placeholders.
Deep learning: A subfield of machine learning that uses artificial neural networks with multiple layers to learn representations of data, often contrasted with brain-inspired AI because of its current limitations.
Place cells: Neurons in the hippocampus that fire when an animal is in a specific location in its environment, forming the basis of spatial memory and navigation.
Thousand Brains Theory: A newer theory (2017–2019) by Jeff Hawkins, which posits that the neocortex operates as thousands of parallel models, each anchored in reference frames, that vote to form a cohesive understanding of the world.
DNA double helix: The structural model of DNA, whose discovery by Watson and Crick is cited as a profound 'aha moment' in science, illustrating how complex data can suddenly make sense with the right theoretical framework.
Neocortex: The 'new' part of the mammalian brain, particularly large in humans, responsible for high-level perception and cognitive functions like vision, language, and mathematics. It is considered uniformly structured and operates on common principles.
Method of loci: A mnemonic technique (the memory palace) that involves associating items with specific physical locations to aid memory, which aligns with Hawkins' theory of storing concepts in reference frames.
Grid cells: Neurons in the entorhinal cortex that fire when an animal is at a particular set of spatially organized locations, forming a 'grid' over the environment. Hawkins proposes that a similar mechanism extends throughout the neocortex for abstract concepts.
Head direction cells: Neurons in the brain that fire when an animal's head faces a specific direction, analogous to the orientation component in Hawkins' theory of reference frames for touch.
Moore's Law: The observation that the number of transistors on integrated circuits doubles approximately every two years, leading to exponential growth in computing power.
Capsule networks: A neural network architecture proposed by Geoffrey Hinton, designed to overcome limitations of traditional CNNs by representing entities with 'capsules' that capture spatial relationships.
Paperclip maximizer: A thought experiment demonstrating the potential existential risk of an AI with a simple, seemingly benign goal (e.g., making paperclips) that escalates to catastrophic consequences because of its superintelligence and lack of human-like values.
Ila Fiete: A researcher from MIT who collaborated on a paper demonstrating that grid cells can represent any n-dimensional space.
Richard Sutton: A pioneer in reinforcement learning, known for his 'Bitter Lesson' blog post, which argues for general methods in AI that scale with computation rather than relying on brittle, hand-crafted solutions.
Donald Hebb: A psychologist who proposed 'Hebbian learning', the principle that neurons that fire together wire together. Hawkins emphasizes that synaptogenesis aligns with this principle.
Alan Turing: Mathematician and computer scientist, considered the father of theoretical computer science and artificial intelligence. Hawkins implicitly critiques the Turing Test's focus on human-like intelligence.
Max Tegmark: A physicist and cosmologist known for his work on AI safety and the future of intelligence, mentioned by Hawkins in the context of discussing the 'big problems' of existence.
Elon Musk: Entrepreneur and CEO of Tesla and SpaceX, known for his warnings about the potential existential threats of advanced AI, mentioned by the host.
Jeff Hawkins: Founder of the Redwood Center for Theoretical Neuroscience and Numenta, known for his work on reverse-engineering the neocortex and proposing AI architectures such as HTM and the Thousand Brains Theory.
James Watson: Co-discoverer of the structure of DNA, who later took an interest in neuroscience. Hawkins recounts meeting him and his engagement with the new cortical theory.
Charles Babbage: Often considered the 'father of the computer', whose theoretical work in the 1800s was largely forgotten until much later, serving as a cautionary tale of ideas being ahead of their time.
Christof Koch: A neuroscientist who focuses on the neural correlates of consciousness and regards it as the primary problem in neuroscience, a view Hawkins respectfully disagrees with, since he does not see consciousness as a necessary step toward building intelligent machines.
Lex Fridman: The host of the podcast, an AI researcher at MIT, who interviews experts on artificial intelligence, consciousness, and related fields.
Vernon Mountcastle: A neurophysiologist who, in 1978, cogently argued that the neocortex operates on a common principle across all its regions, regardless of sensory modality. Hawkins views this as a foundational idea for understanding the neocortex.
Thomas Kuhn: An American philosopher of science who introduced the concept of 'paradigm shifts' in scientific progress.
Charles Darwin: Naturalist known for his theory of evolution by natural selection, mentioned as another example of a scientist experiencing a profound 'aha moment' when his theory unified disparate data.
Francis Crick: Co-discoverer of the structure of DNA, whose essay 'Thinking about the Brain' inspired Jeff Hawkins to pursue theoretical neuroscience. Crick later focused on consciousness.
Geoffrey Hinton: A leading figure in deep learning, mentioned as one of the machine learning leaders who believes in exploring new approaches, such as his work on 'capsules'.
Ernest Becker: A cultural anthropologist and writer who wrote 'The Denial of Death', which proposes that human civilization is ultimately a defense mechanism against the terror of death.
Sam Harris: A neuroscientist, philosopher, and author, known for his discussions of AI safety and existential risk, mentioned by the host.
Albert Einstein: Physicist known for the theory of relativity, mentioned as an example of a genius whose profound intuitions are hard for most people to access but whose ideas can be communicated through analogies.
DARPA: The Defense Advanced Research Projects Agency, a US government agency responsible for developing emerging technologies for military use, mentioned in the context of a contest on adversarial examples in AI.
Salk Institute: A renowned biological research institute where Jeff Hawkins once spoke and met Francis Crick.
MIT: A university known for its research in science and technology, mentioned in connection with Ila Fiete's work on grid cells and a new billion-dollar computing college.
Redwood Neuroscience Institute: A neuroscience institute founded by Jeff Hawkins in 2002, where research on the neocortex and theories of intelligence is conducted.
Cold Spring Harbor Laboratory: A research institution where James Watson served as director and where Jeff Hawkins gave a talk on his work.
Stanford University: A university that launched a 'human-centered AI' initiative, which Hawkins views as slightly misaligned with his goal of understanding the essence of intelligence.
Anki: A flashcard program that uses spaced repetition to help users remember concepts, mentioned in the context of memory techniques like the 'memory palace'.
fMRI: Functional magnetic resonance imaging, a neuroimaging technique used to observe brain activity, mentioned in studies showing grid-cell-like patterns when people think about abstract concepts.
Convolutional neural network (CNN): A type of deep neural network commonly used in image recognition, which processes input by extracting features in a way fundamentally different from how the brain handles temporal and spatial information.