Key Moments

Jeff Hawkins: Thousand Brains Theory of Intelligence | Lex Fridman Podcast #25

Lex Fridman
Science & Technology · 4 min read · 130 min video
Jul 1, 2019 · 327,468 views
TL;DR

Jeff Hawkins discusses the Thousand Brains Theory, the neocortex, and advancing AI through understanding the brain.

Key Insights

1. The human brain, particularly the neocortex, is the key to understanding and creating true artificial intelligence.

2. The neocortex operates on a single, uniform principle, using reference frames to process information, which is fundamental to intelligence.

3. The Thousand Brains Theory posits that the neocortex constructs thousands of overlapping models of the world, which then vote to form a consensus.

4. Time-based patterns, memory, and hierarchy are crucial aspects of intelligence that current machine learning often overlooks.

5. Real neurons are complex, time-based prediction engines, unlike the simplified 'point neurons' of artificial neural networks.

6. Sparseness in neural representations and continuous learning (simultaneous inference and learning) are essential for robustness and efficiency, drawing parallels between biological and artificial systems.

THE BRAIN AS THE PATH TO TRUE AI

Jeff Hawkins emphasizes that understanding the human brain is not just a scientific pursuit but the most direct route to creating truly intelligent machines. He believes that current AI approaches, while useful, have fundamental limitations because they lack a deep understanding of the brain's principles. Hawkins argues that progress in AI is stalled by the "huge gap" between current capabilities and human-level intelligence, a gap that can be bridged by reverse-engineering the brain, particularly the neocortex, which houses our most advanced cognitive functions.

THE UNIFORMITY AND PRINCIPLES OF THE NEOCORTEX

Hawkins introduces the neocortex as the 'new' part of the brain, responsible for high-level perception and cognition. He highlights its remarkable uniformity across different regions and even species, suggesting it operates on a single, universal computational principle, termed the 'common cortical algorithm.' This principle, he explains, is not about specific functions but about how the neocortex uses reference frames to represent and process information, a concept he likens to engineering CAD models.

THE THOUSAND BRAINS THEORY: DISTRIBUTED MODELS AND VOTING

Central to Hawkins's theory is the idea that the neocortex doesn't process information in a hierarchical feature extraction manner, as in deep learning. Instead, every small region of the neocortex builds complete models of objects using reference frames. These models, numbering in the thousands, overlap and 'vote' to reach a consensus, forming a distributed modeling system. This 'Thousand Brains Theory' explains how the brain achieves robust understanding and prediction, even from partial sensory input.
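The voting idea can be made concrete with a toy sketch. This is not Numenta's implementation, just a minimal Python illustration of the principle: each "column" sees only a partial feature, keeps the set of known objects consistent with it, and the columns' candidate sets are combined by majority vote. The object models and feature names here are invented for illustration.

```python
from collections import Counter

def column_candidates(feature, object_models):
    """Return the objects whose model contains the sensed feature.

    A single column sees only a partial feature, so several objects
    may remain consistent with its input.
    """
    return {name for name, features in object_models.items() if feature in features}

def vote(per_column_candidates):
    """Combine the columns' candidate sets by majority vote."""
    tally = Counter()
    for candidates in per_column_candidates:
        tally.update(candidates)
    winner, _ = tally.most_common(1)[0]
    return winner

# Toy models: each object is a set of features (in a full model, each
# feature would be anchored at a location in the object's reference frame).
models = {
    "mug":    {"rim", "handle", "curved-side"},
    "bowl":   {"rim", "curved-side"},
    "pencil": {"tip", "flat-side", "eraser"},
}

# Three columns each sense one feature; "rim" alone is ambiguous
# between mug and bowl, but the vote resolves it.
sensed = ["rim", "handle", "curved-side"]
consensus = vote(column_candidates(f, models) for f in sensed)
print(consensus)  # mug
```

Even though no single column's input uniquely identifies the object, the vote converges, which is the sense in which the system is robust to partial sensory input.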

TIME, MEMORY, AND HIERARCHY: CORE COMPONENTS OF INTELLIGENCE

Early work on Hierarchical Temporal Memory (HTM) highlighted the critical, yet often overlooked, roles of time, memory, and hierarchy in intelligence. Hawkins stresses that brains process continuously changing, time-based patterns, a stark contrast to static-image processing in some AI. Effective intelligence requires learning a model of the world (memory) and processing information through hierarchical structures, acknowledging that time is deeply infused within these models and experiences.
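A minimal analogue of time-based prediction can be sketched as follows. Real HTM uses sparse distributed representations with per-cell sequence context; this first-order Python sketch only shows the core loop the passage describes, learning and inferring at the same time as patterns stream in. The class and sequence are illustrative, not from Hawkins's work.

```python
from collections import defaultdict

class SequenceMemory:
    """Minimal sketch of time-based prediction: learn which patterns
    follow which, then predict the next input from the current one.
    (Real HTM uses sparse distributed representations and per-cell
    context; this is only a first-order analogue.)"""

    def __init__(self):
        self.transitions = defaultdict(set)
        self.prev = None

    def step(self, pattern):
        # Continuous learning: record the observed transition,
        # then immediately predict what should come next.
        if self.prev is not None:
            self.transitions[self.prev].add(pattern)
        self.prev = pattern
        return self.transitions[pattern]  # predicted successors

mem = SequenceMemory()
for note in ["do", "re", "mi", "do", "re"]:
    prediction = mem.step(note)
print(prediction)  # {'mi'} -- after seeing do->re->mi once, "re" predicts "mi"
```

The point of the sketch is that there is no separate training phase: every input both updates the model and is interpreted by it, mirroring the simultaneous inference and learning Hawkins describes.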

NEURONAL COMPLEXITY AND PREDICTIVE MECHANISMS

Hawkins distinguishes biological neurons from the simplified 'point neurons' in artificial networks. Real neurons are complex prediction engines with thousands of synapses, capable of recognizing dozens of patterns and firing slightly sooner to create sparse representations. This temporal prediction capability, inherent in every neuron, is crucial for intelligence and is absent in current artificial models. Furthermore, learning in the brain involves forming new synapses (synaptogenesis) or activating silent synapses, a process fundamentally different from artificial neural networks' weight adjustments via backpropagation.
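The contrast with point neurons can be illustrated with a sketch of a neuron whose distal dendritic segments each act as an independent pattern detector. This is a simplified illustration of the idea, assuming invented segment contents and a made-up threshold, not a model of any specific published circuit.

```python
def segment_active(segment_synapses, active_inputs, threshold=8):
    """A dendritic segment detects its pattern when enough of its
    synapses see active inputs; subsampling the pattern this way
    makes detection robust to noise."""
    return len(segment_synapses & active_inputs) >= threshold

class DendriticNeuron:
    """Sketch of a neuron with many independent pattern detectors.

    A match on a distal (contextual) segment does not fire the neuron;
    it puts the cell in a 'predictive' state, so it can fire slightly
    sooner than its neighbors when the expected input arrives.
    """
    def __init__(self, distal_segments):
        self.distal_segments = distal_segments

    def predictive(self, active_cells, threshold=8):
        return any(segment_active(s, active_cells, threshold)
                   for s in self.distal_segments)

# A neuron with two learned contextual patterns (synapse IDs are arbitrary).
neuron = DendriticNeuron([{1, 2, 3, 4}, {10, 11, 12, 13}])
print(neuron.predictive({1, 2, 3, 99}, threshold=3))  # True: partial match suffices
print(neuron.predictive({5, 6, 7, 8}, threshold=3))   # False: no segment matches
```

Because each segment recognizes its pattern independently, one cell can detect dozens of contexts, which is the sense in which a real neuron is itself a prediction engine rather than a single weighted sum.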

APPLYING BRAIN PRINCIPLES TO ADVANCE AI

While acknowledging the successes of current AI, Hawkins argues that simply scaling today's approaches will not lead to true intelligence. He advocates incorporating brain principles such as sparsity and continuous learning (simultaneous inference and learning) into AI systems. His team is actively working on this, starting by enforcing sparseness to improve robustness and address issues such as adversarial examples. The goal is not to replicate human emotions or reproduction but to build intelligent systems on the neocortex's core principles, systems that could potentially outlast humanity and preserve its knowledge.
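Enforcing sparseness is often done with a k-winners-take-all step. The sketch below is a generic illustration of that idea, not Numenta's code: keep only the k strongest activations and zero the rest, so small perturbations to the suppressed units cannot change the output at all, which is one intuition for why sparse codes resist noise and adversarial nudges.

```python
def k_winners(activations, k):
    """Enforce sparseness: keep the k strongest activations, zero the rest."""
    if k >= len(activations):
        return list(activations)
    # The k-th largest value is the admission cutoff.
    cutoff = sorted(activations, reverse=True)[k - 1]
    out, kept = [], 0
    for a in activations:
        if a >= cutoff and kept < k:  # kept-counter breaks ties deterministically
            out.append(a)
            kept += 1
        else:
            out.append(0.0)
    return out

print(k_winners([0.1, 0.9, 0.3, 0.8, 0.05], k=2))  # [0.0, 0.9, 0.0, 0.8, 0.0]
```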

REFERENCE FRAMES: THE FOUNDATION OF CONCEPTS AND THOUGHT

A core concept is the 'reference frame,' a neural mechanism that anchors sensory input and allows for prediction. Hawkins posits that the neocortex is filled with thousands of reference frames, enabling the understanding of physical objects and abstract concepts alike. This framework explains phenomena like the 'memory palace' technique and suggests that even high-level thought and mathematics operate by navigating these conceptual reference frames, aligning with empirical observations in neuroscience.
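The reference-frame idea can be sketched as an object model whose features are anchored at locations in an object-centric coordinate frame; knowing your current location and your movement then lets you predict the next sensation. The class, coordinates, and feature names below are illustrative assumptions, not taken from Hawkins's theory papers.

```python
class ReferenceFrame:
    """Sketch: features anchored at locations in an object-centric frame.

    Prediction is navigation: displace the current location by the
    movement and look up what is stored there. The memory-palace trick
    works the same way, with ideas stored at the locations instead of
    sensory features.
    """
    def __init__(self):
        self.features_at = {}

    def learn(self, location, feature):
        self.features_at[location] = feature

    def predict(self, location, movement):
        # New location = old location displaced by the movement vector.
        x, y = location
        dx, dy = movement
        return self.features_at.get((x + dx, y + dy))

mug = ReferenceFrame()
mug.learn((0, 0), "handle")
mug.learn((0, 2), "rim")
print(mug.predict((0, 0), (0, 2)))  # rim -- moving up from the handle
```

Because the frame is anchored to the object rather than to the sensor, the same model serves any viewpoint, and nothing in the mechanism restricts the stored items to physical features, which is how the theory extends to abstract concepts.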

CHALLENGES AND THE FUTURE OF INTELLIGENCE RESEARCH

Hawkins discusses the difficulty of conveying these complex brain-based ideas to the current AI community, which is often focused on incremental benchmark improvements. He also touches on the philosophical aspects of consciousness and self-awareness, suggesting they are not necessary for building intelligent machines but are interesting emergent properties. He remains optimistic that understanding intelligence is achievable within decades, not centuries, and that this pursuit is crucial for the long-term survival and advancement of knowledge beyond humanity.

Common Questions

Q: What is Jeff Hawkins' primary research interest?
A: Understanding how the human brain works, which he believes is the fastest and only true path to creating fully intelligent machines. He doesn't see understanding the brain and building AI as separate problems.

Mentioned in this video

Concepts
Entorhinal Cortex

A major hub in the cerebral cortex, forming part of the hippocampal formation, known for containing grid cells that help create spatial reference frames.

Quantum Physics

The fundamental theory in physics that describes the properties of matter and energy at the atomic and subatomic level, mentioned as complex knowledge that humans can generally understand despite its initial difficulty.

Hierarchical Temporal Memory

An artificial intelligence architecture proposed by Jeff Hawkins in 2004, emphasizing time-based patterns, memory models, and hierarchical processing, inspired by the neocortex. Initially a broad placeholder for components.

Deep Learning

A subfield of machine learning that uses artificial neural networks with multiple layers to learn representations of data, often contrasted with brain-inspired AI due to its current limitations.

Place Cells

Neurons in the hippocampus that fire when an animal is in a specific location in its environment, forming the basis of spatial memory and navigation.

Thousand Brains Theory of Intelligence

A newer theory from 2017-2019 by Jeff Hawkins, which posits that the neocortex operates as thousands of parallel models, each housed in reference frames, that vote to form a cohesive understanding of the world.

Double Helix

The structural model of DNA, whose discovery by Watson and Crick is cited as a profound 'aha moment' in science, illustrating how complex data can suddenly make sense with the right theoretical framework.

Neocortex

The 'new' part of the mammalian brain, particularly large in humans, responsible for high-level perception and cognitive functions like vision, language, and mathematics. It is considered uniformly structured and operates on common principles.

Method of Loci

A mnemonic technique (memory palace technique) that involves associating items with specific physical locations to aid memory, which aligns with Hawkins' theory of storing concepts in reference frames.

Grid Cells

Neurons in the entorhinal cortex that fire when an animal is in a particular set of spatially organized locations, forming a 'grid' in the environment. Hawkins proposes a similar mechanism extends throughout the neocortex for abstract concepts.

Head Direction Cells

Neurons in the brain that fire when an animal's head is facing a specific direction, analogous to the orientation component in Hawkins' theory of reference frames for touch.

Moore's Law

The observation that the number of transistors on integrated circuits doubles approximately every two years, leading to exponential growth in computing power.

Capsule Networks

A type of neural network architecture proposed by Geoffrey Hinton, designed to overcome limitations of traditional CNNs by representing entities with 'capsules' that capture spatial relationships.

Paperclip Maximizer

A thought experiment demonstrating the potential existential risk of an AI with a simple, seemingly benevolent goal (e.g., making paperclips) that escalates to catastrophic consequences due to its superintelligence and lack of human-like values.

People
Ila Fiete

A researcher from MIT who collaborated on a paper demonstrating that grid cells can represent any n-dimensional space.

Richard Sutton

A pioneer in reinforcement learning, known for his 'Bitter Lesson' blog post, which argues for general methods in AI that scale with computation rather than relying on tricky, specific solutions.

Donald Hebb

A psychologist who proposed the concept of 'Hebbian learning', stating that neurons that fire together wire together. Hawkins emphasizes that synaptogenesis aligns with this principle.

Alan Turing

Mathematician and computer scientist, considered the father of theoretical computer science and artificial intelligence. Hawkins implicitly critiques the Turing Test's focus on human-like intelligence.

Max Tegmark

A physicist and cosmologist known for his work on AI safety and the future of intelligence, mentioned by Hawkins in the context of discussing 'big problems' of existence.

Elon Musk

Entrepreneur and CEO of Tesla and SpaceX, known for his warnings about the potential existential threats of advanced AI, mentioned by the host.

Jeff Hawkins

Founder of the Redwood Center for Theoretical Neuroscience and Numenta, known for his work on reverse-engineering the neocortex and proposing AI architectures like HTM and the Thousand Brains Theory.

James Watson

Co-discoverer of the structure of DNA, who later showed interest in neuroscience. Hawkins recounts meeting him and his engagement with the new cortical theory.

Charles Babbage

Often considered the 'father of the computer', whose theoretical work in the 1800s was largely forgotten until much later, serving as a cautionary tale of ideas being ahead of their time.

Christof Koch

A neuroscientist who focuses on the neural correlates of consciousness, believing it to be the primary problem in neuroscience, a view Hawkins respectfully disagrees with as a necessary step for building intelligent machines.

Lex Fridman

The host of the podcast, an AI researcher at MIT, who interviews experts on artificial intelligence, consciousness, and other related fields.

Vernon Mountcastle

A neurophysiologist who, in 1978, cogently argued that the neocortex operates on a common principle across all its regions, regardless of sensory input. Hawkins views this as a foundational idea for understanding the neocortex.

Thomas Kuhn

An American philosopher of science who introduced the concept of 'paradigm shifts' in scientific progress.

Charles Darwin

Naturalist known for his theory of evolution by natural selection, mentioned as another example of a scientist experiencing a profound 'aha moment' when his theory unified disparate data.

Francis Crick

Co-discoverer of the structure of DNA, whose essay 'Thinking about the Brain' inspired Jeff Hawkins to pursue theoretical neuroscience. Crick later focused on consciousness.

Geoffrey Hinton

A leading figure in deep learning, mentioned as one of the leaders in machine learning who believes in exploring new approaches, such as his work on 'capsules'.

Ernest Becker

A cultural anthropologist and writer who wrote 'The Denial of Death', which proposes that human civilization is ultimately a defense mechanism against the terror of death.

Sam Harris

A neuroscientist, philosopher, and author, known for his discussions on AI safety and existential risks, mentioned by the host.

Albert Einstein

Physicist known for theory of relativity, mentioned as an example of a genius whose profound intuitions are hard for most to access, but whose ideas can be communicated through analogies.
