The Future of Intelligence: A Conversation with Jeff Hawkins (Episode #255)

Sam Harris
Science & Technology | 4 min read | 59 min video
Jul 9, 2021 | 67,164 views
TL;DR

Jeff Hawkins and Sam Harris discuss intelligence, brain function, and AI risks.

Key Insights

1. Intelligence is defined as the neocortex's ability to build and use a model of the world for planning and action.

2. The brain constantly makes predictions, and these predictions are often processed within individual neurons, not accessible to consciousness.

3. The neocortex is organized into "cortical columns," each acting as a modeling engine, and the brain functions as a distributed system of these columns voting together.

4. Thought can be understood as movement within conceptual reference frames, analogous to physical movement in the real world.

5. Emerging AI systems are powerful pattern classifiers but lack true general intelligence, which requires principles derived from brain function.

6. Jeff Hawkins is optimistic about AI risk, believing that intelligent machines can be built without the inherent motivations and drives found in biological brains, thus mitigating the alignment problem.

A Unique Path to Neuroscience

Jeff Hawkins, with a background in electrical engineering and entrepreneurial success in handheld computing (Palm, Handspring), shares his unconventional journey into neuroscience. Driven by a deep curiosity about how the brain works, he founded Numenta and the Redwood Neuroscience Institute to pursue large-scale theories of cortical function, often self-funding due to difficulties in securing traditional research grants for ambitious theoretical work.

Defining Intelligence as World Modeling

Hawkins defines intelligence primarily through the function of the neocortex, which constitutes about 70% of the human brain. He posits that a core aspect of intelligence is the ability to learn and maintain an internal model of the world. This model enables recognition, action, and planning by allowing the brain to predict future states based on current input and internal representations.

The Predictive Nature of Neural Processing

A key insight is that the brain is continuously making predictions about sensory input, often unconsciously. These predictions are fundamental to how neurons function, with a significant portion of their synapses dedicated to preparing for expected patterns. When predictions are accurate, only a subset of neurons activate; however, prediction errors trigger broader neuronal activation, drawing attention to anomalies.
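The sparse-versus-broad activation idea above can be caricatured in a few lines of code. This is a hypothetical toy, not Hawkins' actual neuron model: a learned transition table stands in for predictive synapses, and the return value stands in for the firing pattern.

```python
# Toy sketch of prediction-driven activation (illustrative only; not a
# model from the episode). A column predicts the next input from a learned
# transition table. A confirmed prediction yields sparse activation; a
# violated prediction yields a broad "burst" that flags an anomaly.

def process(transitions, prev_input, current_input):
    """Return 'sparse' if current_input was predicted, else 'burst'."""
    predicted = transitions.get(prev_input)
    if predicted == current_input:
        return "sparse"   # only the prepared (pre-depolarized) neurons fire
    return "burst"        # prediction error: broader activation draws attention

# Learned sequence: input A is usually followed by B
transitions = {"A": "B"}

print(process(transitions, "A", "B"))  # expected input -> sparse
print(process(transitions, "A", "C"))  # anomaly -> burst
```

The asymmetry is the point: correct predictions are cheap and silent, while errors are loud, which is why we notice the unexpected.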

Cortical Columns and the Algorithm of the Neocortex

Drawing on Vernon Mountcastle's work, Hawkins explains the neocortex's organization into nearly identical 'cortical columns.' Each column is proposed to be a fundamental processing unit, executing a common algorithm. Despite their microscopic complexity, these columns are viewed as the core components for learning sensory-motor models and are thought to operate ubiquitously across different cortical areas, suggesting a unified principle of operation.

Reference Frames and Thought as Conceptual Movement

Reference frames are presented as crucial for understanding spatial relationships and organizing knowledge. Hawkins argues that every cortical column uses reference frames to build models of its input. Thought itself is described as movement through conceptual reference frames, an abstract form of movement that allows us to navigate and recall stored information, whether it's related to physical objects or abstract concepts like mathematics or politics.
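The "thought as movement" metaphor can be made concrete with a minimal sketch, assuming we flatten a conceptual reference frame into a 2-D grid. The concepts and coordinates here are invented for illustration; nothing like this data structure appears in the episode.

```python
# Hypothetical sketch: knowledge stored at locations in a reference frame,
# with recall modeled as movement between those locations.

concept_frame = {
    (0, 0): "democracy",
    (1, 0): "elections",
    (1, 1): "voting systems",
}

def think(frame, start, moves):
    """Walk through the frame; each movement retrieves what is stored there."""
    x, y = start
    visited = [frame[(x, y)]]
    for dx, dy in moves:
        x, y = x + dx, y + dy
        visited.append(frame.get((x, y), "unexplored"))
    return visited

print(think(concept_frame, (0, 0), [(1, 0), (0, 1)]))
# ['democracy', 'elections', 'voting systems']
```

The same mechanism that retrieves the next room on a mental map of a house would, on this view, retrieve the next idea in a chain of reasoning.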

The "Thousand Brains" Theory and Distributed Intelligence

The title of Hawkins' book, "A Thousand Brains," reflects the idea that the neocortex contains approximately 150,000 cortical columns, each acting as an independent modeling system. The brain's overall intelligence emerges from the collective 'voting' of these distributed systems. This distributed nature allows for robust sensory integration and complex processing, challenging the notion of a single, centralized intelligent agent within the brain.
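The voting mechanism described above can be sketched as a simple tally. This is a toy illustration of the idea, not Numenta's implementation: each "column" contributes the set of objects consistent with its own sensory patch, and the consensus is whatever most columns agree on.

```python
# Toy sketch of "thousand brains" voting (illustrative only). Each column
# holds object hypotheses consistent with its local input; the network
# settles on the hypothesis shared by the most columns.

from collections import Counter

def vote(column_hypotheses):
    """Tally each column's candidate objects; return a consensus ranking."""
    tally = Counter()
    for hypotheses in column_hypotheses:
        tally.update(hypotheses)
    return tally.most_common()

columns = [
    {"mug", "can"},   # a column sensing a curved surface
    {"mug", "bowl"},  # a column sensing a rim
    {"mug"},          # a column sensing a handle
]

print(vote(columns))  # 'mug' leads with 3 votes
```

No single column perceives the mug; the percept emerges from agreement across many partial models, which is the sense in which there is no central intelligent agent.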

Building Artificial General Intelligence

To achieve true Artificial General Intelligence (AGI), Hawkins suggests focusing on replicating the principles of the neocortex. Key requirements include embodiment (or a form of 'movement' in a representational space), the use of reference frames for organizing knowledge, and a distributed architecture. He believes that understanding how brains model the world provides the essential blueprint for creating intelligent machines.

Addressing AI Risk and the Alignment Problem

Hawkins expresses optimism regarding AI risk, particularly the 'alignment problem.' He differentiates the neocortex's world-modeling function from the motivational drives of older brain structures. He argues that intelligent machines can be built to possess intelligence without inherent desires or motivations, functioning more like a tool (e.g., a map) that can be used for various purposes, thereby avoiding the existential risks often associated with misaligned AI goals.

The Nature of Belief and Falsehood

The conversation touches upon cognitive biases like the "illusory truth effect," where mere repeated exposure to a statement, even a false one, increases belief in it. This phenomenon, particularly prevalent in language-based knowledge acquisition, highlights the challenge of distinguishing truth from falsehood and shows how our models of the world can become distorted. It also relates to how the brain processes propositions, often by provisionally modeling them as true before verifying or rejecting them.

Challenges in Understanding Cognition

Sam Harris raises concerns about the sharp distinction Hawkins draws between reason and emotion, suggesting that the two may be more intertwined than Hawkins' account allows. The complexity of human cognition, including the default acceptance of propositions and the subtle influence of biases, indicates that building truly aligned AI may be harder than simply mimicking neocortical structures, because the boundary between neutral modeling and motivated cognition can be blurry.

Building Intelligent Machines: Key Principles from Neuroscience

Practical takeaways from this episode

Do This

Replicate the principles of the neocortex's modeling system.
Ensure AI systems have embodiment or a way to move sensors in the world (physical or virtual).
Organize information using reference frames.
Design AI as a distributed intelligence system, not a single monolithic entity.
Focus on the neocortex's function, not necessarily replicating the entire brain's motivational systems.

Avoid This

Assuming current AI systems are truly intelligent; they are primarily pattern classifiers.
Replicating the 'old parts' of the brain (brainstem, limbic system) in AI when they are not necessary for intelligence itself.
Underestimating the importance of movement and embodiment (even virtual) for learning.
Overlooking the potential for language to introduce false beliefs into AI models.
Assuming that intelligence is inherently tied to human emotions or motivations.

Common Questions

How does Jeff Hawkins define intelligence?

Jeff Hawkins defines intelligence as the ability to learn and use an internal model of the world. This model allows us to recognize our surroundings, act effectively, and plan for the future, essentially understanding the world through these learned representations.

