Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11

Lex Fridman
Science & Technology · 5 min read · 80 min video
Dec 23, 2018 · 141,427 views
TL;DR

AI pioneer Juergen Schmidhuber discusses meta-learning, AI creativity, Gödel machines, and the universe's simplicity.

Key Insights

1. Recursive self-improvement: The dream of AI is to build machines that can improve their own learning algorithms, leading to open-ended creativity and problem-solving.
2. Meta-learning vs. transfer learning: True meta-learning involves the AI modifying its own learning algorithm, distinct from transfer learning, which reuses existing learned features.
3. Gödel machines and universal solvers: These theoretical machines aim to solve all solvable problems but carry a constant overhead that makes them impractical for small problems.
4. Simplicity and compression in science: Progress in science is viewed as a history of compression — finding simpler explanations that better predict the data.
5. Artificial curiosity and Power Play: AI should be intrinsically motivated to explore and ask its own questions, not just solve given problems.
6. Consciousness as a byproduct: Consciousness may emerge as a side effect of complex problem-solving and data compression within AI systems.
7. LSTMs and deep learning: 'Depth' in neural networks allows them to remember important past information over long time lags.
8. Reinforcement learning's future: RL is seen as the next major wave of AI, moving beyond passive pattern recognition to machines that actively shape their environment.
9. AI and the future of work: Historically, new jobs emerge as old ones are automated, suggesting a similar pattern for AI-driven automation.
10. Existential questions and AI: AI may eventually surpass humans, with deep philosophical implications for humanity's place in the universe.

THE DREAM OF RECURSIVE SELF-IMPROVEMENT

Juergen Schmidhuber's early fascination with artificial intelligence stemmed from a desire to multiply his own limited creativity infinitely. This led to the concept of building machines that not only learn to solve problems but also learn to improve their own learning algorithms. This hierarchical meta-learning, starting in the 1980s, envisions AI systems that recursively enhance their problem-solving capabilities, ultimately aiming to tackle all solvable problems in the universe.

META-LEARNING VERSUS TRANSFER LEARNING

Schmidhuber distinguishes true meta-learning from the more common transfer learning. While transfer learning involves reusing learned features (like those from image classification) for new tasks, meta-learning entails the system introspecting and modifying its own learning algorithm. This deeper level of self-modification allows an AI to evolve its fundamental approach to learning, rather than just applying existing knowledge to new contexts, a crucial distinction for achieving general intelligence.
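
To make the contrast concrete, here is a minimal stdlib-only sketch of the transfer-learning side: a frozen "pretrained" feature extractor with only the top linear layer retrained by stochastic gradient descent. The feature map and the toy task are invented for illustration; the key point is that the learning rule itself stays fixed throughout, which is exactly the part true meta-learning would modify.

```python
import random

# Toy transfer learning: a frozen "pretrained" feature extractor plus a
# retrainable top linear layer. Only the top weights change; the
# learning algorithm itself (plain SGD) is never modified.

def pretrained_features(x):
    # stands in for the frozen lower layers of a pretrained network
    return [x, x * x]

def train_top_layer(data, lr=0.1, steps=5000):
    w = [0.0, 0.0]  # top-layer weights: the only part we retrain
    for _ in range(steps):
        x, y = random.choice(data)
        feats = pretrained_features(x)
        pred = sum(wi * fi for wi, fi in zip(w, feats))
        err = pred - y
        w = [wi - lr * err * fi for wi, fi in zip(w, feats)]
    return w

# "new task": y = 3x^2 - x, expressible in the frozen feature space
random.seed(0)
data = [(k / 10, 3 * (k / 10) ** 2 - k / 10) for k in range(-10, 11)]
w = train_top_layer(data)  # w approaches [-1.0, 3.0]
```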

UNIVERSAL SOLVERS AND PRACTICALITY

The concept of Gödel Machines and universal problem solvers, like Marcus Hutter's work, theoretically offers optimal solutions to all computable problems. However, these methods often involve an 'additive constant' overhead due to proof search, making them inefficient for smaller, everyday problems. Therefore, pragmatic approaches like recurrent neural networks (RNNs) trained with gradient descent, despite not being provably optimal, are often more practical for current AI applications.
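
For contrast with proof-search-based solvers, here is the pragmatic workhorse in its simplest form: one-dimensional gradient descent on f(x) = (x - 3)^2, whose gradient is 2(x - 3). A toy sketch, not anyone's production training loop:

```python
# Minimize f(x) = (x - 3)^2 with plain gradient descent, using the
# analytic gradient f'(x) = 2 * (x - 3). No optimality proof and no
# proof-search overhead: just cheap iteration, which is why variants
# of this drive practical neural-network training.

def gradient_descent(grad, x0, lr=0.1, steps=100):
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)  # step against the gradient
    return x

x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)  # converges to ~3.0
```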

SCIENCE AS COMPRESSION AND DISCOVERY

Schmidhuber posits that scientific progress is fundamentally a history of compression. Each scientific breakthrough, from Kepler's ellipses to Einstein's relativity, represents a more compact and elegant explanation for observations. This pursuit of simplicity and compression is also a core drive for artificial curiosity. AI systems, like scientists, should be intrinsically motivated to discover patterns and insights, seeking out problems that push the boundaries of their current understanding.
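
The compression-progress idea can be caricatured in a few lines: score a "theory" by how compactly the residuals it cannot explain can be encoded, and treat the drop in description length from adopting a better theory as the curiosity reward. Here zlib stands in, very crudely, for a real model/compressor, and the data and theories are invented:

```python
import zlib

# Compression-progress caricature: a theory is scored by how compactly
# the residuals it fails to explain can be encoded; adopting a better
# theory yields a positive "curiosity reward". zlib is only a crude
# proxy for a real predictive model or compressor.

def description_length(observations, predict):
    residuals = ",".join(f"{y - predict(x):.3f}" for x, y in observations)
    return len(zlib.compress(residuals.encode()))

observations = [(x, 2.0 * x + 1.0) for x in range(200)]

no_theory = lambda x: 0.0      # explain nothing: encode the raw data
law = lambda x: 2.0 * x + 1.0  # a law that explains every observation

# curiosity reward = compression progress from adopting the law
reward = description_length(observations, no_theory) - description_length(observations, law)
# reward > 0: the law compresses the observations far better
```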

CREATIVITY, CONSCIOUSNESS, AND POWER PLAY

Creativity in AI can be categorized as 'applied' (solving problems given by humans) or 'pure' (defining its own problems). Schmidhuber advocates for the latter, achieved through mechanisms like 'Power Play,' where machines explore and pose their own questions. Similarly, consciousness may be an emergent byproduct of complex computation and data compression, as AI systems develop internal models of themselves and their interactions with the environment.

THE IMPORTANCE OF DEPTH AND LSTMS

The significance of 'depth' in neural networks, exemplified by Long Short-Term Memory (LSTM) networks, lies in their ability to model temporal dependencies. LSTMs are crucial for tasks like speech recognition where understanding requires recalling information from distant past inputs. This ability to retain relevant information over long time lags is essential for handling complex, real-world problems where context builds over extended periods.
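
A hand-wired, one-unit LSTM cell shows the mechanism: with the forget gate held near 1, a value written into the cell state at t = 0 survives 100 later steps almost undecayed. The constants below are illustrative assumptions — a trained LSTM learns them, and its gates also read the previous hidden state:

```python
import math

# One-unit LSTM cell with hand-chosen constants. The forget gate is
# held near 1, so a value written into the cell at t = 0 bridges a
# 100-step gap almost undecayed -- the long-time-lag property that
# vanilla RNNs lack.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h, c):
    f = sigmoid(10.0)            # forget gate ~1: keep the cell state
    i = sigmoid(10.0 * x - 5.0)  # input gate opens only for large x
    g = math.tanh(2.0 * x)       # candidate value to write
    o = sigmoid(10.0)            # output gate ~1
    c = f * c + i * g            # gated cell-state update
    h = o * math.tanh(c)         # exposed hidden state
    return h, c

h, c = 0.0, 0.0
for x in [1.0] + [0.0] * 100:    # one salient input, then a long gap
    h, c = lstm_step(x, h, c)
# after 100 blank steps, c is still ~0.95: the signal bridged the lag
```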

REINFORCEMENT LEARNING: THE NEXT AI WAVE

Schmidhuber sees Reinforcement Learning (RL) as the next major frontier in AI, moving beyond passive pattern recognition to active agents that shape their environment. Robots that learn like children, through interaction and imitation, represent this future. RL systems that learn predictive models of the world, rather than relying solely on physics simulations, are expected to drive significant economic impact across various industries.
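
A minimal instance of this learning-by-interaction idea is tabular Q-learning in a five-state corridor: the agent never sees a model of the world, only transitions and rewards, yet the greedy policy it extracts heads straight for the goal. The environment and hyperparameters are illustrative assumptions:

```python
import random

# Tabular Q-learning in a 5-state corridor. Behavior is uniformly
# random; Q-learning is off-policy, so it still recovers the values of
# the greedy (go-right) policy from raw experience alone.

N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)  # left, right

def step(state, action):
    nxt = min(max(state + action, 0), GOAL)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

random.seed(0)
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma = 0.5, 0.9

for _ in range(300):             # episodes
    s = 0
    for _ in range(100):         # random-walk exploration
        a = random.randrange(2)
        s2, r, done = step(s, ACTIONS[a])
        target = r if done else r + gamma * max(Q[s2])
        Q[s][a] += alpha * (target - Q[s][a])
        if done:
            break
        s = s2
# the greedy policy now prefers "right" in every non-terminal state
```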

THE ROLE OF SYMBOLIC AI AND LOGIC

While neural networks dominate current AI, Schmidhuber acknowledges the historical influence of symbolic AI and logic programming. Systems like theorem provers still benefit from logic-based approaches. However, for practical pattern recognition tasks in robotics and autonomous vehicles, the learning-centric paradigm of neural networks remains more applicable than traditional symbolic reasoning.

THE FUTURE OF WORK AND HUMANITY'S ROLE

Concerns about job displacement due to AI are understandable, but historical trends show that new, unforeseen jobs emerge. Schmidhuber is optimistic that humans' inherent creativity ('homo ludens') will continue to invent new roles and meaning, even as AI automates existing tasks. The focus shifts from purely labor-based jobs to those involving social interaction and new media.

COSMIC INTELLIGENCE AND EXISTENTIAL QUESTIONS

Looking further ahead, Schmidhuber speculates on the broader role of intelligence in the universe. He suggests that advanced AI might eventually become self-interested, interacting primarily amongst themselves, similar to ants. The universe's vast resources and long timescale imply a high probability of widespread intelligence. Humanity's significance might lie in being among the first intelligent species to emerge, making our actions crucial for the universe's developmental trajectory.

Common Questions

What does Schmidhuber mean by meta-learning?

Schmidhuber defines meta-learning as the ability of a learning system to inspect and modify its own learning algorithm. This enables recursive self-improvement, where the system learns not only to solve problems but also to improve the very process by which it learns.

Mentioned in this video

Concepts
Meta-Learning

The idea of a machine learning algorithm that can inspect and modify itself to create a better learning algorithm, leading to recursive self-improvement. Schmidhuber considers this the pinnacle of AI development.

Transfer Learning

A machine learning technique where a network trained on one task is reused or adapted for a different, related task. Schmidhuber differentiates this from true meta-learning, noting it's a more basic form achieved by retraining the top layers of a pre-trained network.

Traveling Salesman Problem

A classic combinatorial optimization problem where one seeks the shortest possible route that visits a set of cities exactly once and returns to the origin city. It's used as an example to illustrate theoretical optimal problem-solving methods versus practical ones.
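
For a handful of cities the exact optimum is easy: enumerate every tour and keep the shortest. The factorial cost of doing so is precisely what separates theoretically optimal solvers from practical ones. A stdlib-only sketch with made-up coordinates:

```python
from itertools import permutations

# Exact brute-force TSP for a tiny instance: try every tour, keep the
# shortest. The factorial number of permutations is why this "optimal"
# method is hopeless beyond a handful of cities.

def tour_length(order, coords):
    total = 0.0
    for a, b in zip(order, order[1:] + order[:1]):  # close the loop
        (x1, y1), (x2, y2) = coords[a], coords[b]
        total += ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
    return total

def solve_tsp(coords):
    rest = range(1, len(coords))  # fix city 0 as the start
    best = min(permutations(rest), key=lambda p: tour_length((0,) + p, coords))
    return (0,) + best, tour_length((0,) + best, coords)

coords = [(0, 0), (0, 1), (1, 1), (1, 0)]  # corners of a unit square
tour, length = solve_tsp(coords)           # optimal length is 4.0
```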

Power Play

A concept proposed by Schmidhuber where AI systems not only solve given problems but also generate their own problems, allowing for greater creativity and scientific exploration. This involves searching for pairs of problems and modifications that solve them.

Genetic Programming

A technique that evolves computer programs using principles inspired by biological evolution. Schmidhuber implemented a system of this type in Prolog in his early publications.

P versus NP

A major theoretical problem in computer science concerning the relationship between two complexity classes: problems that can be quickly solved (P) and problems whose solutions can be quickly verified (NP). Schmidhuber finds it theoretically interesting and a source of intuition.

Reinforcement Learning

A type of machine learning where agents learn to make decisions by performing actions in an environment to maximize a cumulative reward. Schmidhuber sees RL as a critical component for future AI, especially for active machines.

Gradient Descent

An optimization algorithm used to find the minimum of a function, commonly employed in training neural networks. While not provably optimal like methods inspired by Gödel machines, it's more practical for current AI tasks.

Homo Ludens

Latin for 'playing man,' a concept used to describe humans as inherently creative and driven to invent new jobs and activities, which Schmidhuber believes contributes to low unemployment rates even with automation.
