Stuart Russell

Researcher · Verified via Wikidata

An AI researcher concerned about the alignment problem, who proposes that an AI's utility function should perpetually approximate human desires.

Mentioned in 28 videos

Videos Mentioning Stuart Russell

AI and the New Face of Antisemitism (Ep. 453) FULL EPISODE

Sam Harris

An AI researcher concerned about the alignment problem, who proposes that an AI's utility function should perpetually approximate human desires.

Robin Hanson: Alien Civilizations, UFOs, and the Future of Humanity | Lex Fridman Podcast #292

Lex Fridman

Referenced regarding reinforcement learning, highlighting that simple algorithms combined with sufficient scale can be powerful in AI.

Chris Mason: Space Travel, Colonization, and Long-Term Survival in Space | Lex Fridman Podcast #283

Lex Fridman

Computer scientist who proposes that AI systems should have self-doubt to avoid local optima and ensure better decision-making.

Grimes: Music, AI, and the Future of Humanity | Lex Fridman Podcast #281

Lex Fridman

AI researcher who proposes injecting uncertainty and humility into AI systems to ensure they doubt themselves as they become more intelligent.

Jay Bhattacharya: The Case Against Lockdowns | Lex Fridman Podcast #254

Lex Fridman

AI researcher who suggests building a 'doubt module' into super-intelligent AI to prevent it from destroying humanity, reflecting the importance of humility.

Making Sense of Artificial Intelligence

Sam Harris

Professor of computer science who discusses the value alignment problem and AI safety.

Debating the Future of AI: A Conversation with Marc Andreessen (Episode #324)

Sam Harris

Author of a popular AI textbook, mentioned as a prominent figure with concerns about AI risks.

The REAL potential of generative AI

Y Combinator

Cited for an analogy comparing the potential arrival of AGI to an alien civilization landing on Earth and the urgent need to prepare.

What Do We Know About Our Minds?: A Conversation with Paul Bloom (Episode #317)

Sam Harris

An AI safety expert, mentioned in the context of expectations about cautious AI development, in contrast with the current pace of advancement.

Max Tegmark: The Case for Halting AI Development | Lex Fridman Podcast #371

Lex Fridman

A signatory of the open letter and an influential AI researcher at Berkeley, known for his work on benevolent AI and inverse reinforcement learning.

Max Tegmark: AI and Physics | Lex Fridman Podcast #155

Lex Fridman

Professor from Berkeley and author of a best-selling AI textbook, cited as an outspoken worrier about AI existential risks, countering the 'Luddite' argument.

Peter Norvig: Artificial Intelligence: A Modern Approach | Lex Fridman Podcast #42

Lex Fridman

Co-author with Peter Norvig of the book 'Artificial Intelligence: A Modern Approach'.

The Trouble with AI: A Conversation with Stuart Russell and Gary Marcus (Episode #312)

Sam Harris

Professor of Computer Science at UC Berkeley, author of 'Artificial Intelligence: A Modern Approach' and 'Human Compatible: Artificial Intelligence and the Problem of Control'. He expresses significant concern about long-term AGI risks.

Jeff Hawkins: The Thousand Brains Theory of Intelligence | Lex Fridman Podcast #208

Lex Fridman

Mentioned alongside Elon Musk as someone who expresses worry about existential threats from AI.

Daphne Koller: Biomedicine and Machine Learning | Lex Fridman Podcast #93

Lex Fridman

AI researcher who believes intelligent systems should possess self-doubt to ensure human control.

Turing Test: Can Machines Think?

Lex Fridman

A prominent AI researcher known for his work on AI safety and co-authoring 'Artificial Intelligence: A Modern Approach'.

Michael Littman: Reinforcement Learning and the Future of AI | Lex Fridman Podcast #144

Lex Fridman

AI researcher and author of 'Human Compatible: Artificial Intelligence and the Problem of Control,' a book that influenced discussions of the AI control problem.

How Much Does the Future Matter?: A Conversation with William MacAskill (Episode #292)

Sam Harris

AI researcher whose analogy about not knowing when superintelligent machines will arrive is used to illustrate the urgency of AI safety, and who advocates for integrating safety into AI development.

Anca Dragan: Human-Robot Interaction and Reward Engineering | Lex Fridman Podcast #81

Lex Fridman

A prominent AI researcher and collaborator of Anca Dragan, who advocates interpreting reward functions as evidence of human preferences rather than as rigid specifications.

Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9

Lex Fridman

Professor of Computer Science at UC Berkeley and co-author of 'Artificial Intelligence: A Modern Approach'. He discusses his early AI programming experiences, meta-reasoning, AI safety, and the potential risks of advanced AI.

Marcus Hutter: Universal Artificial Intelligence, AIXI, and AGI | Lex Fridman Podcast #75

Lex Fridman

Co-author of 'Artificial Intelligence: A Modern Approach'.

Eric Schmidt: Google | Lex Fridman Podcast #8

Lex Fridman

Expert who shares views similar to Elon Musk regarding the existential threat of AI.

The Future of Intelligence: A Conversation with Jeff Hawkins (Episode #255)

Sam Harris

An AI researcher whose views on AI risk inform Sam Harris's perspective.

Sergey Levine: Robotics and Machine Learning | Lex Fridman Podcast #108

Lex Fridman

A prominent AI researcher known for his concern that AI systems be aligned with human values.
