Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI | Lex Fridman Podcast #419

Sam Altman discusses a future in which compute becomes the most valuable asset and the race to AGI becomes a power struggle. He reflects on the tumultuous OpenAI board saga, describing it as a mix of chaos and support. Despite the ordeal, he appreciates the outpouring of love and the lessons learned about resilience and board structures.

Compute is gonna be the currency of the future. The road to AGI should be a giant power struggle.

Sam Altman reflects on the challenging but educational board restructuring process at OpenAI, emphasizing the importance of addressing power dynamics and organizational resilience. He recognizes the value in learning from high-stress situations to better prepare for future challenges, especially in the journey towards creating AGI.

Reflecting on board structures, power dynamics, and organizational resilience is crucial for building AGI in a more organized way.

Sam Altman delves into the complexities of board deliberations during high-pressure situations, acknowledging the human dynamics at play and the need for effective decision-making under stress. He highlights the necessity for a board and team capable of navigating pressure-cooker scenarios in the pursuit of AGI.

People understandably make suboptimal decisions under pressure. Operating effectively under pressure will be crucial for OpenAI's success.

The conversation shifts to the newly structured board at OpenAI, with Sam Altman discussing the improvements in board composition to enhance decision-making and operational efficiency. He addresses the need for transparency and accountability to the global community as the organization moves forward.

The new board aims to be more experienced and effective in guiding OpenAI towards its goals.

During a tense weekend, decisions were made on new board members for OpenAI, aiming to form a cohesive group with a mix of expertise and perspectives. The process was intense and crucial for the organization's future stability and growth.

Decisions on new board members were made in the heat of the moment over a rollercoaster weekend.

Selecting board members for OpenAI involved seeking varied expertise to handle governance and strategic decisions rather than just technical proficiency, emphasizing the need for diverse perspectives and skills within the board.

Board members are chosen for expertise in governance, thoughtfulness, and various skills, not just technical proficiency.

The intricacies of managing OpenAI extend beyond technological advancements. Engagement with the board focuses on addressing societal impacts and diverse viewpoints, highlighting the multifaceted nature of decision-making within the organization.

OpenAI's board deliberations encompass societal impacts and diverse perspectives beyond technical aspects.

Amidst challenging moments, Sam Altman reflects on the emotional toll of decision-making processes and reveals the struggles endured during tough times, including contemplating radical personal changes while navigating high-pressure situations.

Sam Altman candidly shares the emotional challenges faced during a turbulent period at OpenAI.

After a tumultuous weekend, Sam Altman swiftly transitions towards acceptance and excitement for new opportunities, showcasing resilience, adaptability, and a forward-looking mindset in the face of unexpected challenges.

Embracing change and new beginnings, Sam Altman swiftly moves forward from adversity with enthusiasm and a positive outlook.

Following a period of uncertainty, Sam Altman experiences a surprising turn of events as board members express regret and offer a chance for his return, leading to a profound internal deliberation on commitment and love for the company.

Board members express regret, leading to a profound internal deliberation for Sam Altman on his return to OpenAI.

During a turbulent weekend, Sam Altman and the OpenAI team faced uncertainty and instability amid external pressures. Despite the hardship, Altman recalls feeling loved, emphasizing that love prevailed over negative emotions.

The dominant emotion of the weekend was love, not hate.

Altman admires Mira Murati's leadership not only during moments of crisis but in the day-to-day routine, highlighting the significance of consistent decision-making and presence in shaping his view of effective leaders.

Leadership is about how people act on a boring Tuesday morning.

While a dramatic weekend unfolded at OpenAI, Altman stresses that the focus should shift to the organization's broader scope over the past seven years, emphasizing the importance of sustained efforts beyond isolated crises.

OpenAI is really about the other seven years.

Addressing Ilya's role and concerns regarding AGI, Altman highlights Ilya's dedication to AI safety and his deep insights into the societal impacts of AI advancements, stressing the importance of meticulous planning and consideration.

Ilya takes AGI and safety concerns very seriously.

Reflecting on lessons learned from a challenging experience, Altman acknowledges a shift in his trust dynamics, moving towards a more cautious approach due to unexpected events, emphasizing the need to balance trust and preparation for worst-case scenarios.

It definitely changed how I think about default trust of people.

Sam Altman discusses the balance between trust and cynicism in developing AGI, highlighting the importance of surrounding oneself with capable and ethical individuals to navigate power and decision-making in the field.

Are you worried about becoming a little too cynical? - I think I'm like the extreme opposite of a cynical person.

The conversation delves into Elon Musk's criticism of OpenAI, reflecting on the organization's evolution from a research lab to a technology powerhouse, triggering disagreements with Musk over strategic direction and control.

Our mutual friend Elon sued OpenAI. What is the essence of what he's criticizing?

Altman sheds light on the differing perspectives regarding Elon Musk's departure from OpenAI, citing conflicting visions on the organization's future and strategic alignment, ultimately leading to a parting of ways.

He thought OpenAI was gonna fail. He wanted total control to sort of turn it around.

Altman reflects on OpenAI's mission of democratizing AI tools, emphasizing the importance of providing powerful technology for free as a public good and fostering innovation through accessibility and inclusivity.

What did it mean to you at the time? What does it mean to you now?

The discussion touches on the implications of Elon Musk's proposed name change to 'ClosedAI,' showcasing the tension between open source principles and strategic dynamics within the AI community.

So he said change your name to ClosedAI and I'll drop the lawsuit.

The conversation concludes with a reflection on the nature of competition, lawsuits, and ethical conduct in the realm of technological innovation, acknowledging the complexities of navigating disagreements in a constructive manner.

I think this whole thing is like unbecoming of a builder.

Sam Altman expresses his disappointment in Elon Musk's decision to pursue lawsuits, emphasizing the desire for friendly competition and admiration for Musk's extraordinary building skills.

He should just make Grok beat GPT and then GPT beats Grok, and it's just a competition, and it's beautiful for everybody.

Transitioning to discussing Sora, Altman highlights the impressive advancements in modeling the world through AI, noting the continuous improvements in models like Sora, showcasing better understanding and representation of the world.

Fundamentally, these models are just getting better and that will keep happening.

Altman delves into the limitations of AI systems like Sora, acknowledging both the need for fundamental changes in approach and the potential for enhancements through scaling and better data.

I think there is something about the approach which just seems to feel different from how we think and learn and whatever.

A discussion begins about the role of human data labeling versus self-supervised learning on internet-scale unlabeled data. The conversation considers the vast amount of internet data available for such methods, sparking curiosity about its potential and about how much detail on the self-supervised process should be released.

Have you considered opening it up a little more, sharing more details?
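
To make the self-supervised idea concrete: the training signal comes from the text itself, with each token predicted from the ones before it, so no human labels are needed. Below is a minimal sketch in PyTorch; the toy corpus and tiny model are illustrative stand-ins, not anything resembling OpenAI's actual pipeline.

```python
# Minimal sketch of self-supervised next-token prediction on unlabeled text.
# The toy corpus and tiny model are illustrative stand-ins, not OpenAI's pipeline.
import torch
import torch.nn as nn

corpus = "compute is going to be the currency of the future "
vocab = sorted(set(corpus))
stoi = {ch: i for i, ch in enumerate(vocab)}
data = torch.tensor([stoi[ch] for ch in corpus])

class TinyLM(nn.Module):
    def __init__(self, vocab_size, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, x):
        h, _ = self.rnn(self.embed(x))
        return self.head(h)

model = TinyLM(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# The "labels" are just the input shifted one position: no human annotation needed.
x, y = data[:-1].unsqueeze(0), data[1:].unsqueeze(0)
for step in range(200):
    loss = loss_fn(model(x).reshape(-1, len(vocab)), y.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
print(f"final loss: {loss.item():.3f}")
```

Internet-scale training is this same objective run over trillions of tokens rather than one sentence.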

The focus shifts to the prospect of leveraging language models to process visual data, highlighting the need for further exploration and development. Concerns arise about the risks associated with releasing advanced AI systems and the importance of ensuring operational efficiency before deployment.

Can the same magic of LLMs now start moving towards visual data?

The conversation navigates towards the ethical and economic aspects of training AI and the implications under copyright law. The discussion contemplates the significance of compensating data creators and the evolving models needed to address these complex issues.

Do people who create valuable data deserve to be compensated?

Exploration continues into the impact of AI on job roles, emphasizing a shift from job-centric replacements to task automation. The dialogue reflects on the progressive evolution of AI tools and their role in enhancing human capabilities and problem-solving at varying time scales.

What percent of tasks will AI do and over what time horizon?

The discussion unfolds around the integration of AI tools in content creation while highlighting the enduring human influence in creative processes. It delves into the symbiotic relationship between AI-driven tools and human ingenuity, envisioning a future where new tools transform creative industries.

Many videos will use AI tools in production, but they'll still be fundamentally driven by a person.

Sam Altman discusses the exponential growth of AI models like GPT-3, emphasizing the need to look beyond current capabilities to ensure a better future.

We need to remember that the tools we have now may seem lacking in the future.

Altman explores the potential of GPT-4 in creative brainstorming and handling longer horizon tasks, showcasing its unique capabilities beyond conventional uses.

GPT-4 serves as a creative brainstorming partner and aids in breaking down complex tasks.

The value of GPT-4 lies not only in its capabilities but also in iterative human interactions, demonstrating its effectiveness when collaborating on multi-step problems.

Iterative back-and-forth interactions enhance GPT-4's problem-solving abilities.

Altman notes the transformative impact of models like ChatGPT in shifting perceptions towards believing in AI advancements and the significance of post-training for models.

ChatGPT marked a turning point in building belief in AI progress.

The discussion delves into the importance of expanding context length in models like GPT-4 Turbo, envisioning a future with significantly broader contextual understanding.

The dream of expanding context to billions of tokens hints at a paradigm shift in AI technology.
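
Until context windows actually reach billions of tokens, applications have to budget them explicitly. Here is a minimal sketch of the usual workaround, counting tokens with the open tiktoken library and evicting the oldest turns first; the 128k figure mirrors GPT-4 Turbo's advertised window, and everything else is illustrative.

```python
# Sketch: keep a chat history inside a fixed context window by evicting the
# oldest turns. The 128k budget mirrors GPT-4 Turbo's advertised window; a
# real system would also reserve room for the model's reply.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

def num_tokens(messages: list[dict]) -> int:
    return sum(len(enc.encode(m["content"])) for m in messages)

def trim_to_context(messages: list[dict], budget: int = 128_000) -> list[dict]:
    """Drop the oldest messages until the history fits the token budget."""
    msgs = list(messages)
    while len(msgs) > 1 and num_tokens(msgs) > budget:
        msgs.pop(0)  # evict the oldest turn first
    return msgs

history = [{"role": "user", "content": "Summarize our conversation so far."}]
history = trim_to_context(history)
```

A billion-token window of the kind Altman dreams about would make this bookkeeping, and many retrieval tricks, largely unnecessary.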

Altman highlights the diverse applications of GPT-4, particularly focusing on how it becomes the go-to tool for various knowledge work tasks, demonstrating its versatility and reliability.

Users leverage GPT-4 as their default tool for a wide range of knowledge work tasks, showcasing its adaptability and effectiveness.

GPT-4, a powerful tool, is favored by many for various knowledge tasks, from coding to editing papers. Users find it inspirational and more nuanced than Wikipedia on well-covered topics, encouraging deeper thinking.

The most interesting to me is the people who just use it as the start of their workflow.

Concerns arise regarding GPT generating convincing but potentially fake content. Fact-checking remains crucial; improvements are planned, but continued vigilance is required.

How do you ground it in truth?
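
One widely used answer to grounding is retrieval-augmented generation: fetch trusted sources first and force the model to answer from them with citations. The podcast does not commit OpenAI to any particular approach, so treat this as a generic sketch; the keyword retriever below stands in for the vector search a real system would use.

```python
# Sketch of retrieval-augmented generation, one common grounding technique.
# The keyword retriever stands in for the vector search a real system uses;
# nothing here is OpenAI's stated approach.
def retrieve(query: str, documents: list[str], k: int = 3) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    def score(doc: str) -> int:
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(documents, key=score, reverse=True)[:k]

def grounded_prompt(question: str, documents: list[str]) -> str:
    """Build a prompt that forces the model to answer from cited sources."""
    sources = retrieve(question, documents)
    cited = "\n".join(f"[{i}] {s}" for i, s in enumerate(sources))
    return (
        "Answer using ONLY the sources below and cite them by number.\n"
        f"Sources:\n{cited}\n\nQuestion: {question}"
    )

docs = ["GPT-4 was released in March 2023.", "Sora is a video generation model."]
print(grounded_prompt("When was GPT-4 released?", docs))
```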

Discussion shifts to journalism relying on tools like GPT-4, highlighting the importance of quality reporting over speed, urging a shift towards in-depth, balanced journalism.

Journalistic efforts that take days and weeks, and rewards for great in-depth journalism.

The conversation turns to AI agents that build up a user's memories over time, with the aspiration that AI will enhance user experiences based on accumulated knowledge.

This is an early exploration.
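
The memory idea can be pictured as a small per-user store that the assistant reads before answering and appends to afterwards. This sketch is a guess at the shape of the idea, not OpenAI's design; note that deletion is one call away, anticipating the user-choice point that follows.

```python
# Illustrative per-user memory store: facts persist across sessions and are
# surfaced as context for the next conversation. A guess at the shape of the
# idea discussed, not OpenAI's implementation.
import json
from pathlib import Path

class UserMemory:
    def __init__(self, user_id: str, root: Path = Path("memories")):
        root.mkdir(exist_ok=True)
        self.path = root / f"{user_id}.json"
        self.facts = json.loads(self.path.read_text()) if self.path.exists() else []

    def remember(self, fact: str) -> None:
        self.facts.append(fact)
        self.path.write_text(json.dumps(self.facts))

    def forget_all(self) -> None:
        # Deletion must be one call away: the "user choice" Altman emphasizes.
        self.facts = []
        self.path.unlink(missing_ok=True)

    def as_context(self) -> str:
        return "\n".join(f"- {f}" for f in self.facts)

mem = UserMemory("lex")
mem.remember("Prefers long-form, in-depth answers.")
print(mem.as_context())
```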

Balancing AI effectiveness and user privacy becomes a key topic. The emphasis is on user choice in data retention, with transparency and control paramount to address privacy concerns.

The right answer there is just user choice.

Privacy considerations highlight the need for transparency and control over personal data stored by AI, allowing users to decide on the privacy-utility trade-off that suits them best.

But I think the answer is just like really easy user choice.

The conversation delves into the importance of trust and intuition and the corrosive effect of distrust in high-stress environments. It highlights the value of learning from challenging situations and of letting love, rather than resentment, energize personal growth.

And that's a concern, that's a human concern.

Discussion shifts towards the computational capabilities of GPT and the need for a paradigm shift in AI for slower, deeper thinking processes to address complex problems effectively.

I think there will be a new paradigm for that kind of thinking.

The intriguing conversation touches on the mysterious Q-Star project, hinting at secretive research endeavors without divulging details, sparking curiosity and speculation.

It's very mysterious, Sam.

Exploration of the iterative deployment strategy at OpenAI emphasizes the importance of gradual progress and preparedness in the AI landscape, focusing on thoughtful advancements over sudden shocks.

Our goal is not to have shock updates to the world.

The uncertain release date of GPT-5 prompts discussions on the iterative unveiling of AI models and the challenges in determining the optimal release strategy for future innovations.

But people tend to like to celebrate, people celebrate birthdays.

The discussion delves into the upcoming GPT-5 and the challenges involved in its development, encompassing aspects like computing power, technical innovation, and combining various elements into a cohesive platform. There's a focus on the distributed nature of innovation at OpenAI, multiplying medium-sized components to build a powerful whole.

We multiply 200 medium-sized things together into one giant thing.

The conversation shifts to the importance of zooming out to gain a broader perspective in problem-solving and innovation, emphasizing the value of understanding the entire ecosystem rather than just individual components. Sam shares insights on the evolving landscape of technology and the benefits of seeing connections across different frontiers.

It's sometimes useful to zoom out and look at the entire map.

The dialogue transitions to a discussion on the future of computing as a crucial global commodity, likening it to energy in terms of importance. The focus is on the increasing demand for computing power and the challenges associated with energy supply, data center construction, and chip manufacturing.

Compute is gonna be the currency of the future.

The conversation explores the potential of nuclear fusion as a solution to the energy puzzle, emphasizing the need for innovative approaches to reactor design. Sam laments how public fear and misconceptions have held nuclear energy back and hopes society returns to it in a meaningful way.

It's really sad to me how the history of that went and hope we get back to it in a meaningful way.

The discussion touches on the perceived risks associated with AI, including politicization and the potential for theatrical sensationalism. It highlights the importance of managing public perception and addressing concerns to ensure a balanced and informed dialogue around artificial intelligence.

AI is gonna have tremendously more good consequences than bad ones, but it is gonna have bad ones.

Sam Altman discusses the impact of AI risks, emphasizing that while AI can bring about tremendous good, it also poses bad consequences. He notes that our focus tends to gravitate towards dramatic risks over gradual but serious concerns like air pollution.

The ones that make a good climax scene of a movie carry much more weight with us than the ones that are very bad over a long period of time.

Altman and Lex Fridman delve into the importance of truth in understanding risks associated with AI, highlighting the need for AI to aid in seeing the truth and balancing the understanding of potential dangers and benefits.

Hopefully AI can help us see the truth of things to have a balance to understand what are the actual risks, the actual dangers of things in the world.

The conversation shifts to the implications of competition in the AI space, where Altman voices concerns about potential escalations leading to an arms race, expressing a preference for a safer quadrant in the journey towards AGI.

Short timelines, slow takeoff is the safest quadrant and the one I'd most like us to be in.

Altman and Lex Fridman touch upon the importance of collaboration in ensuring AI safety, expressing hopes for combined efforts rather than siloed approaches, highlighting the significance of unity in addressing AI risks.

Collaboration here, I think, is really beneficial for everybody on that front.

The discussion transitions to the evolving landscape of search engines, with Altman emphasizing the need for innovative approaches beyond merely replicating Google's search model, focusing on enhancing information retrieval and synthesis for user benefit.

The thing that's exciting to me is not that we can go build a better copy of Google Search, but that maybe there's just some much better way to help people find and act on and synthesize information.

Sam Altman shares concerns about ad-supported platforms in a world with AI, discussing the potential for AI to improve ad relevance. He reflects on Wikipedia's ad-free model, hinting at OpenAI's sustainable business model without ads.

AI will be better at showing the best kind of ads for things you actually need.

Altman delves into safety and bias issues in AI, referencing the recent Gemini 1.5 release drama. He emphasizes the importance of addressing bias within models to prevent ideological influences.

We work super hard not to introduce bias in AI.

The discussion shifts to creating clear guidelines for AI behavior to enhance transparency and ensure models align with desired principles. Altman stresses the need for unambiguous public expectations for AI.

Models should adhere to clear guidelines for expected responses.

Altman addresses ideological influences in tech companies, highlighting OpenAI's focus on AGI beliefs over cultural divisions. He emphasizes the company's dedication to safety and minimized involvement in cultural conflicts.

OpenAI prioritizes its beliefs about AGI over culture wars.

Altman discusses the evolving emphasis on safety within OpenAI, stressing the collective responsibility of the company to consider safety aspects in all facets of AI development.

The whole company needs to focus on AI safety.

Looking ahead to GPT-5, Altman expresses excitement about its overall improvement and intelligence enhancement without trade-offs. He anticipates GPT-5's broad advancement across various domains.

GPT-5 is getting better across the board.

The leap from GPT-4 to GPT-5 is expected to bring improvement across the board rather than isolated advancements. The progress can be likened to a deepening of understanding beyond raw intelligence, where prompts are grasped on a more profound level.

It's getting better across the board. It's not just about intelligence; it's about understanding.

In the future, programming is envisioned to evolve towards natural language, potentially changing the skill sets required. The shift towards using natural language interfaces alongside coding tools is seen as a significant development.

Some people may entirely program in natural language, changing the nature of programming.
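
What programming in natural language looks like in practice today can be sketched with the OpenAI Python client; the model name and prompts below are placeholders illustrating the workflow, not a recommended setup.

```python
# Sketch: natural language in, code out, via the OpenAI Python client.
# Model name and prompts are placeholders illustrating the workflow,
# not a recommended production setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

spec = "Write a Python function that returns the n-th Fibonacci number."
response = client.chat.completions.create(
    model="gpt-4",  # placeholder; any capable code model works
    messages=[
        {"role": "system", "content": "You are a careful programmer. Reply with code only."},
        {"role": "user", "content": spec},
    ],
)
print(response.choices[0].message.content)
```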

The discussion touches on humanoid robot development by OpenAI and the importance of embodied AI. The aspiration to integrate physical bots alongside AGI is highlighted for a more comprehensive impact beyond mere algorithms.

The goal is to have robots or physical world agents to complement AGI and enable real-world interactions.

Speculations about achieving AGI are approached cautiously, focusing on building advanced systems capable of remarkable feats by the end of the decade. The evolving capabilities of systems are seen as milestones in technological progress.

By the end of the decade, we expect to have highly capable systems that are truly remarkable.

AGI's impact is viewed through the lens of accelerating scientific discovery globally. Enhancing the rate of scientific breakthroughs is deemed vital for economic growth, emphasizing the importance of technological progress for societal advancement.

When a system significantly boosts scientific discovery rates, it marks a major achievement in the world.

Sam Altman expresses his frustration with skepticism towards science, highlighting the importance of scientific progress and the incredible possibilities it entails, including the development of Artificial General Intelligence (AGI).

I don't like the skepticism about science in recent years.

Altman discusses hypothetical interactions with AGI, emphasizing the complexities of what questions to ask and the limitations of expecting immediate profound answers from the first AGI.

It's surprisingly difficult to say what I would ask that first AGI.

Altman delves into the governance and power dynamics within OpenAI, advocating for distributed control to prevent any individual, including himself, from having ultimate authority over AGI.

I think you want a robust governance system.

The discussion shifts to the balance of power in AGI development, with Altman expressing the belief that no single person should wield control over AGI to ensure a responsible and collaborative approach.

Balance of power is a good thing for sure.

Altman addresses concerns about losing control of AGI: while it is not his primary worry at the moment, he stresses the importance of continuous dedication to AI safety given the diverse challenges ahead.

It's not my top worry right now.

Sam Altman discusses the evolution of capitalization in text communication, highlighting how his online upbringing influenced his views. He reflects on the shifting norms of formal and informal writing styles, ultimately questioning the significance of capitalization in modern communication.

I think capitalization has gone down over time... it's sort of like a dumb construct that we capitalize the letter at the beginning of a sentence and of certain names and whatever.

The conversation shifts to the topic of simulated worlds generated by AI, sparking a discussion on the simulation hypothesis. Sam Altman acknowledges the potential impact of AI-generated realities on beliefs about living in a simulation, hinting at a transformative perspective on reality.

I think the fact that we can generate worlds should increase everyone's probability... openness to it... I was certain we would be able to do something like Sora at some point.

Sam Altman delves into the profound insights AI can offer, drawing parallels between AI advancements and psychedelic experiences. He envisions AI as a gateway to new realms of knowledge and perspectives, suggesting a shift in how people perceive the simulation hypothesis.

AI will serve as those kinds of gateways... to another way to see reality... any version of the simulation hypothesis is maybe more likely than they thought before.

Excited about an upcoming journey to the Amazon jungle, Sam Altman expresses a mix of anticipation and apprehension due to the dangers of the natural environment. He reflects on the awe-inspiring intricacies of nature and the jungle's role as a prime example of the evolutionary machine.

It's the machine... the machine of nature... it's just like this system that just exists and renews itself.

Reflecting on the evolutionary machine that continuously renews itself, Sam Altman contemplates the complexity and beauty of human existence spawned from the depths of evolution. Expressing gratitude for the conversation, he dives into the possibility of intelligent alien civilizations, pondering the enigma of the Fermi Paradox and the nature of intelligence beyond conventional measures like IQ tests.

It's the machine... it makes you appreciate this human thing... it's most clearly on display in the jungle. I deeply want to believe that the answer is yes.

Delving into the future of humanity, Sam finds inspiration in the trajectory of human progress, acknowledging past shortcomings while looking forward to a collectively forged better future. He discusses the concept of Artificial General Intelligence (AGI) as a collaborative societal scaffold rather than an individual genius, emphasizing the collective effort that propels advancements, instilling hope for what lies ahead.

Just the trajectory of it all that we're together pushing towards a better future... we all created that and that fills me with hope for the future.

In a contemplative moment, Sam reflects on mortality with a sense of curiosity and gratitude for life's experiences. Acknowledging the transient nature of existence, he expresses a profound appreciation for the wonders of life and human creations, signifying a humble acceptance of the unknown.

What an interesting time. But I would mostly just feel very grateful for my life... It's a pretty awesome life.