OpenAI Co-Founder on the AI Race, the Sam Altman Firing, and What Comes Next
Key Moments
OpenAI's co-founder reveals the company shifted from a nonprofit to a for-profit model to fund the immense compute required for AGI, and that the recent board turmoil stemmed from a lack of communication and differing values.
Key Insights
The original technical plan for OpenAI, formulated in 2015, focused on three steps: solving reinforcement learning, unsupervised learning, and gradually learning more complicated things.
OpenAI transitioned from a nonprofit to a for-profit entity in 2017 due to the realization that significant compute power and large data centers were necessary for AGI development, exceeding nonprofit fundraising capabilities.
The Dota project was instrumental in demonstrating that massive compute scaled with simple algorithms could achieve human-level performance in complex, unpredictable environments, even with a neural network comparable in size to an insect's brain.
In the November 2023 incident, Greg Brockman resigned the day he was informed of Sam Altman's firing and his own removal from the board, immediately planning to start a new company with Altman.
OpenAI guards against 'distillation' — competitors copying its advances by training on the model's outputs — by withholding intermediate reasoning steps and other model internals that aren't needed to serve users.
The critical challenge of compute scarcity means society must prioritize which problems to solve, with OpenAI balancing broad access (free tier of ChatGPT) against deep problem-solving (like potential cancer research data centers).
The AI mission as a life's calling
Greg Brockman's journey to co-founding OpenAI stemmed from a desire to dedicate his life to a mission he felt was crucial for humanity's future: advancing Artificial Intelligence. While he found Stripe an important company, he felt the problem they were solving wasn't his personal calling. The clarity of AI's potential impact led him to explore this path. His initial conversations with Sam Altman in 2015, even as Brockman was considering leaving Stripe, quickly solidified their shared interest in an AI venture. They recognized the immense challenge of starting a research lab in a field dominated by giants like DeepMind, but found no definitive reason why it was impossible, spurring them to action.
The Napa offsite and the founding vision
To overcome the 'symmetry problem' and solidify a founding team, Brockman organized an offsite in Napa for researchers he had identified. Before any official offers or structures were in place, this gathering became pivotal. The team, including luminaries like Ilya Sutskever, developed what Brockman describes as the technical blueprint that OpenAI has largely followed for the past decade: 1) solve reinforcement learning, 2) solve unsupervised learning, and 3) gradually learn more complicated things. This foundational meeting, fueled by shared ideas and a clear mission, set the stage for OpenAI's early endeavors.
The necessity of a for-profit pivot for AGI
By 2017, OpenAI began to understand the sheer scale of computational resources required to achieve AGI. The 'math on compute' revealed an insatiable need for powerful hardware and vast data centers. Brockman identified companies like Cerebras, whose unique hardware promised capabilities far beyond current projections. Fundraising for these immense requirements proved to be a significant hurdle for a nonprofit structure. Recognizing this limitation, key figures including Elon Musk, Sam Altman, and Ilya Sutskever agreed that transitioning to a for-profit entity was the only viable path to secure the necessary capital and resources to fulfill OpenAI's mission of building AGI.
Milestones: From Dota to GPT-4
Brockman recounts several key moments that signaled OpenAI's progress. The Dota project, initially an attempt to develop scalable reinforcement learning methods, unexpectedly demonstrated that massive compute with relatively simple algorithms could rival human intuition in complex, unrestricted environments. This success, achieved with a neural network comparable in size to an insect's brain, raised the profound question of what could be achieved at human brain scale. The GPT series, particularly GPT-4, brought a different kind of realization: the AI was so fluent and capable that distinguishing it from AGI became increasingly difficult, blurring the lines of what was previously thought possible.
The Sam Altman firing and immediate resignation
In November 2023, Brockman learned of Sam Altman's unceremonious firing via a video call with the board, where he was also informed of his own removal from the board. Without substantial reasons provided, he immediately saw the situation as 'not right' and, after a brief conversation with his wife, resigned from OpenAI that same day. This act of solidarity, alongside other key collaborators, was the first step in planning a new venture with Altman and their core team.
Project Phoenix: Loyalty and a bid to return
Following their resignations, Brockman, Altman, and key personnel began sketching out plans for a new company. The extent to which OpenAI employees sided with Altman and Brockman was a 'real honest surprise.' This loyalty manifested in a collective desire to stay together, even as competitors vied to poach talent. The group engaged in negotiations with the OpenAI board, holding out hope for a return. However, when the board appointed a new interim CEO, the employee exodus intensified, leading to what Brockman describes as 'real chaos' and a commitment to forging ahead with the new company, buoyed by a massive wave of support.
The profound impact of compute scarcity
The exponential growth and increasing complexity of AI models have created a critical bottleneck: compute. Brockman emphasizes that current fleets of GPUs number in the hundreds of thousands or millions, a scale far too small for the world's burgeoning AI ambitions. This scarcity forces difficult societal decisions about how to allocate compute. OpenAI's strategy involves making the technology widely available through a free tier, while also pursuing ambitious projects that require immense resources, such as potential dedicated data centers for tackling complex problems like cancer. He notes that the company's early, heavily criticized investment in data centers has become a significant advantage that competitors now lack.
Iterative deployment and societal resilience
OpenAI's approach to releasing AI technology is through 'iterative deployment.' Instead of a single, massive launch, they release intermediate versions, allowing society and the company to adapt and learn. This strategy, tested with GPT-3, revealed unexpected misuses like medical spam, which the team then addressed. Brockman emphasizes that this is not an excuse for recklessness but a practical method for real-world learning amidst unprecedented technological advancement. He also highlights the importance of building societal resilience to AI, drawing parallels to how society adapted to electricity and cars, by developing safety standards, infrastructure, and educational initiatives. This includes regulatory considerations for privacy, privilege, and ensuring broad distribution of AI's economic benefits.
AI as empowerment and the future of work
Addressing fears about job displacement, Brockman posits that AI will be a tool of empowerment and human agency, not just job replacement. While acknowledging the disruption, he stresses that AI will create new opportunities, enabling individuals to become builders and creators. He advises young people to focus on developing skills that leverage AI, seeing the future as one where people manage agents and become CEOs of AI corporations. While the transition to a 'computed economy' will be different, he believes AI will 'lift up everyone,' facilitating advancements in areas like personalized medicine, where every individual could have access to an AI doctor.
Common Questions
How did OpenAI get started? OpenAI began in 2015, when co-founders Sam Altman and the speaker recognized both the potential of AI and the need for an independent research lab. They gathered an initial group of researchers and developed a technical plan centered on reinforcement learning and unsupervised learning.
Topics
Mentioned in this video
Mentioned as the speaker's previous startup, where the problem being solved wasn't his core passion.
The central subject of the discussion, detailing its founding, mission, internal conflicts, and future trajectory.
Mentioned as a major competitor at the time of OpenAI's founding, possessing significant resources and talent.
A company building unique computing hardware that OpenAI considered for AGI development.
An AI-powered notepad for meetings, advertised as a tool to improve note-taking and focus.
An electrolyte drink advertised for maintaining focus and performance throughout the day.
Mentioned implicitly through Satya Nadella's role.
Implicitly involved through the discussion of GPUs and compute.
Key figure in the founding of OpenAI, involved in convincing the speaker to start an AI company, and later fired as CEO. The conversation revolves around his role in founding and the subsequent crisis.
Co-founder of OpenAI, who agreed with the decision to create a for-profit entity to achieve the mission.
Co-founder of OpenAI, a key collaborator in its founding and development. His departure was a difficult moment for the speaker. He is also mentioned as advocating for the company to come back together.
Mentioned as a potential early team member of OpenAI who ultimately went to Google Brain.
Mentioned as an early member who joined OpenAI and was instrumental in developing the technical plan.
CEO of Microsoft, discussed in the context of potentially supporting the new endeavor after Altman's firing.
A significant AI achievement by DeepMind, mentioned as a precursor to its dominance in the AI field.
A significant advancement in the GPT series, highlighting the difficulty in definitively defining when AGI is achieved.
Used to circulate a petition after the firing incident; it crashed under the high number of signatories.
Cited as an example of AI making development processes faster and as a tool that has saved lives.
Mentioned as a tool that has revolutionized software engineering and enables anyone to build apps.
Advertised as a tool for creating professional videos quickly, using realistic avatars and AI translation.