OpenAI Flip-Flops and '10% Chance of Outperforming Humans in Every Task by 2027' - 3K AI Researchers

AI Explained
Science & Technology · 5 min read · 23 min video
Jan 12, 2024 | 146,635 views | 4,033 | 781

TL;DR

OpenAI shifts on engagement, AI researcher survey predicts superintelligence by 2047, and data battles loom.

Key Insights

1. OpenAI's GPT store initially planned to monetize through user engagement, a shift from the company's previous stance against maximizing screen time.

2. The GPT store has practical limitations: many custom GPTs do not significantly outperform GPT-4 on specific tasks, though Consensus GPT is a notable exception for research.

3. OpenAI is developing memory capabilities for GPT-4, allowing it to retain information and preferences across conversations, potentially making AI more addictive and personalized.

4. A survey of 2,778 AI researchers indicates a 10% chance of AI outperforming humans in all tasks by 2027 and a 50% chance by 2047.

5. There is a significant discrepancy between researchers' predictions for human-level machine intelligence (2047) and for the full automation of labor (2100s), possibly due to question framing or expected scientific disruption.

6. A majority of AI researchers (53%) consider an intelligence explosion within five years at least plausible, which would rapidly accelerate technological progress.

7. The battle for quality AI training data is intensifying, with tech giants like OpenAI, Google, and Apple competing for publisher content, raising questions about the future of independent journalism.

OPENAI'S EVOLVING STRATEGY ON USER ENGAGEMENT

OpenAI's recent announcement regarding the GPT store revealed a pivot in their monetization strategy. Initially, the plan was to pay builders based on user engagement with their custom GPTs, a move that aligns with maximizing usage and screen time. This starkly contrasts with previous statements by Sam Altman, who expressed concerns about engagement-maximizing business models and even suggested that less product usage was preferable due to GPU limitations. This shift raises questions about the company's core principles concerning user attention and the potential for addictive AI applications.

THE GPT STORE'S PRACTICAL PERFORMANCE AND EXCEPTIONS

Early testing of the GPT store reveals mixed results for custom GPTs. Many custom GPTs, despite their advertised capabilities, failed to significantly outperform established models like GPT-4 for specific tasks such as precise word counts. However, the Consensus GPT emerged as a notable exception, providing genuine value by surfacing relevant links for follow-up research, thus surpassing GPT-4 for that particular use case. This indicates that while the store offers potential, many current offerings are not yet revolutionary.

ADVANCEMENTS IN GPT MEMORY AND PERSONALIZATION

A less prominent but potentially significant development from OpenAI involves GPT-4's ability to learn from and remember conversations. This feature, allowing users to reset or disable memory, could lead to more personalized and addictive AI experiences. By remembering user preferences, project details, and even personal information, AI could become a more integrated and indispensable assistant, blurring the lines between a tool and a personal companion. This capability raises ethical considerations about data privacy and the potential for emotional dependency on AI.

AI RESEARCHER SURVEY: TIMELINES FOR SUPERINTELLIGENCE AND AUTOMATION

A recent survey of 2,778 AI researchers presents startling predictions about the future of artificial intelligence. In aggregate, respondents estimate a 10% chance of unassisted machines outperforming humans in every task by 2027, rising to a 50% probability by 2047. Both figures are markedly earlier than last year's survey, which put the 50% mark at 2060, suggesting researchers now expect a faster pace of development. The results highlight significant concern that AI could surpass human capabilities across virtually all domains far sooner than commonly assumed.
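The two quantiles above can be turned into an illustrative forecast curve. As a toy exercise (my own sketch, not part of the survey), assume the aggregate "year machines outperform humans in every task" follows a log-normal distribution in years after the survey, and fit it to the two reported points: 10% by 2027 and 50% by 2047.

```python
import math

def fitted_cdf(year, survey_year=2024):
    """Toy log-normal CDF fitted so that P(by 2027) = 0.10 and
    P(by 2047) = 0.50, the two quantiles reported in the survey.
    The distributional choice is an illustrative assumption."""
    z10 = -1.2815515655446004   # 10th-percentile z-score of a standard normal
    t10, t50 = 3.0, 23.0        # 2027 and 2047, in years after 2024
    mu = math.log(t50)          # log-normal median parameter
    sigma = (mu - math.log(t10)) / (-z10)
    t = year - survey_year
    if t <= 0:
        return 0.0
    z = (math.log(t) - mu) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# The fitted curve reproduces the two survey quantiles exactly and lets
# us read off the implied probability for intermediate years:
print(round(fitted_cdf(2027), 3))  # ~0.10 (fitted quantile)
print(round(fitted_cdf(2047), 3))  # ~0.50 (fitted quantile)
print(round(fitted_cdf(2037), 3))  # implied probability at the midpoint year
```

Under this (assumed) distribution, the implied probability by 2037 lands a little above one third, which shows how front-loaded the aggregate forecast is.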

DISCREPANCIES IN AUTOMATION PREDICTIONS AND THEIR IMPLICATIONS

A puzzling finding from the AI researcher survey is the vast difference between predictions for high-level machine intelligence (achieving all tasks better and more cheaply than humans by 2047) and the full automation of labor (predicted for the 2100s). This gap suggests that while AI might reach human-level cognitive capabilities relatively soon, the physical and logistical challenges of widespread automation—such as manufacturing advanced robots—are perceived as significantly more distant. Researchers themselves anticipate their field being fully automated around 2063, further highlighting this temporal disconnect.

THE GROWING CONCERN OVER AI SAFETY AND INTELLIGENCE EXPLOSIONS

The survey also underscores strong demand for increased AI safety research, with 70% of respondents saying it should be prioritized more than it currently is. A majority (53%) of AI researchers consider it at least plausible that an intelligence explosion could occur within five years, with AI accelerating its own development in a compounding feedback loop. Such a loop could lead to a 'proto-Singularity' of unprecedented and rapid technological change. A substantial 86% of researchers also expressed worry about the proliferation of deepfakes, indicating widespread awareness of AI's potential negative societal impacts.
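The "feedback loop" intuition behind an intelligence explosion can be made concrete with a deliberately simple toy model (my own illustration, not from the survey or the video): if each doubling of AI capability takes time inversely proportional to the current capability level, the total time needed for arbitrarily many doublings converges to a finite bound.

```python
def doubling_schedule(base_period=5.0, doublings=30):
    """Toy model of recursive self-improvement: the first capability
    doubling takes `base_period` years, and each later doubling is
    twice as fast because the (now doubled) AI does the research.
    Returns the cumulative elapsed time at each doubling.
    Both parameters are illustrative assumptions."""
    t, capability = 0.0, 1.0
    schedule = []
    for _ in range(doublings):
        t += base_period / capability  # research speeds up with capability
        capability *= 2.0
        schedule.append(t)
    return schedule

times = doubling_schedule()
# Even after 30 doublings (a billion-fold capability gain), elapsed time
# never exceeds 2 * base_period = 10 years: a finite-time "explosion".
print(times[0], times[-1])
```

The geometric series 5 + 2.5 + 1.25 + ... sums toward 10, so in this toy world almost all of the capability gain arrives in a brief final burst, which is the qualitative pattern the surveyed researchers are weighing.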

THE STRUGGLE FOR CONTROL OVER TRAINING DATA

The competition for high-quality data is intensifying among major tech players like OpenAI, Google, and Apple. These companies are actively negotiating with publishers for content to train their AI models. While OpenAI and Google offer substantial annual fees, Apple is reportedly pursuing broader rights to use content for any future AI product development, including imitating journalistic styles or creating personalized news models. This intense data acquisition race raises critical questions about the future financial sustainability of independent journalism and the potential for news aggregation platforms to dominate information dissemination.

RECONCILING OPENAI'S GOALS WITH JOURNALISM'S FUTURE

OpenAI's pursuit of AGI and superintelligence, aiming to capture and redistribute vast wealth, presents a paradox when juxtaposed with their stated goal of supporting a sustainable future for journalism. The potential emergence of trillion-dollar AI economies could overshadow or subsume independent media outlets. Sam Altman's vision of wealth redistribution, while potentially beneficial, needs clearer articulation regarding its impact on sectors like journalism, especially when AI models might eventually perform tasks traditionally done by human journalists, blurring the lines between informational tools and job replacements.

EMERGING DEBATES ON AI AS TOOLS VERSUS REPLACEMENTS

The ongoing discussion around whether AI serves as a tool or a replacement for human labor is a central theme. While advancements in AI capabilities make them better tools, they also move closer to replacing human workers. OpenAI's stated objective of building superintelligence and its potential to automate economically valuable work highlights this tension. Figures like Andrej Karpathy advocate for 'intelligence amplification'—tools that empower humans—rather than superintelligent entities designed to replace them, sparking an important debate about the ultimate purpose and alignment of AI development.

AI Researcher Predictions on Superintelligence and Automation Timelines

Data extracted from this episode

AI Milestone | 50% Chance (Current Survey) | 10% Chance (Current Survey) | 50% Chance (Previous Year's Survey)
Machines outperforming humans in every task | 2047 | 2027 | 2060
Full automation of all human jobs | 2100s | N/A (implied later) | N/A (implied later)

Confidence in Intelligence Explosion Feedback Loop

Data extracted from this episode

Likelihood | Percentage of AI Researchers (2023)
Even chance | 24%
Likely | 20%
Quite likely | 9%
Total believing it's possible/likely | 53%

AI Researcher Views on Deepfakes

Data extracted from this episode

Concern Level | Percentage of Researchers
At least a substantial concern | 86%
Not a substantial concern | 14%

Prioritization of AI Safety Research

Data extracted from this episode

Opinion | Percentage of Respondents
Should be prioritized more | 70%

Common Questions

Q: How does OpenAI plan to monetize custom GPTs?
A: OpenAI plans to pay builders based on how much their GPTs are used, a shift from previous stances that avoided maximizing engagement.
