Anthropic’s $30B Ramp, Mythos Doomsday, OpenClaw Ankled, Iran War Ceasefire, Israel's Influence
Key Moments
Anthropic's new AI model, Mythos, can find decades-old security vulnerabilities, prompting a 100-day remediation initiative before its release. While some see the move as fear-mongering, others argue it is a necessary precaution before the model becomes widely available.
Key Insights
Anthropic's Mythos AI model autonomously found thousands of vulnerabilities, including a 27-year-old bug in the critical infrastructure software OpenBSD and a 16-year-old flaw in FFmpeg.
Anthropic has established 'Project Glass Wing,' an AI-driven cyber coalition involving major tech companies like Apple, Microsoft, and Google, to identify and fix software vulnerabilities over 100 days.
Brad Gerstner highlights Anthropic's revenue ramp as the fastest ever, reaching a $30 billion run rate, with over a thousand enterprises paying more than $1 million annually.
Peter Steinberger of OpenClaw claims Anthropic is 'ankling' its competitor by restricting its $200/month subscriptions from use with OpenClaw and directing users to expensive APIs, a move potentially seen as anti-competitive bundling.
Anthropic's revenue run rate has achieved unprecedented growth, reaching $30 billion annually by April, a significant increase from $1 billion at the end of 2024.
Naftali Bennett, former Israeli Prime Minister, tweeted concerns about Israel's waning popularity in the US, suggesting a need to address the situation and improve its image, potentially by supporting a ceasefire.
Mythos: A powerful AI model with significant security implications
Anthropic has withheld its new AI model, Mythos, citing its advanced capabilities in identifying security vulnerabilities. The model reportedly discovered thousands of flaws that had gone undetected for decades, including a 27-year-old vulnerability in OpenBSD and a 16-year-old bug in FFmpeg. In response, Anthropic launched 'Project Glass Wing,' a 100-day initiative with major tech companies to use AI to find and fix these vulnerabilities before they can be exploited. This cautious approach contrasts with Silicon Valley's typical 'move fast and break things' mantra, and investors like Brad Gerstner praise it as responsible disclosure. Others, like Chamath Palihapitiya, are skeptical, viewing it as potential theater or a marketing tactic and drawing parallels to previous AI model releases that stoked fears that never materialized.
The debate over AI's potential for harm versus benefit
The discussion around Mythos highlights a broader debate on the dual nature of advanced AI. While the model's ability to find vulnerabilities could bolster cybersecurity, it also raises concerns about its potential misuse in offensive cyber warfare. Some panelists, like Jason, argue that the capabilities demonstrated by Mythos are more 'on the legitimate side,' providing a crucial one-time period for companies to identify and patch dormant vulnerabilities. Conversely, Chamath Palihapitiya remains skeptical, pointing to Anthropic's history of 'scare tactics' and arguing that sophisticated hackers might already possess similar capabilities using existing models like Opus. He suggests that the scale of discovered vulnerabilities, if as dire as presented, would necessitate shutting down the internet for years to patch, which he deems an unlikely outcome given market pressures.
Anthropic's explosive revenue growth and market dominance
Anthropic is experiencing an unprecedented revenue ramp, reaching an annual run rate of $30 billion, with over 1,000 enterprise customers each spending more than $1 million annually. This growth trajectory is described as the fastest ever seen in technology. Investors like Brad Gerstner highlight that this success comes despite OpenAI's initial consumer-focused dominance with ChatGPT. Anthropic's strategic focus on core AI capabilities, particularly in coding, is seen as a key driver. This rapid revenue growth underscores the immense market demand for advanced AI solutions, suggesting a 'near-infinite Total Addressable Market' (TAM) as intelligence becomes a critical tool for labor augmentation and potential replacement across industries.
OpenClaw faces competitive pressure from Anthropic's agent technology
The conversation shifts to the competitive landscape for AI agents, with OpenClaw, a prominent open-source project, facing significant challenges. Peter Steinberger, the founder of OpenClaw, claims Anthropic has effectively 'ankled' his project by restricting the use of its $200 monthly subscriptions for OpenClaw users, forcing them onto more expensive API usage. This move, coupled with Anthropic's announcement of its own agent technology, is seen by some as an anti-competitive strategy that leverages Anthropic's dominant market share in AI coding. Debate arises over whether this constitutes price dumping or bundling, especially if Anthropic's own agent harness is offered under different terms than third-party integrations.
The disruptive potential of open-source AI and smaller models
A significant portion of the discussion revolves around the disruptive force of open-source AI models and the rise of smaller, more efficient language models (SLMs). While Anthropic and OpenAI focus on frontier models, projects like Ridges AI, built on Bittensor's subnet, are demonstrating rapid progress with community contributions and significantly lower costs. There's a strong argument that open-source solutions, potentially combined with crypto economics for decentralized training, could democratize AI development and challenge the capital-intensive approach of major players. However, the panelists also acknowledge that enterprises may remain hesitant to outsource mission-critical codebases to open-source projects, preferring the security and support of established providers.
Geopolitical tensions and the potential for de-escalation in the Middle East
The discussion turns to the Iran war and a recent two-week ceasefire. While details are scarce, talks are underway in Islamabad, Pakistan, involving US officials. Former President Trump's social media posts, including threats invoking civilization's demise, are noted alongside his agreement to a ceasefire conditional on Iran opening a strait. Israel's response, described as bombing Lebanon, is contrasted with the prospect of government-level talks instigated by Netanyahu. Panelists express mixed views, with some praising the ceasefire as a crucial de-escalation and others voicing concern about potential Israeli influence on US foreign policy.
Israel's influence on US foreign policy and rising antisemitism
Concerns are raised about the extent of Israeli influence on US foreign policy, particularly concerning the ongoing conflicts. Some Jewish Americans reportedly feel that Prime Minister Netanyahu's actions are not serving the best interests of the Jewish diaspora and are contributing to rising antisemitism. Former Israeli Prime Minister Naftali Bennett has publicly expressed concern about Israel's declining popularity in the US, suggesting a need for strategic adjustments. The panelists hope for an off-ramp that benefits both economic and geopolitical stability, and that Israel will prioritize maintaining its relationship with the United States.
The evolving landscape of AI value capture and enterprise adoption
The conversation explores where value is being captured in the AI stack, from chips to hyperscalers and now to model providers like Anthropic and OpenAI. The rapid revenue growth and AI's impact on industries like coding suggest that AI is no longer just a speculative bubble but a tangible driver of economic value. The long-term impact on enterprise software companies, the potential for AI to solve long-standing tech debt, and the continued evolution of AI agents are all areas of active speculation. The panelists also touch upon the significant investments in compute power and the implications for profitability, with some suggesting that current revenue growth might even lead to unexpected profitability due to efficient operations and compute constraints.
Common Questions
What is Mythos and why is Anthropic withholding it?
Mythos is Anthropic's newest AI model, which the company is withholding because it autonomously found thousands of software vulnerabilities, including decades-old exploits missed by security audits, making it too dangerous for immediate public release.
Mentioned in this video
Mentioned as a philosopher whose ideas about letting go and detachment are appealing and offer a roadmap for life.
Dario Amodei: CEO of Anthropic, who stated their AI model is as good as a professional human at identifying bugs and can chain vulnerabilities to create sophisticated exploits.
Previously a guest on the program, who discussed the relationship between governments and AI capabilities, relevant to Anthropic's decision on withholding Mythos.
Used as an analogy for Anthropic's exceptional performance, indicating they are 'shooting the lights out' with their AI models.
Peter Steinberger: The founder of OpenClaw, noted for creating the project that launched the AI agent era, and whose access to Anthropic services was reportedly cut off.
His team at X is credited with developing an impressive auto-translate feature that enhances cross-border understanding.
Mentioned as someone who, along with others, expressed concerns about an AI bubble at the start of the year.
Mentioned as someone who has expressed concerns, mirroring the hand-wringing about the potential consequences of a war with Iran.
Naftali Bennett: A former Israeli Prime Minister who tweeted about concerning poll numbers showing Israel's declining popularity in the US, urging action to improve it.
Mentioned as a consultant heading to Islamabad for talks related to the Iran situation, indicating involvement in Middle East diplomacy.
Mentioned as a VP who is part of the team heading to Pakistan for talks on a peace deal, and who had previously warned about the risks of a war with Iran.
Previously interviewed on the podcast, he provided pushback on the idea that US foreign policy is being driven by Netanyahu.
Anthropic: The company that developed the Mythos AI model, which it is withholding due to the model's potential dangers in identifying thousands of software vulnerabilities.
OpenAI: The company expected to release its first Blackwell-trained model, Spud, and likely to adopt sandboxing and defensive alliance strategies similar to Anthropic's.
OpenClaw: A groundbreaking open-source agent project initiated by Peter Steinberger; Anthropic is accused of cutting off access and creating a competing product.
Identified as a company with a fortress balance sheet that will likely focus on compute advantages to compete in the AI space.
Mentioned as a company that will likely have a fortress balance sheet by June and is a key player in the AI landscape.
Mentioned alongside Palantir as companies whose combined revenue added in a single month rivals Anthropic's massive growth, highlighting Anthropic's rapid expansion.
Mentioned as a hyperscaler or partner through which Anthropic distributes some revenue, involving commission payments.
Mentioned as a company whose revenue growth trajectory is being outpaced by Anthropic, indicating the scale of Anthropic's recent expansion.
Mentioned as a major enterprise software company whose future is uncertain in the wake of AI advancements and potential consolidation.
A company whose experience with Anthropic's models is cited to demonstrate the value and demand for advanced AI capabilities.
Nvidia: Cited as the first company to reach a multi-trillion dollar valuation due to AI, representing the early value capture at the chip layer.
Mentioned as an enterprise software company whose market position might be affected by AI, raising questions about value capture at different stack layers.
A traditional enterprise software company whose role in the AI era is being questioned, alongside others like Salesforce and HubSpot.
Mythos: Anthropic's newest AI model, described as highly dangerous due to its ability to find thousands of software vulnerabilities, including old exploits missed by security audits.
OpenBSD: An operating system in which Anthropic's Mythos model reportedly discovered a 27-year-old vulnerability missed by security audits.
FFmpeg: A software library in which the Mythos model found a 16-year-old bug missed by automated tools after millions of scans.
Spud: The first Blackwell-trained model from OpenAI, mentioned as part of the emerging class of AGI models that require careful release strategies.
Mentioned as a tool used to inquire about Anthropic's past patterns of using scare tactics in product marketing.
GPT-2: A previous OpenAI model (1.5 billion parameters) that was similarly presented as potentially dangerous in 2019, but ultimately became a 'nothing burger'.
Opus: A model that sophisticated hackers could potentially use to find similar vulnerabilities without needing Mythos.
Anthropic's AI model, which has reportedly incorporated features copied from OpenClaw, leading to accusations of anti-competitive behavior.
An open-source agent released on February 25th, mentioned as one of the competitors vying to succeed in the AI agent space.
Qwen: Alibaba's AI model, upon which a new agent is being developed.
Alexa: Amazon's voice assistant, which is preparing for a new, less 'dumb' version, indicating a trend toward more advanced AI assistants.
Siri: Apple's voice assistant, also preparing for a new version aimed at being more capable and less 'dumb'.
Android: Used as an analogy for open source's potential to be a disruptive force in the large language model market, similar to Android's impact on mobile.
An example of an open-source project deeply integrated into enterprises, illustrating the potential for other open-source AI projects to gain similar adoption.
An open-source project that has achieved significant enterprise adoption, showcasing the trend that AI developers are following.
Mentioned as a deep enterprise-adopted open-source project, similar to what is expected for AI models.
An open-source database that has become a staple in enterprise environments, highlighting the successful integration of open-source technologies.
A company that is hiring many new assistants, likely due to increased demand after being mentioned positively on the podcast.
Listed among Gulf States that are optimistic about geopolitical shifts and Iran's potential integration into regional cooperation.
Mentioned as a Gulf State exhibiting hope and optimism regarding potential geopolitical transformations, including Iran's role.
Cited as a Gulf State that is optimistic about impending geopolitical shifts and the potential inclusion of Iran in regional dialogues.
Mentioned as one of several Gulf States that are hopeful and optimistic about potential geopolitical transformations and bringing Iran into the fold.