Key Moments

Anthropic’s $30B Ramp, Mythos Doomsday, OpenClaw Ankled, Iran War Ceasefire, Israel's Influence

All-In Podcast
Entertainment · 6 min read · 90 min video
Apr 10, 2026 · 123,074 views
TL;DR

Anthropic's new AI model, Mythos, can find decade-old security vulnerabilities, prompting a 100-day testing period. While some see it as fear-mongering, others argue it's a necessary precaution before widespread release.

Key Insights

1

Anthropic's Mythos AI model autonomously found thousands of vulnerabilities, including a 27-year-old bug in critical infrastructure software like OpenBSD and a 16-year-old flaw in FFmpeg.

2

Anthropic has established 'Project Glass Wing,' an AI-driven cyber coalition involving major tech companies like Apple, Microsoft, and Google, to identify and fix software vulnerabilities over 100 days.

3

Brad Gerstner highlights Anthropic's revenue ramp as the fastest ever in technology: a $30 billion annual run rate by April, up from $1 billion at the end of 2024, with over a thousand enterprises each paying more than $1 million annually.

4

Peter Steinberger of OpenClaw claims Anthropic is 'anchoring' its competitor by restricting $200/month subscriptions for use with OpenClaw and directing users to expensive APIs, a move potentially seen as anti-competitive bundling.

5

Naftali Bennett, former Israeli Prime Minister, tweeted concerns about Israel's waning popularity in the US, suggesting a need to address the situation and improve its image, potentially by supporting a ceasefire.

Mythos: A powerful AI model with significant security implications

Anthropic has withheld its new AI model, Mythos, citing its advanced capabilities in identifying security vulnerabilities. The model reportedly discovered thousands of flaws that had gone undetected for decades, including a 27-year-old vulnerability in OpenBSD and a 16-year-old bug in FFmpeg. In response, Anthropic launched 'Project Glass Wing,' a 100-day initiative involving major tech companies to use AI to find and fix these vulnerabilities before they can be exploited. This cautious approach contrasts with Silicon Valley's typical 'move fast and break things' mantra; panelists like Brad Gerstner praise it as responsible disclosure, while others, like Chamath Palihapitiya, express skepticism, viewing it as potential theater or a marketing tactic and drawing parallels to previous AI model releases that stoked fears that never materialized.

The debate over AI's potential for harm versus benefit

The discussion around Mythos highlights a broader debate on the dual nature of advanced AI. While the model's ability to find vulnerabilities could bolster cybersecurity, it also raises concerns about misuse in offensive cyber warfare. Some panelists, like Jason, argue that the capabilities demonstrated by Mythos are more 'on the legitimate side,' providing a crucial one-time window for companies to identify and patch dormant vulnerabilities. Conversely, Chamath Palihapitiya remains skeptical, pointing to Anthropic's history of 'scare tactics' and arguing that sophisticated hackers might already possess similar capabilities using existing models like Opus. He suggests that if the scale of discovered vulnerabilities were as dire as presented, patching them would require shutting down the internet for years, an outcome he deems unlikely given market pressures.

Anthropic's explosive revenue growth and market dominance

Anthropic is experiencing an unprecedented revenue ramp, reaching an annual run rate of $30 billion, with over 1,000 enterprise customers each spending more than $1 million annually. This growth trajectory is described as the fastest ever seen in technology. Investors like Brad Gerstner highlight that this success comes despite OpenAI's initial consumer-focused dominance with ChatGPT. Anthropic's strategic focus on core AI capabilities, particularly in coding, is seen as a key driver. This rapid revenue growth underscores the immense market demand for advanced AI solutions, suggesting a 'near-infinite Total Addressable Market' (TAM) as intelligence becomes a critical tool for labor augmentation and potential replacement across industries.
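The run-rate figures quoted above can be sanity-checked with a few lines of arithmetic. This is an illustrative sketch: the $2.5B monthly-revenue input is an assumption implied by the $30B annualized claim, not a figure stated in the episode.

```python
# Illustrative sketch of the run-rate arithmetic quoted in the episode.
# The monthly-revenue input below is an assumption used only for demonstration.

def annual_run_rate(monthly_revenue: float) -> float:
    """Annualize the latest month's revenue -- the usual 'run rate' definition."""
    return monthly_revenue * 12.0

# Figures claimed in the episode: ~$1B annualized at end of 2024, ~$30B by April.
start_arr = 1e9
end_arr = 30e9
growth_multiple = end_arr / start_arr  # 30x

# A $30B run rate implies roughly $2.5B in monthly revenue.
assert annual_run_rate(2.5e9) == end_arr

# Floor from large accounts alone: 1,000+ enterprises at $1M+/year each.
enterprise_floor = 1_000 * 1_000_000  # at least $1B/year from these customers

print(growth_multiple)  # 30.0
```

Note that a run rate simply extrapolates the most recent month; it assumes no growth or churn going forward, which is why a 30x jump in sixteen months is treated as remarkable rather than routine.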

OpenClaw faces competitive pressure from Anthropic's agent technology

The conversation shifts to the competitive landscape for AI agents, with OpenClaw, a prominent open-source project, facing significant challenges. Peter Steinberger, the founder of OpenClaw, claims Anthropic has effectively 'anchored' his project by restricting the use of its $200 monthly subscriptions for OpenClaw users, forcing them onto more expensive API usage. This move, coupled with Anthropic's announcement of its own agent technology, is seen by some as an anti-competitive strategy that leverages Anthropic's dominant market share in AI coding. This raises the question of whether the practice constitutes price dumping or bundling, especially if Anthropic's own agent harness is offered under different terms than third-party integrations.

The disruptive potential of open-source AI and smaller models

A significant portion of the discussion revolves around the disruptive force of open-source AI models and the rise of smaller, more efficient language models (SLMs). While Anthropic and OpenAI focus on frontier models, projects like Ridges AI, built on Bittensor's subnet, are demonstrating rapid progress with community contributions and significantly lower costs. There's a strong argument that open-source solutions, potentially combined with crypto economics for decentralized training, could democratize AI development and challenge the capital-intensive approach of major players. However, the panelists also acknowledge that enterprises may remain hesitant to outsource mission-critical codebases to open-source projects, preferring the security and support of established providers.

Geopolitical tensions and the potential for de-escalation in the Middle East

The discussion turns to the Iran war and a recently announced two-week ceasefire. While details are scarce, talks are underway in Islamabad, Pakistan, involving US officials. President Trump's social media posts, including warnings of civilization's demise, are noted alongside his agreement to a ceasefire conditional on Iran opening a strait. Israel's response, described as bombing Lebanon, is contrasted with the prospect of government-level talks instigated by Netanyahu. Panelists express mixed views: some praise the ceasefire as a crucial de-escalation, while others worry about Israeli influence on US foreign policy.

Israel's influence on US foreign policy and rising antisemitism

Concerns are raised about the extent of Israeli influence on US foreign policy, particularly concerning the ongoing conflicts. Some Jewish Americans reportedly feel that Prime Minister Netanyahu's actions are not serving the best interests of the Jewish diaspora and are contributing to rising antisemitism. Former Israeli Prime Minister Naftali Bennett has publicly expressed concern about Israel's declining popularity in the US, suggesting a need for strategic adjustments. The panelists hope for an off-ramp that benefits both economic and geopolitical stability, and that Israel will prioritize maintaining its relationship with the United States.

The evolving landscape of AI value capture and enterprise adoption

The conversation explores where value is being captured in the AI stack, from chips to hyperscalers and now to model providers like Anthropic and OpenAI. Rapid revenue growth and AI's impact on industries like coding suggest that AI is no longer just a speculative bubble but a tangible driver of economic value. The long-term impact on enterprise software companies, the potential for AI to resolve long-standing tech debt, and the continued evolution of AI agents are all areas of active speculation. The panelists also touch on the scale of investment in compute and its implications for profitability, with some suggesting that, given compute constraints and efficient operations, current revenue growth could even translate into unexpected profitability.

AI Development and Geopolitical Briefing

Practical takeaways from this episode

Do This

Consider the potential risks and responsible release strategies for advanced AI models like Mythos.
Recognize the competitive dynamics between AI labs like Anthropic and OpenAI.
Investigate disruptive AI technologies, including open-source and crypto-based projects.
Monitor the rapid revenue growth and market adoption of AI technologies.
Support initiatives that promote clear communication and collaboration in AI development.
Engage with new communication tools like X's auto-translate for better global understanding.
Stay informed about geopolitical developments and their impact on markets and international relations.
Advocate for pragmatic approaches to AI regulation and safety.

Avoid This

Dismiss the potential dangers of powerful AI models.
Underestimate the impact of open-source movements in AI.
Ignore the financial implications and revenue ramps in the AI sector.
Engage in overly simplistic or polarized discussions about AI's future.
Allow geopolitical tensions to dictate policy without careful consideration.
Rely solely on traditional methods without exploring AI-driven solutions.
Underestimate the pace of technological change and its market implications.

Common Questions

What is Mythos, and why is Anthropic withholding it?

Mythos is Anthropic's newest AI model, which the company is withholding because it autonomously found thousands of software vulnerabilities, including decades-old exploits missed by security audits, making it too dangerous for immediate public release.

Topics

Mentioned in this video

People
Elisha Long

Mentioned as a philosopher whose ideas about letting go and detachment are appealing and offer a roadmap for life.

Dario Amodei

CEO of Anthropic, who stated their AI model is as good as a professional human at identifying bugs and can chain vulnerabilities to create sophisticated exploits.

Emil Michael

Previously a guest on the program, who discussed the relationship between governments and AI capabilities, relevant to Anthropic's decision on withholding Mythos.

Steph Curry

Used as an analogy for Anthropic's exceptional performance, indicating they are 'shooting the lights out' with their AI models.

Peter Steinberger

The founder of OpenClaw, noted for creating the project that launched the AI agent era, and whose access to Anthropic services was reportedly cut off.

Elon Musk

His team at X is credited with developing an impressive auto-translate feature that enhances cross-border understanding.

Scott Galloway

Mentioned as someone who, along with others, expressed concerns about an AI bubble at the start of the year.

Tucker Carlson

Mentioned as someone whose concerns mirror the broader hand-wringing about the potential consequences of a war with Iran.

Naftali Bennett

A former Israeli Prime Minister who tweeted about concerning poll numbers showing Israel's declining popularity in the US, urging action to improve it.

Jared Kushner

Mentioned as a consultant heading to Islamabad for talks related to the Iran situation, indicating involvement in Middle East diplomacy.

JD Vance

Mentioned as a VP who is part of the team heading to Pakistan for talks on a peace deal, and who had previously warned about the risks of a war with Iran.

Josh Shapiro

Previously interviewed on the podcast, he provided pushback on the idea that US foreign policy is being driven by Netanyahu.

Companies
Anthropic

The company that developed the Mythos AI model, which they are withholding due to its potential dangers in identifying thousands of software vulnerabilities.

OpenAI

A company that is expected to release its first Blackwell-trained model, Spud, and is likely to adopt similar sandboxing and defensive alliance strategies as Anthropic.

OpenClaw

A groundbreaking open-source agent project initiated by Peter Steinberger; Anthropic is accused of cutting off access and creating a competing product.

Meta

Identified as a company with a fortress balance sheet that will likely focus on compute advantages to compete in the AI space.

SpaceX

Mentioned as a company that will likely have a fortress balance sheet by June and is a key player in the AI landscape.

Databricks

Mentioned alongside Palantir as a benchmark: the revenue Anthropic added in a single month reportedly rivals these companies' combined growth, highlighting Anthropic's rapid expansion.

Snowflake

Mentioned as a hyperscaler or partner through which Anthropic distributes some revenue, involving commission payments.

Palantir

Mentioned as a company whose revenue growth trajectory is being outpaced by Anthropic, indicating the scale of Anthropic's recent expansion.

Salesforce

Mentioned as a major enterprise software company whose future is uncertain in the wake of AI advancements and potential consolidation.

Altimeter

A company whose experience with Anthropic's models is cited to demonstrate the value and demand for advanced AI capabilities.

NVIDIA

Cited as the first company to reach a multi-trillion dollar valuation due to AI, representing the early value capture at the chip layer.

HubSpot

Mentioned as an enterprise software company whose market position might be affected by AI, raising questions about value capture at different stack layers.

Oracle

A traditional enterprise software company whose role in the AI era is being questioned, alongside others like Salesforce and HubSpot.

Software & Apps
Mythos

Anthropic's newest AI model, described as highly dangerous due to its ability to find thousands of software vulnerabilities, including old exploits missed by security audits.

OpenBSD

An operating system that reportedly had a 27-year-old vulnerability discovered by Anthropic's Mythos model, missed by security audits.

FFmpeg

A software library where a 16-year-old bug was found by Anthropic's Mythos model, missed by automated tools after millions of scans.

Spud

The first Blackwell-trained model from OpenAI, mentioned as part of the emerging class of AGI models that require careful release strategies.

Grok

Mentioned as a tool used to inquire about Anthropic's past patterns of using scare tactics in product marketing.

GPT-2

A previous OpenAI model (1.5 billion parameters) that was similarly presented as potentially dangerous in 2019, but ultimately became a 'nothing burger'.

Opus

A model that could potentially be used by sophisticated hackers to find similar vulnerabilities without needing Mythos.

Claude

Anthropic's AI model, which has reportedly incorporated features copied from OpenClaw, leading to accusations of anti-competitive behavior.

Hermes Agent

An open-source agent released on February 25th, mentioned as one of the competitors vying to succeed in the AI agent space.

Qwen

Alibaba's AI model family, upon which a new agent is being developed.

Alexa

Amazon's voice assistant, which is preparing for a new, less 'dumb' version, indicating a trend towards more advanced AI assistants.

Siri

Apple's voice assistant, also preparing for a new version aimed at being more capable and less 'dumb'.

Android

Used as an analogy for Open Source's potential to be a disruptive force in the large language model market, similar to Android's impact on mobile.

Linux

An example of an open-source project deeply integrated into enterprises, illustrating the potential for other open-source AI projects to gain similar adoption.

Kubernetes

An open-source project that has achieved significant enterprise adoption, showcasing the trend that AI developers are following.

Apache

Mentioned as a deep enterprise-adopted open-source project, similar to what is expected for AI models.

PostgreSQL

An open-source database that has become a staple in enterprise environments, highlighting the successful integration of open-source technologies.

Athena

A company that is hiring many new assistants, likely due to increased demand after being mentioned positively on the podcast.
