
Ask These Questions Before Starting An AI Startup

Y Combinator
Science & Technology · 6 min read · 41 min video
Oct 7, 2025 · 47,509 views
TL;DR

Founders must plan for AGI arriving in 2-3 years, not just the next six months of model capabilities, as AI commoditizes software and empowers the buy-side, forcing a re-evaluation of startup strategy, product, and team building.

Key Insights

1. Founders should plan strategies around the high likelihood of AGI arriving in the next 2-3 years, not just the capabilities of the next six months.

2. The buy-side (enterprises) will also be armed with AGI and strong agents, accelerating their adoption cycles and potentially reducing their need to purchase external SaaS products.

3. Software commoditization could lead enterprises to build all software in-house using tools like 'prompt to cloud code,' or alternatively, AI could raise the quality bar for exceptional applications.

4. Trust will be a major theme, especially concerning AI agents that might need to access backend systems, and the trustworthiness of the companies building these agents.

5. While current AI startups may focus on short-term gains (6-18 months), long-term defensibility in a post-AGI world is crucial, possibly found in tackling hard problems like infrastructure, energy, manufacturing, and chips.

6. The shift in company structure towards smaller, potentially semi-automated teams reduces traditional human-based guardrails against bad actors, necessitating new trust mechanisms.

The profound uncertainty of the AI era

The speaker, Jordan Fisher, expresses a deep sense of confusion and uncertainty about the future, a feeling he contrasts with his past career advantage of predicting technological trends. He argues that this confusion is actually a sign of an interesting, rapidly evolving moment. He observes that while founders are often told to focus, they must actually focus on *everything*—from hiring to product to fundraising. This ability to juggle myriad responsibilities positions founders to grapple with the biggest societal question: what to do about AI. He emphasizes that the current AI landscape is so fast-moving that stopping to ask good questions is more critical than ever.

Planning beyond the immediate horizon

Fisher challenges the common startup advice of planning product roadmaps based on AI capabilities anticipated over the next six months. Instead, he strongly advocates for a two-year planning horizon, based on the "extreme likelihood" of Artificial General Intelligence (AGI) emerging within the next few years. This means founders must consider how AGI will fundamentally alter every aspect of their business, from strategy and product development to team building and go-to-market. He stresses that while the exact timeline is uncertain, ignoring this potential shift and failing to plan even a little bit for its impact on all operational facets is a disservice to the founder's role.

The evolving enterprise buy-side and software commoditization

A significant shift is expected on the enterprise 'buy-side.' Traditionally, slow enterprise adoption cycles were seen as a buffer for AI startups. However, Fisher posits that enterprises will themselves become armed with AGI and strong agents, enabling them to make faster buying decisions and accelerate their own adoption of AI tools. This means incumbents will also benefit greatly from AI, not just startups. This leads to a critical question: will software become entirely commoditized? Enterprises might opt to build all their software in-house, using AI tools to generate custom solutions on demand, potentially bypassing SaaS providers altogether. Conversely, AI could also drive a higher quality bar for exceptional applications, creating a bifurcation.

The challenge of an on-demand software future

Fisher explores the concept of 'code on demand,' where software is generated dynamically for users. This could extend beyond generative UI to backend functionalities, requiring AI to operate at the database or system level. This raises significant trust issues: can users and companies truly trust AI to perform critical, on-demand tasks without errors or unintended consequences? While generative UI is a visible change, allowing AI to alter core behaviors necessitates a deep level of assurance in the AI's reliability and security, a level not yet achieved.

Retrofitting versus building AI-native

Startups face a strategic decision: should they retrofit existing products with AI capabilities, leveraging existing distribution, or build entirely new AI-native products from scratch? Fisher suggests that while the 'startup mentality' favors building anew, retrofitting might prove advantageous because existing companies already have established distribution channels. This decision could be vertical-specific. Similarly, for teams, the question arises whether AI-native teams will have an advantage over established companies that are downsizing or optimizing with AI. The definition of an 'AI-native' team itself will likely evolve rapidly.

The paramount importance of trust

Trust emerges as a central theme, especially concerning AI agents. For agents to operate on demand, particularly those requiring access to sensitive data or backend systems, a high degree of trust in the AI's control mechanisms and decision-making is essential. This trust extends beyond the AI models themselves to the companies building them. In a world of potentially smaller, semi-automated teams, the traditional human guardrails—whistleblowers, ethical concerns from employees—may diminish, making it easier for bad actors or misaligned corporate interests to influence agent behavior. This necessitates new methods for instilling and verifying trust.

Rethinking company structures and ethical guardrails

As teams become smaller and more automated, the traditional company structure, with its inherent diversity of people and potential for internal dissent, may weaken as a source of ethical oversight. Fisher raises the concern that a single individual within a highly automated company could make decisions with significant negative impacts, without broader oversight. This lack of human accountability could make startups appear riskier to enterprises, which are already wary of how easily smaller, more agile companies can do the 'wrong things.' New 'guardrails' are needed, potentially including AI-powered auditing systems that can operate with less bias and greater transparency. The idea of companies agreeing to binding, ongoing audits, perhaps by neutral AI arbiters, is proposed as a future mechanism for building trust.

The economic imperative for alignment and defensibility

Fisher highlights an economic pressure driving progress in AI alignment: the need for 'long-horizon agents' that can operate reliably over extended periods. This economic viability requires a degree of trust that these agents won't go 'off the rails.' Beyond alignment, the question of defensibility in a post-AGI world is critical. With AI potentially commoditizing many functions, startups must identify durable advantages. This might involve tackling inherently 'hard problems' that AI currently struggles with, such as complex physical systems (semiconductor manufacturing), infrastructure, energy, or advanced robotics. Simply optimizing for short-term ARR with the intention to flip a company might not be viable in the long run; true defensibility will likely lie in solving problems that remain difficult even after AGI's widespread adoption.

Data advantage shifts and industry-specific needs

Historically, custom data sets provided a significant advantage for AI startups. However, the rise of powerful, general-purpose LLMs has diminished this edge, making it often more beneficial to leverage existing models. Fisher questions whether this holds true for all industries. He suggests that specialized fields like material science, or highly proprietary industries like semiconductor manufacturing (TSMC, ASML), where tacit knowledge is closely guarded and hasn't 'bled out' onto the internet, might still offer defensible moats for companies with deep, custom data. Frontier LLMs currently lack the specialized knowledge to build cutting-edge semiconductor fabs, indicating potential areas for niche advantage.

The future of human-AI interaction and societal values

The conversation touches upon AI's impact on societal values, particularly concerning the motivation behind building AI companies. While the initial Silicon Valley ethos was about 'changing the world,' Fisher observes a shift towards 'how do we make money off of this?' even as awareness of AI's profound societal implications grows. He argues that this moment is perhaps the last opportunity to build products and companies that not only delight users but also contribute positively to society, well-being, and long-term mental health. He urges founders to consider 'what society needs' alongside 'what people want,' suggesting that building for societal good can also lead to demand and success.

