Andrej Karpathy: Software Is Changing (Again)

Y Combinator
Science & Technology · 3 min read · 40 min video
Jun 19, 2025 · 2,379,615 views

Key Moments

TL;DR

Software is evolving rapidly with LLMs, creating new programming paradigms and opportunities.

Key Insights

1

Software has undergone fundamental shifts, evolving from Software 1.0 (explicit code) to Software 2.0 (neural networks) and now to Software 3.0 (LLMs programmed via natural language).

2

LLMs can be viewed as a new form of operating system, offering capabilities akin to utilities and fabs, but with unique properties like being programmable in English.

3

The current era of LLMs is analogous to the 1960s in computing, characterized by expensive compute, centralized cloud access, and the emergence of time-sharing.

4

LLMs exhibit emergent human-like psychology, possessing vast knowledge but also cognitive deficits such as hallucinations and jagged intelligence, requiring careful interaction.

5

The development of LLM-powered applications should focus on 'partial autonomy' with custom GUIs, 'autonomy sliders,' and efficient generation-verification loops, rather than fully autonomous agents.

6

The natural language interface of LLMs lowers the barrier to entry for programming, enabling 'vibe coding' and creating new classes of developers.

SOFTWARE'S RAPID EVOLUTION

Software development has changed fundamentally and rapidly in recent years. Historically, it remained relatively stable for about 70 years: Software 1.0, explicit code written by programmers. The emergence of deep learning introduced Software 2.0, where behavior is programmed not by explicit code but by neural network weights tuned through data and optimization. Now, with Large Language Models (LLMs), we are entering Software 3.0, a paradigm where natural language prompts serve as programs, creating an entirely new way to interact with and command computational systems.
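The three paradigms can be contrasted in a toy sentiment-classification task. Everything below is illustrative: the Software 2.0 "weights" are hand-picked stand-ins for values an optimizer would learn, and the Software 3.0 prompt targets no particular model.

```python
# Software 1.0: behavior is written explicitly as code.
def sentiment_1_0(text: str) -> str:
    positive = {"great", "good", "love"}
    negative = {"bad", "awful", "hate"}
    words = set(text.lower().split())
    score = len(words & positive) - len(words & negative)
    return "positive" if score >= 0 else "negative"

# Software 2.0: behavior lives in learned weights; the code is just a forward pass.
# (These weights are hand-picked stand-ins for what training would produce.)
WEIGHTS = {"great": 1.0, "good": 0.8, "love": 1.2,
           "bad": -1.0, "awful": -1.3, "hate": -1.2}

def sentiment_2_0(text: str) -> str:
    score = sum(WEIGHTS.get(w, 0.0) for w in text.lower().split())
    return "positive" if score >= 0 else "negative"

# Software 3.0: the "program" is a natural-language prompt handed to an LLM.
PROMPT_3_0 = ("Classify the sentiment of the following review as "
              "'positive' or 'negative': {review}")

print(sentiment_1_0("I love this, it is great"))  # -> positive
print(sentiment_2_0("awful, I hate it"))          # -> negative
print(PROMPT_3_0.format(review="I love this"))
```

The point of the contrast: in 1.0 a human writes the rules, in 2.0 an optimizer finds them, and in 3.0 the specification itself is English.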

LLMS AS THE NEW OPERATING SYSTEM

LLMs are increasingly resembling operating systems, acting as central orchestrators of compute and memory. They can be analogized to utilities for their accessible API-based service model and to fabs due to the immense capital expenditure required for their training. The ecosystem is mirroring traditional OS landscapes with closed-source providers and open-source alternatives. This new computing era is akin to the 1960s, characterized by expensive, centralized compute accessed via time-sharing, with clients interacting remotely.

THE PSYCHOLOGY OF LLMS: SUPERPOWERS AND DEFICITS

LLMs can be understood as 'people spirits': stochastic simulations of humans trained on vast amounts of internet text. They possess encyclopedic knowledge and near-perfect recall, yet also exhibit significant cognitive deficits: hallucinations; jagged intelligence, excelling in some areas while making basic errors in others; and anterograde amnesia, since fixed context windows mean they do not natively consolidate new experience into long-term learning without explicit prompting. Understanding both the superpowers and the deficits is crucial for effective interaction.
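Because the context window is the only "working memory" an LLM has, applications typically manage it explicitly. A minimal sketch of one common tactic, keeping the system prompt plus the most recent turns under a token budget (the 4-characters-per-token estimate is a crude assumption, not a real tokenizer):

```python
def trim_context(messages, budget=8000, count_tokens=lambda m: len(m) // 4):
    """Keep the system prompt plus as many recent messages as fit the budget.

    messages[0] is assumed to be the system prompt; count_tokens is a crude
    chars/4 heuristic standing in for a real tokenizer.
    """
    system, rest = messages[0], messages[1:]
    kept, used = [], count_tokens(system)
    # Walk backwards from the newest message, keeping turns while they fit.
    for msg in reversed(rest):
        cost = count_tokens(msg)
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return [system] + list(reversed(kept))
```

Older turns are simply dropped here; real systems often summarize them instead, which is one way of working around the lack of native consolidation.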

EMERGING LLM APPLICATIONS AND PARTIAL AUTONOMY

The most promising LLM applications are 'partial autonomy apps,' which integrate LLM capabilities into traditional interfaces. Tools like Cursor and Perplexity exemplify this by combining human-controlled GUIs with LLM assistance for tasks like coding or research. These apps feature efficient context management, orchestration of multiple LLM calls, application-specific GUIs for auditing, and an 'autonomy slider' allowing users to control the level of AI assistance. The focus is on enabling a fast generation-verification loop where humans supervise and audit AI-generated output.
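The autonomy slider and the generation-verification loop can be sketched in a few lines. All names here (`Autonomy`, `generation_verification_loop`) are illustrative, loosely inspired by Cursor-style assistance tiers, not an API from any real product:

```python
from enum import Enum

class Autonomy(Enum):
    """Illustrative autonomy-slider levels, from small suggestions to repo-wide edits."""
    TAB_COMPLETE = 1  # suggest a snippet
    EDIT_CHUNK = 2    # rewrite a selected region
    EDIT_FILE = 3     # rewrite a whole file
    EDIT_REPO = 4     # change anything in the repository

def generation_verification_loop(task, generate, verify, level, max_rounds=3):
    """The AI generates, a human (or an automated check) verifies.

    `generate(task, level)` and `verify(draft) -> (ok, feedback)` are callbacks
    supplied by the application; failed verification feeds back into the task.
    """
    for _ in range(max_rounds):
        draft = generate(task, level)
        ok, feedback = verify(draft)
        if ok:
            return draft
        task = f"{task}\nReviewer feedback: {feedback}"
    raise RuntimeError("Verification kept failing; escalate to the human.")

# Usage with stub callbacks standing in for an LLM and a human reviewer:
patch = generation_verification_loop(
    "rename variable x to total",
    generate=lambda task, level: f"[{level.name}] patch for: {task}",
    verify=lambda draft: (True, ""),
    level=Autonomy.EDIT_CHUNK,
)
print(patch)
```

The design point is that the verification step, helped by a GUI that makes diffs easy to audit, stays fast enough that the human remains the bottleneck-removing supervisor rather than a rubber stamp.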

THE POWER OF NATURAL LANGUAGE PROGRAMMING

A revolutionary aspect of Software 3.0 is programming in natural language, such as English. This dramatically lowers the barrier to entry, turning many more people into potential programmers through 'vibe coding.' The phenomenon enables rapid prototyping of custom applications, as demonstrated by simple iOS apps or a menu-generating web application built with minimal traditional coding knowledge. While the core LLM interaction may be simple, turning these prototypes into real products, with DevOps, authentication, and deployment, remains a complex challenge.

BUILDING FOR AGENTS AND ADAPTING INFRASTRUCTURE

The rise of LLMs necessitates building software infrastructure that agents, as new digital information manipulators, can easily interact with. This involves creating agent-friendly documentation formats (like markdown), implementing protocols for direct agent communication, and developing tools that convert existing data into LLM-readable formats. Meeting LLMs halfway by adapting our digital infrastructure will be essential for unlocking their full potential, enabling them to efficiently access and process information, and facilitating the ongoing evolution of software systems.
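One concrete instance of "meeting LLMs halfway" is flattening HTML documentation into markdown that an agent can ingest directly. A minimal sketch using only the standard library (real pipelines would handle tables, links, and nested lists too):

```python
from html.parser import HTMLParser

# Block-level tags we convert, with their markdown prefixes.
BLOCK_PREFIX = {"h1": "# ", "h2": "## ", "h3": "### ", "li": "- ", "p": ""}

class DocsToMarkdown(HTMLParser):
    """Minimal sketch: flatten HTML docs into markdown lines for an agent."""

    def __init__(self):
        super().__init__()
        self.lines, self._buf, self._prefix = [], [], None

    def handle_starttag(self, tag, attrs):
        if tag in BLOCK_PREFIX:          # start a new markdown line
            self._prefix, self._buf = BLOCK_PREFIX[tag], []
        elif tag == "code":              # inline code becomes backticks
            self._buf.append("`")

    def handle_endtag(self, tag):
        if tag == "code":
            self._buf.append("`")
        elif tag in BLOCK_PREFIX and self._prefix is not None:
            text = "".join(self._buf).strip()
            if text:
                self.lines.append(self._prefix + text)
            self._prefix = None

    def handle_data(self, data):
        if self._prefix is not None:     # ignore text outside known blocks
            self._buf.append(data)

def to_markdown(html: str) -> str:
    parser = DocsToMarkdown()
    parser.feed(html)
    return "\n".join(parser.lines)

html_doc = "<h1>API</h1><p>Call <code>GET /menu</code>.</p><ul><li>Auth: token</li></ul>"
print(to_markdown(html_doc))
```

The same idea underlies replacing "click here" instructions with commands an agent can execute, and publishing an LLM-oriented index of a site's documentation.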

Navigating the New Software Landscape

Practical takeaways from this episode

Do This

Be fluent in Software 1.0 (code), 2.0 (neural nets), and 3.0 (LLMs).
Leverage LLM-powered applications like Cursor or Perplexity for enhanced workflows.
Utilize GUIs for auditing AI work and speeding up the verification process.
Experiment with the 'autonomy slider' to tune AI's involvement in tasks.
Write clear, concrete prompts to increase the likelihood of successful AI verification.
Adapt documentation to be LLM-friendly (e.g., using Markdown).
Meet LLMs halfway by adjusting infrastructure and making data accessible.
Focus on building partial autonomy products rather than just flashy agent demos.

Avoid This

Don't rely solely on interacting directly with the base LLM like a command line.
Don't underestimate the limitations and 'cognitive deficits' of LLMs (hallucinations, amnesia).
Don't get overly excited about fully autonomous agents without considering human oversight.
Don't expect LLMs to natively consolidate knowledge like humans; manage context windows actively.
Avoid vague prompts that lead to verification failures and wasted cycles.
Don't neglect the importance of user interface (GUI) for auditing AI outputs.
Don't assume traditional software interfaces are suitable for LLM interaction.

Common Questions

What are the three software paradigms discussed?

The video discusses three paradigms: Software 1.0, which is traditional computer code; Software 2.0, which refers to neural networks and their weights; and Software 3.0, encompassing large language models (LLMs) programmed via natural language prompts.

