How To Get The Most Out Of Vibe Coding | Startup School

Y Combinator
Science & Technology · 7 min read · 17 min video
Apr 25, 2025 · 341,632 views
TL;DR

AI coding tools can now build functional software, but treating them as collaborators rather than magic wands and meticulously planning each step is crucial for success. Expect rapid evolution, making continuous experimentation key.

Key Insights

1. Product managers and designers are increasingly implementing new ideas directly in code using AI tools like Lovable, bypassing traditional mock-up software.

2. When AI gets stuck, pasting code and the error message directly into the LLM's standalone UI, rather than through an integrated IDE, can often yield a solution.

3. Test cases should be hand-crafted and serve as strong guardrails for LLMs to follow, ensuring code meets specific functional requirements before AI generation.

4. AI can act as a DevOps engineer, configuring DNS servers and setting up hosting, or as a designer, generating and resizing favicon images, significantly accelerating non-coding tasks.

5. LLMs show a strong preference for well-established conventions and abundant training data, leading to better performance with frameworks like Ruby on Rails than with less common languages like Rust or Elixir.

6. Inputting instructions via voice, such as with Aqua, allows for speeds around 140 words per minute, nearly double typical typing speeds, with AI tolerating minor grammatical errors.

Embrace AI as a Collaborative Partner, Not a One-Shot Solution

The core principle of effective AI coding, or 'vibe coding,' is to view the Large Language Model (LLM) as a collaborator rather than an autonomous developer capable of producing entire products in one go. While AI is rapidly approaching this capability, current best practices involve a more iterative and guided approach. Tom Blomfield, a partner at Y Combinator, emphasizes that getting the best results requires a practice akin to prompt engineering from a couple of years ago, where continuous learning and adaptation are key. This collaborative process is not about replacing software engineering but enhancing it by leveraging AI as a powerful tool. Many founders are finding success by treating the AI as a different kind of programming language, where detailed, contextualized instructions are paramount for achieving desired outcomes. The emphasis is on a partnership where the human guides, plans, and verifies, while the AI executes and suggests, leading to accelerated development cycles.

Strategic planning prevents AI 'rabbit holes'

Before diving into code, invest significant time in developing a comprehensive plan with the AI. This plan, ideally documented in a markdown file within the project, serves as a roadmap for the entire development process. It should detail the scope, architecture, and individual features to be implemented. Crucially, refine this plan by removing or deferring features deemed too complex or out of scope. The AI should then implement the project section by section, with each completed section being checked, tested, and committed to version control. If the AI appears to be stuck in a loop, repeatedly generating code that doesn't work, or if you find yourself constantly copy-pasting error messages, it's a signal to step back. Prompt the LLM to analyze why it's failing, whether due to insufficient context or inherent limitations. This structured approach, breaking down complex tasks into manageable steps, prevents the accumulation of 'bad code' and ensures a more robust final product.
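A plan file of the kind described might look like the sketch below; the file name, project, and feature names are illustrative, not taken from the video:

```markdown
<!-- plan.md — illustrative sketch of a project plan kept in the repo -->
# Project plan

## Scope
A minimal invoicing app: create, send, and track invoices. No payments yet.

## Architecture
Rails monolith, Postgres, server-rendered views.

## Sections (implement one at a time; test and commit after each)
1. Data model: Customer, Invoice, LineItem
2. Invoice CRUD screens
3. Email sending
4. Status-tracking dashboard

## Deferred (too complex for now)
- Recurring invoices
- Multi-currency support
```

The "Deferred" section matters as much as the rest: it records what was deliberately cut, so the AI is not tempted to build it anyway.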

Leveraging version control and tests for robust development

Version control, specifically Git, is indispensable when working with AI coding tools. Treat Git as your primary safety net, ensuring you start each new feature from a clean slate, allowing for easy reversion to a known working state if the AI generates erroneous code. Avoid the temptation to repeatedly prompt the AI to fix an issue, as this often leads to layers of suboptimal code. Instead, if a solution is found after multiple prompts, reset the codebase and feed that *clean solution* back to the AI for implementation. Similarly, writing comprehensive tests is critical. While AI can generate unit tests, prioritize high-level integration tests that simulate user interactions to ensure end-to-end functionality. These tests act as a crucial safeguard against AI inadvertently altering unrelated logic, catching regressions early and enabling a quick reset to a stable state. This disciplined use of version control and automated testing mirrors professional software development practices and is vital for managing AI-generated code.
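As a minimal sketch of the high-level integration tests described — written here in Python against a stubbed-out app, since the video names no specific framework — the test exercises a whole user flow rather than a single function:

```python
# Integration-style test sketch: the SignupApp class is a hypothetical stand-in
# for whatever the LLM generated. The point is the test's shape — it simulates
# a user's full journey (sign up, then log in) rather than one isolated unit.

class SignupApp:
    """Stub application with just enough behavior to test end to end."""
    def __init__(self):
        self.users = {}

    def sign_up(self, email, password):
        if email in self.users:
            raise ValueError("email already registered")
        self.users[email] = password
        return True

    def log_in(self, email, password):
        return self.users.get(email) == password


def test_signup_then_login():
    # Drives the app the way a user would, end to end.
    app = SignupApp()
    assert app.sign_up("ada@example.com", "s3cret")
    assert app.log_in("ada@example.com", "s3cret")
    assert not app.log_in("ada@example.com", "wrong-password")


if __name__ == "__main__":
    test_signup_then_login()
    print("integration test passed")
```

A suite of tests like this is what lets you safely `git reset` after a failed AI attempt: if the flow still passes, the reset landed on a known-good state.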

AI excels beyond code generation

The utility of LLMs extends far beyond writing code. They can function effectively as DevOps engineers, assisting with tasks like DNS configuration and server setup via command-line tools. For instance, Claude Sonnet 3.7 was used to configure DNS and set up Heroku hosting, accelerating progress significantly. AI can also serve as a designer, generating initial assets like favicons and then creating scripts to resize them into various necessary formats across different platforms. This versatility means AI can handle a wide range of preparatory and supporting tasks, freeing up developers to focus on core logic and architecture. Even for experienced programmers, using AI as a teacher, walking through code implementations line by line, offers a more efficient learning method than sifting through Stack Overflow.

Bug fixing and debugging with AI

When encountering bugs, the first step is to copy and paste the exact error message directly into the LLM, whether from server logs or browser consoles. Often, this is sufficient for the AI to diagnose and propose a fix without needing further explanation. This direct input is expected to become more integrated into coding tools soon, moving beyond the 'human as a copy-paste machine' paradigm. For more complex issues, ask the AI to brainstorm and present several potential causes before attempting any code changes. Crucially, after each failed bug-fixing attempt, reset the codebase rather than accumulating layers of 'junk code.' Adding logging is also recommended during the debugging process. If one model struggles, switching to a different LLM (e.g., from Claude to OpenAI or Gemini) can sometimes yield better results, as different models have varying strengths.
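The "add logging" advice can be as simple as the standard pattern below — a generic Python sketch using the stdlib `logging` module (the video prescribes no particular language, and the function and logger names are illustrative):

```python
import logging

# Configure once at startup; DEBUG level surfaces the intermediate values
# you can paste back into the LLM alongside the error message.
logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
log = logging.getLogger("checkout")  # logger name is illustrative


def apply_discount(price, percent):
    # Log inputs and intermediates so a failure report is self-explanatory.
    log.debug("apply_discount(price=%r, percent=%r)", price, percent)
    discounted = price * (1 - percent / 100)
    log.debug("discounted=%r", discounted)
    return discounted


print(apply_discount(200.0, 50))  # prints 100.0
```

When a bug does surface, the DEBUG lines plus the traceback give the LLM the context it would otherwise have to guess at.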

Structuring for AI: Modularity and Documentation

Effective AI collaboration benefits from a structured codebase. Promoting modularity and keeping files small is essential, mirroring best practices for human developers. This approach simplifies understanding and maintenance for both humans and LLMs. A potential architectural shift towards modular or service-based designs with clear API boundaries is anticipated, making codebases easier for AI to navigate and modify without unforeseen side effects. Furthermore, providing the LLM with local access to relevant API documentation by downloading it and placing it within the project folder can significantly improve accuracy. Instructions within the AI's configuration (like Cursor rules or Windsurf rules) should direct it to consult these local docs before implementing features, leading to more reliable code generation. Founders have reported success using hundreds of lines of instructions in these files to greatly enhance AI effectiveness.
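A rules file of the kind mentioned might contain directives like the fragment below; the file name, paths, and numbers are illustrative assumptions, not taken from any real project, and the exact format depends on the tool:

```text
# Illustrative excerpt from a rules file (e.g. .cursorrules or a Windsurf
# rules file). Nothing here is from the video or a real codebase.
Before implementing any feature that touches the payments API, read the
local copy of its documentation in docs/payments-api/ and prefer the
patterns shown there over anything from memory.
Keep files small; extract a module rather than growing an existing file.
Run the test suite after every change and report failures verbatim.
Do not modify files outside the feature being implemented.
```

Keeping such instructions in the repository, rather than in ad-hoc prompts, means every AI session starts with the same constraints.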

Choosing the right tech stack and interaction methods

The performance of AI coding tools can be influenced by the chosen technology stack. Frameworks with established conventions and a large volume of high-quality training data online, such as Ruby on Rails, tend to yield better results. Languages with less extensive training data, like Rust or Elixir, may present more challenges, though this is likely to change rapidly. Beyond code, interaction methods are evolving. Screenshots can be invaluable for demonstrating UI bugs or importing design inspiration. Voice input, through tools like Aqua, allows for input speeds around 140 words per minute, with AI’s tolerance for minor transcription errors making it a highly efficient communication channel. These novel interaction paradigms accelerate the feedback loop and integrate AI more seamlessly into the development workflow.

Continuous experimentation and refactoring

The landscape of AI coding tools is evolving at an unprecedented pace, with state-of-the-art capabilities changing weekly. Continuous experimentation with new models and techniques is therefore vital. Different LLMs excel at specific tasks: some might be better at debugging, others at long-term planning or code implementation. For instance, Gemini might currently lead in codebase indexing and planning, while Sonnet 3.7 might be preferred for direct code changes. GPT-4.1 might require further iteration or different prompting strategies. Regularly refactoring code, especially once tests are implemented and passing, leverages AI's ability to identify repetitive patterns and suggest improvements. This ongoing process of testing, refactoring, and experimenting ensures developers stay at the forefront of AI-assisted software development.

Vibe Coding Best Practices

Practical takeaways from this episode

Do This

Think of AI as a new programming language; program with detailed language prompts.
Start with meticulously crafted test cases before generating code.
Spend significant time on scope and architecture with the LLM before offloading to coding tools.
When encountering bugs, paste the error message into the LLM directly.
Use version control (Git) religiously; reset and start fresh if the AI goes off track.
Write high-level integration tests that simulate user interaction.
Download and locally store API documentation, instructing the LLM to reference it.
Handle complex functionality as standalone projects or reference implementations first.
Use screenshots to demonstrate UI bugs or import design inspiration.
Refactor frequently, especially after tests are implemented.
Continuously experiment with new AI models and techniques.

Avoid This

Don't expect LLMs to one-shot entire complex products.
Don't accumulate layers of bad code by repeatedly prompting for fixes without resetting.
Don't solely rely on UI modifications without considering backend logic implications.
Don't start coding immediately; first, create a comprehensive plan with the LLM.
Don't trust AI-generated revert functionality implicitly; use Git.
Don't make multiple attempts at bug fixes without resetting.
Don't let LLMs 'free run' in the codebase without a clear understanding of the goal.
Don't ignore signs of an LLM rabbit hole, such as code it keeps regenerating that never works.
Don't let LLMs make unnecessary changes to unrelated logic without robust tests.

Common Questions

What is vibe coding?

Vibe coding refers to using AI models as a primary tool for software development, akin to prompt engineering. Instead of writing code directly, you program by providing detailed natural language instructions to the AI, leveraging its capabilities to generate, debug, and even design code.

Topics

Mentioned in this video

Software & Apps
Claude

An AI assistant mentioned for configuring DNS servers and resizing images, demonstrating its use beyond just coding.

Gemini

An AI model noted as being best for whole codebase indexing and creating implementation plans.

Cursor

An AI-powered code editor that allows users to run AI tools directly within the IDE. Mentioned as a tool to use alongside others.

Aqua

A YC company's tool that allows voice input for interacting with coding tools, transcribing speech to text at a rapid pace.

Claude Code

An AI coding tool mentioned as an option for those who have prior coding experience.

Rust

A programming language for which friends of the speaker had less success using AI, attributed to a lack of online training data compared to Ruby on Rails.

Elixir

A programming language that, like Rust, gave friends of the speaker less success with AI, attributed to limited online training data.

ChatGPT

An AI model mentioned for creating a favicon image for a website.

Lovable

A tool suggested for beginners that provides a visual interface for trying out UIs directly in code. It struggled with precise backend logic modifications.

Ruby on Rails

A web application framework that the speaker found AI performed exceptionally well with, attributed to its established conventions and large amount of online training data.

Windsurf

An AI coding tool that takes longer to process but can be used for more complex tasks while another tool handles simpler ones.

Heroku

A cloud platform mentioned as a service that an LLM was used to set up hosting for.

Git

A version control system recommended for its ability to revert to known working versions and maintain a clean codebase.
