To Regulate AI Effectively, Focus on How It’s Used

a16z
5 min read · 46 min video
Jan 20, 2026 · 194 views
TL;DR

Regulate AI by how it's used, not by how it's built; open source, startups, and geopolitics all hinge on use-based policy.

Key Insights

1. Open source is a core driver of future AI innovation—regulatory uncertainty risks weakening it and pushing the ecosystem toward Chinese open-source models.

2. Effective AI policy should regulate use and behavior, not the act of developing or building models, to avoid loopholes and overbroad controls.

3. The Biden executive order signaled a shift toward restricting computing power and questioning open weights, challenging traditional software regulation.

4. A marginal-risk, evidence-based regulatory approach—with input from academia, VCs, and industry—helps craft practical policies that don’t stifle innovation.

5. Startups face a chilling effect from regulatory uncertainty, which can slow funding, hiring, and product development, advantaging well-resourced incumbents.

6. Enforcement can evolve to target bad uses rather than bad code, drawing on lessons from cybersecurity and encryption to balance safety with innovation.

OPEN SOURCE VITALITY AND REGULATORY UNCERTAINTY

Open source remains a foundational engine of AI progress, trusted by hobbyists, researchers, and startups alike. Yet regulatory ambiguity is chilling the release of strong open-source models in the United States, driving developers toward Chinese offerings that could set the competitive baseline. This isn’t merely a regulatory nuisance; it reshapes the innovation funnel, risking a future where the next generation of developers learns from or relies on foreign models. The result is a destabilizing dynamic for US leadership in AI and a potential loss of critical safety and transparency benefits that open ecosystems provide.

FOCUS ON USE, NOT DEVELOPMENT

A central premise here is that the most effective policy targets uses and misuses rather than trying to regulate the abstract act of model development. The speakers argue that there is no single, stable definition of AI as the technology evolves, so crafting rules around ‘development’ creates loopholes. Instead, lawmakers should specify permissible and prohibited uses, behaviors, and outcomes. This aligns with longstanding regulatory tradition, which targets actions and harms (like malware transmission) rather than attempting to tightly regulate the inventive process itself.

THE BIDEN ORDER: A REGULATORY SHIFT AND ITS LIMITS

The Biden administration’s executive order marked a notable shift by nodding toward restricting certain powerful computations and raising questions about open weights. It signaled skepticism toward conventional open-source development approaches, raising concerns about governing the model layer rather than use cases. The discussion frames this as a departure from decades of software regulation, which focused on use. The risk, the speakers warn, is misregulation that undercuts innovation while failing to curb the genuinely bad uses, underscoring the need for a measured, use-focused policy.

MARGINAL RISK AND EVIDENCE-BASED POLICY

A recurring theme is the need to identify marginal risks—what actually changes as AI grows more capable—before imposing rules. Experts emphasize evidence-based policymaking, noting that many open questions about risk remain research problems. Without this groundwork, policy can misallocate resources, chill legitimate innovation, or fail to address the real harms. The open question of marginal risk is presented as a guidepost for where regulation will be both effective and proportionate, particularly in a field evolving faster than typical regulatory cycles.

DEVELOPERS, USE CASES, AND THE REGULATION DILEMMA

The conversation presses developers and policymakers to distinguish development from deployment. If a new methodology or product is dangerous, it should be restricted through prohibitions on the harmful method itself, not by prohibiting the entire development process. The debate also touches on how to regulate entities that both develop and deploy AI, suggesting that policy should adapt to deployment contexts while preserving the ability for responsible innovation in development settings.

ENFORCEMENT LESSONS FROM CYBERSECURITY

Historical parallels from cybersecurity, malware, and encryption show that harms can be deterred by focusing on bad activity rather than trying to police the underlying code. The analogy to malware—where transmission is criminal but creation can have legitimate uses—highlights the difficulty of cleanly separating good and bad uses at the model layer. Rather than prohibiting core capabilities, policy can target illicit actions, leveraging established enforcement tools to deter and punish misuse.

OPEN SOURCE DOMINANCE AND GEOPOLITICS

The panelists stress a geopolitical angle: the US remains ahead with proprietary models, but China dominates the open-source space. Regulatory chill, copyright concerns, and liability fears drive US companies to withhold strong open models, accelerating China’s lead in open-source AI. This has implications beyond industry—affecting soft power, information freedom, and global AI adoption. The takeaway is that policy choices in the US can influence not only innovation but also strategic leverage in a tech-driven world.

INNOVATION EQUILIBRIUM: SAFETY AND PROGRESS

A core theme is balancing risk with opportunity. The speakers push back against alarmist, precautionary approaches that suppress innovation, arguing for an equilibrium informed by real-world trade-offs. They acknowledge genuine risks but insist that prudent, evidence-based measures—particularly those focusing on use and behavior—are more likely to protect the public without freezing progress. The goal is to preserve medical, economic, and technological breakthroughs while maintaining safety standards, rather than embracing a blanket, worst-case precaution.

PRECEDENTS AND LEARNING FROM HISTORY

Historical regulatory precedents in the internet era, encryption debates, and social media show the pitfalls of overcorrecting early. Europe’s early AI regulation is cited as an example of missteps that hindered adoption and innovation. The discussion stresses that precedents matter: we should adapt proven, open regulatory frameworks to AI, prioritizing evidence and marginal risk rather than chasing an imagined, static risk profile that could undercut competitiveness and global influence.

STARTUPS, FUNDING, AND THE CHILLING EFFECT

For startups, uncertainty is existential. The conversation notes that regulatory ambiguity delays or derails funding, hiring, and product development. Large incumbents may weather the storm due to resources, but new entrants struggle under layered state and federal rules. The risk is a widening gap where innovation migrates to well-resourced entities, slowing the emergence of new products and services that could benefit consumers and spur economic growth.

FEDERAL-STATE DYNAMICS AND GENERAL-LAW FRAMEWORKS

Policy proposals should emphasize general, technology-neutral laws that address use-based harms and gaps in existing statutes. The federal-state balance will be key, as states exercise consumer-protection authorities while federal rules may address overarching national interests. The speakers advocate for a framework that can evolve with technology, focusing on real-world misuse rather than attempting to lock down the unpredictable, rapidly changing landscape of AI development.

TOWARD EFFECTIVE AI POLICY: PRACTICAL TAKEAWAYS

The closing synthesis calls for clear, proportionate, and evidence-based policy that focuses on use and behavior rather than abstract development, and that invites diverse perspectives from academia, venture capital, and industry. It emphasizes filling concrete gaps in existing law with general, use-based restrictions while preserving U.S. leadership in innovation. The practical aim is to foster responsible AI deployment without stifling invention, ensuring the policy framework adapts to rapid technological change.

Common Questions

Why regulate use rather than development? The speakers argue that focusing regulation on how AI is used creates effective controls without stifling innovation in model development. They emphasize that there is no single, stable definition of AI, so targeting use avoids premature constraints on evolving technologies. Timestamp: 65
