AI Dev 25 x NYC | Tyler Slaton: Build User Facing Agentic Applications with AG UI

DeepLearning.AI
Education | 3 min read | 28 min video
Dec 5, 2025

Key Moments

TL;DR

Building agentic apps with AG-UI: Streamlined user interaction and generative UI patterns.

Key Insights

1

Agentic applications break the traditional request-response model, requiring streaming and handling complex inputs/outputs for good user experience.

2

The AG-UI protocol facilitates seamless communication between agentic backends and frontends, offering a standardized event-based approach.

3

Generative UI encompasses static (mapping data to components), open-ended (iframes/HTML), and declarative (structured specs) patterns for agent-generated interfaces.

4

State management is crucial for agentic applications, allowing for memory, persistence, and bidirectional updates between the agent and the user interface.

5

Future trends include the rise of voice agents, agent steering for autonomous tasks, and self-improving agents through reinforcement learning from human feedback (RLHF).

6

CopilotKit aims to support various generative UI patterns and advance agent-user interactivity through protocols like AG-UI.

THE EVOLVING LANDSCAPE OF AGENTIC APPLICATIONS

Agentic applications represent a significant departure from traditional web paradigms, moving beyond simple request-response interactions. They are characterized by long-running processes that necessitate streaming for a fluid user experience. These applications must also handle a diverse range of inputs and outputs, including structured data, unstructured text, tool calls, and even complex state updates. Furthermore, agentic systems often involve intricate composition, requiring seamless handoffs between sub-agents and keeping the user informed throughout these processes. This complexity underscores the need for new protocols and patterns to effectively build and manage these advanced AI-driven systems.
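The shift away from request-response can be sketched as a client loop that repaints the UI on every partial output rather than waiting for one final payload. The chunk shapes below are hypothetical illustrations, not the AG-UI wire format:

```typescript
// Illustrative sketch of streaming consumption: the UI updates on every
// delta and surfaces tool calls, instead of blocking on a final response.
// The StreamChunk shape is an assumption for this example.
type StreamChunk =
  | { type: "text"; delta: string }     // partial assistant text
  | { type: "tool_call"; name: string } // agent invoked a tool
  | { type: "done" };

async function renderStream(
  chunks: AsyncIterable<StreamChunk>,
  onUpdate: (text: string) => void,
): Promise<string> {
  let text = "";
  for await (const chunk of chunks) {
    if (chunk.type === "text") {
      text += chunk.delta;
      onUpdate(text); // repaint on every partial token
    } else if (chunk.type === "tool_call") {
      // keep the user informed during long-running work
      onUpdate(`${text}\n[calling ${chunk.name}…]`);
    }
  }
  return text;
}
```

In a real application the async iterable would typically wrap a server-sent-events or WebSocket connection to the agent backend.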

INTRODUCING THE AG-UI PROTOCOL

To address the challenges of building agentic applications, CopilotKit developed the AG-UI protocol. This event-based protocol is specifically designed to connect agentic backends with frontends, enabling fluid and interactive user experiences. AG-UI complements existing protocols like MCP (Model Context Protocol) and A2A (Agent-to-Agent) by focusing on the crucial agent-user interaction layer. It provides a standardized set of 16 events optimized for this communication, is transport-agnostic, and supports a client-server architecture in which agents act as servers and frontends such as React apps act as clients.
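An event-based protocol like this is naturally consumed as a reducer: the client folds a stream of events into UI state. The event names below mirror the protocol's naming style (RUN_STARTED, TEXT_MESSAGE_CONTENT, and so on), but the payload shapes are simplified assumptions for illustration, not the official SDK types:

```typescript
// Minimal sketch: folding an AG-UI-style event stream into chat state.
// Event payloads are simplified assumptions, not the official types.
type AgentEvent =
  | { type: "RUN_STARTED" }
  | { type: "TEXT_MESSAGE_START"; messageId: string }
  | { type: "TEXT_MESSAGE_CONTENT"; messageId: string; delta: string }
  | { type: "TEXT_MESSAGE_END"; messageId: string }
  | { type: "RUN_FINISHED" };

interface ChatState {
  running: boolean;
  messages: Record<string, string>; // messageId -> accumulated text
}

function reduce(state: ChatState, event: AgentEvent): ChatState {
  switch (event.type) {
    case "RUN_STARTED":
      return { ...state, running: true };
    case "TEXT_MESSAGE_START":
      return { ...state, messages: { ...state.messages, [event.messageId]: "" } };
    case "TEXT_MESSAGE_CONTENT":
      return {
        ...state,
        messages: {
          ...state.messages,
          [event.messageId]:
            (state.messages[event.messageId] ?? "") + event.delta,
        },
      };
    case "TEXT_MESSAGE_END":
      return state; // message complete; nothing further to accumulate
    case "RUN_FINISHED":
      return { ...state, running: false };
  }
}
```

Because the reducer is pure, it plugs directly into React state management on the client side.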

UNDERSTANDING GENERATIVE UI PATTERNS

Generative UI refers to interfaces or components generated by an AI agent and transmitted to the user. Three primary patterns exist. Static UI maps agent-generated data to pre-existing UI components, offering deep control and enabling the visualization of tool calls and agent state. Open-ended UI lets agents return raw HTML or iframes, offering maximum flexibility but with potential challenges in styling, security, and performance, particularly in non-web environments. Declarative UI uses semi-open specifications, such as JSON schemas, to describe constrained UIs, striking a balance between the static and open-ended approaches by mapping agent-generated structures onto component libraries.
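The declarative pattern can be illustrated with a tiny renderer: the agent emits a structured spec, and the client maps it onto a closed set of component types, rejecting anything outside the whitelist by construction. The `UISpec` shape here is a hypothetical example, not a real AG-UI or CopilotKit schema:

```typescript
// Sketch of declarative generative UI: a constrained spec (hypothetical
// shape) is mapped onto a fixed component vocabulary. Anything outside
// the union type simply cannot be expressed, unlike open-ended HTML.
type UISpec =
  | { component: "text"; value: string }
  | { component: "button"; label: string }
  | { component: "stack"; children: UISpec[] };

function render(spec: UISpec): string {
  switch (spec.component) {
    case "text":
      return `<p>${spec.value}</p>`;
    case "button":
      return `<button>${spec.label}</button>`;
    case "stack":
      // recursively render children in order
      return `<div>${spec.children.map(render).join("")}</div>`;
  }
}
```

A production version would render to React elements from a component library rather than HTML strings, but the constraint mechanism is the same: the spec's type is the contract.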

THE ROLE OF STATE IN AGENTIC INTERACTIONS

While LLMs are famously stateless, agentic applications are inherently stateful. This state acts as an abstraction layer, enabling crucial functionalities such as memory, message history, and persistence. State can encompass not only conversational data but also structured information like charts, user preferences, or dynamic content. CopilotKit's V2 interfaces facilitate bidirectional state management, allowing both the agent and the frontend to read and update state. This enables sophisticated collaborative experiences, where users can directly interact with and modify agent-generated content, fostering a more dynamic and responsive application.
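Bidirectional state updates are commonly expressed as small patch operations against a shared state object. The applier below is a deliberately minimal sketch that handles only shallow, JSON-Patch-style `add`/`replace` operations; it is an illustration of the idea, not the protocol's actual state-delta semantics:

```typescript
// Minimal sketch of applying state deltas from either side (agent or
// frontend) to shared state. Supports only shallow JSON-Patch-style
// paths ("/theme"), an assumption made for brevity.
interface PatchOp {
  op: "add" | "replace";
  path: string; // e.g. "/theme"
  value: unknown;
}

function applyDelta(
  state: Record<string, unknown>,
  patches: PatchOp[],
): Record<string, unknown> {
  const next = { ...state }; // never mutate the previous state
  for (const p of patches) {
    const key = p.path.replace(/^\//, ""); // shallow paths only
    next[key] = p.value;
  }
  return next;
}
```

Because both sides exchange deltas against the same object, either the agent or the user can, for example, flip a theme preference and the other side observes the change.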

DEMONSTRATING AGENTIC CAPABILITIES WITH A TO-DO APP

A practical demonstration showcased a to-do application built with CopilotKit, highlighting key agentic features. The agent could add and manage to-do items, respond to queries about task status, and even perform front-end actions like changing the application's theme by updating React state. Notably, the demo illustrated a 'human-in-the-loop' scenario where the agent requested user permission before executing a significant action, such as marking all tasks as complete. This exemplifies how agents can integrate seamlessly with user workflows and learn from explicit user consent and interaction.
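The human-in-the-loop pattern from the demo boils down to gating a consequential action behind an asynchronous user decision. The helper below is a generic sketch of that gate, with hypothetical names, not the CopilotKit API:

```typescript
// Sketch of a human-in-the-loop gate (hypothetical helper, not a
// CopilotKit API): a consequential action runs only after the user
// explicitly approves it; a denial resolves to null.
async function withApproval<T>(
  description: string,
  confirm: (desc: string) => Promise<boolean>, // e.g. a UI dialog
  action: () => T,
): Promise<T | null> {
  const approved = await confirm(description);
  return approved ? action() : null;
}
```

The approve/deny signal is doubly useful: it protects the user in the moment, and (as the talk notes later) it is exactly the kind of feedback a self-improving agent can learn from.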

THE HORIZON: VOICE, STEERING, AND SELF-IMPROVEMENT

Looking ahead, CopilotKit anticipates 2026 as the year of voice agents, leveraging the intuitiveness of voice interaction and protocols like WebRTC for seamless integration. Agent steering is identified as critical for autonomous agents to maintain focus and achieve desired outcomes by preventing mid-run deviations. Perhaps most significantly, the focus is on self-improving agents powered by reinforcement learning from human feedback (RLHF). By analyzing signals from human edits, approvals, and denials, agents can continuously learn and refine their performance, leading to increasingly capable and intelligent AI systems.

Building Agentic Applications with CopilotKit & AG-UI

Practical takeaways from this episode

Do This

Focus on streaming for good UX in agentic applications.
Consider agent composition and handoffs for complex processes.
Understand the differences between the MCP, A2A, and AG-UI protocols.
Leverage AG-UI's event-based architecture for agent-UI communication.
Map generated data to existing components for static generative UI.
Utilize the `use_agent` hook for programmatic control and generative UI in headless applications.
Embrace AG-UI's support for static, open-ended, and declarative generative UI.
Implement state management for memory, persistence, and user preferences in agents.
Consider voice modality for more intuitive and continuous agent interaction.
Implement agent steering to keep autonomous agents on task.
Utilize signals from human edits and feedback for self-improving agents (RLHF).

Avoid This

Do not assume the request-response paradigm suffices; design for streaming from the start.
Do not underestimate complexity; agents require specialized handling.
Be aware of high coupling between backend and frontend for static UI if teams don't coordinate.
Avoid relying solely on open-ended UI which can lead to styling and security issues.
Do not ignore the constraints of declarative specs; ensure they fit needs.
Do not assume LLMs are stateful; agents abstract stateless LLMs into stateful systems.

Common Questions

What is CopilotKit? CopilotKit is an open-source framework for building AI co-pilots and user-facing agentic applications. It facilitates agent-user interactivity and is used by a wide range of organizations.
