AI Dev 25 x NYC | Nyah Macklin: How to Structure Context to Make Your Agents Smarter
Key Moments
Context engineering with knowledge graphs makes AI agents smarter by structuring information for complex reasoning.
Key Insights
AI agents often fail at complex tasks due to a lack of structured context, not model limitations.
Context engineering systematically provides agents with relevant information, in the correct format and at the right time.
Knowledge graphs encode facts and relationships, enabling multi-hop reasoning that simple vector search often misses.
Graph-based retrieval (Graph RAG) significantly improves accuracy and reduces resolution time compared to traditional RAG.
Techniques like RAG + hybrid search, memory management, and tool/function calling can enhance agent context.
Structuring data into knowledge graphs transforms tabular or network data into a navigable network of information for agents.
THE PROBLEM WITH GENERIC AGENTS
Real-world problems, like responding to a security incident with 'production is down,' highlight the limitations of generic AI agents. When faced with such alerts, a common agent might provide a basic, contextless checklist (e.g., 'Isolate the host'). This generic advice is insufficient as it doesn't specify which systems or services are critical, nor does it account for the nuances of an unfolding attack. Such a lack of specific, actionable context wastes valuable time and can lead to detrimental outcomes, underscoring the need for more intelligent, context-aware agents.
CONTEXT ENGINEERING DEFINED
Context engineering is a discipline focused on systematically providing AI models with all necessary information, tools, and instructions. Unlike prompt engineering, which manipulates language for desired outputs, context engineering builds dynamic systems that assemble complete and structured context for each AI invocation. This approach is crucial for AI engineers, shifting the focus from clever wording to comprehensive context design. Key areas for improving context relevance include RAG with hybrid search, memory management, structuring context, and leveraging tools/function calling.
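The assembly step described above can be sketched in a few lines. This is a minimal, illustrative sketch, not any specific framework's API: the function names, section headings, and incident-response data are all invented for the example.

```python
# Sketch of a context-assembly step: instead of hand-tuning a single prompt,
# the system gathers instructions, retrieved facts, session memory, and tool
# specs into one structured context for each model call.

def assemble_context(task, retrieved_facts, memory, tools):
    """Build a structured context block for one LLM invocation."""
    sections = [
        "## Instructions\nYou are an incident-response assistant.",
        "## Task\n" + task,
        "## Retrieved facts\n" + "\n".join(f"- {f}" for f in retrieved_facts),
        "## Session memory\n" + "\n".join(f"- {m}" for m in memory),
        "## Available tools\n" + "\n".join(f"- {t['name']}: {t['desc']}" for t in tools),
    ]
    return "\n\n".join(sections)

context = assemble_context(
    task="Triage the 'production is down' alert.",
    retrieved_facts=["payments-api depends on db-primary"],
    memory=["db-primary failed over at 09:12 UTC"],
    tools=[{"name": "get_service_graph", "desc": "list service dependencies"}],
)
```

The point of the design is that each section is produced by its own subsystem (retriever, memory store, tool registry), so improving context quality becomes an engineering problem rather than a wording problem.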
ADVANCING CONTEXT RETRIEVAL WITH KNOWLEDGE GRAPHS
Traditional Retrieval Augmented Generation (RAG) with vector databases is effective for single-hop questions but can retrieve off-target information, leading to hallucinations. Hybrid approaches combine vector semantics with symbolic filters or re-ranking to improve relevance. Memory management techniques like sliding windows or summarization help agents retain information over long sessions but risk losing older, crucial details. Structuring context, such as placing important information at the beginning of prompts and using consistent formats, also aids agent performance by ensuring models pay attention to key data.
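The hybrid-search idea above, blending vector semantics with symbolic signals, can be shown with a toy ranker. Real systems use embedding similarity and BM25; the scoring functions and documents here are stand-ins, assumed for illustration only.

```python
# Toy hybrid retrieval: blend a (stand-in) vector-similarity score with a
# keyword-overlap score, as hybrid RAG setups combine semantic and symbolic
# relevance signals.

def keyword_score(query, doc):
    """Fraction of query terms that appear in the document."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q)

def hybrid_rank(query, docs, vector_scores, alpha=0.5):
    """Rank documents by a weighted blend of vector and keyword scores."""
    scored = [
        (alpha * vector_scores[i] + (1 - alpha) * keyword_score(query, doc), doc)
        for i, doc in enumerate(docs)
    ]
    return [doc for _, doc in sorted(scored, reverse=True)]

docs = ["isolate the compromised host", "reset user passwords"]
# The second doc has a higher vector score, but the keyword signal pulls the
# on-target document back to the top.
ranked = hybrid_rank("isolate host", docs, vector_scores=[0.4, 0.9])
```

The `alpha` weight is the usual tuning knob: closer to 1 trusts the embedding space, closer to 0 trusts exact term matches.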
THE POWER OF KNOWLEDGE GRAPH AUGMENTED CONTEXT
Knowledge graphs uniquely encode facts and their relationships, enabling sophisticated multi-hop reasoning for AI agents. Unlike text chunks, graphs represent a network of connections that agents can traverse to link information and gather related context missed by simple vector searches. This relational structure allows agents to understand not just what is relevant to a query, but also what else is connected and necessary for a comprehensive answer, moving beyond simple entity recognition to complex relationship understanding.
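The multi-hop traversal described above can be made concrete with a tiny triple store. The incident-response entities and relations below are invented; the point is that the answer requires chaining three explicit relationships, which a similarity search over isolated text chunks would not perform.

```python
# A tiny knowledge graph as (subject, relation, object) triples, plus a
# one-relation hop that can be chained for multi-hop reasoning.
from collections import defaultdict

triples = [
    ("checkout-service", "DEPENDS_ON", "payments-api"),
    ("payments-api", "RUNS_ON", "host-42"),
    ("host-42", "LOCATED_IN", "us-east-1"),
]

graph = defaultdict(list)
for s, r, o in triples:
    graph[s].append((r, o))

def hop(entities, relation):
    """Follow one named relation outward from a set of entities."""
    return {o for e in entities for r, o in graph[e] if r == relation}

# Multi-hop question: "Which region hosts the service checkout depends on?"
step1 = hop({"checkout-service"}, "DEPENDS_ON")   # the dependency
step2 = hop(step1, "RUNS_ON")                     # the host it runs on
step3 = hop(step2, "LOCATED_IN")                  # the region of that host
```

Each hop narrows the context the agent carries forward, so the final answer is both precise and traceable back through the path of relationships.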
ARCHITECTURAL INTEGRATION OF KNOWLEDGE GRAPHS
Integrating knowledge graphs into agent architectures typically involves building the graph on top of existing data platforms (like data lakes or relational databases) to connect structured and unstructured data semantically. GenAI applications and orchestration layers then access this data contextually. An agent accessing the knowledge graph through protocols like MCP (Model Context Protocol) benefits from the graph's schema, which provides rich context and can even obviate the need for vector search due to increased accuracy and domain knowledge. This approach allows agents to reason more effectively by understanding explicit relationships.
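One way to picture the integration above is the graph exposed to the agent as a callable tool, in the style of an MCP server or a function-calling setup. This is a hedged sketch, not the MCP wire format: the tool name, schema shape, and graph contents are assumptions for the example. Note how the schema itself (the allowed relations) gives the model domain context before it ever issues a call.

```python
# Sketch of exposing a knowledge graph to an agent as a tool. The tool spec
# advertises which relations exist, so the model can plan traversals.

GRAPH = {
    ("checkout-service", "DEPENDS_ON"): ["payments-api"],
    ("payments-api", "RUNS_ON"): ["host-42"],
}

TOOL_SPEC = {
    "name": "graph_lookup",
    "description": "Follow a relation from an entity in the service graph.",
    "parameters": {
        "entity": {"type": "string"},
        "relation": {"enum": ["DEPENDS_ON", "RUNS_ON"]},  # schema doubles as context
    },
}

def graph_lookup(entity, relation):
    """Resolve one (entity, relation) edge; empty list if none exists."""
    return GRAPH.get((entity, relation), [])

# An agent that has seen TOOL_SPEC can issue calls like:
result = graph_lookup("checkout-service", "DEPENDS_ON")
```

Because every answer comes from an explicit edge, results are grounded in the graph rather than in retrieved prose, which is what can reduce the reliance on vector search.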
BENEFITS AND REAL-WORLD IMPACT OF GRAPH RAG
Using knowledge graphs as context significantly enhances AI agents. Graph-based RAG consistently outperforms traditional RAG in accuracy across various benchmarks: Zep reported an 18% accuracy improvement in agent memory alongside a 90% reduction in latency, and LinkedIn saw ticket resolution times drop from 40 hours to 15. In specific use cases, such as identifying asset managers exposed to a lithium shortage or determining a patient's care plan for emphysema, graph-based approaches return concrete, traceable, and precise information where traditional methods give generic or incomplete responses.
ENABLING REASONING AND EXPLAINABILITY
While vector or keyword search retrieves fragments and entity modeling identifies object types, it is the explicit modeling of relationships within knowledge graphs that allows AI agents to reason. For instance, in contract analysis, a graph can answer complex questions like identifying US companies with perpetual non-compete clauses and the parties involved. This capability is crucial for complex queries. Knowledge graphs improve accuracy, enhance explainability by making relationships explicit, and future-proof AI systems by providing a robust, structured foundation for advanced reasoning and decision-making.
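The contract-analysis example above can be sketched with an in-memory model. The companies, contracts, and clause terms are invented for illustration; in practice this would be a Cypher-style query over a graph database, but the chaining of company, contract, and clause is the same.

```python
# Toy version of the contract-analysis query: find US companies party to a
# contract containing a perpetual non-compete clause, plus the counterparties.

companies = {"AcmeCo": "US", "GlobexLtd": "UK"}
contracts = [
    {"parties": ["AcmeCo", "GlobexLtd"],
     "clauses": [{"type": "non-compete", "term": "perpetual"}]},
    {"parties": ["GlobexLtd", "AcmeCo"],
     "clauses": [{"type": "confidentiality", "term": "5y"}]},
]

def us_companies_with_perpetual_noncompete():
    """Chain company -> contract -> clause via explicit relationships."""
    hits = []
    for contract in contracts:
        has_clause = any(
            cl["type"] == "non-compete" and cl["term"] == "perpetual"
            for cl in contract["clauses"]
        )
        if has_clause:
            for party in contract["parties"]:
                if companies.get(party) == "US":
                    others = [p for p in contract["parties"] if p != party]
                    hits.append((party, others))
    return hits
```

The answer carries its evidence with it: each hit names the matching company and the counterparties on the same contract, which is the explainability benefit the section describes.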
RESEARCH AND RESOURCES FOR FURTHER LEARNING
Several research papers and resources document the efficacy of knowledge graph augmented context. A study on customer service question answering found a 28.6% decrease in per-issue resolution time after integrating knowledge graphs into RAG pipelines. Microsoft Research has also published work on the benefits of graph RAG over traditional RAG for query-focused summarization. Further techniques, such as using neighborhood context as a property for retrieval, are documented as well. These resources, collected at sites like graphrag.com/appendices, provide valuable starting points for practitioners looking to implement these context engineering strategies.
Common Questions
What is context engineering?
Context engineering is a discipline focused on systematically providing AI models with all the relevant information, tools, and instructions in the correct format and at the right time to accomplish a specific task. It is about building dynamic systems that assemble structured context for each LLM invocation.