AI Dev 25 x NYC | Stefano Pasquali: Building Trustworthy AI for Finance
Key Moments
Building trustworthy AI in finance requires a sovereign architecture combining knowledge graphs, LLMs, and agentic reasoning for transparency and accountability.
Key Insights
The majority of AI pilot projects in finance fail due to a lack of trust in AI outcomes for mission-critical applications.
A shift from AI hype to maturity necessitates treating data and governance as first-class citizens alongside LLMs.
Sovereign AI architecture, integrating knowledge graphs, LLMs, and agentic reasoning, is crucial for financial AI transparency and auditability.
Financial AI must prioritize regulation and risk management, demanding explainable and controllable AI models.
LLMs are a tool, not the sole solution; traditional ML models and knowledge graphs are essential components for robust financial AI.
Internalized, integrated AI platforms are vital for finance, ensuring security, compliance, and control over mission-critical use cases.
THE CHALLENGE OF AI ADOPTION IN FINANCE
The financial industry faces significant hurdles in adopting AI, with less than 10% of pilot projects reaching full-scale production. This low adoption rate stems primarily from a lack of trust in AI-generated outcomes for mission-critical applications. Past approaches have been hampered by vendor fatigue, the cost of large-scale models, and a tendency towards 'shadow AI' without proper governance. This highlights a critical need to move beyond the hype and embrace maturity, treating data and governance with the same, if not greater, importance as LLMs themselves.
THE VISION FOR SOVEREIGN AI ARCHITECTURE
Drawing from a childhood fascination with physics and graph theory, the speaker proposes a 'Sovereign AI' architecture. This unified framework combines knowledge graphs, Large Language Models (LLMs), and agentic reasoning to create a system that is transparent, auditable, and trustworthy. The goal is to build an ecosystem where innovation meets accountability, moving beyond mere prediction to the ability to prove and explain AI-driven decisions, which is paramount in a highly regulated industry like finance.
INTEGRATING KEY COMPONENTS FOR TRUST
The proposed architecture emphasizes a multi-pillar approach. Knowledge graphs are central for unifying structured and unstructured data, while agentic reasoning enables complex workflows and decision-making. Traditional machine learning models and existing APIs remain crucial, with LLMs acting as a powerful application layer rather than a complete replacement. This composite approach acknowledges that financial tasks like relative value calculations cannot solely rely on LLMs, necessitating integration with established analytical tools and data sources.
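The composite approach described above can be sketched in a few lines: an LLM serves as the application layer, but quantitative tasks like relative value calculations are routed to deterministic analytics rather than answered by the model. All names here (the router, the `relative_value` function, the task keys) are illustrative assumptions, not the speaker's actual implementation.

```python
from dataclasses import dataclass
from typing import Callable, Dict

def relative_value(spread_a: float, spread_b: float) -> float:
    """Deterministic analytic: a spread differential, not an LLM guess."""
    return spread_a - spread_b

@dataclass
class CompositeRouter:
    analytics: Dict[str, Callable]     # traditional models / existing APIs
    llm: Callable[[str], str]          # LLM as the application layer

    def handle(self, task: str, **kwargs):
        if task in self.analytics:     # trusted quantitative path
            return self.analytics[task](**kwargs)
        return self.llm(task)          # language / synthesis path

router = CompositeRouter(
    analytics={"relative_value": relative_value},
    llm=lambda prompt: f"[LLM response to: {prompt}]",
)
print(router.handle("relative_value", spread_a=1.25, spread_b=0.75))  # 0.5
```

The design point is that the LLM never touches the numeric calculation; it only handles tasks with no registered analytic, which keeps the quantitative path auditable.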
GOVERNANCE AS A FIRST-CLASS CITIZEN
A core tenet of this vision is a robust governance layer, akin to an 'MRI machine' for AI processes. It scans every step of a workflow: data retrieval, reasoning, hallucination checks, and prompt-injection detection. By scoring each element of the process and exposing those scores, users gain a quantifiable measure of how much to trust the AI's output. This in turn enables policy, such as automating execution only for highly trusted recommendations.
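A minimal sketch of such a governance scorecard might look like the following: each pipeline step receives a score in [0, 1], a weakest-link aggregate caps overall trust, and a policy gates automation on that aggregate. The step names, aggregation rule, and thresholds are assumptions for illustration, not the speaker's specification.

```python
# Weakest-link aggregation: one failing check caps the overall trust score.
def trust_score(step_scores: dict) -> float:
    return min(step_scores.values())

# Policy layer: automate only highly trusted recommendations.
def policy(score: float, auto_threshold: float = 0.9) -> str:
    if score >= auto_threshold:
        return "auto-execute"
    if score >= 0.6:
        return "human review"
    return "reject"

scores = {
    "retrieval": 0.95,
    "reasoning": 0.92,
    "hallucination_check": 0.97,
    "prompt_injection_check": 0.99,
}
print(policy(trust_score(scores)))  # auto-execute
```

Using `min` rather than an average reflects the risk-management framing: a single weak step (say, a suspected prompt injection) should block automation regardless of how well the other steps scored.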
THE IMPORTANCE OF INTERNALIZED PLATFORMS
The speaker argues against relying solely on centralized, external models for mission-critical financial use cases. Instead, a fully internalized, integrated AI environment is advocated. This approach ensures security, control, and compliance, allowing operations to continue even without external connectivity. While acknowledging the value of external tools for specific tasks, the focus for high-stakes financial decisions remains on platforms that leverage internal know-how and adhere to regulatory obligations.
ADVANCEMENTS IN KNOWLEDGE GRAPHS AND GOVERNANCE
Significant progress is being made in leveraging knowledge graphs, which are becoming more accessible to build with LLMs, though quality control remains key. New methodologies are being developed to clean and boost graph quality, enabling graphs both to supply features for advanced analytics and to govern LLM behavior. In addition, specific 'domain guard' models are being trained for governance tasks such as model surveillance, with the aim of creating a powerful ecosystem for ensuring AI reliability and safety in financial applications.
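A quality pass over LLM-extracted graph data could look like the sketch below: drop triples under a confidence floor and deduplicate, keeping the highest-confidence copy of each edge. The triple schema, confidence values, and threshold are illustrative assumptions.

```python
# Hypothetical cleaning pass for LLM-extracted knowledge-graph triples.
def clean_triples(triples, min_conf=0.8):
    best = {}
    for subj, rel, obj, conf in triples:
        if conf < min_conf:
            continue                      # filter low-confidence extractions
        key = (subj, rel, obj)
        best[key] = max(best.get(key, 0.0), conf)
    return [(s, r, o, c) for (s, r, o), c in best.items()]

raw = [
    ("AcmeCorp", "issues", "BondX", 0.95),
    ("AcmeCorp", "issues", "BondX", 0.85),     # duplicate, lower confidence
    ("AcmeCorp", "acquired", "BetaInc", 0.40), # below floor, dropped
]
print(clean_triples(raw))  # [('AcmeCorp', 'issues', 'BondX', 0.95)]
```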
FUTURE DIRECTIONS: CAUSAL REASONING AND AGENTIC PRODUCTIVITY
Future development is focused on next-generation analytics that move beyond correlation to causal reasoning. By merging knowledge graphs with LLM-based reasoning, the aim is to understand and trace the 'why' behind AI-derived insights. The ultimate goal is to enhance trader and portfolio manager productivity through multi-agent systems that provide a trusted, interpretable, and traceable environment for data retrieval, API interaction, and trade memo generation.
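The traceability goal for such multi-agent systems can be illustrated with a toy chain in which every agent records its input and output in an audit trail, so a generated trade memo can be traced back through each step. The agent roles and outputs below are illustrative assumptions, not the actual system.

```python
# Toy traceable agent chain: each step appends to an audit trail.
def run_pipeline(query: str):
    trail = []

    def step(agent, fn, payload):
        out = fn(payload)
        trail.append({"agent": agent, "input": payload, "output": out})
        return out

    data = step("retriever", lambda q: f"data for {q}", query)
    insight = step("analyst", lambda d: f"insight from {d}", data)
    memo = step("memo_writer", lambda i: f"TRADE MEMO: {i}", insight)
    return memo, trail

memo, trail = run_pipeline("EURUSD carry")
print([t["agent"] for t in trail])  # ['retriever', 'analyst', 'memo_writer']
```

Because the trail captures every intermediate input and output, a reviewer can interrogate the 'why' behind the final memo rather than receiving only the end product.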
Common Questions
What is the primary challenge to AI adoption in finance?
The primary challenge is a lack of trust in AI outcomes for mission-critical applications, which keeps pilots from scaling to production and produces significant proof-of-concept fatigue from unfulfilled pilots.