NVIDIA CEO Jensen Huang GTC 2026 Full Keynote

Yahoo Finance | Mar 16, 2026 | 141 min video
TL;DR

NVIDIA's GTC 2026: AI factories, "tokens as commodity," new architectures, & agent revolution.

Key Insights

1. Launched the AI factory concept, positioning "tokens" as the new commodity and driving immense compute demand.
2. Introduced next-generation architectures (Grace Blackwell, Vera Rubin) and hybrid AI processing with Groq.
3. Positioned OpenClaw as the "operating system" for agentic AI, emphasizing security and enterprise readiness.
4. Expanded AI's reach into physical robotics and autonomous vehicles with advanced simulation and foundation models.
5. Emphasized NVIDIA's vertically integrated yet horizontally open approach, bolstered by extensive CUDA-X libraries and an expanding ecosystem.
6. Highlighted the shift from training to inference as the primary driver of AI computing demand and revenue.

THE RISE OF TOKENS AND AI FACTORIES

Jensen Huang opened GTC 2026 by framing intelligence creation around "tokens" as the fundamental building blocks of AI. He introduced the concept of 'AI Factories' – a new type of industrialized production for these tokens. This vision positions tokens as the new commodity, driving an unprecedented demand for compute power and signaling a paradigm shift in how computing is consumed and monetized. The entire industry is now focused on optimizing these token production factories.

ARCHITECTURAL INNOVATIONS AND HYBRID PROCESSING

NVIDIA unveiled its next-generation computing platforms, including the Grace Blackwell and Vera Rubin systems, engineered for extreme performance and efficiency in AI workloads. A significant development was the integration with Groq's technology, creating a hybrid processing approach. This combines NVIDIA's strengths in high-throughput computation with Groq's specialized inference capabilities, particularly for critical, low-latency tasks like coding and agentic AI operations, aiming to push performance boundaries further.
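The hybrid split described above reduces to a routing decision: latency-critical requests (interactive coding, agent steps) go to a low-latency inference backend, while bulk work goes to a high-throughput one. A minimal sketch of that dispatch logic; the backend names and the 200 ms threshold are purely illustrative assumptions, not details from the keynote:

```python
from dataclasses import dataclass

@dataclass
class Request:
    prompt: str
    max_latency_ms: int  # client's latency budget for the first token

def route(req: Request) -> str:
    """Send latency-critical work (e.g. interactive coding agents) to a
    low-latency backend, everything else to a throughput-optimized one.
    The 200 ms cutoff is an illustrative threshold, not NVIDIA's."""
    return "low-latency-lpu" if req.max_latency_ms < 200 else "high-throughput-gpu"

print(route(Request("fix this bug", max_latency_ms=50)))      # latency-critical path
print(route(Request("summarize corpus", max_latency_ms=5000)))  # bulk path
```

In a real serving stack the router would also weigh queue depth and context length, but the economic point is the same: reserve the scarce low-latency capacity for requests that pay for it.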

OPENCLAW: THE OPERATING SYSTEM FOR AGENTS

The GTC keynote introduced OpenClaw, presented as the operating system for agentic AI. This open-source framework standardizes the creation and deployment of AI agents, enabling them to perceive, reason, and act across digital and physical domains. NVIDIA is investing heavily in this ecosystem, offering the NeMo Claw reference design for enterprise readiness, security, and privacy, and positioning OpenClaw as the next major platform shift, comparable to Linux or the internet.
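At its core, an agent "operating system" of the kind described hosts a perceive-reason-act loop with tool access. A toy sketch of that loop in Python (every name here is hypothetical; this is not the OpenClaw API):

```python
def run_agent(goal, tools, llm, max_steps=5):
    """Minimal perceive-reason-act loop: the model picks a tool,
    observes the result, and iterates until it declares done."""
    observations = []
    for _ in range(max_steps):
        action, arg = llm(goal, observations)   # "reason": choose next action
        if action == "done":
            return arg
        result = tools[action](arg)             # "act": invoke the tool
        observations.append((action, result))   # "perceive": record the outcome
    return None

# Toy policy standing in for a model: read a file, then finish with its length.
def toy_llm(goal, obs):
    if not obs:
        return ("read", "notes.txt")
    return ("done", len(obs[-1][1]))

tools = {"read": lambda path: "token factories"}  # stub tool
print(run_agent("summarize notes", tools, toy_llm))  # -> 15
```

Enterprise concerns such as the policy guardrails and privacy router attributed to NeMo Claw would sit between the `llm` decision and the `tools[action]` call, vetting each action before it executes.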

PHYSICAL AI, ROBOTICS, AND AUTONOMOUS SYSTEMS

NVIDIA is extending its AI revolution into the physical world with significant advances in robotics and autonomous vehicles. Leveraging sophisticated simulation tools like Isaac Lab and Newton, the company is enabling the training of physically embodied AI. The integration of foundation models and the Alpamayo platform is pushing autonomous driving toward its 'ChatGPT moment,' with partnerships announced for robotaxi deployment, marking a new era of physical AI.

VERTICAL INTEGRATION AND HORIZONTAL OPENNESS

Huang reiterated NVIDIA's strategy as vertically integrated yet horizontally open. This approach involves deep understanding and optimization across hardware, software libraries (like CUDA-X), and application domains, ensuring accelerated computing is tailored for every industry. Simultaneously, NVIDIA maintains an open ecosystem, integrating its technologies with cloud providers, system makers, and software partners to make its platforms accessible and universally applicable.

THE INFERENCE INFLECTION AND THE FUTURE OF COMPUTING

A central theme was the 'inference inflection,' highlighting that AI's primary computational demand has shifted from training to inference. This shift drives the exponential growth in compute orders, with NVIDIA forecasting over $1 trillion in demand through 2027. The focus is on maximizing 'tokens per watt,' ensuring data centers, now considered token factories, operate at peak efficiency to meet this escalating demand and unlock new revenue streams.
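The "tokens per watt" framing translates directly into revenue per unit of power: at fixed facility power, a performance-per-watt gain multiplies token output and hence revenue one-for-one. A back-of-envelope sketch; the gigawatt figure, price, and baseline efficiency are illustrative assumptions, not keynote numbers:

```python
def annual_token_revenue(power_watts, tokens_per_joule, price_per_million):
    """Revenue per year for a facility running flat-out at the given power."""
    seconds = 365 * 24 * 3600
    tokens = power_watts * tokens_per_joule * seconds  # watts x seconds = joules
    return tokens / 1e6 * price_per_million

base = annual_token_revenue(1e9, 10, 3.0)         # 1 GW facility, baseline efficiency
better = annual_token_revenue(1e9, 10 * 35, 3.0)  # same power, 35x tokens per watt
print(better / base)  # efficiency gain maps 1:1 onto revenue -> 35.0
```

This is why the summary calls data centers "token factories": once power is the binding constraint, tokens per joule is the whole unit-economics story.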

ADVANCEMENTS IN DATA PROCESSING AND CONNECTIVITY

NVIDIA introduced foundational libraries such as cuDF for structured data and cuVS for unstructured (vector) data, accelerating enterprise data processing. The company is also innovating in connectivity with technologies like NVLink 72 and Spectrum-X co-packaged optics, crucial for building massive AI supercomputers. A new class of data-center CPUs and storage solutions is being developed to handle the intense demands of AI agents and generative models.
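NVIDIA's dataframe acceleration deliberately mirrors the pandas API, so existing dataframe code can move to the GPU largely unchanged. The sketch below uses pandas itself so it runs anywhere; with RAPIDS installed, swapping the import to cuDF is, to a first approximation, the only change (the data and column names are made up for illustration):

```python
import pandas as pd  # with RAPIDS installed: `import cudf as pd`

df = pd.DataFrame({
    "region": ["EU", "EU", "US", "US"],
    "tokens_served": [120, 80, 200, 100],
})

# A typical warehouse-style aggregation of the kind the GPU library accelerates.
per_region = df.groupby("region")["tokens_served"].sum()
print(int(per_region["US"]))  # 300
```

The API-compatibility design choice matters: it lets the enterprise platforms named in this section adopt GPU acceleration without rewriting their existing pandas- and SQL-based pipelines.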

THE AI FACTORY ECOSYSTEM AND DIGITAL TWINS

To manage the complexity of building and operating AI factories, NVIDIA launched the DSX platform. This leverages Omniverse for creating digital twins of AI factories, enabling simulation, design, and dynamic power management. Collaborations with partners like Siemens and PTC are crucial for this ecosystem, ensuring optimal energy efficiency and maximum token throughput by minimizing wasted power and optimizing operations across the entire infrastructure.

OPEN MODELS AND SOVEREIGN AI INITIATIVES

NVIDIA is fostering a diverse AI ecosystem through its 'Open Models' initiative, offering millions of open-source models across domains like language, biology, and physics. The company is also actively working with countries to build 'sovereign AI' capabilities, customizing foundation models like Nemotron to meet specific regional needs. This democratizes AI development and allows for specialized intelligence tailored to unique industry requirements.

THE CLOUD AND ENTERPRISE ADOPTION

NVIDIA's strategy heavily involves deep integration with all major cloud service providers (AWS, Azure, Google Cloud), acting as a customer acquisition engine by enabling accelerated workloads on their platforms. In the enterprise, the shift is toward agentic systems that will transform traditional IT into 'Agentic as a Service' companies, with every software company needing an 'OpenClaw strategy' to leverage AI agents for enhanced productivity and customer offerings.

NVIDIA GTC 2026 Key Takeaways for AI Factory Optimization

Practical takeaways from this episode

Do This

Leverage NVIDIA's CUDA-X libraries for domain-specific acceleration across all AI lifecycle phases.
Prioritize accelerated computing platforms to overcome Moore's Law limitations and continuously reduce computing costs.
Adopt agentic systems and an OpenClaw strategy for significant productivity gains in enterprise IT.
Utilize NVIDIA's reference designs like NeMo Claw for secure and private deployment of AI agents in corporate networks.
Invest in NVIDIA's comprehensive AI infrastructure, including Grace Blackwell and Vera Rubin systems, for optimal token factory effectiveness.
Integrate Groq LPUs with Vera Rubin for workloads requiring extremely high-speed, low-latency token generation.
Explore NVIDIA's Open Models initiative to fine-tune world-class foundation models for specialized domains and sovereign AI needs.
Employ Omniverse and NVIDIA DSX for virtual design, simulation, and operation of gigawatt-scale AI factories to maximize throughput and energy efficiency.
Prepare for physical AI and robotics rollout, utilizing NVIDIA's simulation platforms like Isaac Lab and Newton for synthetic data generation and policy training.

Avoid This

Rely solely on traditional CPU-based data processing systems as they cannot keep pace with AI demands.
Ignore the token economics and the need for optimal performance per watt in your AI data centers, as it directly impacts revenue.
Underestimate the security and privacy implications of deploying agentic systems in corporate networks without proper safeguards like Open Shell and NeMo Claw.
Assume a single AI model can serve every industry; customize models for domain-specific and sovereign AI requirements.
Neglect the continuous advancement of software and algorithms; continuous optimization is key to long-term cost reduction and performance gains.

NVIDIA Inference Performance per Watt Evolution (Hopper vs. Blackwell)

Data extracted from this episode

| Architecture | Performance vs. Expected (Moore's Law basis) | Tokens per Watt (relative) | Cost per Token | Key Innovations |
|---|---|---|---|---|
| Hopper H200 (previous generation) | 1.5x (expected) | Baseline | Higher | FP8 Transformer Engine, NVLink 4 |
| Grace Blackwell NVLink 72 (current generation) | 35-50x (actual, vs. 1.5x expected) | 35-50x higher | Lowest in the world | NVLink 72, NVFP4, Dynamo, TensorRT-LLM |

AI Model Tiers and Revenue Potential per Million Tokens

Data extracted from this episode

| Tier | Throughput/Speed | Input Context Length | Price per Million Tokens | Revenue Benefit (relative) |
|---|---|---|---|---|
| Free Tier | High throughput, low speed | 100,000 tokens | $0 | Attracts more customers |
| First Tier | Medium throughput, medium speed | Increased | $3 | Baseline monetization |
| Next Tier | Higher throughput, higher speed | Larger | $6 | Increased value |
| High Tier | High performance | Millions of tokens | $45 | Significant monetization |
| Premium Tier (future) | Incredibly high speed | Very long input (research) | $150 | Maximized value for critical paths |
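The tier table implies a simple revenue model: revenue per million tokens scales with the price of the tier that a workload's speed and context requirements force it into. A sketch using the table's prices (the tier-selection function itself is an illustrative assumption):

```python
# Prices per million tokens, taken from the tier table above.
TIER_PRICE = {"free": 0, "first": 3, "next": 6, "high": 45, "premium": 150}

def revenue(millions_of_tokens: float, tier: str) -> float:
    """Revenue for serving the given token volume at a tier's price."""
    return millions_of_tokens * TIER_PRICE[tier]

# The same token volume monetizes very differently by tier:
print(revenue(10, "first"))    # 30
print(revenue(10, "high"))     # 450
print(revenue(10, "premium"))  # 1500
```

This is the arithmetic behind the "tokens as commodity" thesis: the commodity price is not flat, so capacity that can serve the fast, long-context tiers earns an order of magnitude more per token.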

NVIDIA AI Architectures and Compute Growth (2016-Future)

Data extracted from this episode

| Year | Architecture/System | Key Feature | Compute | GPUs/Nodes | Compute Gain (vs. 2016) |
|---|---|---|---|---|---|
| 2016 | DGX-1 (Pascal) | First computer for deep learning | 170 teraflops | 8 GPUs, 1st-gen NVLink | 1x |
| — | Volta | NVLink Switch, all-to-all bandwidth | N/A | 16 GPUs | N/A |
| 2020 | DGX A100 SuperPOD | Scale-up/scale-out, NVLink 3 | N/A | N/A | N/A |
| — | Hopper | FP8 Transformer Engine, generative AI era, NVLink 4 | N/A | N/A | N/A |
| — | Blackwell | NVLink 72 AI supercomputing system architecture | N/A | 72 GPUs | N/A |
| — | Vera Rubin | Agentic AI: NVLink 72, CPU, storage, networking, security | 3.6 exaflops | 72 GPUs | 40 million x (over 10 years) |
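The table's headline claim of 40 million times more compute in ten years corresponds to a sustained annual multiplier that one line of arithmetic recovers: the tenth root of 40,000,000 is about 5.8, i.e. compute roughly sextupling every year, far beyond any Moore's Law cadence.

```python
# Annualized growth implied by "40 million x over 10 years".
annual = 40_000_000 ** (1 / 10)
print(round(annual, 1))  # ~5.8x per year
```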

Common Questions

How does NVIDIA position itself strategically?

NVIDIA operates as a platform company with vertically integrated development (from chips to algorithms) and horizontally open integration (working with diverse partners). Their strategy focuses on accelerating domain-specific applications across various industries using their CUDA-X libraries and a growing installed base of GPUs.

Topics

Mentioned in this video

Software & Apps
CUDA-X

NVIDIA's platform encompassing numerous libraries and algorithms for accelerated computing, celebrating its 20th anniversary. It's integrated into every ecosystem.

SQL

A declarative language to query data, invented by IBM, forming a foundation of modern enterprise computing.

Bing Search

Microsoft's search engine, which NVIDIA helps accelerate as part of their partnership with Azure.

PyTorch

An open-source machine learning framework; Huang quipped that NVIDIA is the only accelerator in the world that is 'incredible' on it.

cuOpt

A CUDA-X library for decision optimization.

cuDSS

A CUDA-X library for direct sparse solvers.

GitHub Copilot

An AI code completion tool, mentioned as 'Codex' in the transcript, assisting software engineers at NVIDIA.

Velox

A high-performance C++ open-source data processing library mentioned for structured data.

Google Cloud

Google's suite of cloud computing services, mentioned for BigQuery and Vertex AI acceleration, as well as a partnership with Snapchat.

cuDF

NVIDIA's foundational library for accelerating structured data processing (dataframes), integrated into platforms like IBM watsonx.

cuAero

A CUDA-X library for computational aerodynamics.

cuLitho

A CUDA-X library for computational lithography.

Parabricks

A CUDA-X library for genomics, accelerating genomic analysis.

Cursor

An AI-powered code editor, used by NVIDIA software engineers for assistance in coding.

TensorRT-LLM

An inference-optimization library mentioned alongside Dynamo as part of NVIDIA's efforts to maximize AI performance.

Pandas

A data analysis and manipulation library for Python, mentioned as a platform for handling structured data.

IBM watsonx.data

IBM's data platform, whose SQL engines are accelerated by NVIDIA GPU computing libraries, delivering significant speedup and cost reduction for enterprises like Nestle.

Warp

An NVIDIA software for differentiable physics, co-developed with Disney and DeepMind for robotics simulation.

PTC Windchill PLM

A Product Lifecycle Management (PLM) software managing SIM-ready assets from NVIDIA and equipment manufacturers in the DSX platform.

Siemens Star-CCM+

A leading simulation tool used in the NVIDIA DSX platform for testing external thermals of AI factories.

Alpamayo

NVIDIA's AI for autonomous vehicles, enabling reasoning and safe operation across scenarios.

LangChain

A framework for developing applications powered by language models, noted for a billion downloads and for creating custom agents; joining the Nemotron coalition.

DLSS 5

NVIDIA's next-generation graphics technology, using neuro rendering to fuse 3D graphics with generative AI for realistic and controllable content.

Amazon EMR

A cloud big data platform by Amazon Web Services, mentioned for processing structured data.

BigQuery

Google Cloud's serverless data warehouse, accelerated by NVIDIA for Google Cloud customers.

AWS

NVIDIA's first cloud partner, accelerating services like EMR, SageMaker, and Bedrock, and facilitating the deployment of OpenAI on AWS.

Aerial

An NVIDIA platform or library for AI RAN (Radio Access Network), significant for telecommunications.

Claude

An agentic model which revolutionized software engineering by its ability to read files, code, compile, test, evaluate, and iterate.

Kubernetes

An open-source system for automating deployment, scaling, and management of containerized applications; it enabled the mobile-cloud era and is compared to OpenClaw as a foundational platform.

Nemotron

NVIDIA's reasoning models for language, visual understanding, RAG, safety, and speech, part of its Open Models initiative.

Microsoft / Azure

Microsoft's cloud computing service, mentioned for its Fabric platform and in partnership with NVIDIA for confidential computing and AI foundry acceleration.

cuVS

NVIDIA's foundational library for accelerating unstructured data processing (vector search, semantic data, AI data).

Dell AI Data Platform

A platform integrating NVIDIA's cuDF and cuVS libraries to accelerate data processing for the AI era.

Vertex AI

Google's unified machine learning platform, which NVIDIA accelerates.

Palantir Ontology Platform

A brand new type of AI platform created in partnership with NVIDIA and Dell, capable of on-premise and air-gapped deployments.

cuEquivariance

A CUDA-X library for geometry-aware neural networks.

OpenClaw

An open-source project by Peter Steinberger, described as the 'operating system of agent computers' or 'personal agents,' revolutionizing enterprise IT.

NeMo Claw

NVIDIA's reference design for an enterprise-ready, secure, and private OpenClaw stack with policy guardrails and a privacy router.

cuDNN (CUDA Deep Neural Network)

One of the most important libraries created by NVIDIA; it revolutionized artificial intelligence and triggered the 'big bang' of modern AI.

ChatGPT

An AI chatbot by OpenAI that started the generative AI era, capable of understanding, perceiving, translating, and generating unique content.

Relexes

Mentioned as an important internal AI consumption workload, shifting from traditional recommender systems to deep learning and large language models.

Omniverse

NVIDIA's platform designed to hold the world's digital twins, enabling virtual design and simulation of AI factories.

Dassault Systèmes 3DEXPERIENCE

A platform for model-based systems engineering, used in conjunction with NVIDIA DSX for AI factory design.

Procore

A construction management software, used to virtually commission AI factories through NVIDIA DSX to ensure accelerated construction time.

Linux

An open-source operating system, compared to OpenClaw for its foundational impact on computing.

BioNeMo

NVIDIA's open models for biology, chemistry, and molecular design.

Perplexity AI

A multimodal agentic system (AI search engine) recommended for its quality, joining the Nemotron coalition.

NVIDIA DSX

NVIDIA's new AI factory platform, an Omniverse digital twin blueprint for designing and operating AI factories for maximum token throughput, resilience, and energy efficiency.

Earth-2

NVIDIA's models for weather and climate forecasting, rooted in AI physics.

Open Shell

Technology integrated into OpenClaw to make it enterprise-secure and private-capable for sensitive corporate networks.

Isaac Lab

NVIDIA's open-source platform for robot training and evaluation in simulation, used by various companies for synthetic data generation and policy training.

Products
GeForce

NVIDIA's consumer GPU brand, which pioneered programmable shaders 25 years ago and laid the groundwork for CUDA.

RTX

NVIDIA's architecture for modern computer graphics, introduced 8-10 years ago, that fused programmable shading with hardware ray tracing and AI.

NVIDIA A100

An NVIDIA GPU supercomputer, the first of which was installed at Azure, leading to the partnership with OpenAI.

Hopper

NVIDIA's previous GPU architecture (e.g., H200), which revolutionized computing with its FP8 Transformer Engine and NVLink 4, but is now superseded by Blackwell and Rubin.

DGX Cloud

An NVIDIA supercomputing capability, representing billions of dollars in investment, used to optimize kernels and the complete stack for inference.

Spectrum-X

NVIDIA's platform for co-packaged optics (CPO) Ethernet switches, increasing energy efficiency and resilience in AI factories.

LP40

A new LPU (Language Processing Unit) chip, part of the Feynman generation, developed jointly by the NVIDIA and Groq teams.

NVIDIA Thor

NVIDIA's automotive superchip, mentioned as being radiation-approved for space applications like satellites.

NVLink-72

A re-architected system by NVIDIA, integrating 72 GPUs, which was a giant bet but delivered significant improvements in inference performance and energy efficiency.

Hopper GPU

The first GPU with the FP8 Transformer engine, which launched the generative AI era, using NVLink 4 and Bluefield 3 DPUs.

Rubin

NVIDIA's future GPU architecture, projected to generate five times more revenue than Blackwell for AI factories.

Rosa

A new CPU, short for Rosslyn, part of the Feynman generation, connecting with BlueField 5 and the SuperNIC CX10.

DGX A100 SuperPOD

The first GPU supercomputer combining scale-up and scale-out architecture, using NVLink 3 and Quantum InfiniBand.

SuperNIC CX10

NVIDIA's next-generation SuperNIC, connecting with Bluefield 5 and the new Rosa CPU.

Vera Rubin Space 1

A new computer being developed by NVIDIA and partners to build data centers in space, designed to handle cooling challenges in a vacuum.

IBM System/360

Introduced by IBM 60 years ago, it was the first modern platform for general-purpose computing, launching the computing era.

DGX-1

Introduced in 2016, the world's first computer designed for deep learning, featuring eight Pascal GPUs connected with first-generation NVLink.

Groq LP30

Groq's LPU chip, which is integrated with Vera Rubin systems, manufactured by Samsung, and in full production, offering high-speed token acceleration.

Kyber Rack

A new rack system for Rubin Ultra, enabling connection of 144 GPUs in one NVLink domain, designed for vertical integration.

Kyber CPO

NVIDIA's next-generation scaling solution for Kyber racks, utilizing co-packaged optics for scale-up, alongside copper options.

Feynman

NVIDIA's future GPU architecture after Rubin, featuring a new GPU, a new LPU (LP40), and a new CPU (Rosa).

BlueField 5

NVIDIA's next-generation Data Processing Unit, connecting the new Rosa CPU with the SuperNIC CX10.

Dynamo

A new operating system for AI factories, invented by NVIDIA, that enables the disaggregation of inference workloads between different processors.

BlueField-3 DPU

NVIDIA's Data Processing Unit, integrated into the Hopper and Blackwell architectures for improved networking and security.

Grace CPU

A brand new CPU designed by NVIDIA for extremely high single-threaded performance, data output, data processing, and energy efficiency, using LPDDR5.

People
Alex Krizhevsky

Pioneer in deep learning, whose work, enabled by GeForce, demonstrated the GPU's potential for accelerating deep learning.

Andrew Ng

Prominent AI researcher, whose work was enabled by GeForce GPUs in accelerating deep learning.

Geoffrey Hinton

Considered the 'Godfather of AI' and a pioneer in deep learning, whose work was enabled by GeForce GPUs.

Jensen Huang

Founder and CEO of NVIDIA, giving the keynote speech at GTC. He emphasizes NVIDIA's strategy and technological advancements.

Ilya Sutskever

Co-founder of OpenAI and a pioneer in deep learning, whose work was enabled by GeForce GPUs.

Blackwell

NVIDIA's next-generation GPU architecture, mentioned with Grace Blackwell NVLink 72, delivering 35-50x performance per watt improvement for inference.

Andrej Karpathy

Mentioned in the OpenClaw video clip as launching 'research,' apparently a variant or application of agentic systems.

Isaac Newton

Namesake of Newton, NVIDIA's extensible, GPU-accelerated differentiable physics simulation engine used by Disney and other robotics developers.

Grace Blackwell

NVIDIA's supercomputing system, combining Grace CPUs and Blackwell GPUs with NVLink 72, representing a huge leap in AI performance and efficiency.

Dylan Patel

Analyst from SemiAnalysis who accused Jensen Huang of 'sandbagging' NVIDIA's Grace Blackwell's performance, finding it to be even better than claimed.

Vera Rubin

NVIDIA's most advanced AI supercomputing platform, architected for agentic AI, featuring NVLink 72, 3.6 exaflops of compute, and five rack-scale computers.

Peter Steinberger

Creator of OpenClaw, an open-source project for AI agents that rapidly became the most popular open-source project.

Companies
CrowdStrike

Mentioned as a customer and developer using NVIDIA technologies integrated into cloud services.

Anthropic

An AI safety and research company, partnering with NVIDIA, producing models that benefit from NVIDIA's confidential computing.

Oracle

Cloud infrastructure provider where NVIDIA was their first AI customer, and now a key partner for AI cloud deployments.

Puma

Mentioned as a customer and developer using NVIDIA technologies integrated into cloud services.

Synopsys

An electronic design automation (EDA) company and NVIDIA partner that uses NVIDIA acceleration for EDA and CAE workflows.

Palantir

A software company specializing in big data analytics, partnering with NVIDIA and Dell to create the Palantir Ontology Platform.

L'Oreal

A large established company mentioned as part of NVIDIA's diverse ecosystem of partners.

IBM

The inventor of SQL and System/360, partnering with NVIDIA to accelerate Watson X Data SQL engines with GPU computing libraries.

Dell

A world-leading computer systems and storage provider, partnered with NVIDIA to create the Dell AI Data Platform.

Snapchat

Social media company that reduced its computing cost by nearly 80% by using NVIDIA accelerated Google Cloud services.

Baseten

Mentioned as a customer and developer using NVIDIA technologies integrated into cloud services.

Fireworks.ai

An inference service provider that experienced a 7x increase in token speeds (from 700 to 5,000 tokens/second) after updating to NVIDIA's optimized software.

Mellanox

A company that joined NVIDIA, contributing InfiniBand technology for scaling up and scaling out GPU supercomputers.

NVIDIA

A platform company with three main platforms: CUDA-X, Systems, and AI Factories. They are vertically integrated and horizontally open, developing chips, systems, and software libraries for AI acceleration across numerous industries.

Nestle

A global company that uses accelerated Watson X Data running on NVIDIA GPUs to refresh its supply chain data mart five times faster at 83% lower cost.

Amazon

A consequential company from previous computing platform shifts, also a major cloud partner for NVIDIA.

Hyundai

A South Korean multinational automotive manufacturer, one of four new partners for NVIDIA's robotaxi-ready platform.

Mercedes-Benz

An automotive manufacturer already partnering with NVIDIA for robotaxi-ready platforms.

Universal Robots

A collaborative robotics company working with NVIDIA to implement physical AI models into manufacturing lines.

Snowflake

A cloud-based data warehousing company, one of the platforms processing data frames.

NTT DATA

Global IT services company mentioned as a user of the Dell AI Data Platform, experiencing huge speedups.

Toyota

A large established automotive company, now a partner in NVIDIA's self-driving car platform.

Meta Platforms

A consequential company from previous computing platform shifts, also a partner using NVIDIA's AI compute.

Foxconn

Fine-tunes GR00T models in Isaac Lab for its robotics applications.

Databricks

A company providing a data and AI platform, mentioned as processing data frames.

Salesforce

Mentioned as a customer and developer using NVIDIA technologies integrated into cloud services.

OpenAI

A leading AI research and deployment company, whose compute-constrained models will be accelerated by NVIDIA on AWS and Azure, and who started the generative AI era with ChatGPT.

CoreWeave

Described as the world's first AI native cloud, a company specifically built to host GPUs for AI clouds, partnering with NVIDIA.

JPMorgan Chase

A large established financial services company mentioned as part of NVIDIA's diverse ecosystem of partners.

Groq

A company with deterministic data flow processors (LPUs), whose technology was acquired and integrated into NVIDIA's Vera Rubin systems to enhance low-latency inference.

Samsung

Manufacturer of the Groq LP30 chip, thanked by Jensen Huang for their production efforts.

General Motors

An automotive manufacturer (rendered as 'Motors' in the transcript) already partnering with NVIDIA for robotaxi-ready platforms.

ABB

A robotics company working with NVIDIA to implement physical AI models into manufacturing lines.

T-Mobile

A telecommunications company partnering with NVIDIA for Aerial AI RAN, transforming radio towers into robotics radio towers.

Walmart

A large established company mentioned as part of NVIDIA's diverse ecosystem of partners.

Google

A consequential company from previous computing platform shifts, also a major cloud partner for NVIDIA.

Cadence Design Systems

A company providing Reality for internal thermal simulation in the NVIDIA DSX platform.

Black Forest Labs

An image-generation company joining NVIDIA's Nemotron coalition for sovereign AI, implying a focus on domain-specific models.

Mistral AI

An AI company mentioned as part of the Nemotron coalition, producing incredible models.

Sarvam AI

An AI company from India (rendered as 'Reflection Sarv' in the transcript), joining the Nemotron coalition.

BYD

A Chinese multinational manufacturing company, one of four new partners for NVIDIA's robotaxi-ready platform.

Noble Machines

Uses Isaac Lab for training and data generation.

TSMC

Taiwan Semiconductor Manufacturing Company, with whom NVIDIA invented the process technology for co-packaged optics (CPO) used in Spectrum X switches.

Jacobs Solutions

An engineering firm that brings data into their custom Omniverse app to finalize AI factory designs.

Thinking Machines Lab

Mira Murati's AI lab (rendered as 'Mirror Morardi's Lab' in the transcript), joining the Nemotron coalition.

Nissan

A Japanese multinational automobile manufacturer, one of four new partners for NVIDIA's robotaxi-ready platform.

Zeekr

A Chinese electric vehicle brand, identified as 'Ji' in the transcript, one of four new partners for NVIDIA's robotaxi-ready platform.

Uber

Ride-sharing company partnering with NVIDIA to deploy robotaxi-ready vehicles into their network across multiple cities.

KUKA

A German manufacturer of industrial robots, working with NVIDIA to integrate physical AI models.

Paratus AI

Trains their operating room assistant robot in NVIDIA Isaac Lab, multiplying their data with NVIDIA Cosmos World models.

Humanoid

Uses Isaac Lab to train whole-body control and manipulation policies for humanoid robots.

Hexagon Robotics

Uses Isaac Lab for training and data generation for their robots.

Skild AI

Uses Isaac Lab and Cosmos to generate post-training data for its robot brain, hardening models with reinforcement learning.

DeepMind

An AI research lab, co-developed the Newton solver on NVIDIA Warp, enabling realistic physics for character robots.
