CUDA

Nvidia · Verified via Wikidata

parallel computing platform and programming model

Mentioned in 33 videos
Released
2007
Developer
Nvidia

What podcasters actually say about CUDA.


Videos Mentioning CUDA

E156: Ivy League antisemitism, macro, SaaS recovery, Gemini, Figma deal delay + big Friedberg update

All-In Podcast

NVIDIA's proprietary parallel computing platform, criticized for creating a walled garden that stifles innovation and commoditization in AI.

Personal benchmarks vs HumanEval - with Nicholas Carlini of DeepMind

Latent Space

A parallel computing platform and API created by Nvidia, mentioned as part of a complex debugging scenario.

E131: 2024 Fantasy President picks, debt ceiling agreement, Dollar dominance & more

All-In Podcast

NVIDIA's SDK, described as the de facto software layer for AI and a contributor to potential hardware lock-in and monopoly.

Jensen Huang: Nvidia's Future, Physical AI, Rise of the Agent, Inference Explosion, AI PR Crisis

All-In Podcast

NVIDIA's parallel computing platform and API, considered nearly insurmountable as a strategic advantage and essential for building AI infrastructure.

The Engineering Unlocks Behind DeepSeek | YC Decoded

Y Combinator

NVIDIA's parallel computing platform and programming model, mentioned as part of NVIDIA's integrated hardware and software solution for AI training.

Jensen Huang: NVIDIA - The $4 Trillion Company & the AI Revolution | Lex Fridman Podcast #494

Lex Fridman

NVIDIA's parallel computing platform and programming model, which became the foundation for deep learning. Its strategic placement on GeForce GPUs was a critical, high-risk decision that consumed profits but built an essential installed base.

Stanford CS336 Language Modeling from Scratch | Spring 2026 | Lecture 2: PyTorch (einops)

Stanford Online

NVIDIA's parallel computing platform and API, used here to synchronize GPU operations during benchmarking.
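The synchronization the lecture relies on matters because CUDA kernel launches are asynchronous: without an explicit sync, a timer measures launch overhead rather than GPU work. A minimal timing harness sketching this, assuming PyTorch (the `time_matmul` helper and its sizes are illustrative, not from the lecture), with a CPU fallback when no CUDA device is present:

```python
import time
import torch

def time_matmul(n: int = 1024, iters: int = 10) -> float:
    """Return the average seconds per (n x n) matmul over `iters` runs."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    if device == "cuda":
        torch.cuda.synchronize()  # drain pending kernels before starting the clock
    start = time.perf_counter()
    for _ in range(iters):
        a @ b  # launched asynchronously on CUDA
    if device == "cuda":
        torch.cuda.synchronize()  # wait for all queued kernels before stopping
    return (time.perf_counter() - start) / iters
```

Dropping the second `torch.cuda.synchronize()` would report near-zero times on a GPU, since the Python call returns as soon as the kernels are queued.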

Stanford CS336 Language Modeling from Scratch | Spring 2026 | Lecture 5: GPUs, TPUs

Stanford Online

A parallel computing platform and programming model developed by NVIDIA for general computing on graphical processing units.

Stanford CS336 Language Modeling from Scratch | Spring 2026 | Lecture 6: Kernels, Triton, XLA

Stanford Online

NVIDIA's parallel computing platform and programming model, originally developed for writing kernels. Triton is presented as an alternative with higher-level abstractions.
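The Triton contrast can be illustrated with a minimal element-wise kernel; this is a hedged sketch assuming the `triton` package and a CUDA GPU (the kernel and the `add` wrapper are illustrative names, not code from the lecture):

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n, BLOCK: tl.constexpr):
    # Each program instance handles one BLOCK-sized slice of the vectors.
    pid = tl.program_id(axis=0)
    offs = pid * BLOCK + tl.arange(0, BLOCK)
    mask = offs < n  # guard the ragged tail
    x = tl.load(x_ptr + offs, mask=mask)
    y = tl.load(y_ptr + offs, mask=mask)
    tl.store(out_ptr + offs, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = x.numel()
    grid = (triton.cdiv(n, 1024),)  # one program per 1024-element block
    add_kernel[grid](x, y, out, n, BLOCK=1024)
    return out
```

Compared with the equivalent CUDA C++ kernel, Triton abstracts over per-thread indexing and lets the compiler handle vectorization and memory coalescing, which is the higher-level trade-off the lecture describes.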
