BF16
Concept
A 16-bit floating-point format (bfloat16) with one sign bit, an 8-bit exponent, and a 7-bit mantissa, giving it the same dynamic range as FP32 at half the width; often discussed alongside NVIDIA's TF32.
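To make the layout concrete, here is a minimal sketch in plain Python (no dependencies): a bfloat16 value is essentially the top 16 bits of the corresponding float32, with round-to-nearest-even applied to the discarded low bits. NaN special-casing is omitted for brevity.

```python
import struct

def f32_to_bf16_bits(x: float) -> int:
    """Convert a float to its bfloat16 bit pattern (round-to-nearest-even)."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    bias = ((bits >> 16) & 1) + 0x7FFF   # round-to-nearest-even on the low 16 bits
    return ((bits + bias) >> 16) & 0xFFFF

def bf16_bits_to_f32(b: int) -> float:
    """Expand a bfloat16 bit pattern back to float32 (exact: BF16 is a prefix of FP32)."""
    (x,) = struct.unpack("<f", struct.pack("<I", (b & 0xFFFF) << 16))
    return x

# Same sign bit and 8-bit exponent as float32, so the dynamic range matches;
# only the 7-bit mantissa loses precision.
print(bf16_bits_to_f32(f32_to_bf16_bits(3.14159265)))  # 3.140625
```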
Mentioned in 4 videos
Videos Mentioning BF16

DeepSeek V3, SGLang, and the state of Open Model Inference in 2025 (Quantization, MoEs, Pricing)
Latent Space
The default precision format (bfloat16) commonly used for training LLMs, contrasted with FP8, which requires dedicated kernel implementations for inference. A sketch of what this looks like in practice follows below.
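As a minimal sketch of "BF16 as the default" using Hugging Face transformers (the model id here is a hypothetical placeholder): loading weights in bfloat16 needs no special kernels on recent GPUs, whereas running the same model in FP8 would require dedicated kernel support.

```python
import torch
from transformers import AutoModelForCausalLM

# Loading in bfloat16 "just works": BF16 matmuls are natively supported on
# Ampere-and-later GPUs, so no custom kernels are needed. FP8 inference,
# by contrast, depends on dedicated kernel implementations.
model = AutoModelForCausalLM.from_pretrained(
    "my-org/my-llm",              # hypothetical model id, for illustration
    torch_dtype=torch.bfloat16,
)
```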

⚡️Accelerators @ 3x NVIDIA H200 perf, Made in the USA - Thomas Sohmers + Mitesh Agrawal, Positron AI
Latent Space
A 16-bit floating-point format mentioned in comparison to NVIDIA's TF32.

Stanford CS336 Language Modeling from Scratch | Spring 2026 | Lecture 2: PyTorch (einops)
Stanford Online
The BFloat16 format, developed in 2018, balances FP16's memory efficiency with FP32's dynamic range and is often a sweet spot for deep learning; the sketch below makes the trade-off concrete.
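A quick way to see the trade-off in PyTorch: FP16 overflows where BF16 does not, while BF16 resolves fewer significant digits.

```python
import torch

big = torch.tensor(1e20)
print(big.to(torch.float16))    # inf     -- FP16 tops out around 65504
print(big.to(torch.bfloat16))   # ~1e20   -- BF16 shares FP32's exponent range

fine = torch.tensor(1.01)
print(fine.to(torch.float16))   # 1.0098  -- FP16 keeps 10 mantissa bits
print(fine.to(torch.bfloat16))  # 1.0078  -- BF16's 7 mantissa bits are coarser
```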

Stanford CS336 Language Modeling from Scratch | Spring 2026 | Lecture 5: GPUs, TPUs
Stanford Online
BFloat16, a 16-bit floating-point format that enables reduced-precision computation and halves memory movement relative to FP32.
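A small sketch of both benefits, assuming PyTorch on CPU (CUDA works the same with device_type="cuda"):

```python
import torch

x = torch.randn(1024, 1024)                  # float32 by default
print(x.element_size())                      # 4 bytes per element
print(x.to(torch.bfloat16).element_size())   # 2 bytes -> half the memory traffic

# Reduced-precision computation via autocast: the matmul below runs in bfloat16.
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    y = x @ x.T
print(y.dtype)  # torch.bfloat16
```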