ResNet
A deep convolutional neural network architecture built from residual blocks with skip connections, in which conv -> BatchNorm -> ReLU motifs repeat; frequently referenced as a real example of where BatchNorm is typically placed in deep nets.
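The conv -> BatchNorm -> ReLU motif and the skip connection that defines a residual block can be illustrated with a toy NumPy sketch (dense layers stand in for convolutions, and the BatchNorm here is a simplified version with no learned scale/shift; all names and sizes are illustrative, not from any of the videos below):

```python
import numpy as np

def batchnorm(x, eps=1e-5):
    # Normalize each feature over the batch dimension (gamma=1, beta=0 for brevity).
    mean = x.mean(axis=0, keepdims=True)
    var = x.var(axis=0, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, W1, W2):
    # Two linear -> BatchNorm -> ReLU stages standing in for conv -> BN -> ReLU,
    # followed by the identity skip connection that defines a residual block.
    h = relu(batchnorm(x @ W1))
    h = batchnorm(h @ W2)
    return relu(h + x)  # skip connection: output = F(x) + x

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))           # batch of 8 examples, 16 features each
W1 = rng.normal(size=(16, 16)) * 0.1
W2 = rng.normal(size=(16, 16)) * 0.1
y = residual_block(x, W1, W2)
print(y.shape)  # (8, 16): the identity skip requires matching input/output shapes
```

Because the skip connection adds the input straight through, the block only has to learn a residual F(x) on top of the identity, which is the property the lectures below credit for making very deep networks trainable.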
Videos Mentioning ResNet

Building makemore Part 3: Activations & Gradients, BatchNorm
Andrej Karpathy
ResNet is used as a real example of deep networks in which conv -> BatchNorm -> ReLU motifs repeat, illustrating the typical placement of BatchNorm.

Jim Keller: The Future of Computing, AI, Life, and Consciousness | Lex Fridman Podcast #162
Lex Fridman
A residual neural network architecture, mentioned as showing steady improvements over prior architectures like AlexNet.

The End of Finetuning — with Jeremy Howard of Fast.ai
Latent Space
A paper visualizing the loss surface of ResNets with and without skip connections is cited as an example of the kind of work needed to understand model learning dynamics.

MIT 6.S093: Introduction to Human-Centered Artificial Intelligence (AI)
Lex Fridman
A deep residual network architecture used in computer vision tasks like image recognition, discussed in the context of ensemble systems and error reduction.

Deep Learning Basics: Introduction and Overview
Lex Fridman
A very deep convolutional neural network architecture known for its residual blocks, used as a common starting point for transfer learning.

MIT 6.S094: Computer Vision
Lex Fridman
An architecture that introduced residual blocks, enabling much deeper networks and easier training, and achieved state-of-the-art performance.

MIT 6.S094: Deep Reinforcement Learning
Lex Fridman
Residual Networks, an architecture that significantly improved performance on large-scale image recognition tasks like ImageNet and was adopted for AlphaGo Zero.

MIT 6.S094: Deep Learning
Lex Fridman
A neural network architecture that, in 2015, was the first to exceed human-level performance on the ImageNet challenge.

MIT 6.S094: Recurrent Neural Networks for Steering Through Time
Lex Fridman
A residual neural network architecture, mentioned as a pre-trained network suitable for transfer learning, particularly for visual tasks.