Adversarial examples
Concept
Inputs to machine learning models that an attacker intentionally designs to cause the model to make a mistake; they are often cited as evidence of a limitation in how deep learning models generalize.
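A minimal sketch of how such an input can be crafted, using the Fast Gradient Sign Method (FGSM), one standard attack: take a small step on the input in the direction that increases the model's loss. The linear classifier, its weights, and the step size here are illustrative assumptions, not taken from the original text.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """Perturb x to raise the logistic loss of a linear classifier.

    For logistic loss, the gradient with respect to the input x is
    (p - y) * w, where p is the predicted probability of class 1.
    FGSM steps eps in the sign of that gradient.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Toy example (hypothetical weights): the clean input is classified
# as class 1, and the perturbed input flips the prediction.
w = np.array([1.0, -2.0, 3.0])
b = 0.5
x = np.array([0.2, -0.1, 0.4])
y = 1.0  # true label

x_adv = fgsm_perturb(x, w, b, y, eps=0.5)
print(sigmoid(w @ x + b))      # clean prediction: above 0.5 (class 1)
print(sigmoid(w @ x_adv + b))  # adversarial prediction: below 0.5 (mistake)
```

The perturbation is small per coordinate (bounded by eps), yet it is aimed exactly along the loss gradient, which is why even well-trained models can be fooled by inputs that look nearly unchanged to a human.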