Small in-distribution changes in 3D perspective and lighting fool both CNNs and Transformers

Spandan Madan, Tomotake Sasaki, Tzu-Mao Li, Xavier Boix, Hanspeter Pfister

Neural networks are susceptible to small transformations, including 2D rotations and shifts, image crops, and even changes in object colors. This is often attributed to biases in the training dataset and to the lack of 2D shift-invariance that comes from not respecting the sampling theorem. In this paper, we challenge this hypothesis by training and testing on unbiased datasets, showing that networks are brittle to small 3D perspective changes and lighting variations that cannot be explained by dataset bias or a lack of shift-invariance. To find these in-distribution errors, we introduce an evolution strategies (ES) based approach, which we call CMA-Search. Despite training on a large-scale (0.5 million images), unbiased dataset of camera and light variations, in over 71% of cases CMA-Search can find camera parameters in the vicinity of a correctly classified image that lead to in-distribution misclassifications with < 3.6% change in parameters. With lighting changes, CMA-Search finds misclassifications in 33% of cases with < 11.6% change in parameters. Finally, we extend this method to find misclassifications in the vicinity of ImageNet images for both ResNet and OpenAI's CLIP model.
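To make the idea of an ES-based vicinity search concrete, the sketch below uses the off-the-shelf pycma library to perturb camera/light parameters around a correctly classified starting point until the classifier's prediction flips. It is only an illustration of the general CMA-ES search pattern, not the authors' CMA-Search implementation; `render_scene` (renders an image from scene parameters) and `classify` (returns class probabilities) are hypothetical stand-ins for the paper's rendering and classification pipeline.

```python
import numpy as np
import cma  # pycma: pip install cma


def true_class_confidence(params, true_label, render_scene, classify):
    """Fitness to minimize: confidence assigned to the correct class.

    Lower values mean the classifier is being pushed away from the
    correct label for the image rendered at these scene parameters.
    """
    image = render_scene(params)          # hypothetical renderer
    probs = classify(image)               # hypothetical classifier -> probabilities
    return float(probs[true_label])


def cma_vicinity_search(start_params, true_label, render_scene, classify,
                        sigma0=0.05, max_iters=50):
    """Search near a correctly classified image for a misclassification.

    A minimal sketch of an evolution-strategies search over camera/light
    parameters, assuming the hypothetical render_scene/classify functions.
    Returns the first parameter vector that flips the prediction, or None.
    """
    es = cma.CMAEvolutionStrategy(list(start_params), sigma0)
    for _ in range(max_iters):
        candidates = es.ask()                                   # sample nearby parameters
        fitnesses = [true_class_confidence(np.asarray(c), true_label,
                                           render_scene, classify)
                     for c in candidates]
        es.tell(candidates, fitnesses)                          # update the search distribution
        best = np.asarray(candidates[int(np.argmin(fitnesses))])
        probs = classify(render_scene(best))
        if int(np.argmax(probs)) != true_label:
            return best                                         # in-distribution misclassification found
    return None
```

Because the candidates stay close to the starting parameters (controlled by `sigma0`), any misclassification found this way corresponds to a small, in-distribution change in viewpoint or lighting rather than an out-of-distribution adversarial image.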

Subjects: Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
