Image classifiers can not be made robust to small perturbations

7 Dec 2021 · Zheng Dai, David K. Gifford

The sensitivity of image classifiers to small perturbations in the input is often viewed as a defect of their construction. We demonstrate that this sensitivity is a fundamental property of classifiers. For an arbitrary classifier over the set of $n$-by-$n$ images, we show that, for all but one class, it is possible to change the classification of all but a tiny fraction of the images in that class with a perturbation of size $O(n^{1/\max(p, 1)})$ measured in any $p$-norm for $p \geq 0$. We then discuss how this phenomenon relates to human visual perception and its potential implications for the design of computer vision systems.
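
To make the scaling in the abstract concrete, the sketch below (not from the paper; the helper names and the omitted constant factor are assumptions) evaluates the growth term $n^{1/\max(p, 1)}$ of the bound for several image sizes and computes the $p$-norm of a perturbation treated as a flat vector of $n^2$ pixel values.

```python
import numpy as np

def perturbation_bound(n: int, p: float) -> float:
    """Growth term n^(1/max(p, 1)) of the bound stated in the abstract.

    Illustrative only: the hidden constant factor is not reproduced here,
    so this returns the scaling with n rather than the paper's exact bound.
    """
    if p == float("inf"):
        return 1.0  # n^(1/inf) -> n^0 = 1: a constant-size l_inf budget
    return n ** (1.0 / max(p, 1.0))

def p_norm(delta: np.ndarray, p: float) -> float:
    """p-norm of a perturbation delta, flattened to a vector of pixel values."""
    flat = delta.ravel()
    if p == 0:
        return float(np.count_nonzero(flat))   # "0-norm": number of changed pixels
    if p == float("inf"):
        return float(np.max(np.abs(flat)))
    return float(np.sum(np.abs(flat) ** p) ** (1.0 / p))

if __name__ == "__main__":
    for n in (28, 224, 1024):
        for p in (0, 1, 2, float("inf")):
            print(f"n={n:5d}  p={p!s:>4}  bound ~ n^(1/max(p,1)) = {perturbation_bound(n, p):8.2f}")
```

Read this way, the bound allows changing only $O(n)$ of the $n^2$ pixels when $p = 0$, a perturbation of Euclidean norm $O(\sqrt{n})$ when $p = 2$, and a constant-size perturbation in the $\infty$-norm, all of which become vanishingly small relative to the image as $n$ grows.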
