ICLR 2019 • Wonjun Yoon, Jisuk Park, Daeshik Kim
Existing neural networks are vulnerable to "adversarial examples": inputs altered by small, maliciously designed perturbations that induce misclassification by the network.
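As a minimal sketch of how such a perturbation can be crafted (using the fast gradient sign method, a standard attack, not necessarily the one studied in this paper), the toy linear classifier and the `fgsm_perturb` helper below are illustrative assumptions:

```python
import numpy as np

def fgsm_perturb(x, grad, eps=0.2):
    """Fast gradient sign method: add a small perturbation of size eps
    in the direction that increases the loss, x_adv = x + eps * sign(grad)."""
    return x + eps * np.sign(grad)

# Toy linear classifier: predict positive class when w . x > 0.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, 0.1, 0.3])   # w @ x = 0.15 -> classified positive

# For a loss that penalizes the positive class, the input gradient
# points opposite to w (illustrative assumption for this toy model).
grad = -w
x_adv = fgsm_perturb(x, grad, eps=0.2)

print(w @ x)      # 0.15  -> positive class
print(w @ x_adv)  # -0.55 -> flipped to negative class
```

Each coordinate of `x_adv` differs from `x` by at most `eps`, yet the predicted class flips, which is exactly the failure mode the abstract describes.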