Detecting Adversarial Examples through Nonlinear Dimensionality Reduction (1904.13094v2)
Published 30 Apr 2019 in cs.LG, cs.CR, and stat.ML
Abstract: Deep neural networks are vulnerable to adversarial examples, i.e., carefully perturbed inputs crafted to mislead classification. This work proposes a detection method that combines nonlinear dimensionality reduction with density estimation. Our empirical findings show that the proposed approach effectively detects adversarial examples crafted by non-adaptive attackers, i.e., attackers not specifically tuned to bypass the detection method. Given these promising results, we plan to extend our analysis to adaptive attackers in future work.
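At a high level, the idea described in the abstract can be sketched as a three-step pipeline: project the network's feature representations into a low-dimensional space with a nonlinear dimensionality reduction method, estimate the density of benign training data in that space, and flag test inputs whose density falls below a threshold. The snippet below is a minimal illustrative sketch of that idea only; the use of scikit-learn's KernelPCA and KernelDensity, and all names and parameters, are assumptions for illustration and not the authors' actual implementation.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)

# Stand-ins for deep-network feature vectors (hypothetical data, not from the paper).
X_train = rng.normal(size=(500, 64))        # benign training features
X_test = rng.normal(size=(10, 64)) + 3.0    # shifted test features to illustrate detection

# 1) Nonlinear dimensionality reduction fitted on benign features.
reducer = KernelPCA(n_components=2, kernel="rbf", gamma=0.01)
Z_train = reducer.fit_transform(X_train)

# 2) Density estimation in the reduced space.
kde = KernelDensity(kernel="gaussian", bandwidth=0.5).fit(Z_train)

# 3) Detection: flag inputs whose log-density is below a threshold,
#    chosen here as a low percentile of the training log-densities.
threshold = np.percentile(kde.score_samples(Z_train), 5)
Z_test = reducer.transform(X_test)
is_adversarial = kde.score_samples(Z_test) < threshold
print(is_adversarial)
```

The threshold choice (here the 5th percentile of benign log-densities) trades off false positives against detection rate; the paper's actual components and calibration may differ.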
- Francesco Crecchi (4 papers)
- Davide Bacciu (107 papers)
- Battista Biggio (81 papers)