Detecting Adversarial Examples through Nonlinear Dimensionality Reduction (1904.13094v2)

Published 30 Apr 2019 in cs.LG, cs.CR, and stat.ML

Abstract: Deep neural networks are vulnerable to adversarial examples, i.e., carefully-perturbed inputs aimed to mislead classification. This work proposes a detection method based on combining non-linear dimensionality reduction and density estimation techniques. Our empirical findings show that the proposed approach is able to effectively detect adversarial examples crafted by non-adaptive attackers, i.e., not specifically tuned to bypass the detection method. Given our promising results, we plan to extend our analysis to adaptive attackers in future work.
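
The abstract describes the detector only at a high level: embed the representations of benign inputs with a nonlinear dimensionality-reduction technique, estimate their density in the embedded space, and flag test inputs that fall in low-density regions. The sketch below illustrates that general recipe. The specific choices here (Isomap for the reduction, a Gaussian kernel density estimate, and a 5th-percentile rejection threshold) are assumptions made for illustration, not necessarily the components used in the paper.

```python
# Illustrative sketch only: a generic detector pairing a nonlinear
# dimensionality reduction (Isomap) with a kernel density estimate.
# Isomap, the Gaussian KDE, and the 5th-percentile threshold are
# assumed choices for this example, not the paper's exact pipeline.
import numpy as np
from sklearn.manifold import Isomap
from sklearn.neighbors import KernelDensity


def fit_detector(clean_features, n_components=2, bandwidth=0.5, pct=5.0):
    """Fit the reduction + density model on features of benign inputs."""
    reducer = Isomap(n_components=n_components)
    embedded = reducer.fit_transform(clean_features)

    kde = KernelDensity(kernel="gaussian", bandwidth=bandwidth).fit(embedded)

    # Rejection threshold: the log-density below which the lowest `pct`
    # percent of the clean training samples fall.
    threshold = np.percentile(kde.score_samples(embedded), pct)
    return reducer, kde, threshold


def is_adversarial(features, reducer, kde, threshold):
    """Return a boolean mask: True where an input looks adversarial."""
    embedded = reducer.transform(np.atleast_2d(features))
    return kde.score_samples(embedded) < threshold


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = rng.normal(size=(200, 64))               # stand-in for clean-input features
    suspicious = rng.normal(loc=4.0, size=(10, 64))  # stand-in for perturbed inputs

    reducer, kde, thr = fit_detector(clean)
    print(is_adversarial(suspicious, reducer, kde, thr))
```

In practice the features fed to such a detector would come from the classifier's internal activations rather than raw inputs, and the threshold would be tuned on a validation set to trade off false positives against detection rate.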

Authors (3)
  1. Francesco Crecchi (4 papers)
  2. Davide Bacciu (107 papers)
  3. Battista Biggio (81 papers)
Citations (10)
