
Adversarial Examples Exist in Two-Layer ReLU Networks for Low Dimensional Linear Subspaces (2303.00783v2)

Published 1 Mar 2023 in cs.LG, cs.CR, cs.NE, and stat.ML

Abstract: Despite a great deal of research, it is still not well-understood why trained neural networks are highly vulnerable to adversarial examples. In this work we focus on two-layer neural networks trained using data which lie on a low dimensional linear subspace. We show that standard gradient methods lead to non-robust neural networks, namely, networks which have large gradients in directions orthogonal to the data subspace, and are susceptible to small adversarial $L_2$-perturbations in these directions. Moreover, we show that decreasing the initialization scale of the training algorithm, or adding $L_2$ regularization, can make the trained network more robust to adversarial perturbations orthogonal to the data.
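The following PyTorch snippet is an illustrative sketch, not code from the paper: it trains a small two-layer ReLU network on synthetic data confined to a low-dimensional linear subspace, then splits the input gradient at a training point into its in-subspace and orthogonal components and takes a small $L_2$ step in the orthogonal direction, mirroring the kind of off-subspace perturbation the abstract describes. All dimensions, hyperparameters, and the loss choice are arbitrary assumptions made for the demo.

```python
import torch

torch.manual_seed(0)
d, k, n, width = 50, 5, 500, 200   # ambient dim, subspace dim, sample count, hidden width (arbitrary)

# Synthetic data confined to a random k-dimensional linear subspace of R^d.
basis, _ = torch.linalg.qr(torch.randn(d, k))   # orthonormal basis (d x k)
z = torch.randn(n, k)
X = z @ basis.T
y = torch.sign(z[:, :1])                        # +/-1 labels depending only on in-subspace coordinates

# Two-layer ReLU network trained with plain gradient descent.
model = torch.nn.Sequential(
    torch.nn.Linear(d, width),
    torch.nn.ReLU(),
    torch.nn.Linear(width, 1),
)
opt = torch.optim.SGD(model.parameters(), lr=0.05)
for _ in range(2000):
    opt.zero_grad()
    loss = torch.nn.functional.soft_margin_loss(model(X), y)
    loss.backward()
    opt.step()

# Decompose the input gradient at a training point into its component inside
# the data subspace and its component orthogonal to it.
x = X[0].clone().requires_grad_(True)
grad = torch.autograd.grad(model(x).sum(), x)[0]
g_in = basis @ (basis.T @ grad)                 # projection onto the data subspace
g_orth = grad - g_in
print(f"||grad in-subspace|| = {g_in.norm():.3f}   ||grad orthogonal|| = {g_orth.norm():.3f}")

# A small L2 step against the label, along the orthogonal gradient direction,
# is the kind of off-subspace adversarial perturbation the paper studies.
eps = 0.5
x_adv = (x - eps * y[0] * g_orth / g_orth.norm()).detach()
print(f"f(x) = {model(x).item():.3f}   f(x_adv) = {model(x_adv).item():.3f}")
```

The two mitigations mentioned in the abstract map onto simple changes in such a sketch: shrinking the initialization scale of the network's weights, or adding $L_2$ regularization (e.g., a nonzero `weight_decay` in the SGD optimizer), would be expected to reduce the orthogonal gradient component.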

Authors (3)
  1. Odelia Melamed (4 papers)
  2. Gilad Yehudai (26 papers)
  3. Gal Vardi (37 papers)
Citations (1)
