
$\sigma$-zero: Gradient-based Optimization of $\ell_0$-norm Adversarial Examples

(arXiv:2402.01879)
Published Feb 2, 2024 in cs.LG, cs.CV, and cs.CR

Abstract

Evaluating the adversarial robustness of deep networks to gradient-based attacks is challenging. While most attacks consider $\ell_2$- and $\ell_\infty$-norm constraints to craft input perturbations, only a few investigate sparse $\ell_1$- and $\ell_0$-norm attacks. In particular, $\ell_0$-norm attacks remain the least studied due to the inherent complexity of optimizing over a non-convex and non-differentiable constraint. However, evaluating adversarial robustness under these attacks could reveal weaknesses otherwise left untested with more conventional $\ell_2$- and $\ell_\infty$-norm attacks. In this work, we propose a novel $\ell_0$-norm attack, called $\sigma$-zero, which leverages an ad hoc differentiable approximation of the $\ell_0$ norm to facilitate gradient-based optimization, and an adaptive projection operator to dynamically adjust the trade-off between loss minimization and perturbation sparsity. Extensive evaluations using MNIST, CIFAR10, and ImageNet datasets, involving robust and non-robust models, show that $\sigma$-zero finds minimum $\ell_0$-norm adversarial examples without requiring any time-consuming hyperparameter tuning, and that it outperforms all competing sparse attacks in terms of success rate, perturbation size, and scalability.
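
To make the core idea concrete, here is a minimal PyTorch sketch of how such an attack can be structured: a smooth surrogate of the $\ell_0$ norm plus a sparsity-enforcing thresholding step after each gradient update. This is not the authors' implementation; the surrogate $\sum_i \delta_i^2/(\delta_i^2+\sigma)$ is one standard smoothing consistent with the attack's name, while the logit-difference loss, step size `lr`, penalty weight `lam`, and fixed threshold `tau` are illustrative assumptions (the abstract states that the actual projection adjusts this trade-off dynamically).

```python
import torch

def l0_approx(delta, sigma=1e-3):
    # Smooth, differentiable surrogate for the l0 norm: each term
    # d_i^2 / (d_i^2 + sigma) tends to 1 when |d_i| >> sqrt(sigma) and to 0
    # as d_i -> 0, so the sum approximates the number of nonzero components.
    d2 = delta.pow(2)
    return (d2 / (d2 + sigma)).sum()

def sparse_attack_step(model, x, y, delta, lr=0.5, lam=0.1, tau=1e-2, sigma=1e-3):
    # One gradient step on (misclassification loss + lam * smooth-l0 penalty),
    # followed by zeroing components below tau to keep the perturbation sparse.
    # lr, lam, and tau are illustrative constants, not the paper's settings.
    delta = delta.clone().detach().requires_grad_(True)
    logits = model(x + delta)
    # Logit-difference loss: push the true-class logit below the best rival.
    true_logit = logits.gather(1, y.unsqueeze(1)).squeeze(1)
    rival_logit = logits.scatter(1, y.unsqueeze(1), float("-inf")).amax(dim=1)
    loss = (true_logit - rival_logit).clamp(min=0).sum() + lam * l0_approx(delta, sigma)
    loss.backward()
    with torch.no_grad():
        step = delta.grad / (delta.grad.norm() + 1e-12)       # normalized gradient
        new_delta = (x + delta - lr * step).clamp(0, 1) - x   # keep image in [0, 1]
        new_delta[new_delta.abs() < tau] = 0.0                # sparsity "projection"
    return new_delta
```

Iterating this step while tracking the sparsest perturbation that still fools the model yields a minimum-$\ell_0$-style attack; the paper's contribution, per the abstract, is making this loop work without per-model hyperparameter tuning by adapting the projection online.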
