Block Switching: A Stochastic Approach for Deep Learning Security (2002.07920v1)

Published 18 Feb 2020 in cs.LG and cs.CV

Abstract: Recent studies of adversarial attacks have revealed the vulnerability of modern deep learning models: subtly crafted perturbations of the input can cause a trained network with high accuracy to produce arbitrarily incorrect predictions while remaining imperceptible to the human visual system. In this paper, we introduce Block Switching (BS), a defense strategy against adversarial attacks based on stochasticity. BS replaces a block of model layers with multiple parallel channels, and the active channel is randomly assigned at run time, hence unpredictable to the adversary. We show empirically that BS leads to a more dispersed input gradient distribution and superior defense effectiveness compared with other stochastic defenses such as stochastic activation pruning (SAP). Compared to other defenses, BS is also characterized by the following features: (i) BS causes a smaller drop in test accuracy; (ii) BS is attack-independent; and (iii) BS is compatible with other defenses and can be used jointly with them.
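The core mechanism described in the abstract, selecting one of several parallel channels at random for each forward pass, can be illustrated with a minimal sketch. This is not the authors' implementation; the `BlockSwitch` class and the stand-in channels are hypothetical, and a real defense would use trained sub-networks with different internal weights but similar task accuracy.

```python
import random

class BlockSwitch:
    """Illustrative Block Switching (BS) block: holds parallel channels
    and randomly assigns the active one at run time, so the gradient
    path is unpredictable to an adversary crafting perturbations."""

    def __init__(self, channels):
        self.channels = list(channels)  # parallel sub-networks for the same task

    def forward(self, x):
        # Random channel assignment at inference time (the source of stochasticity).
        active = random.choice(self.channels)
        return active(x)

# Hypothetical stand-ins for trained sub-networks: near-identical outputs,
# but (in a real model) different internal weights and input gradients.
channel_a = lambda x: x * 2.0
channel_b = lambda x: x * 2.0 + 0.001

block = BlockSwitch([channel_a, channel_b])
y = block.forward(1.0)  # close to 2.0 regardless of which channel fired
```

Because each channel computes roughly the same function but through different parameters, clean-input accuracy is largely preserved while the input gradient seen by a gradient-based attacker varies from query to query.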

Authors (5)
  1. Xiao Wang (507 papers)
  2. Siyue Wang (16 papers)
  3. Pin-Yu Chen (311 papers)
  4. Xue Lin (92 papers)
  5. Peter Chin (46 papers)
Citations (13)
