Improving Adversarial Robustness via Guided Complement Entropy (1903.09799v3)

Published 23 Mar 2019 in cs.LG, cs.CV, cs.PF, and stat.ML

Abstract: Adversarial robustness has emerged as an important topic in deep learning, as carefully crafted attack samples can significantly disturb the performance of a model. Many recent methods improve adversarial robustness through adversarial training or model distillation, which add additional procedures to model training. In this paper, we propose a new training paradigm called Guided Complement Entropy (GCE) that achieves "adversarial defense for free," requiring no additional procedures to improve adversarial robustness. In addition to maximizing the model's probability on the ground-truth class, as cross-entropy does, we neutralize its probabilities on the incorrect classes, with a "guided" term to balance the two objectives. Our experiments show that this method achieves better model robustness, and even better accuracy, than the commonly used cross-entropy training objective. We also show that our method can be used orthogonally to adversarial training across well-known methods, with noticeable robustness gains. To the best of our knowledge, our approach is the first to improve model robustness without compromising performance.
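
The abstract only sketches the objective, so here is a minimal PyTorch sketch of a GCE-style loss under the assumption that it takes the form the abstract implies: the normalized entropy of the predicted distribution over the K-1 incorrect ("complement") classes, scaled by a guided factor (the ground-truth probability raised to a power alpha), and negated so that minimizing the loss both raises the ground-truth probability and flattens the rest. The function name `guided_complement_entropy` and the default `alpha` are illustrative assumptions, not the authors' reference implementation.

```python
import math

import torch
import torch.nn.functional as F

def guided_complement_entropy(logits, targets, alpha=0.2, eps=1e-12):
    """GCE-style loss sketch (after Chen et al., 2019).

    Minimizing the returned value maximizes
        (y_hat_g)^alpha * H(complement distribution) / log(K - 1),
    which pushes probability onto the ground-truth class while spreading
    the remaining mass evenly over the incorrect classes.
    Assumes K > 2 classes; `alpha` is a hyperparameter (value here is
    an assumption, not the paper's setting).
    """
    probs = F.softmax(logits, dim=1)                     # (N, K)
    K = logits.size(1)
    yg = probs.gather(1, targets.unsqueeze(1))           # (N, 1) ground-truth prob
    # Renormalize the incorrect-class probabilities to sum to 1.
    comp = probs / (1.0 - yg + eps)                      # (N, K)
    # Zero out the ground-truth column so only complement classes contribute.
    mask = torch.ones_like(probs).scatter_(1, targets.unsqueeze(1), 0.0)
    comp = comp * mask
    # Entropy of the complement distribution, normalized to [0, 1].
    H = -(comp * torch.log(comp + eps)).sum(dim=1)       # (N,)
    H = H / math.log(K - 1)
    # "Guided" factor (y_hat_g)^alpha; negate so gradient descent maximizes GCE.
    return -(yg.squeeze(1) ** alpha * H).mean()
```

In a standard training loop this would replace `F.cross_entropy(logits, targets)` directly, which is what makes the defense "free": no adversarial examples are generated and no distillation pass is added.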

Authors (7)
  1. Hao-Yun Chen (4 papers)
  2. Jhao-Hong Liang (2 papers)
  3. Shih-Chieh Chang (10 papers)
  4. Jia-Yu Pan (9 papers)
  5. Yu-Ting Chen (48 papers)
  6. Wei Wei (425 papers)
  7. Da-Cheng Juan (38 papers)
Citations (45)
