
Learning Adversarially Robust Representations via Worst-Case Mutual Information Maximization (2002.11798v2)

Published 26 Feb 2020 in cs.LG, cs.CR, cs.IT, math.IT, and stat.ML

Abstract: Training machine learning models that are robust against adversarial inputs poses seemingly insurmountable challenges. To better understand adversarial robustness, we consider the underlying problem of learning robust representations. We develop a notion of representation vulnerability that captures the maximum change of mutual information between the input and output distributions, under the worst-case input perturbation. Then, we prove a theorem that establishes a lower bound on the minimum adversarial risk that can be achieved for any downstream classifier based on its representation vulnerability. We propose an unsupervised learning method for obtaining intrinsically robust representations by maximizing the worst-case mutual information between the input and output distributions. Experiments on downstream classification tasks support the robustness of the representations found using unsupervised learning with our training principle.
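The training principle described in the abstract can be sketched as a min-max procedure: an inner loop searches for the worst-case input perturbation that most reduces a mutual-information lower bound between inputs and representations, and an outer loop updates the encoder to maximize that worst-case bound. The sketch below is illustrative only and does not reproduce the paper's exact estimator or hyperparameters; the InfoNCE-style bound, the PGD inner loop, the network sizes, and all step sizes are assumptions for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

def infonce_lower_bound(z_a, z_b, temp=0.1):
    # InfoNCE-style lower bound on mutual information between two batches
    # of paired representations (assumed estimator, not the paper's exact one).
    logits = z_a @ z_b.t() / temp
    labels = torch.arange(z_a.size(0))
    return torch.log(torch.tensor(float(z_a.size(0)))) - F.cross_entropy(logits, labels)

def worst_case_perturbation(encoder, x, eps=0.1, steps=5, alpha=0.03):
    # Inner maximization: PGD-style search for the L-inf-bounded perturbation
    # that minimizes the MI lower bound (i.e., the worst case for MI).
    delta = torch.zeros_like(x, requires_grad=True)
    z_clean = encoder(x).detach()
    for _ in range(steps):
        mi = infonce_lower_bound(encoder(x + delta), z_clean)
        grad, = torch.autograd.grad(mi, delta)
        # Step against the MI gradient, then project back into the eps-ball.
        delta = (delta - alpha * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    return delta.detach()

# Outer maximization: train the encoder to maximize worst-case MI
# (toy data and a toy encoder, purely for illustration).
encoder = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
x = torch.randn(64, 8)

for _ in range(3):
    delta = worst_case_perturbation(encoder, x)
    mi = infonce_lower_bound(encoder(x + delta), encoder(x))
    opt.zero_grad()
    (-mi).backward()  # gradient ascent on the worst-case MI bound
    opt.step()
```

The unsupervised objective needs no labels: robustness of the learned representation comes from the inner adversary alone, and a downstream classifier is trained on top of the frozen encoder afterward.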

Authors (3)
  1. Sicheng Zhu
  2. Xiao Zhang
  3. David Evans
Citations (26)
