
The Adversarial Consistency of Surrogate Risks for Binary Classification (2305.09956v3)

Published 17 May 2023 in cs.LG, math.ST, and stat.TH

Abstract: We study the consistency of surrogate risks for robust binary classification. It is common to learn robust classifiers by adversarial training, which seeks to minimize the expected $0$-$1$ loss when each example can be maliciously corrupted within a small ball. We give a simple and complete characterization of the set of surrogate loss functions that are \emph{consistent}, i.e., that can replace the $0$-$1$ loss without affecting the minimizing sequences of the original adversarial risk, for any data distribution. We also prove a quantitative version of adversarial consistency for the $\rho$-margin loss. Our results reveal that the class of adversarially consistent surrogates is substantially smaller than in the standard setting, where many common surrogates are known to be consistent.
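To make the abstract's setup concrete, here is a minimal sketch of the ρ-margin loss (a bounded ramp loss, one of the adversarially consistent surrogates the paper studies) evaluated at the worst-case margin of a linear classifier under an ℓ₂ perturbation ball. The function names and the specific numbers are illustrative, not from the paper; the closed form for the worst-case margin of a linear classifier, y⟨w, x⟩ − ε‖w‖, is standard.

```python
import math

def rho_margin_loss(z, rho=1.0):
    """Bounded ramp loss: phi_rho(z) = min(1, max(0, 1 - z/rho))."""
    return min(1.0, max(0.0, 1.0 - z / rho))

def adversarial_margin(w, x, y, eps):
    """Worst-case margin of a linear classifier f(x) = <w, x> when the
    input may be corrupted within an l2 ball of radius eps:
        inf_{||d|| <= eps}  y * <w, x + d>  =  y * <w, x> - eps * ||w||.
    """
    dot = sum(wi * xi for wi, xi in zip(w, x))
    return y * dot - eps * math.sqrt(sum(wi * wi for wi in w))

w, x, y, eps = [2.0, -1.0], [1.0, 0.5], 1.0, 0.3

clean = y * sum(wi * xi for wi, xi in zip(w, x))  # clean margin: 1.5
adv = adversarial_margin(w, x, y, eps)            # shrunken worst-case margin

print(rho_margin_loss(clean))  # 0.0: confidently correct on the clean point
print(rho_margin_loss(adv))    # positive: the adversary eats into the margin
```

Replacing the 0-1 loss with such a surrogate inside the supremum over perturbations gives the adversarial surrogate risk; the paper characterizes exactly which surrogates make minimizing that risk equivalent to minimizing the adversarial 0-1 risk.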
