Safer Classification by Synthesis (1711.08534v2)

Published 22 Nov 2017 in cs.LG, cs.AI, and stat.ML

Abstract: The discriminative approach to classification using deep neural networks has become the de facto standard in various fields. Complementing recent reservations about safety against adversarial examples, we show that conventional discriminative methods can easily be fooled into providing incorrect labels with very high confidence for out-of-distribution examples. We posit that a generative approach is the natural remedy for this problem, and propose a method for classification using generative models. At training time, we learn a generative model for each class; at test time, given an example to classify, we query each generator for its most similar generation and select the class corresponding to the most similar one. Our approach is general and can be used with expressive models such as GANs and VAEs. At test time, our method accurately "knows when it does not know," and provides resilience to out-of-distribution examples while maintaining competitive performance on standard examples.
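The classification-by-synthesis idea in the abstract can be illustrated with a deliberately simplified sketch. The paper uses expressive per-class generators (GANs or VAEs); here each class "generator" is replaced by a class prototype (the training mean), which is an assumption made purely for brevity. The structure still mirrors the method: query every class model for its most similar "generation," select the closest class, and abstain when no generation is similar enough, which is the "knows when it does not know" behavior on out-of-distribution inputs. The class name `SynthesisClassifier` and the threshold value are hypothetical.

```python
import numpy as np

class SynthesisClassifier:
    """Toy sketch of classification by synthesis.

    Each class "generator" is a stored prototype (the class mean) rather
    than a trained GAN/VAE. At test time we query each class for its best
    match, pick the most similar class, and abstain when the input is too
    far from every class's generations (out-of-distribution rejection).
    """

    def __init__(self, reject_threshold=2.0):
        self.reject_threshold = reject_threshold
        self.prototypes = {}  # class label -> prototype vector

    def fit(self, X, y):
        # "Train a generative model per class": here, just the class mean.
        for label in np.unique(y):
            self.prototypes[label] = X[y == label].mean(axis=0)

    def predict(self, x):
        # Query each class model and measure similarity
        # (here: Euclidean distance to the class prototype).
        dists = {label: np.linalg.norm(x - p)
                 for label, p in self.prototypes.items()}
        best = min(dists, key=dists.get)
        if dists[best] > self.reject_threshold:
            return None  # abstain: "knows when it does not know"
        return best

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, size=(50, 2)),   # class 0 cluster
               rng.normal(5.0, 0.3, size=(50, 2))])  # class 1 cluster
y = np.array([0] * 50 + [1] * 50)

clf = SynthesisClassifier(reject_threshold=2.0)
clf.fit(X, y)
print(clf.predict(np.array([0.1, -0.1])))   # near class 0
print(clf.predict(np.array([5.2, 4.9])))    # near class 1
print(clf.predict(np.array([50.0, 50.0])))  # far from both: abstain
```

The abstention branch is what distinguishes this scheme from a purely discriminative classifier, which would still emit a confident label for the far-away point.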

Authors (5)
  1. William Wang (38 papers)
  2. Angelina Wang (24 papers)
  3. Aviv Tamar (69 papers)
  4. Xi Chen (1040 papers)
  5. Pieter Abbeel (372 papers)
Citations (41)
