Towards Speeding up Adversarial Training in Latent Spaces

(2102.00662)
Published Feb 1, 2021 in cs.LG and cs.AI

Abstract

Adversarial training is widely considered one of the most effective ways to defend against adversarial examples. However, existing adversarial training methods are prohibitively time-consuming because they must generate adversarial examples in the large input space. To speed up adversarial training, we propose a novel adversarial training method that does not need to generate real adversarial examples: by adding perturbations to the logits to generate Endogenous Adversarial Examples (EAEs) -- adversarial examples in the latent space -- the time-consuming gradient calculation can be avoided. Extensive experiments on CIFAR-10 and ImageNet show that, compared to state-of-the-art methods, our EAE adversarial training not only shortens training time but also enhances the robustness of the model, while having less impact on the accuracy of clean examples than existing methods.
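The core idea — perturbing logits instead of crafting input-space adversarial examples — can be illustrated with a minimal sketch. Note this is an assumption-laden illustration, not the paper's exact rule: the specific perturbation here (weakening the predicted class and strengthening the runner-up) is a hypothetical choice made for demonstration.

```python
import numpy as np

# Illustrative sketch of the EAE idea from the abstract: perturb the
# logits (latent/output space) directly, so no gradient through the
# network w.r.t. the input is ever computed. The perturbation rule
# below is a hypothetical stand-in, not the paper's actual method.

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def perturb_logits(logits, epsilon=0.5):
    """Shift probability mass away from the predicted class:
    subtract epsilon from the top logit, add it to the runner-up."""
    adv = logits.copy()
    for i in range(adv.shape[0]):
        order = np.argsort(adv[i])
        top, runner_up = order[-1], order[-2]
        adv[i, top] -= epsilon       # weaken the predicted class
        adv[i, runner_up] += epsilon  # strengthen the runner-up
    return adv

rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 10))    # batch of 4 examples, 10 classes
adv_logits = perturb_logits(logits)

# Training would then apply a loss to softmax(adv_logits); the
# expensive inner loop of input-space attack iterations is skipped.
probs = softmax(adv_logits)
```

Because the perturbation acts on a small logit vector rather than a full image, the per-example cost is trivial compared with multi-step attacks such as PGD, which is the claimed source of the speedup.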
