Diagnosing Vulnerability of Variational Auto-Encoders to Adversarial Attacks (arXiv:2103.06701)
Published Mar 10, 2021 in cs.CR, cs.LG, and stat.ML
Abstract
In this work, we explore adversarial attacks on Variational Autoencoders (VAEs). We show how to modify a data point to obtain a prescribed latent code (supervised attack) or a drastically different code (unsupervised attack). We examine the influence of model modifications ($\beta$-VAE, NVAE) on the robustness of VAEs and suggest metrics to quantify it.
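The supervised attack described in the abstract can be sketched as gradient descent on an input perturbation: minimize the distance between the encoder's latent code for the perturbed input and a prescribed target code, while keeping the perturbation small. The following is a minimal illustrative sketch, not the paper's method: the linear "encoder", the L-infinity bound, and all shapes are assumptions chosen to keep the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a VAE encoder mean mu(x): here just a linear map W @ x.
# (A real VAE encoder is a neural network; this is illustrative only.)
W = rng.standard_normal((2, 4))

def encode(x):
    return W @ x

def supervised_attack(x, z_target, steps=500, lr=0.05, eps=0.5):
    """Find a small perturbation delta so that encode(x + delta) ~ z_target."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        # Gradient of 0.5 * ||W(x + delta) - z_target||^2 w.r.t. delta
        grad = W.T @ (encode(x + delta) - z_target)
        delta -= lr * grad
        # Project back into an L-infinity ball so the attack stays "small"
        delta = np.clip(delta, -eps, eps)
    return x + delta

x = rng.standard_normal(4)
z_target = encode(rng.standard_normal(4))  # a prescribed latent code
x_adv = supervised_attack(x, z_target)
```

An unsupervised attack would instead *maximize* the distance between the codes of the original and perturbed inputs; structurally it is the same loop with the sign of the objective flipped.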