On the Vulnerability of Capsule Networks to Adversarial Attacks (1906.03612v1)
Published 9 Jun 2019 in cs.LG, cs.CR, and stat.ML
Abstract: This paper extensively evaluates the vulnerability of capsule networks to different adversarial attacks. Recent work suggests that these architectures are more robust towards adversarial attacks than other neural networks. However, our experiments show that capsule networks can be fooled as easily as convolutional neural networks.
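For context, the kind of attack evaluated in such studies can be illustrated with the Fast Gradient Sign Method (FGSM), a standard white-box adversarial attack. The sketch below is a minimal, generic PyTorch implementation; the model, epsilon value, and input shapes are illustrative assumptions, not details taken from the paper.

```python
# Minimal FGSM sketch (assumed setup, not the paper's exact experiments):
# perturb each input pixel by epsilon in the direction that increases the loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.1) -> torch.Tensor:
    """Return adversarial examples built from inputs x with true labels y."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # One signed-gradient step, then clamp back to the valid pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Hypothetical usage with a toy classifier on MNIST-sized inputs.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(8, 1, 28, 28)          # batch of dummy images in [0, 1]
y = torch.randint(0, 10, (8,))        # dummy labels
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max())        # perturbation bounded by epsilon
```

The same attack applies unchanged to a capsule network, since FGSM only needs gradients of the loss with respect to the input, which is what lets studies like this one compare architectures under identical perturbation budgets.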