Input Validation for Neural Networks via Runtime Local Robustness Verification (2002.03339v2)

Published 9 Feb 2020 in cs.LG and stat.ML

Abstract: Local robustness verification can verify that a neural network is robust with respect to any perturbation of a specific input within a certain distance; we call this distance the robustness radius. We observe that the robustness radii of correctly classified inputs are much larger than those of misclassified inputs, which include adversarial examples, especially those produced by strong adversarial attacks. Another observation is that the robustness radii of correctly classified inputs often follow a normal distribution. Based on these two observations, we propose to validate inputs to neural networks via runtime local robustness verification. Experiments show that our approach can protect neural networks from adversarial examples and improve their accuracy.
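To make the proposed validation step concrete, the following is a minimal sketch of how runtime input validation based on robustness radii could be wired together. It assumes a sound local-robustness verifier exposed as a hypothetical callable verify_robust(model, x, eps), estimates the radius by binary search over eps, and rejects inputs whose radius falls below mean - k * std of the radii of correctly classified inputs (motivated by the observed normal distribution). The verifier interface, the binary-search tolerance, and the factor k are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

# Assumption: verify_robust(model, x, eps) returns True when a sound local-robustness
# verifier proves that every input within distance eps of x keeps the same label as x.
# The concrete verifier is left abstract here; only the resulting radius is used.

def robustness_radius(model, x, verify_robust, eps_max=0.5, tol=1e-3):
    """Estimate the robustness radius of input x by binary search over eps."""
    lo, hi = 0.0, eps_max
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if verify_robust(model, x, mid):
            lo = mid   # provably robust up to mid, so the radius is at least mid
        else:
            hi = mid   # not provably robust at mid, so the radius is below mid
    return lo

def fit_radius_threshold(radii_of_correct_inputs, k=2.0):
    """Fit a normal distribution to radii of correctly classified inputs and
    derive a rejection threshold mean - k * std (k is a tunable assumption)."""
    mu = float(np.mean(radii_of_correct_inputs))
    sigma = float(np.std(radii_of_correct_inputs))
    return mu - k * sigma

def validate_input(model, x, verify_robust, threshold):
    """Runtime check: accept x only if its robustness radius clears the threshold."""
    return robustness_radius(model, x, verify_robust) >= threshold
```

In this sketch the threshold is calibrated offline on a held-out set of correctly classified inputs, and at inference time each input is accepted or rejected before its prediction is trusted.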

Authors (4)
  1. Jiangchao Liu (3 papers)
  2. Liqian Chen (13 papers)
  3. Antoine Miné (107 papers)
  4. Ji Wang (210 papers)
Citations (7)