TranSegPGD: Improving Transferability of Adversarial Examples on Semantic Segmentation (2312.02207v1)

Published 3 Dec 2023 in cs.CV

Abstract: The transferability of adversarial examples on image classification, which enables adversarial examples to be generated in black-box mode, has been systematically explored. However, the transferability of adversarial examples on semantic segmentation has been largely overlooked. In this paper, we propose an effective two-stage adversarial attack strategy, dubbed TranSegPGD, to improve the transferability of adversarial examples on semantic segmentation. Specifically, in the first stage, every pixel in an input image is divided into different branches based on its adversarial property, and each branch is assigned a different weight for optimization so as to improve the adversarial performance of all pixels. We assign high weights to the loss of hard-to-attack pixels in order to misclassify all pixels. In the second stage, the pixels are divided into different branches based on their transferability, which is measured by Kullback-Leibler divergence, and each branch is again assigned a different weight for optimization. We assign high weights to the loss of high-transferability pixels to improve the transferability of the adversarial examples. Extensive experiments with various segmentation models on the PASCAL VOC 2012 and Cityscapes datasets demonstrate the effectiveness of the proposed method, which achieves state-of-the-art performance.
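The two-stage scheme in the abstract can be sketched as a pixel-weighted PGD loop: stage 1 up-weights pixels that are still correctly classified (hard to attack), and stage 2 up-weights pixels whose predictions have drifted most from the clean output, using KL divergence as a transferability proxy. The sketch below is a minimal illustration of that idea, not the paper's exact algorithm; the weight values, the stage split, and the median-based KL branching rule are all assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def transsegpgd_sketch(model, x, y, eps=8/255, alpha=2/255, steps=20,
                       stage1_frac=0.5, hi=2.0, lo=0.5):
    """Illustrative two-stage pixel-weighted PGD attack on a segmentation model.

    model: maps (B, 3, H, W) images to (B, C, H, W) logits.
    The hi/lo weights and KL branching rule are assumptions, not the paper's values.
    """
    x_adv = x.clone().detach()
    n_stage1 = int(steps * stage1_frac)
    for t in range(steps):
        x_adv.requires_grad_(True)
        logits = model(x_adv)  # (B, C, H, W)
        # Per-pixel cross-entropy loss, shape (B, H, W)
        pixel_loss = F.cross_entropy(logits, y, reduction="none")
        with torch.no_grad():
            if t < n_stage1:
                # Stage 1: hard-to-attack pixels (still correctly classified)
                # get a higher loss weight.
                correct = logits.argmax(1) == y
                w = torch.where(correct, hi, lo)
            else:
                # Stage 2: weight pixels by a transferability proxy -- here the
                # per-pixel KL divergence between predictions on the adversarial
                # and the clean input (an assumption standing in for the paper's
                # exact criterion).
                log_p_adv = F.log_softmax(logits, dim=1)
                p_clean = F.softmax(model(x), dim=1)
                kl = F.kl_div(log_p_adv, p_clean, reduction="none").sum(1)
                thresh = kl.flatten(1).median(1).values[:, None, None]
                w = torch.where(kl >= thresh, hi, lo)
        loss = (w * pixel_loss).mean()
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Signed-gradient ascent step, then project into the L-inf eps-ball.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)
        x_adv = x_adv.clamp(0, 1)
    return x_adv
```

Replacing the uniform PGD loss with per-pixel weights changes only the scalar objective, so any segmentation model that returns dense logits can be attacked this way without modification.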
