SAM-Lightening: A Lightweight Segment Anything Model with Dilated Flash Attention to Achieve 30 times Acceleration (2403.09195v2)

Published 14 Mar 2024 in cs.CV

Abstract: The Segment Anything Model (SAM) has garnered significant attention in segmentation tasks due to its zero-shot generalization ability. However, broader application of SAM to real-world practice has been restricted by its low inference speed and high computational memory demands, which mainly stem from the attention mechanism. Existing work has concentrated on optimizing the encoder, yet has not adequately addressed the inefficiency of the attention mechanism itself, even when distilled into a smaller model, which leaves room for further improvement. In response, we introduce SAM-Lightening, a variant of SAM that features a re-engineered attention mechanism, termed Dilated Flash Attention. It not only facilitates higher parallelism, enhancing processing efficiency, but also retains compatibility with the existing FlashAttention. Correspondingly, we propose progressive distillation to enable efficient knowledge transfer from the vanilla SAM without costly training from scratch. Experiments on COCO and LVIS reveal that SAM-Lightening significantly outperforms state-of-the-art methods in both run-time efficiency and segmentation accuracy. Specifically, it achieves an inference speed of 7 milliseconds (ms) per 1024×1024 image, which is 30.1 times faster than the vanilla SAM and 2.1 times faster than the prior state of the art. Moreover, it requires only 244 MB of memory, 3.5% of that of the vanilla SAM. The code and weights are available at https://anonymous.4open.science/r/SAM-LIGHTENING-BC25/.
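The abstract describes Dilated Flash Attention only at a high level. As a rough illustration of how a dilation-style sparse pattern can be layered on a stock flash-attention kernel, the PyTorch sketch below segments the sequence, attends over every r-th token within each segment, and lets each segment run as an independent dense attention call (hence in parallel). The segment/dilation decomposition follows the general LongNet-style recipe and is an assumption, not the paper's exact operator; the function name, parameters, and the handling of the remaining dilation offsets are hypothetical.

```python
import torch
import torch.nn.functional as F

def dilated_flash_attention(q, k, v, segment_len=64, dilation=2):
    """Sketch of dilated attention on top of a flash-attention kernel.

    q, k, v: (batch, heads, seq_len, head_dim). Assumes seq_len is a
    multiple of segment_len, and segment_len a multiple of dilation.
    """
    b, h, n, d = q.shape
    w, r = segment_len, dilation

    def sparsify(x):
        # (b, h, n, d) -> (b * n//w, h, w//r, d): carve the sequence into
        # segments of length w, keep every r-th token inside each segment,
        # and fold the segments into the batch dimension.
        x = x.view(b, h, n // w, w, d)[:, :, :, ::r, :]
        return x.permute(0, 2, 1, 3, 4).reshape(b * (n // w), h, w // r, d)

    qs, ks, vs = sparsify(q), sparsify(k), sparsify(v)

    # Each dilated segment is now a small dense attention problem, so the
    # stock fused kernel applies unchanged; PyTorch dispatches this to a
    # flash-attention backend when one is available.
    out = F.scaled_dot_product_attention(qs, ks, vs)

    # Scatter results back to their original positions. Positions left at
    # zero here would be covered by other dilation offsets in a full scheme.
    full = torch.zeros_like(q).view(b, h, n // w, w, d)
    out = out.view(b, n // w, h, w // r, d).permute(0, 2, 1, 3, 4)
    full[:, :, :, ::r, :] = out
    return full.reshape(b, h, n, d)

# Usage: a 1024-token sequence with 8 heads of width 64.
q = k = v = torch.randn(2, 8, 1024, 64)
print(dilated_flash_attention(q, k, v).shape)  # torch.Size([2, 8, 1024, 64])
```

Because every segment reduces to an ordinary dense attention call, this kind of sparsification keeps compatibility with existing flash-attention kernels, which is the property the abstract emphasizes.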

References (21)
  1. Kirillov et al., "Segment anything," arXiv preprint arXiv:2304.02643, 2023.
  2. Archit et al., "Segment anything for microscopy," 2023.
  3. Ma et al., "Segment anything in medical images," 2023.
  4. Cheng et al., "SAM-Med2D," arXiv preprint arXiv:2308.16184, 2023.
  5. Yang et al., "Track anything: Segment anything meets videos," 2023.
  6. Shen et al., "Anything-3D: Towards single-view anything reconstruction in the wild," arXiv preprint arXiv:2304.10261, 2023.
  7. Ronneberger et al., "U-Net: Convolutional networks for biomedical image segmentation," in MICCAI 2015, pp. 234–241, 2015.
  8. Zhao et al., "Fast segment anything," arXiv preprint arXiv:2306.12156, 2023.
  9. Zhang et al., "Faster segment anything: Towards lightweight SAM for mobile applications," arXiv preprint arXiv:2306.14289, 2023.
  10. Wu et al., "TinyViT: Fast pretraining distillation for small vision transformers," in ECCV 2022, Springer, 2022.
  11. Xiong et al., "EfficientSAM: Leveraged masked image pretraining for efficient segment anything," arXiv preprint arXiv:2312.00863, 2023.
  12. PyTorch Team, "Accelerating generative AI," PyTorch blog, 2023.
  13. Dao et al., "FlashAttention: Fast and memory-efficient exact attention with IO-awareness," Advances in Neural Information Processing Systems, vol. 35, pp. 16344–16359, 2022.
  14. Jocher et al., "Ultralytics YOLOv8," 2023.
  15. Dao, "FlashAttention-2: Faster attention with better parallelism and work partitioning," arXiv preprint arXiv:2307.08691, 2023.
  16. Hinton et al., "Distilling the knowledge in a neural network," arXiv preprint arXiv:1503.02531, 2015.
  17. Ji et al., "Show, attend and distill: Knowledge distillation via attention-based feature matching," Proceedings of the AAAI Conference on Artificial Intelligence, pp. 7945–7952, 2021.
  18. Lin et al., "Microsoft COCO: Common objects in context," in ECCV 2014, pp. 740–755, 2014.
  19. Gupta et al., "LVIS: A dataset for large vocabulary instance segmentation," in CVPR 2019, 2019.
  20. Chen et al., "MMDetection: Open MMLab detection toolbox and benchmark," arXiv preprint arXiv:1906.07155, 2019.
  21. Wang et al., "SegGPT: Segmenting everything in context," arXiv preprint arXiv:2304.03284, 2023.