SAM-Lightening: A Lightweight Segment Anything Model with Dilated Flash Attention to Achieve 30 times Acceleration (2403.09195v2)
Abstract: The Segment Anything Model (SAM) has garnered significant attention in segmentation tasks due to its zero-shot generalization ability. However, broader application of SAM to real-world practice has been restricted by its low inference speed and high computational memory demands, which mainly stem from the attention mechanism. Existing work has concentrated on optimizing the encoder, but has not adequately addressed the inefficiency of the attention mechanism itself, even when distilling to a smaller model, which leaves room for further improvement. In response, we introduce SAM-Lightening, a variant of SAM that features a re-engineered attention mechanism, termed Dilated Flash Attention. It not only facilitates higher parallelism, enhancing processing efficiency, but also retains compatibility with the existing FlashAttention. Correspondingly, we propose a progressive distillation scheme to enable efficient knowledge transfer from the vanilla SAM without costly training from scratch. Experiments on COCO and LVIS reveal that SAM-Lightening significantly outperforms state-of-the-art methods in both run-time efficiency and segmentation accuracy. Specifically, it achieves an inference speed of 7 milliseconds (ms) per image for images of size 1024×1024 pixels, which is 30.1 times faster than the vanilla SAM and 2.1 times faster than the prior state of the art. Moreover, it requires only 244 MB of memory, which is 3.5% of that of the vanilla SAM. The code and weights are available at https://anonymous.4open.science/r/SAM-LIGHTENING-BC25/.
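To make the idea of a dilated attention pattern layered on top of a FlashAttention-backed kernel concrete, here is a minimal, hypothetical PyTorch sketch. It is not the paper's implementation: the function name `dilated_flash_attention`, the segment length, and the dilation rate are illustrative assumptions, and it omits details such as combining multiple dilation rates and the progressive distillation procedure. It only shows how tokens can be grouped into sparse segments that each attend to themselves via `torch.nn.functional.scaled_dot_product_attention`, which can dispatch to FlashAttention kernels on supported GPUs.

```python
# Minimal sketch of dilated attention on top of a FlashAttention-capable kernel.
# Hyper-parameters (segment_len, dilation) are illustrative, not the paper's values.
import torch
import torch.nn.functional as F


def dilated_flash_attention(q, k, v, segment_len=64, dilation=2):
    """Attention restricted to dilated segments.

    q, k, v: (batch, heads, seq_len, head_dim); seq_len must be divisible by
    segment_len, and segment_len by dilation.
    """
    b, h, n, d = q.shape
    assert n % segment_len == 0 and segment_len % dilation == 0

    def sparsify(x):
        # Split the sequence into segments and keep every `dilation`-th token
        # inside each segment -> (b, h, n_segments, segment_len // dilation, d).
        x = x.view(b, h, n // segment_len, segment_len, d)
        return x[:, :, :, ::dilation, :]

    qs, ks, vs = sparsify(q), sparsify(k), sparsify(v)

    # Every sparse segment attends only to itself; segments are processed as
    # extra batch dimensions, so they run in parallel, and the call below can
    # use the FlashAttention backend when it is available.
    out = F.scaled_dot_product_attention(qs, ks, vs)

    # Scatter outputs back to their original positions; tokens skipped by the
    # dilation simply receive zeros in this simplified sketch.
    full = torch.zeros(b, h, n // segment_len, segment_len, d,
                       dtype=out.dtype, device=out.device)
    full[:, :, :, ::dilation, :] = out
    return full.view(b, h, n, d)


if __name__ == "__main__":
    q = torch.randn(1, 8, 256, 32)
    out = dilated_flash_attention(q, q, q)
    print(out.shape)  # torch.Size([1, 8, 256, 32])
```

Because each sparse segment is treated as an independent batch element, the quadratic attention cost is confined to short segments while the underlying exact-attention kernel (e.g., FlashAttention) is reused unchanged, which is the compatibility property the abstract refers to.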
- Kirillov et al., “Segment anything,” arXiv preprint arXiv:2304.02643, 2023.
- Archit et al., “Segment anything for microscopy,” 2023.
- Ma et al., “Segment anything in medical images,” 2023.
- Cheng et al., “SAM-Med2D,” arXiv preprint arXiv:2308.16184, 2023.
- Yang et al., “Track anything: Segment anything meets videos,” 2023.
- Shen et al., “Anything-3d: Towards single-view anything reconstruction in the wild,” arXiv preprint arXiv:2304.10261, 2023.
- Ronneberger et al., “U-Net: Convolutional networks for biomedical image segmentation,” in Medical Image Computing and Computer-Assisted Intervention (MICCAI), pp. 234–241, 2015.
- Zhao et al., “Fast segment anything,” arXiv preprint arXiv:2306.12156, 2023.
- Zhang et al., “Faster segment anything: Towards lightweight SAM for mobile applications,” arXiv preprint arXiv:2306.14289, 2023.
- Wu et al., “TinyViT: Fast pretraining distillation for small vision transformers,” in European Conference on Computer Vision (ECCV), Springer, Cham, 2022.
- Xiong et al., “EfficientSAM: Leveraged masked image pretraining for efficient segment anything,” arXiv preprint arXiv:2312.00863, 2023.
- PyTorch Team, “Accelerating generative AI,” PyTorch blog, 2023.
- Dao et al., “Flashattention: Fast and memory-efficient exact attention with io-awareness,” Advances in Neural Information Processing Systems, vol. 35, pp. 16344–16359, 2022.
- Jocher et al., “Ultralytics YOLOv8,” 2023.
- Dao, “Flashattention-2: Faster attention with better parallelism and work partitioning,” arXiv preprint arXiv:2307.08691, 2023.
- Hinton et al., “Distilling the knowledge in a neural network,” arXiv preprint arXiv:1503.02531, 2015.
- Ji et al., “Show, attend and distill: Knowledge distillation via attention-based feature matching,” Proceedings of the AAAI Conference on Artificial Intelligence, pp. 7945–7952, 2022.
- Lin et al., “Microsoft COCO: Common objects in context,” in European Conference on Computer Vision (ECCV), pp. 740–755, 2014.
- Gupta et al., “LVIS: A dataset for large vocabulary instance segmentation,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
- Chen et al., “MMDetection: Open mmlab detection toolbox and benchmark,” arXiv preprint arXiv:1906.07155, 2019.
- Wang et al., “SegGPT: Segmenting everything in context,” arXiv preprint arXiv:2304.03284, 2023.