Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions (2102.12122v2)

Published 24 Feb 2021 in cs.CV

Abstract: Although using convolutional neural networks (CNNs) as backbones achieves great successes in computer vision, this work investigates a simple backbone network useful for many dense prediction tasks without convolutions. Unlike the recently-proposed Transformer model (e.g., ViT) that is specially designed for image classification, we propose Pyramid Vision Transformer (PVT), which overcomes the difficulties of porting Transformer to various dense prediction tasks. PVT has several merits compared to prior arts. (1) Different from ViT that typically has low-resolution outputs and high computational and memory cost, PVT can be not only trained on dense partitions of the image to achieve high output resolution, which is important for dense predictions but also using a progressive shrinking pyramid to reduce computations of large feature maps. (2) PVT inherits the advantages from both CNN and Transformer, making it a unified backbone in various vision tasks without convolutions by simply replacing CNN backbones. (3) We validate PVT by conducting extensive experiments, showing that it boosts the performance of many downstream tasks, e.g., object detection, semantic, and instance segmentation. For example, with a comparable number of parameters, RetinaNet+PVT achieves 40.4 AP on the COCO dataset, surpassing RetinaNet+ResNet50 (36.3 AP) by 4.1 absolute AP. We hope PVT could serve as an alternative and useful backbone for pixel-level predictions and facilitate future research. Code is available at https://github.com/whai362/PVT.

Citations (3,214)

Summary

  • The paper presents a novel convolution-free backbone using a progressive pyramid structure and spatial-reduction attention to reduce computational costs.
  • It demonstrates improved performance on ImageNet, COCO, and ADE20K, outperforming conventional CNNs in accuracy and efficiency.
  • The architecture integrates pure Transformer pipelines for object detection and semantic segmentation, paving the way for fully Transformer-based vision systems.

Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions

The paper introduces the Pyramid Vision Transformer (PVT), a novel backbone network designed as a convolution-free alternative for dense prediction tasks. PVT addresses the limitations of Vision Transformer (ViT) in handling dense prediction by incorporating a pyramid structure, progressive shrinking, and spatial-reduction attention (SRA). The authors demonstrate PVT's effectiveness across various tasks, including image classification, object detection, instance segmentation, and semantic segmentation.

Architecture and Design

The PVT architecture draws inspiration from CNN backbones, featuring a multi-stage design that generates feature maps at different scales (Figure 1). Each stage comprises a patch embedding layer and a series of Transformer encoder layers.

Figure 1: Overall architecture of Pyramid Vision Transformer (PVT).
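
To make the multi-stage layout concrete, here is a minimal PyTorch sketch of one stage, assuming a convolution-free patch embedding (non-overlapping patches flattened and linearly projected). The names `PatchEmbed` and `PVTStage` are illustrative, not taken from the official implementation; positional embeddings are omitted for brevity, and a standard `nn.TransformerEncoderLayer` stands in for the paper's SRA-based encoder (sketched after the SRA formulation below).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchEmbed(nn.Module):
    """Split a feature map into non-overlapping P x P patches and linearly
    project each flattened patch to an embedding of size embed_dim."""
    def __init__(self, patch_size, in_chans, embed_dim):
        super().__init__()
        self.patch_size = patch_size
        self.proj = nn.Linear(in_chans * patch_size ** 2, embed_dim)
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, x):                               # x: (B, C, H, W)
        B, C, H, W = x.shape
        P = self.patch_size
        patches = F.unfold(x, kernel_size=P, stride=P)  # (B, C*P*P, H/P * W/P)
        tokens = self.norm(self.proj(patches.transpose(1, 2)))
        return tokens, H // P, W // P                   # tokens: (B, N, embed_dim)

class PVTStage(nn.Module):
    """One stage: patch embedding followed by Transformer encoder layers,
    reshaped back to a 2D map so the next stage (or an FPN head) can use it."""
    def __init__(self, patch_size, in_chans, embed_dim, depth, num_heads):
        super().__init__()
        self.embed = PatchEmbed(patch_size, in_chans, embed_dim)
        self.blocks = nn.ModuleList([
            nn.TransformerEncoderLayer(embed_dim, num_heads, batch_first=True)
            for _ in range(depth)
        ])

    def forward(self, x):
        tokens, H, W = self.embed(x)
        for blk in self.blocks:
            tokens = blk(tokens)
        return tokens.transpose(1, 2).reshape(x.shape[0], -1, H, W)
```

Stacking four such stages with patch sizes (4, 2, 2, 2) and increasing channel widths yields feature maps at strides {4, 8, 16, 32}, the same pyramid layout a ResNet-style backbone exposes to detection and segmentation heads.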

The key architectural innovations include:

  • Progressive Shrinking Pyramid: This mechanism progressively reduces the spatial resolution of feature maps as the network deepens. This is achieved through patch embedding layers with varying patch sizes ($P_i$), enabling the construction of a feature pyramid suitable for dense prediction.
  • Spatial-Reduction Attention (SRA): SRA replaces the standard multi-head attention (MHA) within the Transformer encoder to reduce computational costs, especially when processing high-resolution feature maps. SRA reduces the spatial scale of the key ($K$) and value ($V$) inputs before the attention operation, significantly decreasing computational and memory overhead (Figure 2).

    Figure 2: Multi-head attention (MHA) vs. spatial-reduction attention (SRA).

The SRA mechanism can be formulated as:

$${\rm SRA}(Q, K, V) = {\rm Concat}({\rm head}_0, \ldots, {\rm head}_{N_i})W^O$$

$${\rm head}_j = {\rm Attention}(QW_j^Q, {\rm SR}(K)W_j^K, {\rm SR}(V)W_j^V)$$

where ${\rm SR}(\mathbf{x}) = {\rm Norm}({\rm Reshape}(\mathbf{x}, R_i)W^S)$.
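
The formulation above maps fairly directly onto code. Below is a minimal sketch in PyTorch under a few assumptions: the spatial reduction is implemented literally as the equation states (reshape into $R_i \times R_i$ neighbourhoods, project with $W^S$, then LayerNorm), the class and argument names are hypothetical, the output projection $W^O$ is handled inside `nn.MultiheadAttention`, and the feature-map height and width are divisible by the reduction ratio.

```python
import torch
import torch.nn as nn

class SpatialReductionAttention(nn.Module):
    """Sketch of SRA: keys and values are spatially downsampled by a factor R
    before multi-head attention, shrinking the attention cost by roughly R^2
    relative to standard MHA on the full token sequence."""
    def __init__(self, dim, num_heads, reduction_ratio):
        super().__init__()
        self.R = reduction_ratio
        self.sr = nn.Linear(dim * reduction_ratio ** 2, dim)   # W^S
        self.norm = nn.LayerNorm(dim)                          # Norm(.)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def spatial_reduce(self, x, H, W):        # x: (B, H*W, C) token sequence
        B, N, C = x.shape
        R = self.R
        # Group each R x R neighbourhood into one token of dimension R*R*C ...
        x = x.reshape(B, H // R, R, W // R, R, C)
        x = x.permute(0, 1, 3, 2, 4, 5).reshape(B, (H // R) * (W // R), R * R * C)
        # ... then project back to C channels and normalise: SR(x).
        return self.norm(self.sr(x))          # (B, H*W / R^2, C)

    def forward(self, x, H, W):
        kv = self.spatial_reduce(x, H, W)
        out, _ = self.attn(query=x, key=kv, value=kv)  # W^O applied internally
        return out

# Example: stage-1-sized input (stride 4 on a 224x224 image), R = 8.
sra = SpatialReductionAttention(dim=64, num_heads=1, reduction_ratio=8)
y = sra(torch.randn(2, 56 * 56, 64), H=56, W=56)       # -> (2, 3136, 64)
```

Queries keep their full resolution, so the output still has one token per spatial position; only the key/value sequence is shortened, which is what makes attention affordable on the high-resolution early stages.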

Experimental Results

PVT's performance was evaluated on several tasks, demonstrating its versatility and effectiveness:

  • Image Classification: PVT models achieved competitive results on ImageNet, outperforming CNN backbones with similar parameter counts and computational budgets (Table 1). For instance, PVT-Small achieved a top-1 error rate of 20.2%, lower than ResNet50's 21.5%.
  • Object Detection and Instance Segmentation: Using RetinaNet and Mask R-CNN on the COCO dataset, PVT models consistently outperformed ResNet and ResNeXt backbones. For example, PVT-Small achieved 40.4 AP with RetinaNet, a 4.1-point improvement over ResNet50 (Figure 3).

    Figure 3: Performance comparison on COCO val2017 of different backbones using RetinaNet for object detection.

  • Semantic Segmentation: Evaluated on the ADE20K dataset with Semantic FPN, PVT models showed superior performance compared to ResNet and ResNeXt counterparts. PVT-Large achieved 42.1 mIoU, a 1.9-point increase over ResNeXt101-64x4d, despite having 20% fewer parameters.

Ablation Studies and Analysis

The paper includes ablation studies that provide insights into PVT's design choices:

  • Pyramid Structure: The pyramid structure is crucial for dense prediction tasks. ViT, with its columnar structure, resulted in lower detection performance (31.7 AP) compared to PVT's 40.4 AP.
  • Deeper vs. Wider: Deeper PVT models consistently outperformed wider models with comparable parameter counts, suggesting that increasing depth is more effective for PVT's representation learning.
  • Pre-trained Weights: Using weights pre-trained on ImageNet significantly improved convergence speed and final AP for PVT-based models (Figure 4).

    Figure 4: AP curves of RetinaNet on COCO val2017 under different backbone settings.

  • Comparison with CNNs with Non-Local Blocks: PVT outperformed CNNs augmented with non-local blocks, indicating that stacking multiple MHAs in a pure Transformer architecture is more effective at capturing global dependencies.

Pure Transformer Pipelines

To demonstrate PVT's potential for convolution-free vision systems, the authors constructed pure Transformer pipelines for object detection (PVT+DETR) and semantic segmentation (PVT+Trans2Seg). PVT+DETR achieved 34.7 AP on COCO val2017, a 2.4-point increase over the original ResNet50-based DETR. PVT-Small+Trans2Seg achieved 42.6 mIoU on ADE20K, outperforming ResNet50-d8+DeeplabV3+. Qualitative results illustrate that PVT integrates cleanly into dense prediction models and produces high-quality outputs (Figure 5).

Figure 5: Qualitative results of object detection and instance segmentation on COCO val2017, and semantic segmentation on ADE20K.

Computational Overhead

The growth rate of GFLOPs with increasing input scale is higher for PVT than for ResNet but lower than for ViT (Figure 6). This suggests that PVT is well-suited for tasks with medium-resolution inputs.

Figure 6: Models' GFLOPs under different input scales.
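
A back-of-envelope count of the attention term alone illustrates this trend: the query-key and attention-value products scale quadratically with the number of tokens, and SRA divides the key/value token count by $R^2$. The snippet below is a rough illustrative estimate, not the paper's GFLOPs measurement; it ignores projections, MLPs, and the rest of the network, and uses stage-1-like settings (stride 4, channel width 64, $R=8$) purely as an example.

```python
def attention_flops(h, w, c, reduction_ratio=1):
    """Approximate FLOPs of the QK^T and attention-weighted V products for an
    h x w token grid with channel dim c. With spatial reduction, the key/value
    token count shrinks by reduction_ratio^2, dividing the quadratic term."""
    n_q = h * w
    n_kv = (h // reduction_ratio) * (w // reduction_ratio)
    return 2 * n_q * n_kv * c

for size in (224, 448, 640, 800):
    h = w = size // 4                                    # stride-4 token grid
    full = attention_flops(h, w, 64)                     # global MHA at this resolution
    sra = attention_flops(h, w, 64, reduction_ratio=8)   # SRA with R = 8
    print(f"{size:4d} px  MHA ~{full / 1e9:6.2f} GFLOPs   SRA ~{sra / 1e9:6.2f} GFLOPs")
```

The attention term still grows quadratically in the number of tokens, which is why PVT's curve rises faster than ResNet's (roughly linear in the number of pixels), while the $R^2$ reduction keeps it far below applying full global attention at the same resolution.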

Conclusion

PVT presents a viable Transformer-based backbone for dense prediction tasks, offering competitive or superior performance compared to CNN-based alternatives. The authors identify potential areas for future research, including exploring CNN-specific modules within the PVT framework and investigating more efficient self-attention mechanisms. The introduction of PVT marks a significant step towards fully Transformer-based vision systems, broadening the applicability of Transformers beyond image classification.
