Res2Net: A New Multi-scale Backbone Architecture (1904.01169v3)

Published 2 Apr 2019 in cs.CV

Abstract: Representing features at multiple scales is of great importance for numerous vision tasks. Recent advances in backbone convolutional neural networks (CNNs) continually demonstrate stronger multi-scale representation ability, leading to consistent performance gains on a wide range of applications. However, most existing methods represent the multi-scale features in a layer-wise manner. In this paper, we propose a novel building block for CNNs, namely Res2Net, by constructing hierarchical residual-like connections within one single residual block. The Res2Net represents multi-scale features at a granular level and increases the range of receptive fields for each network layer. The proposed Res2Net block can be plugged into the state-of-the-art backbone CNN models, e.g., ResNet, ResNeXt, and DLA. We evaluate the Res2Net block on all these models and demonstrate consistent performance gains over baseline models on widely-used datasets, e.g., CIFAR-100 and ImageNet. Further ablation studies and experimental results on representative computer vision tasks, i.e., object detection, class activation mapping, and salient object detection, further verify the superiority of the Res2Net over the state-of-the-art baseline methods. The source code and trained models are available on https://mmcheng.net/res2net/.

Citations (2,132)

Summary

  • The paper introduces a novel Res2Net module that enhances multi-scale feature extraction within individual residual blocks, reducing top-1 error by 1.84% on ImageNet.
  • The paper demonstrates effective integration with architectures like ResNet and ResNeXt, yielding improved object detection, instance segmentation, and semantic segmentation results.
  • The paper validates its approach with extensive experiments on datasets such as CIFAR-100, COCO, and PASCAL VOC, showcasing robust performance improvements across various vision tasks.

Res2Net: A New Multi-scale Backbone Architecture

Introduction

The exploration of multi-scale features is paramount in computer vision tasks such as image classification, object detection, and semantic segmentation. Modern convolutional neural networks (CNNs) inherently capture multi-scale features through hierarchical convolution operations across layers. However, most state-of-the-art (SOTA) backbones such as ResNet, DenseNet, and InceptionNet address multi-scale feature representation in a layer-wise manner, which may limit their ability to capture fine-grained scale variations within each layer. The paper "Res2Net: A New Multi-scale Backbone Architecture" introduces a Res2Net module that enhances multi-scale feature extraction at a more granular level, significantly widening the range of receptive fields within individual residual blocks.

Res2Net Module Design

The Res2Net module builds on the classic bottleneck block by dividing the 3×3 convolution filters into smaller groups interconnected in a hierarchical residual-like design. This configuration allows the representation of multi-scale features within a single residual block. Concretely, the input feature maps are split into several subsets and processed through these smaller filter groups sequentially, with each group receiving the output of the previous group in addition to its own subset. The processed feature maps are concatenated and fused with a 1×1 convolution, facilitating efficient multi-scale representation.
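Formally, with s subsets x_1, …, x_s after the split and 3×3 filter groups K_i, the paper defines the outputs as y_1 = x_1, y_2 = K_2(x_2), and y_i = K_i(x_i + y_{i-1}) for 2 < i ≤ s. The sketch below is a minimal PyTorch rendering of this design, not the authors' released implementation; the names Res2NetBottleneck, scale, and width are illustrative, and details such as stride and downsampling are omitted.

```python
# Minimal sketch of a Res2Net bottleneck block (illustrative; the official
# implementation at https://mmcheng.net/res2net/ differs in details such as
# stride handling and the stage-wise downsampling path).
import torch
import torch.nn as nn

class Res2NetBottleneck(nn.Module):
    def __init__(self, in_channels, channels, scale=4):
        super().__init__()
        assert channels % scale == 0
        self.scale = scale
        width = channels // scale  # channels per subset after the split
        # 1x1 conv producing the feature maps that are split into `scale` subsets
        self.conv1 = nn.Conv2d(in_channels, channels, kernel_size=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        # One 3x3 conv per subset except the first, which is passed through
        self.convs = nn.ModuleList(
            nn.Conv2d(width, width, kernel_size=3, padding=1, bias=False)
            for _ in range(scale - 1)
        )
        self.bns = nn.ModuleList(nn.BatchNorm2d(width) for _ in range(scale - 1))
        # 1x1 conv fusing the concatenated multi-scale outputs
        self.conv3 = nn.Conv2d(channels, channels, kernel_size=1, bias=False)
        self.bn3 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)
        # Identity shortcut; a 1x1 projection when channel counts differ
        self.shortcut = (
            nn.Identity() if in_channels == channels
            else nn.Conv2d(in_channels, channels, kernel_size=1, bias=False)
        )

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        xs = torch.chunk(out, self.scale, dim=1)  # split into `scale` subsets
        ys = [xs[0]]  # first subset is kept as-is: y_1 = x_1
        for i in range(1, self.scale):
            # Hierarchical residual-like connection: each subset is added to
            # the previous group's output before its own 3x3 conv
            inp = xs[i] if i == 1 else xs[i] + ys[-1]
            ys.append(self.relu(self.bns[i - 1](self.convs[i - 1](inp))))
        out = torch.cat(ys, dim=1)  # concatenate and fuse with a 1x1 conv
        out = self.bn3(self.conv3(out))
        return self.relu(out + self.shortcut(x))
```

For example, Res2NetBottleneck(256, 256, scale=4) splits a 256-channel map into four 64-channel subsets, so the last subset passes through up to three stacked 3×3 convolutions within one block; this is the increased per-layer range of receptive fields the paper refers to.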

Integration with Modern Backbone Networks

The superiority of the Res2Net module is demonstrated by its integration with various SOTA backbones, including ResNet, ResNeXt, and DLA. Furthermore, the module is orthogonal to other dimensional enhancements like cardinality in group convolutions (ResNeXt) and channel-wise recalibrations (SE block). The Res2Net module's integration into the backbones maintained computational efficiency while providing significant performance gains across multiple vision tasks.
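As a rough illustration of this orthogonality (an assumption based on the paper's claim, not the authors' exact code), cardinality can be added by giving each per-subset 3×3 convolution a groups argument, and channel-wise recalibration by appending an SE stage after the fusion:

```python
# Illustrative composition of the scale dimension with cardinality and SE.
import torch
import torch.nn as nn

class SqueezeExcite(nn.Module):
    """Channel-wise recalibration as in SE-Net."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # reweight channels by learned importance

# Cardinality (ResNeXt-style): make each per-subset 3x3 conv grouped, e.g.
#   nn.Conv2d(width, width, 3, padding=1, groups=4, bias=False)
# (width must be divisible by `groups`), and apply SqueezeExcite(channels)
# to the fused output before adding the shortcut.
```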

Experimental Verification

Image Classification on ImageNet and CIFAR-100

Res2Net achieved consistent improvements on the ImageNet dataset. When integrated into ResNet-50, it reduced the top-1 error rate by 1.84%. Res2NeXt-50 and Res2Net-DLA-60 likewise outperformed their baseline counterparts, and deeper versions such as Res2Net-101 further highlighted the module's scalability and efficacy.

On the CIFAR-100 dataset, Res2NeXt models incorporating the Res2Net module (with varying scales) achieved superior test accuracy compared to baselines such as DenseNet and vanilla ResNeXt, thereby affirming the module's effectiveness on smaller-scale datasets.

Object Detection and Instance Segmentation

Using the Faster R-CNN framework, the Res2Net-based models outperformed traditional ResNet-based models in terms of average precision (AP) on the PASCAL VOC07 and COCO datasets. Notably, the Res2Net-50 model exhibited enhanced detection performance for objects of varying sizes, underscoring its robust multi-scale feature representation.

In the instance segmentation task using Mask R-CNN, the Res2Net module improved both AP and AP at different IoU thresholds across small, medium, and large objects on the COCO dataset, showcasing its comprehensive applicability and effectiveness.

Semantic Segmentation

In semantic segmentation on the PASCAL VOC12 dataset using DeepLab v3+, Res2Net-based models consistently achieved higher mean Intersection-over-Union (mIoU) scores than their ResNet counterparts, indicating finer and more precise segmentation.

Other Vision Tasks

Res2Net also demonstrated enhanced performance in salient object detection and human keypoint estimation. For instance, the DSS framework with Res2Net-50 showed substantial improvements in F-measure and MAE across multiple salient object detection benchmarks, and the SimpleBaseline framework for keypoint estimation achieved higher AP on the COCO dataset when equipped with Res2Net.

Implications and Future Prospects

The Res2Net module's introduction of the scale dimension significantly improves multi-scale feature extraction. This enhanced multi-scale capability leads to superior performance across a diverse set of vision tasks, from image classification to dense prediction problems like object detection and semantic segmentation. The ability to seamlessly integrate with existing architectures while providing notable performance gains suggests vast potential for future developments.

Moreover, the module's flexibility and efficiency make it adaptable to varying computational requirements, paving the way for more refined deployment in both research and practical applications. Future explorations can delve into combining Res2Net with advanced model compression and pruning techniques to further optimize and expand its applicability in resource-constrained environments.

In summary, the Res2Net module represents a critical advancement in backbone network design, encouraging further research into exploiting multi-scale features at finer granularities within neural networks.