
Once-for-All: Train One Network and Specialize it for Efficient Deployment

(1908.09791)
Published Aug 26, 2019 in cs.LG, cs.CV, and stat.ML

Abstract

We address the challenging problem of efficient inference across many devices and resource constraints, especially on edge devices. Conventional approaches either manually design or use neural architecture search (NAS) to find a specialized neural network and train it from scratch for each case, which is computationally prohibitive (causing $CO_2$ emission as much as 5 cars' lifetime) thus unscalable. In this work, we propose to train a once-for-all (OFA) network that supports diverse architectural settings by decoupling training and search, to reduce the cost. We can quickly get a specialized sub-network by selecting from the OFA network without additional training. To efficiently train OFA networks, we also propose a novel progressive shrinking algorithm, a generalized pruning method that reduces the model size across many more dimensions than pruning (depth, width, kernel size, and resolution). It can obtain a surprisingly large number of sub-networks ($> 10^{19}$) that can fit different hardware platforms and latency constraints while maintaining the same level of accuracy as training independently. On diverse edge devices, OFA consistently outperforms state-of-the-art (SOTA) NAS methods (up to 4.0% ImageNet top-1 accuracy improvement over MobileNetV3, or same accuracy but 1.5x faster than MobileNetV3, 2.6x faster than EfficientNet w.r.t. measured latency) while reducing many orders of magnitude GPU hours and $CO_2$ emission. In particular, OFA achieves a new SOTA 80.0% ImageNet top-1 accuracy under the mobile setting ($<$600M MACs). OFA is the winning solution for the 3rd Low Power Computer Vision Challenge (LPCVC), DSP classification track and the 4th LPCVC, both classification track and detection track. Code and 50 pre-trained models (for many devices & many latency constraints) are released at https://github.com/mit-han-lab/once-for-all.

Training a versatile once-for-all network enables efficient model selection and reduces deployment cost and time.

Overview

  • The 'Once-for-All' (OFA) methodology introduced in the paper allows the training of a single, versatile neural network that can adapt to various hardware platforms without retraining, addressing significant deployment challenges in deep learning.

  • The methodology involves training a large, flexible network and then progressively shrinking it to support smaller sub-networks that vary in depth, width, kernel size, and image resolution.

  • Experimental results demonstrate that the OFA approach outperforms state-of-the-art models in both accuracy and efficiency while significantly reducing computational costs and CO₂ emissions.

Once-for-All: Train One Network and Specialize it for Efficient Deployment

The paper titled "Once-for-All: Train One Network and Specialize it for Efficient Deployment" by Han Cai et al. presents a significant contribution to the field of efficient deep learning model deployment. The authors introduce the Once-for-All (OFA) methodology, focusing on decoupling the neural network training process from the architecture search to optimize resource usage for deploying deep neural networks (DNNs) across diverse hardware platforms and efficiency constraints.

Problem Statement

The explosive increase in the complexity and size of neural networks has made it challenging to deploy them effectively across varying platforms and hardware configurations. Traditional approaches either rely on manual design or Neural Architecture Search (NAS), both of which require retraining a specialized model for every deployment scenario. This process results in substantial computational expenses and energy consumption, making it unsustainable for large-scale applications.

Methodology

The OFA approach proposes training a single, versatile network that can adapt to different architectural configurations without the need for retraining. This is achieved through a two-stage process:

  1. Training the Once-for-All Network: A single large network is trained once, encompassing a wide range of configurations in depth, width, kernel size, and image resolution.
  2. Progressive Shrinking: A novel technique proposed by the authors, in which the full network is first trained at its largest depth, width, and kernel size, and then fine-tuned to also support progressively smaller sub-networks. This ordering reduces interference between sub-networks and preserves the accuracy of the smaller models.
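
To make the idea concrete, below is a minimal, self-contained sketch of how sub-networks could be sampled from the shared weights during this fine-tuning. The specific choice sets (kernel sizes {3, 5, 7}, depths {2, 3, 4}, expansion ratios {3, 4, 6}, five units, resolutions 128-224) are assumptions modeled on the mobile search space described in the paper, and the function name is illustrative rather than the authors' API.

```python
import random

# Assumed elastic choice sets, modeled on the paper's mobile search space.
KERNEL_CHOICES = [3, 5, 7]        # elastic kernel size
DEPTH_CHOICES = [2, 3, 4]         # elastic depth (blocks kept per unit)
EXPAND_CHOICES = [3, 4, 6]        # elastic width (channel expansion ratio)
RESOLUTION_CHOICES = list(range(128, 225, 4))  # elastic input resolution
NUM_UNITS = 5                     # sequential units in the backbone

def sample_subnet_config(rng=random):
    """Uniformly sample one sub-network configuration from the OFA space."""
    config = {"resolution": rng.choice(RESOLUTION_CHOICES), "units": []}
    for _ in range(NUM_UNITS):
        depth = rng.choice(DEPTH_CHOICES)
        config["units"].append({
            "depth": depth,
            # each active block chooses its own kernel size and expansion ratio
            "kernels": [rng.choice(KERNEL_CHOICES) for _ in range(depth)],
            "expands": [rng.choice(EXPAND_CHOICES) for _ in range(depth)],
        })
    return config

if __name__ == "__main__":
    # During progressive shrinking, each fine-tuning step activates sampled
    # sub-networks like this one inside the shared weights, so gradients flow
    # only through the active depth/width/kernel slices.
    print(sample_subnet_config())
```

In the paper's setup, the sampled sub-networks are additionally trained with knowledge distillation from the full network, which helps the smaller configurations retain accuracy.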

Architecture Space

The architecture space of the OFA network is designed to cover multiple dimensions:

  • Elastic Depth: A varying number of layers (blocks) per unit.
  • Elastic Width: A varying number of channels via different expansion ratios.
  • Elastic Kernel Size: Selectable convolution kernel sizes.
  • Elastic Resolution: Multiple input image sizes.

This flexibility allows the OFA network to support over $10^{19}$ sub-networks, all sharing the same weights, which keeps total storage far below that of training and storing each sub-network separately.
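
As a back-of-the-envelope check of that count, using the same assumed choice sets as in the sketch above (three kernel sizes and three expansion ratios per block, per-unit depth in {2, 3, 4}, five units; resolution is excluded because it does not change which weights are selected):

```python
# Each block picks one of 3 kernel sizes and one of 3 expansion ratios;
# each of the 5 units keeps 2, 3, or 4 blocks (assumed choice sets).
per_block = 3 * 3                                   # kernel x expansion options
per_unit = sum(per_block ** d for d in (2, 3, 4))   # sum over possible depths
total = per_unit ** 5                               # five independent units
print(f"{total:.2e}")  # ~2.2e+19, consistent with the paper's "> 10^19"
```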

Training and Deployment

Training Procedure

The training of the OFA network is divided into stages:

  • Initial training of the largest network.
  • Progressive incorporation of elastic kernel sizes, depths, and widths.
  • Fine-tuning of the shared weights at each stage so that sub-networks retain high accuracy.
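
The staged ordering can be pictured as a schedule like the one below. The stage order (full network first, then elastic kernel size, depth, and width) follows the paper, while the exact choice sets per stage are illustrative assumptions (in the paper, the depth and width stages are themselves split into sub-phases).

```python
# Illustrative progressive-shrinking schedule. At each stage, sub-networks are
# sampled only from the currently enabled choices and the shared weights are
# fine-tuned (with distillation from the full network in the paper's setup).
SCHEDULE = [
    {"stage": "full network",        "kernel": [7],       "depth": [4],       "expand": [6]},
    {"stage": "elastic kernel size", "kernel": [3, 5, 7], "depth": [4],       "expand": [6]},
    {"stage": "elastic depth",       "kernel": [3, 5, 7], "depth": [2, 3, 4], "expand": [6]},
    {"stage": "elastic width",       "kernel": [3, 5, 7], "depth": [2, 3, 4], "expand": [3, 4, 6]},
]

for stage in SCHEDULE:
    print(f"{stage['stage']:>20}: kernel={stage['kernel']}, "
          f"depth={stage['depth']}, expand={stage['expand']}")
```

Each later stage strictly enlarges the set of supported sub-networks, so the network never stops supporting configurations it has already learned to serve.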

Deployment

For deploying a specialized sub-network for a given hardware constraint:

  • Architecture Search: An evolutionary search, guided by "neural network twins" that predict accuracy and latency, selects a specialized sub-network without additional training; this is far cheaper than exhaustive search. A toy sketch of this predictor-guided search follows.
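
In the sketch below, `predict_accuracy` and `predict_latency` are random placeholders standing in for the trained accuracy predictor and the per-device latency model, and the population, mutation, and budget settings are illustrative rather than the paper's hyperparameters.

```python
import random

# Assumed elastic choice sets (depth, kernel size, expansion ratio) per unit.
KERNELS, DEPTHS, EXPANDS, UNITS = [3, 5, 7], [2, 3, 4], [3, 4, 6], 5

def sample_config():
    """One simplified sub-network: a (depth, kernel, expand) triple per unit."""
    return [(random.choice(DEPTHS), random.choice(KERNELS), random.choice(EXPANDS))
            for _ in range(UNITS)]

def mutate(cfg, prob=0.2):
    return [(random.choice(DEPTHS), random.choice(KERNELS), random.choice(EXPANDS))
            if random.random() < prob else unit for unit in cfg]

def crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]

def predict_accuracy(cfg):
    # Placeholder for the accuracy "twin"; here, bigger sub-networks simply
    # score higher, plus a little noise.
    return sum(d * k * e for d, k, e in cfg) / (4 * 7 * 6 * len(cfg)) + random.gauss(0, 0.01)

def predict_latency(cfg):
    # Placeholder for a per-device latency predictor or lookup table (ms).
    return sum(d * k * k * e for d, k, e in cfg) * 0.05

def evolutionary_search(latency_budget_ms, population=100, generations=20, parents=25):
    pop = [sample_config() for _ in range(population)]
    for _ in range(generations):
        # Keep only candidates the latency twin predicts to fit the budget,
        # then rank them by predicted accuracy.
        feasible = [c for c in pop if predict_latency(c) <= latency_budget_ms]
        top = sorted(feasible, key=predict_accuracy, reverse=True)[:parents]
        top = top or [sample_config()]  # re-seed if nothing fits yet
        children = [mutate(random.choice(top)) for _ in range(population // 2)]
        children += [crossover(random.choice(top), random.choice(top))
                     for _ in range(population - len(children))]
        pop = top + children
    best = [c for c in pop if predict_latency(c) <= latency_budget_ms]
    return max(best, key=predict_accuracy) if best else None

if __name__ == "__main__":
    # Specialize for an assumed 60 ms latency budget on some target device.
    print(evolutionary_search(latency_budget_ms=60))
```

Because both objectives come from cheap predictors rather than from training or measuring every candidate, specializing for a new device or latency budget adds only marginal cost on top of the one-time OFA training.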

Experimental Results

The effectiveness of the OFA methodology is extensively validated across diverse hardware platforms (e.g., mobile devices, GPUs, FPGAs) with varying latency and resource constraints. Key findings include:

  • ImageNet Performance: Models specialized from the OFA network outperform state-of-the-art NAS-based models in both accuracy and efficiency.
  • Efficiency Gains: The OFA approach cuts training cost, GPU hours, and CO₂ emissions by orders of magnitude. For instance, OFA reaches 80.0% ImageNet top-1 accuracy with fewer than 600M MACs, and matches EfficientNet's accuracy while running up to 2.6x faster in measured latency.
  • Transferability: The architecture search and specialization of sub-networks using the OFA model demonstrate significant efficiency improvements across different hardware settings, from cloud-based GPUs to edge devices like mobile phones and FPGAs.

Implications and Future Developments

The OFA methodology not only addresses the immediate challenge of efficiently deploying DNNs but also sets a precedent for future research in the following areas:

  • Automated Model Optimization: The decoupling of training and architecture search allows scalable and sustainable deployment across numerous platforms.
  • Green AI: By significantly reducing the environmental impact of model training and deployment, the OFA method aligns with emerging concerns about the carbon footprint of AI research.
  • Hardware-Aware Design: OFA's ability to tailor models to specific hardware constraints could drive innovation in hardware-aware machine learning model design.

Given the profound implications for practical deployment, the OFA framework establishes a robust basis for future advancements in efficient AI deployment, promising more adaptive and resource-aware machine learning applications.
