CARS: Continuous Evolution for Efficient Neural Architecture Search (1909.04977v6)

Published 11 Sep 2019 in cs.CV

Abstract: Searching techniques in most existing neural architecture search (NAS) algorithms are mainly dominated by differentiable methods for efficiency reasons. In contrast, we develop an efficient continuous evolutionary approach for searching neural networks. Architectures in the population that share parameters within one SuperNet in the latest generation will be tuned over the training dataset for a few epochs. The search in the next evolution generation will directly inherit both the SuperNet and the population, which accelerates optimal network generation. The non-dominated sorting strategy is further applied to preserve only results on the Pareto front for accurately updating the SuperNet. Several neural networks with different model sizes and performance levels will be produced after the continuous search within only 0.4 GPU days. As a result, our framework provides a series of networks with the number of parameters ranging from 3.7M to 5.1M under mobile settings. These networks surpass those produced by the state-of-the-art methods on the benchmark ImageNet dataset.

Citations (215)

Summary

  • The paper introduces CARS, a NAS framework that integrates continuous evolution with a SuperNet for joint architecture and parameter optimization, reducing search time to 0.4 GPU days.
  • The paper leverages the pNSGA-III algorithm within evolutionary methods to efficiently explore diverse neural models and overcome the small model trap seen in traditional NAS.
  • The paper demonstrates that CARS outperforms state-of-the-art NAS approaches on benchmarks like CIFAR-10 and ILSVRC, delivering optimized models for resource-constrained environments.

Overview of CARS: Continuous Evolution for Efficient Neural Architecture Search

The paper under review presents a novel approach to Neural Architecture Search (NAS) termed CARS, which stands for Continuous Evolution for Efficient Neural Architecture Search. The method integrates the advantages of evolutionary algorithms with continuous parameter optimization, exploiting a SuperNet architecture to enhance efficiency. By employing a continuous evolution strategy, the approach encompasses both architecture and parameter optimization to yield neural networks that provide strong performance under constraints such as model size or latency.

Methodology

NAS methods traditionally cluster into evolutionary algorithms, reinforcement learning, and differentiable approaches. This paper focuses on evolutionary methods, which are known for capturing diverse model configurations but are often computationally expensive. To address this inefficiency, CARS employs a continuous evolutionary framework, leveraging a SuperNet to share parameters among different architectures within the search space. Architecture evolution is driven by genetic operators combined with a non-dominated sorting strategy, specifically pNSGA-III, to ensure efficient exploration while overcoming the "small model trap" phenomenon common in traditional NAS methods.
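
To make the selection step concrete, below is a minimal sketch of non-dominated sorting over two objectives, validation accuracy and parameter count. It illustrates only the Pareto-front idea; the paper's pNSGA-III additionally protects slowly converging large models through extra objectives, which this sketch omits. The function names and the dictionary-based individuals are illustrative and not taken from the authors' code.

```python
# Minimal non-dominated sorting sketch: keep architectures that are not
# dominated in (validation accuracy, number of parameters).
# Illustrative only; the paper's pNSGA-III adds protection for large,
# slowly converging models that is omitted here.

def dominates(a, b):
    """a dominates b if it is no worse on both objectives and strictly
    better on at least one. Each individual is a dict with 'acc'
    (higher is better) and 'params' (lower is better)."""
    no_worse = a["acc"] >= b["acc"] and a["params"] <= b["params"]
    better = a["acc"] > b["acc"] or a["params"] < b["params"]
    return no_worse and better

def pareto_front(population):
    """Return the individuals not dominated by any other member."""
    return [
        cand for cand in population
        if not any(dominates(other, cand)
                   for other in population if other is not cand)
    ]

if __name__ == "__main__":
    population = [
        {"name": "arch_a", "acc": 0.94, "params": 3.7e6},
        {"name": "arch_b", "acc": 0.95, "params": 5.1e6},
        {"name": "arch_c", "acc": 0.93, "params": 4.5e6},  # dominated by arch_a
    ]
    print([ind["name"] for ind in pareto_front(population)])  # ['arch_a', 'arch_b']
```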

Key Features:

  • SuperNet: A single overarching network from which individual architectures derive by applying different masks, thus sharing learned parameters during the search (see the sketch after this list).
  • pNSGA-III Algorithm: An improvement over conventional evolutionary algorithms that helps protect potentially better large models from being prematurely discarded due to slow initial convergence.
  • Parameter Warmup and Optimization: Initial parameter warmup followed by mini-batch training updates for efficient learning within the SuperNet framework.
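
The parameter-sharing and warmup mechanics can be pictured with the short PyTorch-style sketch below. It is a toy illustration under the assumption that an architecture is encoded as one operation choice per searchable edge; the names (MixedOp, TinySuperNet, warmup_step) are hypothetical and this is not the authors' implementation.

```python
import random
import torch
import torch.nn as nn

class MixedOp(nn.Module):
    """One searchable edge: holds all candidate ops; a per-edge choice
    selects one, so every architecture reuses the same shared weights."""
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Conv2d(channels, channels, 5, padding=2),
            nn.Identity(),
        ])

    def forward(self, x, choice):
        return self.ops[choice](x)

class TinySuperNet(nn.Module):
    """A toy SuperNet with a few mixed edges; an architecture is simply a
    tuple of per-edge choices (the 'mask' described in the bullet above)."""
    def __init__(self, channels=16, num_edges=4, num_classes=10):
        super().__init__()
        self.stem = nn.Conv2d(3, channels, 3, padding=1)
        self.edges = nn.ModuleList(MixedOp(channels) for _ in range(num_edges))
        self.head = nn.Linear(channels, num_classes)

    def forward(self, x, arch):
        x = self.stem(x)
        for edge, choice in zip(self.edges, arch):
            x = torch.relu(edge(x, choice))
        x = x.mean(dim=(2, 3))  # global average pooling
        return self.head(x)

def warmup_step(supernet, optimizer, images, labels):
    """Parameter warmup: sample a random architecture per mini-batch and
    update only the shared weights that architecture touches."""
    arch = [random.randrange(len(edge.ops)) for edge in supernet.edges]
    logits = supernet(images, arch)
    loss = nn.functional.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In CARS, this warmup phase precedes the continuous evolution; afterwards, individuals in the evolving population keep reusing the same shared SuperNet weights, and only the architecture encodings change between generations.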

Results

The efficiency of CARS is demonstrated by a significant reduction in search time: successful architecture discovery takes as little as 0.4 GPU days. Evaluated on both the CIFAR-10 and ILSVRC2012 datasets, the architectures produced by the CARS framework span a wide range of model sizes and latencies. These models not only surpass many state-of-the-art manually designed architectures but also outperform other automated NAS approaches such as DARTS and SNAS, delivering both accuracy and efficiency improvements.

Implications and Future Directions

The implications of this work are twofold. Practically, the CARS framework provides a robust method for deploying lightweight yet high-performing neural architectures for various applications, especially in resource-constrained environments like mobile devices. Theoretically, it underscores the potential of integrating evolutionary strategies with differentiable methods, pointing to future works that might blend other NAS paradigms for further gains in efficiency and performance.

Future developments could focus on scaling this framework to more complex architectures or expanding its applicability to diverse tasks beyond image classification. Further exploration could involve refining the parameter-sharing mechanism or experimenting with other genetic operators to improve the quality of the search.

In conclusion, the CARS framework provides a compelling direction for efficient NAS, pinpointing a diverse set of solutions on the Pareto front with reduced computational overhead. This advancement paves the way for more accessible and faster architectural innovations in neural network design.