BNAS-v2: Memory-efficient and Performance-collapse-prevented Broad Neural Architecture Search (2009.08886v4)

Published 18 Sep 2020 in cs.CV and stat.ML

Abstract: In this paper, we propose BNAS-v2 to further improve the efficiency of NAS, exploiting both advantages of the broad convolutional neural network (BCNN) simultaneously. To mitigate the unfair training issue of BNAS, we employ a continuous relaxation strategy that makes each edge of the cell in the BCNN relevant to all candidate operations, yielding an over-parameterized BCNN. The continuous relaxation strategy relaxes the choice of a candidate operation into a softmax over all predefined operations. Consequently, BNAS-v2 uses gradient-based optimization to simultaneously update every possible path of the over-parameterized BCNN, rather than a single sampled path as in BNAS. However, continuous relaxation leads to another issue, known as performance collapse, in which weight-free operations are prone to be selected by the search strategy. We provide two solutions to this issue: 1) we propose the Confident Learning Rate (CLR), which accounts for the confidence of the gradient used to update the architecture weights, a confidence that increases with the training time of the over-parameterized BCNN; and 2) we introduce the combination of partial channel connections and edge normalization, which also further improves memory efficiency. We denote differentiable BNAS (i.e., BNAS with continuous relaxation) as BNAS-D, BNAS-D with CLR as BNAS-v2-CLR, and partially connected BNAS-D as BNAS-v2-PC. Experimental results on CIFAR-10 and ImageNet show that 1) BNAS-v2 delivers state-of-the-art search efficiency on both CIFAR-10 (0.05 GPU days, 4x faster than BNAS) and ImageNet (0.19 GPU days); and 2) the proposed CLR effectively alleviates the performance collapse issue in both BNAS-D and the vanilla differentiable NAS framework.
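
To make the continuous relaxation concrete, the sketch below shows a DARTS-style over-parameterized edge: every candidate operation is applied and the outputs are mixed with softmax-normalized architecture weights, so gradient descent updates all paths at once. This is a minimal illustration, not the authors' code; the candidate operation set and channel handling here are hypothetical and the actual BNAS-v2 search space may differ.

```python
# Minimal sketch of the continuous relaxation described in the abstract:
# each edge mixes all candidate operations via a softmax over learnable
# architecture weights (alphas). Assumes PyTorch; ops below are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical candidate operation set (the real search space may differ).
CANDIDATE_OPS = {
    "skip_connect": lambda C: nn.Identity(),
    "avg_pool_3x3": lambda C: nn.AvgPool2d(3, stride=1, padding=1),
    "sep_conv_3x3": lambda C: nn.Sequential(
        nn.Conv2d(C, C, 3, padding=1, groups=C, bias=False),  # depthwise
        nn.Conv2d(C, C, 1, bias=False),                        # pointwise
        nn.BatchNorm2d(C),
        nn.ReLU(),
    ),
}

class MixedEdge(nn.Module):
    """One over-parameterized edge: softmax-weighted sum of all candidate ops."""

    def __init__(self, channels: int):
        super().__init__()
        self.ops = nn.ModuleList(build(channels) for build in CANDIDATE_OPS.values())
        # One architecture weight (alpha) per candidate operation on this edge.
        self.alpha = nn.Parameter(1e-3 * torch.randn(len(self.ops)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = F.softmax(self.alpha, dim=-1)  # continuous relaxation
        return sum(w * op(x) for w, op in zip(weights, self.ops))
```

In this formulation the alphas are trained by gradient descent alongside the network weights, which is where performance collapse can arise. The abstract's CLR remedy scales the architecture update by a confidence that grows with the training time of the over-parameterized BCNN; its exact schedule is not given in the abstract, so it is not reproduced here.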
