On Robustness and Generalization of ML-Based Congestion Predictors to Valid and Imperceptible Perturbations (2403.00103v1)
Abstract: There is substantial interest in the use of ML-based techniques throughout the electronic computer-aided design (CAD) flow, particularly methods based on deep learning. However, while deep learning methods have achieved state-of-the-art performance in several applications, recent work has demonstrated that neural networks are generally vulnerable to small, carefully chosen perturbations of their input (e.g. a single pixel change in an image). In this work, we investigate robustness in the context of ML-based EDA tools -- particularly for congestion prediction. As far as we are aware, we are the first to explore this concept in the context of ML-based EDA. We first describe a novel notion of imperceptibility designed specifically for VLSI layout problems defined on netlists and cell placements. Our definition of imperceptibility is characterized by a guarantee that a perturbation to a layout will not alter its global routing. We then demonstrate that state-of-the-art CNN and GNN-based congestion models exhibit brittleness to imperceptible perturbations. Namely, we show that shifting the positions of a small number of cells (e.g. 1%-5% of cells) such that a measure of global congestion is guaranteed to remain unaffected can drastically change the model's output: for example, adversarially shifting 1% of the design's cells by 0.001% of the layout space yields a predicted decrease in congestion of up to 90%, even though the perturbation implies no change in actual congestion. In other words, the quality of a predictor can be made arbitrarily poor (i.e. it can be made to predict that a design is "congestion-free") for an arbitrary input layout. Next, we describe a simple technique to train predictors that improves robustness to these perturbations. Our work indicates that CAD engineers should be cautious when integrating neural network-based mechanisms in EDA flows, to ensure robust and high-quality results.
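To make the threat model concrete, the sketch below shows one way such an imperceptible perturbation could be generated. It is a minimal PGD-style attack in PyTorch, not the authors' exact formulation: it assumes a predictor `model` that maps a differentiable, placement-derived feature map (produced by a hypothetical `to_features` function) to a congestion map, and it treats a perturbation as imperceptible when each shifted cell stays inside its original global routing cell (g-cell), so the perturbation cannot change global routing demand.

```python
import torch

def gcell_bounds(pos, gcell_size):
    # Lower/upper corners of the g-cell each cell currently occupies.
    lo = torch.floor(pos / gcell_size) * gcell_size
    return lo, lo + gcell_size

def imperceptible_attack(model, pos, to_features, frac=0.01,
                         steps=20, step_size=0.05, gcell_size=1.0):
    """Shift a small fraction of cells within their g-cells so the
    predicted congestion is driven toward zero ("congestion-free")."""
    lo, hi = gcell_bounds(pos, gcell_size)
    # Only a random `frac` of cells is allowed to move; the rest stay fixed.
    mask = (torch.rand(pos.shape[0], 1) < frac).float()
    adv = pos.clone()
    for _ in range(steps):
        adv = adv.detach().requires_grad_(True)
        pred = model(to_features(adv))   # predicted congestion map
        loss = pred.mean()               # attack objective: minimize it
        loss.backward()
        with torch.no_grad():
            adv = adv - step_size * mask * adv.grad.sign()
            # Project back into each cell's original g-cell
            # (the imperceptibility constraint).
            adv = torch.max(torch.min(adv, hi), lo)
    return adv.detach()
```

The same generator could, in principle, be reused inside a standard adversarial-training loop (perturb each training placement before the forward pass, then fit the predictor on the perturbed input); the abstract does not specify whether this is the robustness technique the authors adopt.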
Authors: Chester Holtz, Yucheng Wang, Chung-Kuan Cheng, Bill Lin