On Robustness and Generalization of ML-Based Congestion Predictors to Valid and Imperceptible Perturbations (2403.00103v1)

Published 29 Feb 2024 in cs.LG and cs.AR

Abstract: There is substantial interest in the use of ML-based techniques throughout the electronic computer-aided design (CAD) flow, particularly methods based on deep learning. However, while deep learning methods have achieved state-of-the-art performance in several applications, recent work has demonstrated that neural networks are generally vulnerable to small, carefully chosen perturbations of their input (e.g. a single pixel change in an image). In this work, we investigate robustness in the context of ML-based EDA tools -- particularly for congestion prediction. As far as we are aware, we are the first to explore this concept in the context of ML-based EDA. We first describe a novel notion of imperceptibility designed specifically for VLSI layout problems defined on netlists and cell placements. Our definition of imperceptibility is characterized by a guarantee that a perturbation to a layout will not alter its global routing. We then demonstrate that state-of-the-art CNN and GNN-based congestion models exhibit brittleness to imperceptible perturbations. Namely, we show that when a small number of cells (e.g. 1%-5% of cells) have their positions shifted such that a measure of global congestion is guaranteed to remain unaffected, the predicted congestion can change drastically (e.g. adversarially shifting 1% of the design's cells by 0.001% of the layout space results in a predicted decrease in congestion of up to 90%, while no change in congestion is implied by the perturbation). In other words, the quality of a predictor can be made arbitrarily poor (i.e. it can be made to predict that a design is "congestion-free") for an arbitrary input layout. Next, we describe a simple technique to train predictors that improves robustness to these perturbations. Our work indicates that CAD engineers should be cautious when integrating neural network-based mechanisms in EDA flows to ensure robust and high-quality results.
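The attack the abstract describes can be sketched as a projected-gradient search over cell coordinates. The snippet below is a minimal, hypothetical PyTorch example, not the authors' implementation: the predictor interface model(positions, features), the uniform GCell pitch gcell_size, and the step-size/budget values are all assumptions, and it perturbs every cell for brevity where the paper restricts the shift to a small fraction (1%-5%) of cells. Clipping each cell back into its original global-routing cell and keeping the shift tiny stands in for the paper's imperceptibility guarantee that the perturbation cannot change global routing.

    import torch

    def imperceptible_attack(model, positions, features,
                             gcell_size=1.0, eps=1e-3,
                             steps=20, step_size=2e-4):
        # Hypothetical interfaces: `positions` is an (N, 2) tensor of cell
        # coordinates, `model(positions, features)` returns a predicted
        # congestion map. None of these names come from the paper.
        orig = positions.detach()

        # Bounds of the global-routing cell (GCell) each cell starts in,
        # assuming a uniform grid of pitch `gcell_size`. Staying inside this
        # box approximates the paper's guarantee that the perturbation
        # leaves global routing (and hence true congestion) unchanged.
        gcell_lo = torch.floor(orig / gcell_size) * gcell_size
        gcell_hi = gcell_lo + gcell_size

        perturbed = orig.clone()
        for _ in range(steps):
            perturbed.requires_grad_(True)
            predicted = model(perturbed, features)
            loss = predicted.mean()            # attacker drives predicted congestion down
            grad, = torch.autograd.grad(loss, perturbed)
            with torch.no_grad():
                perturbed = perturbed - step_size * grad.sign()
                # Project into the tiny shift budget around the original placement...
                perturbed = torch.max(torch.min(perturbed, orig + eps), orig - eps)
                # ...and back into the original GCell so global routing cannot change.
                perturbed = torch.max(torch.min(perturbed, gcell_hi), gcell_lo)
        return perturbed.detach()

Running the same inner loop during training, but maximizing the model's own loss and then fitting on the perturbed placements, would be the standard adversarial-training recipe; the abstract only says the authors describe "a simple technique" to improve robustness, so treat that as one plausible instantiation rather than their exact method.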

Authors (4)
  1. Chester Holtz (11 papers)
  2. Yucheng Wang (83 papers)
  3. Chung-Kuan Cheng (13 papers)
  4. Bill Lin (23 papers)
