Less is More: Hop-Wise Graph Attention for Scalable and Generalizable Learning on Circuits (2403.01317v4)

Published 2 Mar 2024 in cs.LG and cs.AR

Abstract: While graph neural networks (GNNs) have gained popularity for learning circuit representations in various electronic design automation (EDA) tasks, they face scalability challenges on large graphs and generalize poorly to new designs. These limitations make them less practical for large-scale, complex circuit problems. In this work, we propose HOGA, a novel attention-based model for learning circuit representations in a scalable and generalizable manner. HOGA first computes hop-wise features per node prior to model training. These hop-wise features alone are then used to produce node representations through a gated self-attention module, which adaptively learns important features across hops without involving the graph topology. As a result, HOGA adapts to the varied structures of different circuits and can be trained efficiently in a distributed manner. To demonstrate the efficacy of HOGA, we consider two representative EDA tasks: quality-of-results (QoR) prediction and functional reasoning. Our experimental results indicate that (1) HOGA reduces estimation error over conventional GNNs by 46.76% when predicting QoR after logic synthesis; (2) HOGA improves reasoning accuracy by 10.0% over GNNs when identifying functional blocks on unseen gate-level netlists after complex technology mapping; and (3) the training time for HOGA decreases almost linearly as computing resources increase.
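
The abstract describes a two-stage pipeline: hop-wise features are precomputed once from the graph, and training then operates only on those fixed-size per-node tensors through a gated self-attention module, with no topology in the loop. The sketch below illustrates that decoupling in PyTorch. It is a minimal interpretation of the abstract, not the authors' implementation: the SIGN-style feature precomputation follows the prior work the paper builds on, while the exact gating formulation and all names here (`precompute_hop_features`, `GatedSelfAttention`, `HOGASketch`) are assumptions for illustration.

```python
# A minimal sketch of the two-stage pipeline described in the abstract.
# Stage 1: precompute hop-wise features [X, AX, A^2 X, ..., A^K X] offline.
# Stage 2: a gated self-attention module fuses the K+1 hop features per node;
# no adjacency matrix is needed at training time.
# All names and the gating details are illustrative assumptions.
import torch
import torch.nn as nn


def precompute_hop_features(adj: torch.Tensor, x: torch.Tensor,
                            num_hops: int) -> torch.Tensor:
    """Stack [X, AX, ..., A^K X] into a (num_nodes, K+1, feat_dim) tensor.

    `adj` is assumed to be a (normalized) adjacency matrix, dense or sparse.
    This runs once before training, so mini-batches carry no graph topology.
    """
    feats = [x]
    for _ in range(num_hops):
        nxt = torch.sparse.mm(adj, feats[-1]) if adj.is_sparse else adj @ feats[-1]
        feats.append(nxt)
    return torch.stack(feats, dim=1)


class GatedSelfAttention(nn.Module):
    """One plausible gated self-attention over the hop axis (an assumption;
    the paper's exact gating may differ)."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        # dim must be divisible by num_heads for multi-head attention.
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.gate = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())
        self.norm = nn.LayerNorm(dim)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (num_nodes, K+1, dim); attention mixes information across hops.
        attn_out, _ = self.attn(h, h, h)
        # The gate adaptively weights how much of each hop's update is kept.
        return self.norm(h + self.gate(attn_out) * attn_out)


class HOGASketch(nn.Module):
    def __init__(self, in_dim: int, hidden_dim: int, out_dim: int,
                 num_layers: int = 2):
        super().__init__()
        self.proj = nn.Linear(in_dim, hidden_dim)
        self.layers = nn.ModuleList(
            GatedSelfAttention(hidden_dim) for _ in range(num_layers))
        self.readout = nn.Linear(hidden_dim, out_dim)

    def forward(self, hop_feats: torch.Tensor) -> torch.Tensor:
        h = self.proj(hop_feats)            # (N, K+1, hidden)
        for layer in self.layers:
            h = layer(h)
        return self.readout(h.mean(dim=1))  # pool over hops -> per-node output


# Usage (shapes only): with x of shape (N, F) and a normalized adjacency adj,
#   hop_feats = precompute_hop_features(adj, x, num_hops=3)  # (N, 4, F)
#   pred = HOGASketch(in_dim=F, hidden_dim=64, out_dim=1)(hop_feats)  # (N, 1)
```

Because each node's input is a fixed (K+1, dim) tensor and no adjacency is consulted during training, nodes can be sharded across workers with plain data parallelism, which is consistent with the near-linear training-time scaling the abstract reports.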

