
Circuit Transformer: A Transformer That Preserves Logical Equivalence (2403.13838v2)

Published 14 Mar 2024 in cs.LG and cs.AR

Abstract: Implementing Boolean functions as circuits of logic gates is fundamental to digital computer design. However, the implemented circuit must be exactly equivalent to the target function, which hinders generative neural approaches to this task because their predictions are occasionally wrong. In this study, we introduce a generative neural model, the "Circuit Transformer", which eliminates such wrong predictions and produces logic circuits strictly equivalent to given Boolean functions. The main idea is a carefully designed decoding mechanism that builds a circuit step by step by generating tokens, with beneficial "cutoff properties" that block a candidate token as soon as it would invalidate equivalence. In this way, the proposed model works similarly to typical LLMs while strictly preserving logical equivalence. A Markov decision process formulation is also proposed for optimizing certain objectives of circuits. Experimentally, we trained an 88-million-parameter Circuit Transformer to generate equivalent yet more compact forms of input circuits, outperforming existing neural approaches on both synthetic and real-world benchmarks without any violation of equivalence constraints.
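The "cutoff" decoding idea described in the abstract can be illustrated with a toy sketch: at each generation step, mask every candidate token that can no longer be extended into a circuit equivalent to the target Boolean function, so any finished sequence is equivalent by construction. Note the prefix-notation gate vocabulary, the exhaustive `can_complete` check, and the greedy scoring function below are illustrative stand-ins, not the paper's method: the actual Circuit Transformer operates on and-inverter graphs and uses a trained Transformer to score tokens.

```python
from itertools import product

# Toy prefix-notation circuit language over two inputs (illustrative only).
GATES = {"AND": lambda x, y: x & y,
         "OR":  lambda x, y: x | y,
         "XOR": lambda x, y: x ^ y}
INPUTS = ("a", "b")
VOCAB = list(GATES) + list(INPUTS)
MAX_LEN = 7  # token budget so the exhaustive check terminates

def open_slots(seq):
    """Number of subtrees still needed to complete the prefix expression."""
    slots = 1
    for tok in seq:
        slots += (2 if tok in GATES else 0) - 1
    return slots

def evaluate(seq, env):
    """Evaluate a complete prefix expression under an input assignment."""
    def rec(i):
        tok = seq[i]
        if tok in INPUTS:
            return env[tok], i + 1
        left, i = rec(i + 1)
        right, i = rec(i)
        return GATES[tok](left, right), i
    value, _ = rec(0)
    return value

def equivalent(seq, target):
    """Does a complete expression match the target truth table?"""
    return all(evaluate(seq, dict(zip(INPUTS, bits))) == target[bits]
               for bits in product((0, 1), repeat=len(INPUTS)))

def can_complete(seq, target):
    """Cutoff check: can `seq` still be extended, within the token budget,
    into an expression equivalent to the target? (Brute force here; the
    paper derives cheap cutoff properties instead.)"""
    if open_slots(seq) == 0:
        return equivalent(seq, target)
    if len(seq) >= MAX_LEN:
        return False
    return any(can_complete(seq + [t], target) for t in VOCAB)

def constrained_decode(score_fn, target):
    """Greedy decoding that masks every token failing the cutoff check, so
    the finished sequence is equivalent to the target by construction."""
    seq = []
    while open_slots(seq) > 0:
        cands = [t for t in VOCAB if can_complete(seq + [t], target)]
        if not cands:  # target unrealisable within the budget
            return None
        seq.append(max(cands, key=lambda t: score_fn(seq, t)))
    return seq

# Demo: synthesize a circuit equivalent to XOR under a trivial scorer
# (a stand-in for the Transformer's next-token probabilities).
XOR_TABLE = {bits: bits[0] ^ bits[1] for bits in product((0, 1), repeat=2)}
circuit = constrained_decode(lambda seq, t: -VOCAB.index(t), XOR_TABLE)
```

Because invalid tokens are masked rather than corrected after the fact, the decoder never emits a non-equivalent circuit; the trained model only influences *which* equivalent circuit is chosen, which is what the paper's MDP formulation then optimizes for compactness.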
