Lean Copilot: Large Language Models as Copilots for Theorem Proving in Lean (2404.12534v3)
Abstract: Neural theorem proving combines large language models (LLMs) with proof assistants such as Lean, where the correctness of formal proofs can be rigorously verified, leaving no room for hallucination. Existing neural theorem provers are pretrained on a fixed collection of data and can offer valuable suggestions, but they struggle to continually prove novel theorems in a fully autonomous mode, where human insights may be critical. In this paper, we explore LLMs as copilots that assist humans in proving theorems. We introduce Lean Copilot, a general framework for running LLM inference natively in Lean. It enables programmers to build various LLM-based proof automation tools that integrate seamlessly into the workflow of Lean users. Lean users can use our pretrained models or bring their own models, running either locally (with or without GPUs) or on the cloud. Using Lean Copilot, we build LLM-based tools that suggest proof steps, complete proof goals, and select relevant premises. Experimental results on the Mathematics in Lean textbook demonstrate the effectiveness of our method compared to existing rule-based proof automation in Lean (aesop). When assisting humans, Lean Copilot requires only 2.08 manually entered proof steps on average (versus 3.86 with aesop); when automating the theorem-proving process, it automates 74.2% of proof steps on average, 85% better than aesop (40.1%). We open-source all code and artifacts under a permissive MIT license to facilitate further research.
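To make the workflow concrete, the sketch below illustrates how the three tools described in the abstract might be invoked from inside a Lean 4 proof. The tactic names `suggest_tactics`, `search_proof`, and `select_premises` follow the tools the paper describes; the snippet itself is an illustrative assumption, not a verbatim excerpt from the Lean Copilot repository, and `Nat.add_comm` is just a standard-library lemma standing in for an accepted suggestion.

```lean
import LeanCopilot

-- Ask for LLM-generated next-step suggestions; in the described workflow they
-- appear in the editor's infoview and the user can accept one to replace the
-- call. The goal stays open until then, so we close this sketch with `sorry`.
example (a b : Nat) : a + b = b + a := by
  suggest_tactics
  sorry

-- `search_proof` aims to complete the whole goal and `select_premises` to
-- retrieve relevant lemmas. Whatever step is accepted is re-checked by Lean,
-- e.g. the standard-library lemma below closes this goal.
example (a b : Nat) : a + b = b + a := by
  exact Nat.add_comm a b
```

Because every accepted step is re-elaborated and checked by Lean itself, a suggestion that does not type-check is simply rejected, which is what the abstract means by leaving no room for hallucination.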
- The Coq proof assistant reference manual: Version 6.1. Technical report, Inria, 1997.
- Isabelle/HOL: A proof assistant for higher-order logic. Springer, 2002.
- The Lean theorem prover (system description). In International Conference on Automated Deduction (CADE), 2015.
- A formal proof of the Kepler conjecture. Forum of Mathematics, Pi, volume 5, 2017.
- Mathlib Community. Completion of the liquid tensor experiment. https://leanprover-community.github.io/blog/posts/lte-final/, 2022.
- LeanDojo: Theorem proving with retrieval-augmented language models. In Neural Information Processing Systems (NeurIPS), 2023.
- Learning to prove theorems via interacting with proof assistants. In International Conference on Machine Learning (ICML), 2019.
- Generative language modeling for automated theorem proving. arXiv preprint arXiv:2009.03393, 2020.
- ByT5: Towards a token-free future with pre-trained byte-to-byte models. Transactions of the Association for Computational Linguistics (TACL), 10:291–306, 2022.
- Language models are few-shot learners. In Neural Information Processing Systems (NeurIPS), 2020.
- The mathlib Community. The Lean mathematical library. In International Conference on Certified Programs and Proofs (CPP), 2020.
- A small scale reflection extension for the Coq system. Research report, Inria, 2008.
- LISA: Language models of ISAbelle proofs. In Conference on Artificial Intelligence and Theorem Proving (AITP), 2021.
- Thor: Wielding hammers to integrate language models and automated theorem provers. In Neural Information Processing Systems (NeurIPS), 2022.
- Proof artifact co-training for theorem proving with language models. In International Conference on Learning Representations (ICLR), 2022.
- HyperTree proof search for neural theorem proving. In Neural Information Processing Systems (NeurIPS), 2022.
- Formal mathematics statement curriculum learning. In International Conference on Learning Representations (ICLR), 2023.
- Baldur: Whole-proof generation and repair with large language models. arXiv preprint arXiv:2303.04910, 2023.
- DT-Solver: Automated theorem proving with dynamic-tree sampling guided by proof-level value function. In Annual Meeting of the Association for Computational Linguistics (ACL), 2023.
- LEGO-Prover: Neural theorem proving with growing libraries, 2023.
- OpenAI Gym. arXiv preprint arXiv:1606.01540, 2016.
- MiniF2F: a cross-system benchmark for formal olympiad-level mathematics. In International Conference on Learning Representations (ICLR), 2022.
- Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021.
- HuggingFace's Transformers: State-of-the-art natural language processing, 2020.
- Aesop: White-box best-first proof search for Lean. In International Conference on Certified Programs and Proofs (CPP), 2023.
- The OpenNMT Authors. CTranslate2: a C++ and Python library for efficient inference with Transformer models. https://github.com/OpenNMT/CTranslate2, 2020.
- Mathematics in Lean, 2020.
- TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org.
- PyTorch: An imperative style, high-performance deep learning library. In Neural Information Processing Systems (NeurIPS), 2019.
- François Chollet et al. Keras. https://keras.io, 2015.
- llmstep: LLM proofstep suggestions in Lean. https://github.com/wellecks/llmstep, 2023.
- Pythia: A suite for analyzing large language models across training and scaling. In International Conference on Machine Learning (ICML), 2023.
- Machine-learned premise selection for Lean. In International Conference on Automated Reasoning with Analytic Tableaux and Related Methods (TABLEAUX), 2023.
- Premise selection for theorem proving by deep graph embedding. In Neural Information Processing Systems (NeurIPS), 2017.
- Leon Merten Lohse. libnpy: a simple C++ library for reading and writing of NumPy's .npy files, 2017.
- DeepMath—deep sequence models for premise selection. In Neural Information Processing Systems (NeurIPS), 2016.
- Magnushammer: A transformer-based approach to premise selection. arXiv preprint arXiv:2303.04488, 2023.
- GamePad: A learning environment for theorem proving. In International Conference on Learning Representations (ICLR), 2019.
- IsarStep: a benchmark for high-level mathematical reasoning. In International Conference on Learning Representations (ICLR), 2021.
- HolStep: A machine learning dataset for higher-order logic theorem proving. In International Conference on Learning Representations (ICLR), 2017.
- HOList: An environment for machine learning of higher order logic theorem proving. In International Conference on Machine Learning (ICML), 2019.
- Learning to reason in large theories without imitation. arXiv preprint arXiv:1905.10501, 2019.
- Graph representations for higher-order logic and theorem proving. In AAAI Conference on Artificial Intelligence, 2020.
- Learning to prove theorems by learning to generate theorems. In Neural Information Processing Systems (NeurIPS), 2020.
- TacTok: semantics-aware proof synthesis. In Object-oriented Programming, Systems, Languages, and Applications (OOPSLA), 2020.
- Mathematical reasoning via self-supervised skip-tree training. In International Conference on Learning Representations (ICLR), 2021.
- Passport: Improving automated formal verification with identifiers. ACM Transactions on Programming Languages and Systems (TOPLAS), 2023.
- Attention is all you need. In Neural Information Processing Systems (NeurIPS), 2017.
- FIMO: A challenge formal dataset for automated theorem proving. arXiv preprint arXiv:2309.04295, 2023.
- SMTCoq: A plug-in for integrating SMT solvers into Coq. In International Conference on Computer Aided Verification (CAV), 2017.
- Frédéric Besson. Fast reflexive arithmetic tactics the linear case and beyond. In International Workshop on Types for Proofs and Programs, 2007.
- Proving equalities in a commutative ring done right in Coq. In International Conference on Theorem Proving in Higher Order Logics, 2005.
- Hammering towards QED. Journal of Formalized Reasoning, 9(1):101–148, 2016.
- Sledgehammer: judgement day. In International Joint Conference on Automated Reasoning (IJCAR), 2010.
- Hammer for Coq: Automation for dependent type theory. Journal of Automated Reasoning, 2018.
- TacticToe: learning to prove with tactics. Journal of Automated Reasoning, 65:257–286, 2021.
- The Tactician: A seamless, interactive tactic learner and prover for Coq. In Conference on Intelligent Computer Mathematics (CICM), 2020.
- Alistair Geesing. Premise Selection for Lean 4. Master's thesis, Universiteit van Amsterdam, 2023.
- coq-synthesis: Coq plugin for proof generation and next tactic prediction. https://github.com/agrarpan/coq-synthesis, 2023.
- lean-gptf: Interactive neural theorem proving in Lean. https://github.com/jesse-michael-han/lean-gptf, 2023.
- Sagredo: automated dialogue between GPT and Lean. https://www.youtube.com/watch?v=CEwRMT0GpKo, 2023.
- Evaluating language models for mathematics through interactions. arXiv preprint arXiv:2306.01694, 2023.