Learning Heuristics for Template-based CEGIS of Loop Invariants with Reinforcement Learning
arXiv:2107.09766

Abstract
Loop-invariant synthesis is the basis of program verification. Because the problem is undecidable in general, a tool for invariant synthesis necessarily relies on heuristics. Despite the common belief that the design of heuristics is vital to a synthesizer's performance, heuristics are often engineered by their developers based on experience and intuition, sometimes in an ad-hoc manner. In this work, we propose an approach to systematically learning heuristics for template-based CounterExample-Guided Inductive Synthesis (CEGIS) with reinforcement learning. As a concrete example, we implement the approach on top of PCSat, an invariant synthesizer based on template-based CEGIS. Experiments show that PCSat guided by the heuristics learned by our framework not only outperforms existing state-of-the-art CEGIS-based solvers such as HoICE and the neural solver Code2Inv, but also has slight advantages over non-CEGIS-based solvers such as Eldarica and Spacer in linear Constrained Horn Clause (CHC) solving.
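For readers unfamiliar with CEGIS, the overall loop the abstract refers to can be sketched as follows. This is a generic, illustrative sketch, not PCSat's actual implementation; all names (`cegis`, `verify`, `choose_candidate`) are hypothetical. The `choose_candidate` hook is the heuristic component that the paper proposes to learn with reinforcement learning.

```python
def cegis(templates, verify, choose_candidate, max_iters=100):
    """Generic counterexample-guided inductive synthesis loop (sketch).

    templates        -- pool of parameterized candidate invariants
    verify(cand)     -- returns None if `cand` is a valid invariant,
                        otherwise a counterexample to it
    choose_candidate -- heuristic: picks the next candidate consistent
                        with the counterexamples collected so far
                        (the component learned with RL in the paper)
    """
    counterexamples = []
    for _ in range(max_iters):
        cand = choose_candidate(templates, counterexamples)
        if cand is None:
            return None  # no template fits the collected counterexamples
        cex = verify(cand)
        if cex is None:
            return cand  # candidate verified: invariant found
        counterexamples.append(cex)  # refine and iterate
    return None  # iteration budget exhausted
```

The key design point is that the synthesizer and verifier communicate only through counterexamples, so any candidate-selection heuristic, hand-written or learned, can be plugged in without changing the rest of the loop.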