Toward Trustworthy Neural Program Synthesis (2210.00848v2)
Abstract: We develop an approach to estimate the probability that a program sampled from an LLM is correct. Given a natural language description of a programming problem, our method samples both candidate programs and candidate predicates specifying how the program should behave. This allows learning a model that forms a well-calibrated probabilistic prediction of program correctness. Our system also infers which predicates are useful for explaining the behavior of the generated code; in a human study, participants preferred these explanations over raw LLM outputs. Our method is simple, easy to implement, and maintains state-of-the-art generation accuracy.
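To make the idea concrete, here is a minimal sketch (not the paper's implementation) of scoring sampled programs against sampled predicates. It assumes predicates are executable assertion strings and uses the fraction of satisfied predicates as a simple proxy for the calibrated correctness probability the paper learns; the function names `run_predicate` and `correctness_score` are hypothetical.

```python
def run_predicate(program_src: str, predicate_src: str) -> bool:
    """Execute one predicate (an assert statement) against a candidate program.

    Both arguments stand in for LLM-sampled strings; the predicate
    passes if executing it raises no exception.
    """
    env = {}
    try:
        exec(program_src, env)    # define the candidate function
        exec(predicate_src, env)  # run the assertion against it
        return True
    except Exception:
        return False


def correctness_score(program_src: str, predicates: list[str]) -> float:
    """Fraction of sampled predicates the program satisfies —
    a crude proxy for a learned, calibrated correctness probability."""
    if not predicates:
        return 0.0
    passed = sum(run_predicate(program_src, p) for p in predicates)
    return passed / len(predicates)


# Toy usage: two candidate programs and three sampled predicates.
good = "def add(a, b):\n    return a + b"
bad = "def add(a, b):\n    return a - b"
predicates = [
    "assert add(1, 2) == 3",
    "assert add(0, 0) == 0",
    "assert add(-1, 1) == 0",
]
```

In this toy setup, `correctness_score(good, predicates)` is 1.0 while the buggy candidate scores lower, so ranking candidates by this score prefers the correct program.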