Jigsaw: Large Language Models meet Program Synthesis

(2112.02969)
Published Dec 6, 2021 in cs.SE and cs.PL

Abstract

Large pre-trained language models such as GPT-3, Codex, and Google's language model are now capable of generating code from natural language specifications of programmer intent. We view these developments with a mixture of optimism and caution. On the optimistic side, such LLMs have the potential to improve productivity by providing an automated AI pair programmer for every programmer in the world. On the cautionary side, since these LLMs do not understand program semantics, they offer no guarantees about the quality of the suggested code. In this paper, we present an approach to augment these LLMs with post-processing steps based on program analysis and synthesis techniques that understand the syntax and semantics of programs. Further, we show that such techniques can make use of user feedback and improve with usage. We present our experiences from building and evaluating such a tool, Jigsaw, targeted at synthesizing code for the Python Pandas API from multi-modal inputs. Our experience suggests that as these LLMs evolve for synthesizing code from intent, Jigsaw has an important role to play in improving the accuracy of such systems.
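To make the post-processing idea from the abstract concrete, here is a minimal sketch of checking LLM-suggested Pandas snippets against a user-provided input-output example, so that only candidates reproducing the expected output survive. This is an illustration, not Jigsaw's actual implementation: the `passes_io_check` helper, the convention that a snippet reads `df` and writes `out`, and the toy candidates are all assumptions made for this example.

```python
import pandas as pd


def passes_io_check(candidate_src: str, input_df: pd.DataFrame,
                    expected_df: pd.DataFrame) -> bool:
    """Run a candidate snippet that reads `df` and writes `out`,
    then compare `out` against the expected output (hypothetical helper)."""
    env = {"pd": pd, "df": input_df.copy()}
    try:
        exec(candidate_src, env)           # execute the LLM-suggested code
        return env["out"].equals(expected_df)
    except Exception:                      # syntax or runtime errors fail the check
        return False


# Toy specification: keep rows where column "a" exceeds 1.
input_df = pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]})
expected_df = input_df[input_df["a"] > 1]

candidates = [
    "out = df[df['a'] > 1]",   # correct suggestion
    "out = df[df['b'] > 1]",   # plausible but semantically wrong suggestion
]

accepted = [c for c in candidates if passes_io_check(c, input_df, expected_df)]
print(accepted)  # only the first candidate survives the check
```

In the paper's setting, such input-output examples are part of the multi-modal specification, so a check of this kind can discard suggestions that are syntactically valid but semantically wrong before they reach the user.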
