Prompt engineering is an increasingly important skill set needed to converse effectively with LLMs, such as ChatGPT. Prompts are instructions given to an LLM to enforce rules, automate processes, and ensure specific qualities (and quantities) of generated output. Prompts are also a form of programming that can customize the outputs and interactions with an LLM. This paper describes a catalog of prompt engineering techniques presented in pattern form that have been applied to solve common problems when conversing with LLMs. Prompt patterns are a knowledge transfer method analogous to software patterns since they provide reusable solutions to common problems faced in a particular context, i.e., output generation and interaction when working with LLMs. This paper provides the following contributions to research on prompt engineering that apply LLMs to automate software development tasks. First, it provides a framework for documenting patterns for structuring prompts to solve a range of problems so that they can be adapted to different domains. Second, it presents a catalog of patterns that have been applied successfully to improve the outputs of LLM conversations. Third, it explains how prompts can be built from multiple patterns and illustrates prompt patterns that benefit from combination with other prompt patterns.
- The paper presents a structured framework for documenting and applying a catalog of prompt engineering techniques for LLMs such as ChatGPT.
- It introduces a catalog of 16 prompt patterns organized into six categories (Input Semantics, Output Customization, Error Identification, Prompt Improvement, Interaction, and Context Control), each pattern providing a reusable solution to a common problem in LLM interactions.
- It discusses how prompt patterns can be combined, demonstrating how multiple patterns work together to enhance the effectiveness of LLM interactions, particularly in complex scenarios.
The research paper titled "A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT" by Jules White et al. from Vanderbilt University presents a structured framework for documenting and applying a catalog of prompt engineering techniques designed specifically for LLMs like ChatGPT. The authors introduce the concept of "prompt patterns," likening their methodology to software engineering patterns in that both provide reusable solutions to common problems faced in a particular context.
Prompt engineering, a critical skill for customizing and optimizing interactions with LLMs, entails crafting instructions (prompts) that enforce rules, automate processes, and enhance the quality and relevance of generated outputs. The authors extend the utility of LLMs by offering a framework for systematically documenting prompt patterns so that they can be adapted to various domains, thereby improving the efficacy and productivity of LLM-based applications.
The prompt patterns outlined in the paper show a deep understanding of user interaction with LLMs. Each pattern is documented in a consistent form, with a name and classification, intent and context, motivation, structure and key ideas, an example implementation, and consequences. This format is analogous to classic software patterns, which similarly record a name, intent, motivation, structure and participants, example code, and consequences.
For instance, the Meta Language Creation pattern allows the user to define custom semantics for interaction keywords, optimizing interactions for specific contexts like describing graph structures or state machines. Similarly, the Output Automater pattern instructs the LLM to generate executable artifacts, significantly reducing the manual effort required to implement LLM suggestions.
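Both patterns are expressed as natural-language contextual statements given to the LLM at the start of a conversation. The following sketches are paraphrases in the spirit of the paper's examples; the exact wording is illustrative, not quoted from the paper:

```text
# Meta Language Creation: define a custom shorthand the LLM should honor
From now on, whenever I write "a -> b", I am describing a directed edge
from node a to node b in a graph. Keep track of the graph I describe
and answer my questions about it.

# Output Automater: request an executable artifact alongside the advice
Whenever you produce an output that involves more than one manual step,
also generate a script that automates those steps.
```

In both cases the pattern is the reusable structure of the statement ("whenever X, do Y"), which users adapt to their own domain and notation.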
Other patterns such as Persona enable LLMs to generate outputs from a specific perspective, useful in scenarios requiring domain-specific knowledge. For example, instructing the LLM to act as a security expert during code reviews adds immense value by automating nuanced analysis and feedback. Patterns like Question Refinement and Cognitive Verifier enhance the interaction by ensuring the LLM deconstructs user queries into more effective sub-queries, thereby improving the final output.
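These interaction-oriented patterns follow the same contextual-statement style. An illustrative paraphrase of each (approximate wording, not the paper's verbatim prompts):

```text
# Persona: elicit outputs from a specific perspective
From now on, act as a security reviewer. Whenever I submit code, review
it as that persona would, pointing out potential vulnerabilities.

# Question Refinement: have the LLM improve the question before answering
Whenever I ask a question, suggest a better version of the question and
ask me whether I would like to use it instead.

# Cognitive Verifier: decompose the question before answering
When you are asked a question, generate several additional questions
that would help you answer it more accurately, then combine the answers
to those questions to produce the final answer.
```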
The implications of this research are multifaceted. Practically, the introduction of prompt patterns holds promise for a wide range of applications, from software engineering to educational tools, and beyond. By systematically documenting prompt patterns, the authors emphasize the reuse and adaptability of these solutions, showcasing their potential to dramatically streamline the user experience with LLMs. This structured approach can enhance model accuracy, minimize errors, and reduce user burden, particularly in complex interaction scenarios.
Theoretically, the paper underscores the principles of knowledge transfer and contextual integrity within LLM interactions. As LLM capabilities evolve, the prompt patterns will require continuous refinement to accommodate new functionalities and use cases. This dynamic aspect points towards an ongoing research trajectory in prompt engineering, with a focus on developing more advanced pattern languages that guide users through increasingly sophisticated interactions with LLMs.
The prompt patterns documented in this paper provide a transformative approach to enhancing the capabilities and effectiveness of interactions with LLMs like ChatGPT. By drawing parallels to software design patterns, the authors offer a structured, reusable, and adaptable methodology for addressing a broad spectrum of user interaction challenges. This foundational work paves the way for future research to further refine prompt engineering techniques, ultimately leading to more intuitive, efficient, and powerful AI-driven solutions across various domains. The insightful amalgamation of pattern documentation, combined usage strategies, and explicit instructions presents a compelling case for the widespread adoption and continuous evolution of prompt patterns in leveraging the full potential of LLMs.