Recent AI-assistant agents, such as ChatGPT, predominantly rely on supervised fine-tuning (SFT) with human annotations and reinforcement learning from human feedback (RLHF) to align the output of LLMs with human intentions, ensuring they are helpful, ethical, and reliable. However, this dependence can significantly constrain the true potential of AI-assistant agents due to the high cost of obtaining human supervision and the related issues of quality, reliability, diversity, self-consistency, and undesirable biases. To address these challenges, we propose a novel approach called SELF-ALIGN, which combines principle-driven reasoning and the generative power of LLMs for the self-alignment of AI agents with minimal human supervision. Our approach encompasses four stages: first, we use an LLM to generate synthetic prompts, and a topic-guided method to augment the prompt diversity; second, we use a small set of human-written principles for AI models to follow, and guide the LLM through in-context learning from demonstrations (of principle application) to produce helpful, ethical, and reliable responses to users' queries; third, we fine-tune the original LLM with the high-quality self-aligned responses so that the resulting model can generate desirable responses for each query directly, without requiring the principle set or the demonstrations anymore; and finally, we offer a refinement step to address the issue of overly brief or indirect responses. Applying SELF-ALIGN to the LLaMA-65b base language model, we develop an AI assistant named Dromedary. With fewer than 300 lines of human annotations (including < 200 seed prompts, 16 generic principles, and 5 exemplars for in-context learning), Dromedary significantly surpasses the performance of several state-of-the-art AI systems, including Text-Davinci-003 and Alpaca, on benchmark datasets with various settings.
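The four stages described above can be outlined as a toy pipeline. This is a minimal sketch only: every function below is a hypothetical stand-in (the actual method calls an LLM for prompt generation, principle-guided response drafting, fine-tuning, and refinement), and the helper names are not from the paper.

```python
# Toy sketch of the four SELF-ALIGN stages. All "model" behavior here is
# simulated with simple Python stand-ins; the real pipeline uses an LLM.

def topic_guided_self_instruct(seed_prompts):
    """Stage 1 (sketch): expand seed prompts into synthetic prompts,
    using topics to boost diversity (toy: fixed templates)."""
    topics = ["science", "history", "ethics"]
    return seed_prompts + [f"Explain a concept from {t}." for t in topics]

def principle_driven_response(prompt, principles):
    """Stage 2 (sketch): draft a response guided by the written principles
    via in-context demonstrations (toy: tag the response)."""
    return f"[guided by {len(principles)} principles] Response to: {prompt}"

def fine_tune(pairs):
    """Stage 3 (sketch): fine-tune on self-aligned (prompt, response) pairs
    so no principles are needed at inference (toy: a lookup table)."""
    return dict(pairs)

def refine(response):
    """Stage 4 (sketch): expand overly brief or indirect responses
    (toy: append a marker when the response is short)."""
    return response + " (refined in more detail)" if len(response) < 80 else response

# Minimal human supervision: a few seed prompts and principles.
seed = ["What causes rain?"]
principles = ["be helpful", "be ethical", "be reliable"]

prompts = topic_guided_self_instruct(seed)                              # Stage 1
pairs = [(p, principle_driven_response(p, principles)) for p in prompts]  # Stage 2
model = fine_tune(pairs)                                                # Stage 3
final = {p: refine(r) for p, r in model.items()}                        # Stage 4
```

The point of the sketch is the data flow: a small seed set is amplified into many prompts, responses are drafted under explicit principles, and the fine-tuned model then answers directly without the principles in context.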