Self-Instruct: Aligning Language Models with Self-Generated Instructions

(arXiv:2212.10560)
Published Dec 20, 2022 in cs.CL and cs.AI

Abstract

Large "instruction-tuned" language models (i.e., finetuned to respond to instructions) have demonstrated a remarkable ability to generalize zero-shot to new tasks. Nevertheless, they depend heavily on human-written instruction data that is often limited in quantity, diversity, and creativity, therefore hindering the generality of the tuned model. We introduce Self-Instruct, a framework for improving the instruction-following capabilities of pretrained language models by bootstrapping off their own generations. Our pipeline generates instructions, input, and output samples from a language model, then filters invalid or similar ones before using them to finetune the original model. Applying our method to the vanilla GPT3, we demonstrate a 33% absolute improvement over the original model on Super-NaturalInstructions, on par with the performance of InstructGPT-001, which was trained with private user data and human annotations. For further evaluation, we curate a set of expert-written instructions for novel tasks, and show through human evaluation that tuning GPT3 with Self-Instruct outperforms using existing public instruction datasets by a large margin, leaving only a 5% absolute gap behind InstructGPT-001. Self-Instruct provides an almost annotation-free method for aligning pre-trained language models with instructions, and we release our large synthetic dataset to facilitate future studies on instruction tuning. Our code and data are available at https://github.com/yizhongw/self-instruct.

Overview

  • The paper introduces SELF-INSTRUCT, a novel framework that improves the instruction-following capabilities of pre-trained language models (LMs) through self-generated instructions, reducing the dependency on human-written data.

  • SELF-INSTRUCT employs a self-bootstrapping methodology in which an LM generates new instructions, determines whether each describes a classification task, generates input-output instances, and iteratively filters out low-quality or near-duplicate data.

  • Applied to vanilla GPT3, SELF-INSTRUCT produces over 52,000 instructions and 82,000 instances, and achieves a 33% absolute improvement over the base model on the Super-NaturalInstructions benchmark.

  • The paper suggests that SELF-INSTRUCT not only enhances the generative abilities of LMs but also offers a new path for instruction tuning that lessens the need for extensive human-labeled datasets.

Unveiling SELF-INSTRUCT: A Method for Aligning Language Models with Self-Generated Instructions

Introduction to SELF-INSTRUCT

The proliferation of language models trained to follow instructions marks a significant milestone in the evolution of generative AI. These models demonstrate remarkable zero-shot generalization to new tasks by leveraging human-written instructions. However, the dependency on such instructions is a bottleneck: the datasets are scarce, limited in diversity, and labor-intensive to produce. To address these challenges, this paper introduces SELF-INSTRUCT, a framework designed to enhance the instruction-following abilities of pre-trained language models (LMs) through a self-bootstrapping methodology.

Core Methodology of SELF-INSTRUCT

SELF-INSTRUCT advances instruction tuning by employing an LM to autonomously generate new instruction data, including tasks, inputs, and corresponding outputs. The process iterates through four steps (a minimal sketch of the loop follows the list):

  1. Instruction Generation: The LM is prompted in a few-shot manner with instructions sampled from a task pool (seeded with 175 human-written tasks) to propose new instructions.
  2. Task Classification: Each new instruction is labeled as a classification or non-classification task, since this determines how instances are generated in the next step.
  3. Instance Generation: For each instruction, the model generates corresponding input-output instances (output-first for classification tasks, input-first otherwise).
  4. Data Filtering: Heuristics filter out low-quality or near-duplicate instructions and instances, such as candidates whose ROUGE-L similarity to the existing pool is too high.
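
To make the loop concrete, here is a minimal sketch of steps 1 and 4 in Python. It assumes a placeholder `generate(prompt)` completion call standing in for the GPT3 API; the eight-demonstration prompt format and the 0.7 ROUGE-L novelty threshold follow the paper, but the surrounding scaffolding is illustrative rather than the authors' released code.

```python
# Minimal sketch of the SELF-INSTRUCT bootstrapping loop (steps 1 and 4).
# `generate(prompt) -> str` is a placeholder for whichever LM completion
# API is used; prompt format and 0.7 ROUGE-L threshold follow the paper.
import random

from rouge_score import rouge_scorer  # pip install rouge-score

scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=False)

def is_novel(candidate, pool, threshold=0.7):
    """Keep a candidate only if its ROUGE-L overlap with every pooled
    instruction stays below the threshold (the paper's similarity filter)."""
    return all(
        scorer.score(existing, candidate)["rougeL"].fmeasure < threshold
        for existing in pool
    )

def bootstrap(seed_instructions, generate, target_size):
    pool = list(seed_instructions)  # the paper seeds this with 175 tasks
    while len(pool) < target_size:
        # Step 1: few-shot prompt built from sampled pool instructions
        # (the paper samples 8: 6 human-written, 2 model-generated).
        demos = random.sample(pool, k=min(8, len(pool)))
        prompt = "Come up with a series of tasks:\n"
        prompt += "\n".join(f"Task {i + 1}: {t}" for i, t in enumerate(demos))
        prompt += f"\nTask {len(demos) + 1}:"
        candidate = generate(prompt).strip()
        # Step 4: discard empty or near-duplicate instructions.
        if candidate and is_novel(candidate, pool):
            pool.append(candidate)
    return pool
```

In the released pipeline, each LM call can return several candidate tasks at once, and instructions containing keywords the model cannot act on (e.g., "image", "graph") are also discarded; both refinements slot into the same filtering stage.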

The crux of this methodology lies in its ability to exploit the latent knowledge embedded within LMs to generate a broad spectrum of instructions, thereby circumventing the necessity for extensive human-labeled datasets.
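
Steps 2 and 3 can be sketched in the same style. The snippet below is a hypothetical illustration reusing the placeholder `generate` call; the output-first ordering for classification tasks mirrors the paper's design, which avoids generating inputs skewed toward a single label, while the prompt wording itself is simplified.

```python
# Hypothetical sketch of steps 2 and 3, reusing the placeholder `generate`.
# Prompt wording is simplified; the input-first/output-first split follows the paper.

def is_classification(instruction, generate):
    """Step 2: ask the LM whether the task has a small, fixed label space."""
    prompt = (
        "Can the following task be regarded as a classification task "
        f"with finite output labels?\nTask: {instruction}\nAnswer (yes/no):"
    )
    return generate(prompt).strip().lower().startswith("yes")

def make_instance(instruction, generate):
    """Step 3: produce one input-output instance for the instruction."""
    if is_classification(instruction, generate):
        # Output-first: pick a class label, then an input that fits it,
        # so inputs are not biased toward whichever label comes easiest.
        label = generate(f"Task: {instruction}\nClass label:").strip()
        text = generate(f"Task: {instruction}\nClass label: {label}\nInput:").strip()
        return {"instruction": instruction, "input": text, "output": label}
    # Input-first: produce an input, then the matching output.
    text = generate(f"Task: {instruction}\nInput:").strip()
    answer = generate(f"Task: {instruction}\nInput: {text}\nOutput:").strip()
    return {"instruction": instruction, "input": text, "output": answer}
```

Generated instances then pass through the filtering stage as well, where exact duplicates and instances sharing an input but carrying different outputs are dropped before fine-tuning.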

Empirical Evaluation and Results

Applied to vanilla GPT3, the SELF-INSTRUCT framework yields a synthetic dataset of over 52,000 instructions paired with roughly 82,000 instances. Evaluation on the Super-NaturalInstructions benchmark shows an absolute improvement of 33% over the baseline GPT3 model, on par with InstructGPT-001. This leap underscores the framework's potential to expand the scope and capabilities of instruction-following models.

Moreover, human evaluation on a curated set of 252 expert-written tasks shows that models trained with SELF-INSTRUCT data outperform those trained on existing public instruction datasets by a large margin, trailing InstructGPT-001 by only 5% absolute. These findings point to largely untapped potential for improving how LMs understand and execute a wide array of human instructions.

The Theoretical Implications and Future Directions

The approach taken by SELF-INSTRUCT challenges and extends current paradigms in instruction tuning. By leveraging the generative capacity of LMs to spawn new instruction data, it unveils a pathway toward reducing reliance on labor-intensive, human-generated datasets. The method also opens avenues for further research into automatic dataset generation, instruction-tuning efficiency, and the exploration of more complex or creative tasks beyond the current NLP task spectrum.

Further development could involve refining the data generation process through advanced filtering techniques or integrating human-in-the-loop mechanisms to enhance the quality and diversity of generated tasks. Moreover, the scalability and efficiency of instruction tuning as models grow in size and complexity present areas ripe for investigation.

Conclusion

The SELF-INSTRUCT framework marks a novel step in aligning pre-trained language models more closely with human instructions, mitigating one of the key challenges in the instruction-tuned model landscape. By demonstrating significant improvements in instruction-following capabilities with minimal reliance on human-annotated data, this work paves the way for the next generation of more generalizable, efficient, and autonomously improving language models.
