
Promptagator: Few-shot Dense Retrieval From 8 Examples (2209.11755v1)

Published 23 Sep 2022 in cs.CL and cs.IR

Abstract: Much recent research on information retrieval has focused on how to transfer from one task (typically with abundant supervised data) to various other tasks where supervision is limited, with the implicit assumption that it is possible to generalize from one task to all the rest. However, this overlooks the fact that there are many diverse and unique retrieval tasks, each targeting different search intents, queries, and search domains. In this paper, we suggest to work on Few-shot Dense Retrieval, a setting where each task comes with a short description and a few examples. To amplify the power of a few examples, we propose Prompt-based Query Generation for Retriever (Promptagator), which leverages large language models (LLMs) as a few-shot query generator, and creates task-specific retrievers based on the generated data. Powered by the LLM's generalization ability, Promptagator makes it possible to create task-specific end-to-end retrievers based solely on a few examples, without using Natural Questions or MS MARCO to train question generators or dual encoders. Surprisingly, LLM prompting with no more than 8 examples allows dual encoders to outperform heavily engineered models trained on MS MARCO, such as ColBERT v2, by more than 1.2 nDCG on average on 11 retrieval sets. Further training standard-size re-rankers using the same generated data yields another 5.0 point nDCG improvement. Our studies determine that query generation can be far more effective than previously observed, especially when a small amount of task-specific knowledge is given.

Authors (10)
  1. Zhuyun Dai (26 papers)
  2. Vincent Y. Zhao (8 papers)
  3. Ji Ma (72 papers)
  4. Yi Luan (25 papers)
  5. Jianmo Ni (31 papers)
  6. Jing Lu (158 papers)
  7. Anton Bakalov (3 papers)
  8. Kelvin Guu (26 papers)
  9. Keith B. Hall (3 papers)
  10. Ming-Wei Chang (44 papers)
Citations (190)

Summary

  • The paper introduces PROMPTAGATOR, a prompt-based retrieval framework that leverages few-shot examples to generate task-specific queries and achieve competitive nDCG improvements.
  • It employs large language models to synthesize high-quality training data through smart prompt engineering and round-trip consistency filtering, reducing reliance on extensive labeled data.
  • The approach outperforms strong baselines such as ColBERT v2 and SPLADE v2 by more than 1.2 nDCG points on average across 11 retrieval sets, demonstrating its cost-effectiveness and adaptability.

Analysis and Insights into PROMPTAGATOR: Few-Shot Dense Retrieval

The paper, "PROMPTAGATOR: Few-Shot Dense Retrieval from 8 Examples," introduces a novel approach in the field of information retrieval (IR), tackling challenges associated with deploying neural retrieval models in multiple contexts with sparse supervision. This work proposes a framework that leverages LLMs to enhance the capabilities of few-shot retrieval tasks by utilizing only a minimal number of labeled examples to generate task-specific datasets for robust retrieval system training.

Key Contributions and Results

The authors address a prevalent issue in IR: the generalization difficulty of models trained on well-known datasets such as MS MARCO due to the diversity in retrieval tasks and the definition of “relevance.” The central proposition is the introduction of PROMPTAGATOR, a methodology that employs LLMs as query generators for establishing task-specific retrievers. Unlike prior methods that depend heavily on large-scale supervised data transfer from datasets such as MS MARCO, PROMPTAGATOR achieves competitive performance using as few as 2 to 8 examples per task.
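
To make this setup concrete, the sketch below shows what a Promptagator-style few-shot query-generation prompt might look like in Python. The exact template, field labels, and the FLAN-based LLM used in the paper are not reproduced here; the function, its field names ("Passage:"/"Query:"), and the example data are illustrative assumptions.

```python
# Illustrative sketch of a Promptagator-style few-shot query-generation prompt.
# The precise template and field names used in the paper may differ; everything
# below is an assumption for exposition.

def build_query_gen_prompt(task_description, examples, new_document, max_examples=8):
    """Compose a prompt asking an LLM to write a query for `new_document`,
    conditioned on a task description and up to 8 (document, query) pairs."""
    parts = [task_description.strip(), ""]
    for doc, query in examples[:max_examples]:   # no more than 8 examples per task
        parts.append(f"Passage: {doc}")
        parts.append(f"Query: {query}")
        parts.append("")
    parts.append(f"Passage: {new_document}")
    parts.append("Query:")  # the LLM's completion becomes a synthetic training query
    return "\n".join(parts)

# Usage with made-up data: repeating this over many corpus documents yields a
# synthetic (query, passage) training set for the target task.
prompt = build_query_gen_prompt(
    "Given a question, retrieve a passage from scientific papers that answers it.",
    [("Aspirin lowers fever by inhibiting prostaglandin synthesis.",
      "how does aspirin reduce fever")],
    "Caffeine is a central nervous system stimulant found in coffee and tea.",
)
```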

Significantly, dual encoders trained with PROMPTAGATOR data surpass models heavily trained on MS MARCO, such as ColBERT v2 and SPLADE v2, by more than 1.2 points in normalized Discounted Cumulative Gain (nDCG) on average across 11 distinct retrieval sets. Furthermore, training standard-size re-rankers on the same generated data adds roughly another 5 nDCG points, underscoring the framework's adaptability and potency.
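
As a rough illustration of this downstream training step, the snippet below trains a generic dual encoder on synthetic (query, passage) pairs with in-batch negatives using the sentence-transformers library. This is a stand-in for exposition: the base model, loss, and hyperparameters are placeholder choices, not the paper's T5-based retriever or its exact training recipe.

```python
# Generic sketch: train a task-specific dual encoder on LLM-generated
# (query, passage) pairs using in-batch negatives. Model choice, batch size,
# and epochs are placeholders, not the paper's configuration.
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

synthetic_pairs = [
    ("how does aspirin reduce fever",
     "Aspirin lowers fever by inhibiting prostaglandin synthesis."),
    # ... many more filtered, LLM-generated pairs for the target task ...
]

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")  # stand-in encoder
train_data = [InputExample(texts=[q, p]) for q, p in synthetic_pairs]
loader = DataLoader(train_data, shuffle=True, batch_size=32)
loss = losses.MultipleNegativesRankingLoss(model)  # contrastive loss with in-batch negatives

model.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=100)
```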

Approach and Methodology

PROMPTAGATOR generates synthetic, task-specific training examples through prompt-based query generation. By designing task-specific prompts, the LLM can produce a wide array of queries that conform to the target task's search intent and relevance conditions. The authors then apply round-trip consistency filtering to raise the quality of the generated data: a generated query is kept only if a retriever trained on the synthetic data retrieves the query's source passage, which removes ambiguous or low-quality queries.
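
A minimal sketch of the round-trip consistency idea follows, assuming an initial retriever trained on the raw generated pairs exposes a hypothetical `search(query, k)` method returning passage ids; the interface and the top-k criterion are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of round-trip consistency filtering. `retriever.search` is a
# hypothetical interface over an initial retriever trained on the unfiltered
# synthetic pairs; a pair is kept only if its query retrieves its own source
# passage within the top-k results.

def round_trip_filter(generated_pairs, retriever, k=1):
    """generated_pairs: iterable of (query, source_passage_id) tuples."""
    kept = []
    for query, passage_id in generated_pairs:
        top_ids = retriever.search(query, k=k)   # ids of the k best-scoring passages
        if passage_id in top_ids:                # the query "round-trips" home
            kept.append((query, passage_id))
    return kept
```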

Remarkably, the proposed method uses the LLM only to generate training data, bypassing the significant costs of fine-tuning large LLMs or embedding them directly within the retrieval architecture. This underscores PROMPTAGATOR's efficiency and cost-effectiveness in serving diverse retrieval needs without extensive annotated data.

Practical and Theoretical Implications

From a practical standpoint, the research shows that the amount of annotated data needed for a new retrieval task can be reduced dramatically, making dense retrieval far more feasible to deploy in domains where data scarcity is a barrier. Theoretically, the work challenges the conventional reliance on large supervised datasets and shows that the syntactic and semantic knowledge encoded in advanced LLMs can be tapped efficiently through the few-shot retrieval paradigm.

The application of prompt-based query generation broadens the range of retrieval tasks that LLMs can serve and underscores the role of careful prompt engineering, providing a basis for future research in adaptive retrieval settings. PROMPTAGATOR's strong retrieval performance, alongside its simplicity, paves the way for more refined prompting strategies, potentially combined with distillation techniques to obtain more compact or nuanced representations.

Future Directions

Given these results, future efforts might examine how sensitive retrieval performance is to prompt design, quantify the minimum amount of labeled data needed for comparable effectiveness, and explore cross-domain transferability without additional human labeling. Additionally, integrating distillation with few-shot retrieval approaches could further improve training efficiency and model compactness.

Through PROMPTAGATOR, the authors make substantial progress toward generalizable retrieval solutions and highlight the untapped potential of LLMs in the IR discipline. This synthesis of few-shot learning with effective query generation could redefine existing paradigms and enable more agile, adaptable IR systems.
