
Discrete Prompt Optimization via Constrained Generation for Zero-shot Re-ranker (2305.13729v1)

Published 23 May 2023 in cs.IR, cs.AI, and cs.CL

Abstract: Re-rankers, which order retrieved documents by their relevance score to a given query, have gained attention for the information retrieval (IR) task. Rather than fine-tuning a pre-trained language model (PLM), a large-scale language model (LLM) can be used as a zero-shot re-ranker with excellent results. While the LLM is highly dependent on its prompts, the impact and the optimization of prompts for the zero-shot re-ranker have not yet been explored. Along with highlighting the impact of prompt optimization on the zero-shot re-ranker, we propose a novel discrete prompt optimization method, Constrained Prompt generation (Co-Prompt), together with a metric that estimates the optimum for re-ranking. Co-Prompt guides the texts generated by a PLM toward optimal prompts based on this metric, without any parameter updates. The experimental results demonstrate that Co-Prompt achieves outstanding re-ranking performance compared with the baselines. Co-Prompt also generates prompts that are more interpretable to humans than those of other prompt optimization methods.
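The abstract describes the high-level idea only: search over discrete prompts generated by a PLM and keep the candidates that score best under a re-ranking metric, with no parameter updates. The sketch below is a minimal, hypothetical illustration of that search pattern as a beam search over prompt tokens; the function and variable names (e.g. `beam_search_prompt`, `score_prompt`) and the toy scorer are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of the Co-Prompt idea as described in the abstract:
# extend a discrete prompt token by token and keep the candidates that
# maximize a re-ranking quality metric, without updating any model parameters.
# All names here are illustrative, not taken from the paper's code.

from dataclasses import dataclass
from typing import Callable, List, Sequence, Tuple


@dataclass
class Candidate:
    tokens: Tuple[str, ...]   # discrete prompt built so far
    score: float              # estimated re-ranking quality for this prompt


def beam_search_prompt(
    vocabulary: Sequence[str],
    score_prompt: Callable[[Tuple[str, ...]], float],
    max_len: int = 5,
    beam_size: int = 3,
) -> Candidate:
    """Search for a discrete prompt that maximizes `score_prompt`.

    `score_prompt` stands in for the paper's metric estimating the optimum
    for re-ranking, e.g. how well an LLM re-ranks a small validation set
    when conditioned on the candidate prompt.
    """
    beam: List[Candidate] = [Candidate(tokens=(), score=float("-inf"))]
    for _ in range(max_len):
        expansions: List[Candidate] = []
        for cand in beam:
            for tok in vocabulary:
                new_tokens = cand.tokens + (tok,)
                expansions.append(Candidate(new_tokens, score_prompt(new_tokens)))
        # Keep only the top-scoring prompts; no gradients, no parameter updates.
        beam = sorted(expansions, key=lambda c: c.score, reverse=True)[:beam_size]
    return beam[0]


if __name__ == "__main__":
    # Toy stand-in scorer: prefers short prompts containing "relevant" and "document".
    def toy_scorer(tokens: Tuple[str, ...]) -> float:
        return sum(tok in {"relevant", "document"} for tok in tokens) - 0.01 * len(tokens)

    vocab = ["is", "the", "document", "relevant", "to", "query"]
    best = beam_search_prompt(vocab, toy_scorer, max_len=4, beam_size=2)
    print("Best prompt:", " ".join(best.tokens), "| score:", best.score)
```

In practice the candidate tokens would come from a PLM's constrained generation rather than a fixed vocabulary, and the scorer would evaluate actual re-ranking quality; this toy version only shows the metric-guided discrete search loop.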

Authors (4)
  1. Sukmin Cho (17 papers)
  2. Soyeong Jeong (22 papers)
  3. Jeongyeon Seo (5 papers)
  4. Jong C. Park (28 papers)
Citations (20)
