SGPT: GPT Sentence Embeddings for Semantic Search (2202.08904v5)

Published 17 Feb 2022 in cs.CL, cs.AI, and cs.IR

Abstract: Decoder transformers have continued increasing in scale reaching hundreds of billions of parameters. Due to their scale the same decoder sets state-of-the-art results on various language tasks via prompting or fine-tuning. Yet, these large foundation models remain unusable for the related fields of semantic search and sentence embeddings. This prevents possibly new state-of-the-art results and forces organizations to train and maintain separate models. To this end, we propose SGPT to use decoders for sentence embeddings and semantic search via prompting or fine-tuning. At 5.8 billion parameters SGPT improves on the previously best sentence embeddings by a margin of 7% and outperforms a concurrent method with 175 billion parameters as measured on the BEIR search benchmark. Code, models and result files are freely available at https://github.com/Muennighoff/sgpt.

Citations (151)

Summary

  • The paper introduces SGPT, which leverages large decoder-only transformers to generate sentence embeddings and enhance semantic search.
  • It compares Cross-Encoder and Bi-Encoder setups, with SGPT-CE showing an 8% improvement on BEIR and SGPT-BE-5.8B setting new benchmarks.
  • By applying methods like position-weighted pooling and BitFit fine-tuning, the approach reduces computational costs while boosting performance.

The paper "SGPT: GPT Sentence Embeddings for Semantic Search" by Niklas Muennighoff explores the application of decoder-only transformers, specifically GPT models, to the domains of semantic search and sentence embeddings. The central proposition is the utilization of SGPT (Sentence GPT) for these purposes, addressing both performance enhancements and computational efficiency.

Background and Significance

Semantic search involves retrieving top-k answers from a document corpus based on a query, emphasizing comprehension beyond mere keyword matching. Historically, this has been dominated by encoder-based models like BERT. The growing scale of GPT-like decoder models presents a potential shift due to their performance on various language tasks.

Despite the scaling of GPT models, they have remained largely unused for sentence embeddings and semantic search. The paper posits that leveraging the extensive parameter scales of such models can yield state-of-the-art results in search applications. Additionally, reusing a single model across tasks promises computational savings by eliminating the need to train and maintain separate encoder and decoder models.

Methodological Innovations

SGPT introduces a novel approach to harnessing decoder transformers for semantic tasks through two primary settings: Cross-Encoder and Bi-Encoder. In the Cross-Encoder setup, SGPT-CE uses pre-trained GPT models to compute log probabilities for search relevance without any fine-tuning. The results show unsupervised state-of-the-art performance on the BEIR benchmark when model scale and re-ranking strategy are chosen appropriately.
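The scoring idea can be illustrated with a short sketch: a frozen causal language model scores each query-document pair by the log probability it assigns to the query tokens given the document. The prompt template and the small model name below are illustrative assumptions for the sketch, not the exact choices made in the paper.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # small stand-in; the paper uses much larger GPT-style decoders
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

def query_log_prob(document: str, query: str) -> float:
    """Score a (document, query) pair by the summed log probability of the
    query tokens given the document, using the frozen language model."""
    prompt = f"Document: {document}\nQuery:"  # assumed template, not the paper's exact prompt
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    query_ids = tokenizer(" " + query, return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, query_ids], dim=1)
    with torch.no_grad():
        logits = model(input_ids).logits
    # Log probability of each token given its preceding context.
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    targets = input_ids[:, 1:]
    token_log_probs = log_probs.gather(2, targets.unsqueeze(-1)).squeeze(-1)
    # Keep only the positions that correspond to query tokens.
    return token_log_probs[:, prompt_ids.shape[1] - 1:].sum().item()

# Candidates retrieved by a first-stage system (e.g. BM25) can then be
# re-ranked by this score, higher meaning more relevant.
```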

In contrast, the Bi-Encoder setting employs position-weighted mean pooling over the decoder's hidden states and fine-tunes only the bias parameters (BitFit). SGPT-BE achieves notable results, narrowing the performance gap with encoder models. The Bi-Encoder configuration is tested extensively on both symmetric and asymmetric search tasks, demonstrating competitive performance and setting new state-of-the-art results in several settings.
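A brief sketch of these two ingredients follows: position-weighted mean pooling gives later tokens, which have attended to more context in a causal decoder, a larger share of the sentence embedding, while BitFit leaves only bias terms trainable during fine-tuning. The model name and shapes below are illustrative assumptions following standard Hugging Face conventions, not the paper's exact configuration.

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "gpt2"  # small stand-in for a large GPT-style decoder
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModel.from_pretrained(model_name)

# BitFit-style fine-tuning: freeze everything except bias parameters.
for name, param in model.named_parameters():
    param.requires_grad = "bias" in name

def position_weighted_mean_pooling(hidden_states, attention_mask):
    """Pool token states with weights proportional to their position,
    so that later tokens contribute more to the sentence embedding."""
    # hidden_states: (batch, seq_len, dim); attention_mask: (batch, seq_len)
    mask = attention_mask.unsqueeze(-1).float()
    positions = torch.arange(1, hidden_states.size(1) + 1, device=hidden_states.device).float()
    weights = positions.view(1, -1, 1) * mask
    weights = weights / weights.sum(dim=1, keepdim=True)
    return (hidden_states * weights).sum(dim=1)

batch = tokenizer(["how do transformers work?"], return_tensors="pt", padding=True)
hidden_states = model(**batch).last_hidden_state
embeddings = position_weighted_mean_pooling(hidden_states, batch["attention_mask"])
# embeddings: (batch, dim); query and document embeddings are then compared
# with cosine similarity for retrieval.
```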

Experimental Results

The quantitative achievements of the SGPT models are noteworthy. SGPT-BE-5.8B establishes a new benchmark for sentence embeddings, surpassing the previous best methods by a margin of 7% and outperforming a concurrent 175-billion-parameter method on the BEIR search benchmark. SGPT-CE, in turn, demonstrates strong unsupervised capabilities, achieving an 8% improvement on BEIR compared to existing alternatives.

Implications and Future Directions

Practically, this research facilitates a paradigm where a single large decoder model can serve multiple semantic tasks, potentially transforming computational resource management in AI applications. Theoretically, the results emphasize the viability of decoder transformers for embedding tasks, previously dominated by encoders.

Future investigations could explore fine-tuning strategies for Cross-Encoders and injecting SGPT embeddings into generative models to enhance generative search results. Understanding the biases within large GPT architectures may also yield insights for future model training approaches.

In conclusion, the paper "SGPT: GPT Sentence Embeddings for Semantic Search" presents a well-documented exploration of GPT models' capabilities in semantic search and sentence embeddings, offering valuable contributions to the landscape of AI research.
