
Abstract

In large e-commerce platforms, search systems are typically composed of a series of modules, including recall, pre-ranking, and ranking phases. The pre-ranking phase, serving as a lightweight module, is crucial for filtering out the bulk of products in advance for the downstream ranking module. Industrial efforts on optimizing the pre-ranking model have predominantly focused on enhancing ranking consistency, model structure, and generalization towards long-tail items. Beyond these optimizations, meeting the system performance requirements presents a significant challenge. In contrast to existing industry work, we propose a novel method: a Generalizable and RAnk-ConsistEnt Pre-Ranking Model (GRACE), which achieves: 1) Ranking consistency by introducing multiple binary classification tasks that predict whether a product is within the top-k results as estimated by the ranking model, which facilitates the addition of learning objectives on common point-wise ranking models; 2) Generalizability through contrastive learning of representations for all products by pre-training on a subset of ranking product embeddings; 3) Ease of implementation in feature construction and online deployment. Our extensive experiments demonstrate significant improvements in both offline metrics and online A/B tests: a 0.75% increase in AUC and a 1.28% increase in CVR.

Figure: The proposed framework utilizes attribute information such as brand, shop, and category.

Overview

  • The GRACE model (Generalizable and Rank-Consistent Pre-Ranking Model), developed by researchers at JD.com, addresses the need for both generalizability and rank consistency in e-commerce pre-ranking systems. It enhances the accuracy and efficiency of identifying likely top-ranked items from a large dataset without modifying existing systems extensively.

  • GRACE utilizes binary classification to ensure rank consistency and employs contrastive learning with Info-NCE loss to improve its generalizability across diverse products, including long-tail items.

  • Empirical results from deploying GRACE show substantial improvements in system performance, including a 0.75% increase in Area Under Curve (AUC) and a 1.28% rise in online conversion rate (CVR), with the gain reaching 2.89% on long-tail items.

E-Commerce Pre-Ranking Optimization: Insights from the GRACE Model

Introduction to Pre-Ranking Challenges

In the vast world of e-commerce, search systems are paramount in helping users find the products they're looking for among millions available. These systems usually work in phases: first, a recall phase to fetch potentially relevant items, followed by pre-ranking to filter these down to a more manageable size, and finally, a ranking phase to determine the order in which these products will appear to the user.

The pre-ranking phase plays an especially critical role. It essentially acts as a gatekeeper, determining which items from the recall phase make it to the ranking phase. Given its place in the search pipeline, the pre-ranking model must balance efficiency with accuracy. Efficiency is needed because it has to process a large number of items quickly. Accuracy is crucial because the selection it makes directly influences the effectiveness of the subsequent ranking phase—the better the pre-ranking, the better the ultimate product rankings.
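To make the division of labor concrete, here is a minimal sketch of the three-stage funnel. The stage sizes, the `title`/`cheap_score`/`heavy_score` fields, and the toy scoring logic are illustrative assumptions, not components from the paper.

```python
def search(query, catalog):
    """Three-stage search funnel: each stage is cheaper per item than the next."""
    # Recall: fetch potentially relevant items (cheap, high volume).
    candidates = [item for item in catalog if query in item["title"]]
    # Pre-ranking: a lightweight score trims candidates to a shortlist
    # for the expensive ranker; GRACE is a model for this stage.
    shortlist = sorted(candidates, key=lambda i: i["cheap_score"], reverse=True)[:1000]
    # Ranking: a heavy model produces the final order shown to the user.
    return sorted(shortlist, key=lambda i: i["heavy_score"], reverse=True)[:10]
```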

Challenges and the GRACE Model

Optimizing these pre-ranking models comes with unique challenges. Primarily, these models must ensure rank consistency: their predictions about which products will rank high should align with the ranking model's actual decisions. They also need generalizability to handle a broad range of items effectively, including less popular or new products that have little historical data (known as long-tail items).

Researchers at JD.com have developed a new model, known as GRACE (Generalizable and Rank-Consistent Pre-Ranking Model), which aims to address both these critical needs. Here's a breakdown of how they've achieved this:

  • Rank Consistency: GRACE includes auxiliary tasks within the pre-ranking model that predict whether an item will make it into the ranking model's top results (top-k). This is done with binary classification heads built directly into the model, which doesn't require changing the existing training or data-handling processes; a sketch of this setup follows the list below. It's a pragmatic way to align the pre-ranking phase with the ranking phase without extensive modification to existing systems.
  • Generalizability: The GRACE model uses a method called contrastive learning to adapt its predictions to a wide range of products, including long-tail items. By pre-training on embeddings from a subset of products and using the Info-NCE loss, GRACE hones its ability to generalize across different items without the need for excessive parameter adjustments; see the second sketch below.
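As referenced above, the rank-consistency objective can be read as a set of auxiliary binary classifiers, one per threshold k, each predicting whether the ranking model would place the item in its top-k. The following PyTorch sketch shows one plausible shape for this; the hidden dimension, the choice of k values, the single-linear-layer heads, and the unweighted loss sum are all assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class TopKHeads(nn.Module):
    """Auxiliary binary heads predicting membership in the ranking
    model's top-k results, for several values of k."""
    def __init__(self, hidden_dim, ks=(10, 50, 100)):
        super().__init__()
        self.ks = ks
        self.heads = nn.ModuleList([nn.Linear(hidden_dim, 1) for _ in ks])

    def forward(self, item_repr):
        # item_repr: (B, hidden_dim) pre-ranking representation per item.
        return [head(item_repr).squeeze(-1) for head in self.heads]

def rank_consistency_loss(logits_per_k, rank_positions, ks=(10, 50, 100)):
    # rank_positions: (B,) each item's position in the ranking model's
    # ordering for its query (0 = best), used to derive binary labels.
    bce = nn.BCEWithLogitsLoss()
    loss = 0.0
    for logits, k in zip(logits_per_k, ks):
        labels = (rank_positions < k).float()  # 1 if the item is in top-k
        loss = loss + bce(logits, labels)
    return loss
```

Because the heads only add small linear layers and BCE terms on top of an existing point-wise model, training data and serving paths can stay largely unchanged, which matches the paper's emphasis on ease of deployment.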
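The generalizability objective pairs each product's pre-ranking embedding with its embedding from the ranking model, pulling matched pairs together and pushing mismatched pairs apart. Below is a minimal Info-NCE sketch, assuming in-batch negatives, cosine similarity, and a temperature of 0.07; these choices are illustrative rather than the paper's exact setup.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(pre_emb, rank_emb, temperature=0.07):
    # pre_emb:  (B, D) product embeddings from the pre-ranking model.
    # rank_emb: (B, D) embeddings of the same products from the ranking
    #           model, treated as fixed targets during pre-training.
    pre_emb = F.normalize(pre_emb, dim=-1)
    rank_emb = F.normalize(rank_emb, dim=-1)
    # Similarity of every pre-ranking embedding to every ranking
    # embedding; the matching product sits on the diagonal, and all
    # other in-batch products act as negatives.
    logits = pre_emb @ rank_emb.t() / temperature
    targets = torch.arange(pre_emb.size(0), device=pre_emb.device)
    return F.cross_entropy(logits, targets)
```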

Empirical Results

The numbers speak clearly to GRACE's efficacy:

  • An increase of 0.75% in Area Under Curve (AUC), consistent with improved binary classification performance.
  • A 1.28% increase in online conversion rate (CVR), with an even more pronounced gain on long-tail items, where a 2.89% CVR increase was observed.

These improvements are especially significant given the scale at which systems like these operate, where even small percentage increases can translate into considerable gains.

Implications and Future Prospects

The GRACE model not only boosts current capabilities in e-commerce search systems but also opens doors for further refinement and application in similar areas where pre-ranking models are used, such as content recommendation engines or digital advertising platforms.

Looking ahead, the principles applied in GRACE could be extended to other phases of information retrieval systems or adapted for use in contexts outside e-commerce, wherever hierarchical decision-making processes are used to handle large datasets.

The simplicity of integrating GRACE with existing systems, combined with its demonstrable benefits, suggests that it could serve as a new benchmark for pre-ranking optimizations in both academic and industrial settings. Broader adoption could lead to more responsive and user-friendly search and recommendation systems across various platforms.
