
CELA: Cost-Efficient Language Model Alignment for CTR Prediction

(arXiv:2405.10596)
Published May 17, 2024 in cs.IR

Abstract

Click-Through Rate (CTR) prediction holds a paramount position in recommender systems. The prevailing ID-based paradigm underperforms in cold-start scenarios due to the skewed distribution of feature frequency. Additionally, the utilization of a single modality fails to exploit the knowledge contained within textual features. Recent efforts have sought to mitigate these challenges by integrating Pre-trained Language Models (PLMs). They design hard prompts to structure raw features into text for each interaction and then apply PLMs for text processing. With external knowledge and reasoning capabilities, PLMs extract valuable information even in cases of sparse interactions. Nevertheless, compared to ID-based models, pure text modeling degrades the efficacy of collaborative filtering, as well as feature scalability and efficiency during both training and inference. To address these issues, we propose Cost-Efficient Language Model Alignment (CELA) for CTR prediction. CELA incorporates textual features and language models while preserving the collaborative filtering capabilities of ID-based models. This model-agnostic framework can be equipped with plug-and-play textual features, with item-level alignment enhancing the utilization of external information while maintaining training and inference efficiency. Through extensive offline experiments, CELA demonstrates superior performance compared to state-of-the-art methods. Furthermore, an online A/B test conducted on an industrial App recommender system showcases its practical effectiveness, solidifying the potential for real-world applications of CELA.

Figure: overview of the CELA framework, showing pre-training, text-feature alignment, representation merging, and alternation of the final stages.

Overview

  • CELA (Cost-Efficient Language Model Alignment) is designed to enhance Click-Through Rate (CTR) prediction by integrating Pre-trained Language Models (PLMs) with traditional ID-based models.

  • The framework operates through three phases: Domain-Adaptive Pre-training (DAP), Recommendation-Oriented Modal Alignment (ROMA), and Multi-Modal Feature Fusion (MF²), to effectively utilize textual features and align them with ID-based embeddings, improving accuracy and scalability.

  • Empirical evaluations demonstrate CELA's effectiveness in both offline and real-world metrics, addressing challenges like cold-start problems and maintaining high efficiency and low latency during inference.

CELA: A Cost-Efficient Solution for CTR Prediction Using Language Models

Introduction

Understanding and predicting user behavior is at the heart of modern recommender systems, and an essential part of this is Click-Through Rate (CTR) prediction. Traditionally, industry has leaned on ID-based models for this task. These models encode user and item features into sparse one-hot vectors and transform them into dense embeddings, which are then fed through sophisticated feature interaction layers to predict CTR. Despite their success, ID-based models struggle with two main issues: a dependency on abundant historical interactions, which hurts cold-start items, and a feature scope limited to categorical IDs, which ignores the knowledge carried by textual features.
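
For readers unfamiliar with this paradigm, here is a minimal PyTorch sketch of such an ID-based model. The field sizes, embedding width, and plain MLP interaction layer are illustrative choices, not the paper's configuration:

```python
# Minimal ID-based CTR model: sparse categorical features are looked up in
# embedding tables, concatenated, and passed through an MLP interaction layer.
import torch
import torch.nn as nn

class IDBasedCTRModel(nn.Module):
    def __init__(self, field_dims, embed_dim=16, hidden=(256, 128)):
        super().__init__()
        # One embedding table per categorical field (user ID, item ID, ...).
        self.embeddings = nn.ModuleList(
            nn.Embedding(n, embed_dim) for n in field_dims
        )
        layers, in_dim = [], embed_dim * len(field_dims)
        for h in hidden:
            layers += [nn.Linear(in_dim, h), nn.ReLU()]
            in_dim = h
        layers.append(nn.Linear(in_dim, 1))
        self.mlp = nn.Sequential(*layers)

    def forward(self, x):  # x: (batch, num_fields) of integer feature IDs
        dense = torch.cat(
            [emb(x[:, i]) for i, emb in enumerate(self.embeddings)], dim=1
        )
        return torch.sigmoid(self.mlp(dense)).squeeze(-1)  # predicted CTR

# Example: 3 fields (user, item, category) with illustrative vocabulary sizes.
model = IDBasedCTRModel(field_dims=[10_000, 5_000, 100])
probs = model(torch.randint(0, 100, (32, 3)))  # batch of 32 interactions
```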

To address these challenges, researchers have proposed integrating Pre-trained Language Models (PLMs) into recommender systems. The paper "CELA: Cost-Efficient Language Model Alignment for CTR Prediction" introduces an innovative approach, blending the robustness of PLMs with the practical strengths of ID-based models.

The CELA Framework

CELA, or Cost-Efficient Language Model Alignment, is designed to merge the advantages of textual feature utilization and ID-based models. It leverages PLMs to mitigate cold-start issues and broaden feature scope, all while maintaining a scalable and efficient system. The framework is a three-phase process:

  1. Domain-Adaptive Pre-training (DAP):

    • Adapt the PLM to domain-specific texts from the dataset.
    • Use masked language modeling (MLM) and SimCSE-style contrastive learning so the PLM adapts to the domain and produces effective, uniformly distributed embeddings (a loss sketch follows this list).
    • Train an ID-based model separately to serve as a reference point.
  2. Recommendation-Oriented Modal Alignment (ROMA):

    • Align the PLM's item text representations with the item embeddings of the pre-trained ID-based model.
    • Alignment is performed at the item level rather than the interaction level, so its cost scales with the catalog rather than the interaction log (see the stage 2-3 sketch below).
  3. Multi-Modal Feature Fusion (MF²):

    • Integrate aligned text representations with non-textual features in a new ID-based model.
    • This stage ensures collaborative filtering effectiveness while enriching the model with semantic knowledge.
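
A hedged sketch of how stage 1's two objectives could be combined, assuming a BERT-style encoder via Hugging Face transformers; the checkpoint name, equal loss weighting, and [CLS] pooling are illustrative assumptions, since the summary does not specify them:

```python
# Stage 1 (DAP) sketch: masked language modeling plus SimCSE-style
# contrastive learning over domain item texts.
import torch
import torch.nn.functional as F
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.train()  # dropout must be active: it provides SimCSE's two "views"
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)

def simcse_loss(view_a, view_b, temperature=0.05):
    """Unsupervised SimCSE: two dropout-noised encodings of the same text
    are positives; every other in-batch pair is a negative."""
    a, b = F.normalize(view_a, dim=-1), F.normalize(view_b, dim=-1)
    sim = a @ b.T / temperature          # (batch, batch) cosine similarities
    labels = torch.arange(sim.size(0))   # positives sit on the diagonal
    return F.cross_entropy(sim, labels)

def dap_step(item_texts):
    batch = tokenizer(item_texts, padding=True, truncation=True,
                      return_tensors="pt")
    # MLM objective on randomly masked copies of the domain text.
    masked = collator([{k: v[i] for k, v in batch.items()}
                       for i in range(len(item_texts))])
    mlm_loss = model(**masked).loss
    # Two forward passes; dropout makes each [CLS] embedding a distinct view.
    h1 = model.bert(**batch).last_hidden_state[:, 0]
    h2 = model.bert(**batch).last_hidden_state[:, 0]
    return mlm_loss + simcse_loss(h1, h2)
```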

The final stages are alternated and iteratively refined, making the system progressively more accurate and efficient; a combined sketch of stages 2 and 3 follows.
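
In the sketch below, the linear projection head and the MSE alignment objective are assumptions made for illustration, as the summary does not specify the paper's exact alignment loss:

```python
# Stages 2-3 sketch. ROMA aligns each item's text embedding with the
# pre-trained ID model's item embedding (item-level, so cost scales with the
# catalog, not the interaction log). MF2 then treats the aligned vector as
# one more input feature.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextProjector(nn.Module):
    """ROMA: maps PLM text embeddings into the ID-embedding space."""
    def __init__(self, text_dim=768, id_dim=16):
        super().__init__()
        self.proj = nn.Linear(text_dim, id_dim)

    def forward(self, text_emb):
        return self.proj(text_emb)

def roma_alignment_loss(item_text_emb, id_item_emb, projector):
    # One (text, ID) pair per catalog item -- never per interaction --
    # which is what keeps the alignment stage cheap.
    return F.mse_loss(projector(item_text_emb), id_item_emb.detach())

class FusionCTRModel(nn.Module):
    """MF2: a fresh ID-based model whose inputs are augmented with the
    aligned, precomputed text vector of the candidate item."""
    def __init__(self, field_dims, text_table, embed_dim=16, hidden=128):
        super().__init__()
        self.embeddings = nn.ModuleList(
            nn.Embedding(n, embed_dim) for n in field_dims
        )
        # Frozen lookup of aligned text vectors, indexed by item ID.
        self.text_table = nn.Embedding.from_pretrained(text_table, freeze=True)
        in_dim = embed_dim * len(field_dims) + text_table.size(1)
        self.mlp = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, x, item_ids):
        parts = [emb(x[:, i]) for i, emb in enumerate(self.embeddings)]
        parts.append(self.text_table(item_ids))  # plug-and-play text feature
        return torch.sigmoid(self.mlp(torch.cat(parts, dim=1))).squeeze(-1)
```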

Key Contributions and Findings

The proposed approach delivers several notable contributions:

  • Efficient Integration: Integrates textual features with ID-based models in a model-agnostic way, requiring minimal changes to existing architectures.
  • Item-Level Alignment: Reduces training overhead by aligning item text representations at the item level rather than the interaction level, maintaining low latency during inference (a caching sketch follows this list).
  • Empirical Success: Comprehensive experiments on public and industrial datasets, including an A/B test in a real-world app store scenario, demonstrate CELA's effectiveness. It achieved notable improvements in both offline metrics (AUC, Logloss) and real-world metrics (eCPM, Download-Through Rate).
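
The latency claim above boils down to a precompute-then-lookup pattern. In this sketch, encode_text and projector are hypothetical helpers standing in for the adapted PLM and the ROMA projection head from the earlier sketches:

```python
import torch

@torch.no_grad()
def build_text_table(item_texts, encode_text, projector, batch_size=256):
    """Offline: run the PLM once per catalog item and cache the aligned
    vectors as a tensor indexable by item ID."""
    rows = []
    for i in range(0, len(item_texts), batch_size):
        rows.append(projector(encode_text(item_texts[i:i + batch_size])))
    return torch.cat(rows)  # (num_items, id_dim)

# Online, the PLM is never called: fetching a row costs the same as any
# ID-embedding lookup, so serving latency matches a plain ID-based model.
```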

Practical Implications

The practical implications of this research are substantial:

  • Scalability: Efficient training and inference make CELA suitable for real-world applications where scalability is critical.
  • Cold-Start Problem: By leveraging PLMs, CELA effectively addresses the cold-start problem, making it robust in scenarios with sparse interactions.
  • Enhanced Accuracy: The integration of textual features captured by PLMs enriches the model's understanding and prediction capabilities, leading to higher accuracy.

Future Directions

The research opens several intriguing avenues for future work:

  1. Exploration of Larger PLMs: Investigate the use of even larger and more sophisticated PLMs, while finding ways to manage the increased complexity and overhead.
  2. Cross-Domain Applications: Adapt and apply CELA to different domains and types of recommender systems beyond CTR prediction.
  3. Real-Time Adaptation: Develop techniques for real-time learning and adaptation to dynamically changing datasets, ensuring the model remains up-to-date and accurate.

Conclusion

The CELA framework represents a significant step forward in CTR prediction, combining the strengths of traditional ID-based models with the rich semantic capabilities of PLMs. It is efficient, scalable, and effective, making it a promising solution for modern recommender systems.
