
Developing a Recommendation Benchmark for MLPerf Training and Inference (2003.07336v2)

Published 16 Mar 2020 in cs.LG, cs.PF, and stat.ML

Abstract: Deep learning-based recommendation models are used pervasively and broadly, for example, to recommend movies, products, or other information most relevant to users, in order to enhance the user experience. Among the application domains that have received significant industry and academic research attention, such as image classification, object detection, and language and speech translation, the performance of deep learning-based recommendation models is less well explored, even though recommendation tasks unarguably account for a significant share of AI inference cycles in large-scale datacenter fleets. To advance the state of understanding and enable machine learning system development and optimization for the commerce domain, we aim to define an industry-relevant recommendation benchmark for the MLPerf Training and Inference Suites. The paper synthesizes desirable modeling strategies for personalized recommendation systems, lays out desirable characteristics of recommendation model architectures and data sets, and summarizes the discussion and advice of the MLPerf Recommendation Advisory Board.
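The recommendation model architectures the abstract refers to typically combine per-feature embedding tables for sparse categorical inputs with a multilayer perceptron over dense inputs, a pattern popularized by models such as DLRM. The following is a minimal numpy sketch of that pattern for scoring one user-item pair; all dimensions, weights, and the `predict` helper are hypothetical illustrations, not taken from the paper or the eventual MLPerf benchmark specification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 3 categorical features, each with its own
# embedding table, plus 4 dense (continuous) features.
num_tables, vocab, emb_dim, dense_dim = 3, 100, 8, 4
tables = [rng.normal(size=(vocab, emb_dim)) for _ in range(num_tables)]

# MLP weights: input is the dense features concatenated with all embeddings.
in_dim = dense_dim + num_tables * emb_dim
W1, b1 = rng.normal(size=(in_dim, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)

def predict(dense, cat_ids):
    """Score one user-item pair: embedding lookups + MLP + sigmoid."""
    embs = [tables[i][cat_ids[i]] for i in range(num_tables)]  # sparse lookups
    x = np.concatenate([dense] + embs)
    h = np.maximum(x @ W1 + b1, 0.0)       # ReLU hidden layer
    logit = (h @ W2 + b2)[0]
    return 1.0 / (1.0 + np.exp(-logit))    # predicted click probability

score = predict(rng.normal(size=dense_dim), [5, 42, 99])
```

The memory-bound embedding lookups and compute-bound MLP give this workload a profile quite different from vision or language benchmarks, which is part of why a dedicated recommendation benchmark is argued for.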

Authors (7)
  1. Carole-Jean Wu (62 papers)
  2. Robin Burke (40 papers)
  3. Ed H. Chi (74 papers)
  4. Joseph Konstan (4 papers)
  5. Julian McAuley (238 papers)
  6. Yves Raimond (3 papers)
  7. Hao Zhang (948 papers)
Citations (29)
