RecPrompt: A Self-tuning Prompting Framework for News Recommendation Using Large Language Models (2312.10463v4)

Published 16 Dec 2023 in cs.IR

Abstract: News recommendations heavily rely on NLP methods to analyze, understand, and categorize content, enabling personalized suggestions based on user interests and reading behaviors. LLMs like GPT-4 have shown promising performance in understanding natural language. However, the extent of their applicability to news recommendation systems remains to be validated. This paper introduces RecPrompt, the first self-tuning prompting framework for news recommendation, leveraging the capabilities of LLMs to perform complex news recommendation tasks. The framework incorporates a news recommender and a prompt optimizer that applies an iterative bootstrapping process to enhance recommendations through automatic prompt engineering. Extensive experimental results with 400 users show that RecPrompt can achieve an improvement of 3.36% in AUC, 10.49% in MRR, 9.64% in nDCG@5, and 6.20% in nDCG@10 compared to deep neural models. Additionally, we introduce TopicScore, a novel metric that assesses explainability by evaluating an LLM's ability to summarize a user's topics of interest. The results show the LLM's effectiveness in accurately identifying topics of interest and delivering comprehensive topic-based explanations.
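The iterative bootstrapping described in the abstract — a recommender produces rankings, a metric scores them, and a prompt optimizer rewrites the prompt — can be sketched as a simple loop. This is a minimal illustration under stated assumptions, not the paper's implementation: the function names (`self_tuning_loop`, `recommend`, `evaluate`, `refine`) are hypothetical, and the LLM-backed recommender and optimizer are replaced by toy stand-ins so the sketch runs without any API.

```python
from typing import Callable, List, Tuple

def self_tuning_loop(
    initial_prompt: str,
    recommend: Callable[[str], List[str]],   # LLM-backed news recommender (stubbed below)
    evaluate: Callable[[List[str]], float],  # ranking metric, e.g. AUC or nDCG@5
    refine: Callable[[str, float], str],     # LLM-backed prompt optimizer (stubbed below)
    rounds: int = 5,
) -> Tuple[str, float]:
    """Iterative bootstrapping: recommend, score, refine the prompt, keep the best."""
    best_prompt = initial_prompt
    best_score = evaluate(recommend(best_prompt))
    prompt = initial_prompt
    for _ in range(rounds):
        prompt = refine(prompt, best_score)   # optimizer proposes a revised prompt
        score = evaluate(recommend(prompt))   # score the revised prompt's recommendations
        if score > best_score:                # retain only improvements
            best_prompt, best_score = prompt, score
    return best_prompt, best_score

# Toy stand-ins, purely to exercise the loop: the "recommender" echoes the
# prompt, and the toy "metric" rewards longer prompts.
best, score = self_tuning_loop(
    "Rank the candidate news for this user",
    recommend=lambda p: [p],
    evaluate=lambda out: len(out[0]) / 100.0,
    refine=lambda p, _: p + " and explain why",
    rounds=3,
)
```

In the actual framework, `evaluate` would be computed against held-out click behavior and `refine` would prompt a second LLM with the current template and its observed errors; the loop structure shown here is only the hedged skeleton of that idea.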
