Item-side Fairness of Large Language Model-based Recommendation System (2402.15215v1)
Abstract: Recommendation systems for Web content distribution are intricately connected to the information access and exposure opportunities of vulnerable populations. The emergence of LLM-based Recommendation Systems (LRS) may introduce additional societal challenges due to the inherent biases of LLMs. However, item-side fairness in LRS has not been comprehensively investigated, given the unique characteristics of LRS compared to conventional recommendation systems. To bridge this gap, this study examines the item-side fairness properties of LRS and reveals the influence of both historical user interactions and the inherent semantic biases of LLMs, highlighting the need to adapt conventional item-side fairness methods to LRS. Toward this goal, we develop a concise and effective framework, IFairLRS, to enhance the item-side fairness of an LRS. IFairLRS covers the main stages of building an LRS, with strategies specifically adapted to calibrate its recommendations. We use IFairLRS to fine-tune LLaMA, a representative LLM, on the MovieLens and Steam datasets, and observe significant improvements in item-side fairness. The code is available at https://github.com/JiangM-C/IFairLRS.git.
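The abstract frames item-side fairness in terms of how exposure is distributed over groups of items (e.g., movie genres) and calibrated against a target distribution. As a hedged illustration only, the sketch below shows one simple way to quantify such a disparity; the grouping, target shares, and metric are illustrative assumptions, not necessarily the measures or calibration strategies used by IFairLRS.

```python
# Minimal sketch of an item-side fairness (exposure-calibration) metric.
# Assumption: items are grouped (e.g., by genre) and we compare each group's
# share of recommended slots against a target share. This is NOT the paper's
# own metric, just a generic illustration of the idea.
from collections import Counter
from typing import Dict, List


def group_exposure_disparity(
    recommended_items: List[List[str]],  # top-K item lists, one per user
    item_to_group: Dict[str, str],       # item -> group (e.g., movie -> genre)
    target_share: Dict[str, float],      # desired exposure share per group
) -> float:
    """Mean absolute gap between each group's exposure share in the
    recommendations and its target share (0 = perfectly calibrated)."""
    counts = Counter(
        item_to_group[item]
        for user_list in recommended_items
        for item in user_list
        if item in item_to_group
    )
    total = sum(counts.values()) or 1
    return sum(
        abs(counts.get(group, 0) / total - share)
        for group, share in target_share.items()
    ) / len(target_share)


# Example with hypothetical data: two users, uniform target share per genre.
recs = [["Heat", "Alien"], ["Heat", "Toy Story"]]
groups = {"Heat": "Action", "Alien": "Sci-Fi", "Toy Story": "Animation"}
target = {"Action": 1 / 3, "Sci-Fi": 1 / 3, "Animation": 1 / 3}
print(group_exposure_disparity(recs, groups, target))  # ~0.111
```

A lower value indicates that exposure across item groups is closer to the chosen target; a calibration step (re-weighting training data or re-ranking outputs) would aim to reduce it without sacrificing recommendation accuracy.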
Authors: Meng Jiang, Keqin Bao, Jizhi Zhang, Wenjie Wang, Zhengyi Yang, Fuli Feng, Xiangnan He