From Representational Harms to Quality-of-Service Harms: A Case Study on Llama 2 Safety Safeguards (2403.13213v4)
Abstract: Recent progress in LLMs has led to their widespread adoption in various domains. However, these advancements have also introduced additional safety risks and raised concerns regarding their detrimental impact on already marginalized populations. Despite growing mitigation efforts to develop safety safeguards, such as supervised safety-oriented fine-tuning and leveraging safe reinforcement learning from human feedback, multiple concerns regarding the safety and ingrained biases in these models remain. Furthermore, previous work has demonstrated that models optimized for safety often display exaggerated safety behaviors, such as a tendency to refrain from responding to certain requests as a precautionary measure. As such, a clear trade-off between the helpfulness and safety of these models has been documented in the literature. In this paper, we further investigate the effectiveness of safety measures by evaluating models on already mitigated biases. Using the case of Llama 2 as an example, we illustrate how LLMs' safety responses can still encode harmful assumptions. To do so, we create a set of non-toxic prompts, which we then use to evaluate Llama models. Through our new taxonomy of LLMs responses to users, we observe that the safety/helpfulness trade-offs are more pronounced for certain demographic groups which can lead to quality-of-service harms for marginalized populations.
- Anthropic. 2024. Claude 2.0. https://www.anthropic.com/news/claude-2.
- On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT ’21, page 610–623, New York, NY, USA. Association for Computing Machinery.
- Language (technology) is power: A critical survey of "bias" in NLP. CoRR, abs/2005.14050.
- Stereotyping Norwegian salmon: An inventory of pitfalls in fairness benchmark datasets. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1004–1015, Online. Association for Computational Linguistics.
- Man is to computer programmer as woman is to homemaker? debiasing word embeddings. Advances in neural information processing systems, 29.
- A pathway towards responsible ai generated content.
- Safe rlhf: Safe reinforcement learning from human feedback.
- Meredith Deliso. 2023. Bias incidents against muslims, jews on the rise in us amid middle east war, new data shows.
- BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language Generation. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. ACM.
- Measuring and mitigating unintended bias in text classification. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, pages 67–73.
- Bill Frist. 2023. How generative ai – a technology catalyst – is revolutionizing healthcare.
- RealToxicityPrompts: Evaluating neural toxic degeneration in language models. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 3356–3369, Online. Association for Computational Linguistics.
- Toxigen: A large-scale machine-generated dataset for adversarial and implicit hate speech detection.
- ToxiGen: A large-scale machine-generated dataset for adversarial and implicit hate speech detection. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3309–3326, Dublin, Ireland. Association for Computational Linguistics.
- Will Douglas Heaven. 2023. Chatgpt is going to change education, not destroy it.
- Casteist but not racist? quantifying disparities in large language model bias between india and the west.
- John Koetsier. 2023. Gpt-4 beats 90% of lawyers trying to pass the bar.
- Unqovering stereotyping biases via underspecified questions. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 3475–3489.
- Meta. 2023. Introducing code llama, an ai tool for coding.
- Yuji Ogihara. 2020. Unique names in china: Insights from research in japan—commentary: Increasing need for uniqueness in contemporary china: Empirical evidence. Frontiers in Psychology, 11.
- OpenAI. 2023. Gpt-4 technical report.
- BBQ: A hand-built bias benchmark for question answering. In Findings of the Association for Computational Linguistics: ACL 2022, pages 2086–2105, Dublin, Ireland. Association for Computational Linguistics.
- Red teaming language models with language models. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 3419–3448, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
- Partha Pratim Ray. 2023. Chatgpt: A comprehensive review on background, applications, key challenges, bias, ethics, limitations and future scope. Internet of Things and Cyber-Physical Systems, 3:121–154.
- Adi Robertson. 2023. I tried the ai novel-writing tool everyone hates, and it’s better than i expected.
- What’s in a name? Reducing bias in bios without access to protected attributes. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4187–4195, Minneapolis, Minnesota. Association for Computational Linguistics.
- Xstest: A test suite for identifying exaggerated safety behaviours in large language models. arXiv preprint arXiv:2308.01263.
- Xstest: A test suite for identifying exaggerated safety behaviours in large language models.
- The unequal opportunities of large language models: Examining demographic biases in job recommendations by chatgpt and llama. In Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, EAAMO ’23, New York, NY, USA. Association for Computing Machinery.
- Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5884–5906, Seattle, United States. Association for Computational Linguistics.
- Secure Learning Lab. 2024. LLM trustworthy leaderboard. Hugging Face Spaces.
- Sociotechnical harms of algorithmic systems: Scoping a taxonomy for harm reduction.
- Harini Suresh and John V. Guttag. 2019. A framework for understanding unintended consequences of machine learning. CoRR, abs/1901.10002.
- Times Now Digital. 2023. Ai storytellers: How generative models are penning bestselling novels.
- Llama: Open and efficient foundation language models.
- Llama 2: Open foundation and fine-tuned chat models.
- Decodingtrust: A comprehensive assessment of trustworthiness in gpt models.
- Jailbroken: How does llm safety training fail? ArXiv, abs/2307.02483.
- Ethical and social risks of harm from language models. CoRR, abs/2112.04359.
- Unveiling the implicit toxicity in large language models. arXiv preprint arXiv:2311.17391.
- Kyle Wiggers. 2021. Openai claims to have mitigated bias and toxicity in gpt-3.