
Assessing Perceived Fairness from Machine Learning Developer's Perspective (2304.03745v1)

Published 7 Apr 2023 in cs.LG, cs.CY, and cs.HC

Abstract: Fairness in ML applications is an important practice for developers in research and industry. In ML applications, unfairness arises from bias in the data, the curation process, erroneous assumptions, and implicit bias introduced during the algorithmic development process. As ML applications come into broader use, developing fair ML applications is critical. The literature offers multiple views on how fairness in ML is described from the perspective of users and of students as future developers. In particular, ML developers have not been the focus of research relating to perceived fairness. This paper reports on a pilot investigation of ML developers' perception of fairness. To describe this perception, the paper performs an exploratory pilot study assessing the attributes of this construct using a systematic focus group of developers. In the focus group, we asked participants to discuss three questions: 1) What are the characteristics of fairness in ML? 2) What factors influence developers' beliefs about the fairness of ML? and 3) What practices and tools are utilized for fairness in ML development? The findings of this exploratory work show that, to assess fairness, developers generally focus on the overall ML application design and development, i.e., business-specific requirements, data collection, pre-processing, in-processing, and post-processing. Thus, we conclude that the procedural aspects of organizational justice theory can explain developers' perception of fairness. These findings can be used to assist development teams in integrating fairness into the ML application development lifecycle, and to motivate ML developers and organizations to develop best practices for assessing the fairness of ML-based applications.
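The abstract does not specify which concrete checks developers apply at the pre-, in-, or post-processing stages, so as an illustrative sketch only (not a method from the paper), the snippet below computes one common post-processing group-fairness measure, the demographic parity difference. The function and variable names (`demographic_parity_difference`, `y_pred`, `sensitive`) are hypothetical and introduced here for illustration.

```python
import numpy as np

def demographic_parity_difference(y_pred, sensitive):
    """Absolute difference in positive-prediction rates between two groups.

    y_pred: array-like of binary predictions (0/1)
    sensitive: array-like of binary group membership (0/1)
    """
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rate_group0 = y_pred[sensitive == 0].mean()  # positive rate for group 0
    rate_group1 = y_pred[sensitive == 1].mean()  # positive rate for group 1
    return abs(rate_group0 - rate_group1)

# Hypothetical example: predictions for eight applicants across two groups
preds = [1, 0, 1, 1, 0, 0, 1, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(preds, groups))  # |0.75 - 0.25| = 0.5
```

A value near zero indicates similar positive-prediction rates across groups; metrics of this kind are one of the "practices and tools" the focus-group questions ask about, though the paper itself studies developers' perceptions rather than prescribing a specific metric.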
