
Abstract

AI solutions and technologies are increasingly adopted in the context of smart systems; however, such technologies raise persistent ethical concerns. Various guidelines, principles, and regulatory frameworks have been designed to ensure that AI technologies promote ethical well-being. However, the implications of AI ethics principles and guidelines are still debated. To further explore the significance of AI ethics principles and the relevant challenges, we conducted a survey of 99 representative AI practitioners and lawmakers (e.g., AI engineers, lawyers) from twenty countries across five continents. To the best of our knowledge, this is the first empirical study that captures the perceptions of two different populations (AI practitioners and lawmakers). The study findings confirm that transparency, accountability, and privacy are the most critical AI ethics principles. On the other hand, lack of ethical knowledge, absence of legal frameworks, and lack of monitoring bodies are found to be the most common AI ethics challenges. The impact analysis of the challenges across AI ethics principles reveals that conflict in practice is a highly severe challenge. Moreover, the perceptions of practitioners and lawmakers are statistically correlated, with significant differences for particular principles (e.g., fairness, freedom) and challenges (e.g., lack of monitoring bodies, machine distortion). Our findings motivate further research, especially on extending existing capability maturity models to support the development and quality assessment of ethics-aware AI systems.
