Abstract

We consider two fundamental and related issues currently facing AI development: the lack of ethics and the lack of interpretability in AI decisions. Can interpretable AI decisions help to address ethics in AI? Using a randomized study, we experimentally show that the empirical and liberal turn in the production of explanations tends to select AI explanations with low denunciatory power. Under certain conditions, interpretability tools are therefore not means but, paradoxically, obstacles to the production of ethical AI, since they can give the illusion of being sensitive to ethical incidents. We also show that the denunciatory power of AI explanations depends heavily on the context in which the explanation takes place, such as the gender or education level of the person for whom the explanation is intended. AI ethics tools are therefore sometimes too flexible, and self-regulation through the liberal production of explanations does not seem to be enough to address ethical issues. We then propose two scenarios for the future development of ethical AI: more external regulation or further liberalization of AI explanations. These two opposing paths will play a major role in the future development of ethical AI.
