AI Failures: A Review of Underlying Issues

(arXiv:2008.04073)
Published Jul 18, 2020 in cs.CY and cs.AI

Abstract

Instances of AI systems failing to deliver consistent, satisfactory performance are legion. We investigate why AI failures occur, addressing only a narrow subset of the broader field of AI Safety: failures that arise from flaws in the conceptualization, design, and deployment of AI systems. Other AI Safety issues, such as trade-offs between privacy and security or convenience, bad actors hacking into AI systems to create mayhem, or bad actors deploying AI for purposes harmful to humanity, are out of scope of our discussion. We find that AI systems fail on account of errors of omission and commission in the design of the AI system, as well as from failure to develop an appropriate interpretation of input information. Moreover, even when there is no significant flaw in the AI software, an AI system may fail because the hardware is incapable of robust performance across environments. Finally, an AI system is quite likely to fail in situations where, in effect, it is called upon to deliver moral judgments -- a capability AI does not possess. We observe certain trade-offs in measures to mitigate a subset of AI failures and provide some recommendations.
