Abstract

Artificial intelligence can cause inconvenience, harm, or other unintended consequences in various ways, including through defects or malfunctions in the AI system itself or through its use or misuse. Responsibility for AI harms or unintended consequences must be addressed to hold accountable the people who caused them and to ensure that victims receive compensation for any damages or losses they may have sustained. Historical instances of harm caused by AI have led the European Union to establish an AI Liability Directive. The directive aims to lay down a uniform set of rules for access to information, delineate the duty and level of care required for AI development and use, and clarify the burden of proof for damages or harms caused by AI systems, establishing broader protection for victims. A provider's future ability to contest a product liability claim will depend on the good practices adopted in designing, developing, and maintaining AI systems in the market. This paper provides a risk-based approach to examining liability for AI-driven injuries. It also provides an overview of existing liability approaches, insights into the limitations and complexities of these approaches, and a detailed self-assessment questionnaire for assessing the liability risk associated with a specific AI system from a provider's perspective.
