Abstract

As human science pushes the boundaries of AI development, the sweep of progress has led scholars and policymakers alike to question the legality of applying AI to various human endeavours. For example, debate has raged in international scholarship over the legitimacy of applying AI to weapon systems to form lethal autonomous weapon systems (LAWS). Yet the same questions arise even when AI is applied to a military autonomous system that is not weaponised: how does one hold a machine accountable for a crime? What about a tort? Can an artificial agent understand the moral and ethical content of its instructions? These are thorny questions, and in many cases they have been answered in the negative, as artificial entities lack any contingent moral agency. So what if the AI is not alone, but is linked with or overseen by a human being with their own moral and ethical understandings and obligations? Who is responsible for any malfeasance that may be committed? Does the human bear the legal risks of unethical or immoral decisions by an AI? These are some of the questions this manuscript seeks to engage with.
