- The paper introduces a framework for Intelligent Autonomous Systems (IAS) that emphasizes moral and legal justification using both symbolic and sub-symbolic reasoning.
- The paper employs a hybrid model combining deontic logic and neural networks to provide clear, interpretable decision justifications.
- The paper envisions a cloud-based reasoning workbench with ethico-legal ontologies to boost trust and accountability in autonomous systems.
Reasonable Machines: A Research Manifesto
Introduction
The paper "Reasonable Machines: A Research Manifesto" outlines a conceptual framework for enabling Intelligent Autonomous Systems (IAS) to justify their actions autonomously through novel reasoning tools, ethico-legal ontologies, and argumentation technology. The research addresses a critical gap in current IAS by focusing on moral and legal questions, particularly in sectors such as self-driving cars, healthcare, and military applications. Its central premise is that for IAS to integrate effectively into society, they must possess normative communication capabilities that establish trust and make their decision-making transparent, rather than relying solely on opaque sub-symbolic systems.
Objectives of Reasonable Machines
The core objective of the Reasonable Machines concept is to develop IAS capable of engaging in moral and legal reasoning to justify their actions. By leveraging symbolic logic and hybrid reasoning approaches, these systems can provide rational explanations that adhere to predefined ethico-legal regulations:
- Argument-Based Explanations: IAS should offer coherent justifications for decisions, mitigating the black-box nature of sub-symbolic AI.
- Ethico-Legal Reasoning: Facilitate public critique and governance of IAS through symbolic reasoning that aligns with moral and legal standards.
- Trustworthy Human Interaction: Enable IAS to communicate normative decisions effectively to humans, enhancing user trust and facilitating oversight.
The manifesto advocates for the use of Pluralistic, Expressive Normative Reasoning approaches to satisfy diverse ethical perspectives and societal demands.
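The idea of pluralistic normative reasoning can be made concrete with a minimal sketch (not from the paper; all names, the `Action` fields, and the two toy theories are illustrative assumptions): a candidate action is evaluated under several ethical theories, and each theory's verdict is reported separately rather than collapsed into a single score.

```python
# Illustrative sketch of pluralistic normative reasoning: one action,
# several ethical theories, per-theory verdicts. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harm: int            # expected harm caused (arbitrary units)
    benefit: int         # expected benefit produced (arbitrary units)
    violates_duty: bool  # breaks an explicit duty or rule?

def utilitarian(a: Action) -> bool:
    # Permissible if expected benefit outweighs expected harm.
    return a.benefit > a.harm

def deontological(a: Action) -> bool:
    # Permissible only if no explicit duty is violated.
    return not a.violates_duty

THEORIES = {"utilitarian": utilitarian, "deontological": deontological}

def evaluate(a: Action) -> dict:
    """Return per-theory verdicts, preserving normative pluralism."""
    return {name: theory(a) for name, theory in THEORIES.items()}

swerve = Action("swerve to avoid pedestrian", harm=1, benefit=5, violates_duty=True)
print(evaluate(swerve))  # {'utilitarian': True, 'deontological': False}
```

Keeping the verdicts separate, rather than averaging them, is what lets downstream argumentation expose genuine disagreement between ethical perspectives.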
Artificial Social Reasoning Model (aSRM)
The paper proposes implementing an artificial Social Reasoning Model (aSRM) in AI, analogous to human moral reasoning, as a promising route to ethical and legal accountability. This model embraces both symbolic and sub-symbolic techniques:
- Symbolic Justifications: Deontic logic and moral/ethical standards provide a formal layer of reasoning that justifies the intuitive decisions made by sub-symbolic components.
- Sub-symbolic Decision-Making: Neural networks supply fast, intuitive decisions, for which the symbolic layer derives post-hoc justifications coherent with socially defined ethical standards.
By integrating symbolic and sub-symbolic reasoning layers, Reasonable Machines can facilitate both interpretability and the potential for dynamic learning, allowing systems to evolve their decision-making frameworks over time.
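The division of labor described above can be sketched as a toy pipeline (an assumed design, not the paper's implementation; the policy, rule base, and all names are hypothetical): a stand-in for a neural policy proposes an action, and a symbolic layer checks it against deontic prohibitions to emit a justification.

```python
# Hypothetical hybrid pipeline: sub-symbolic proposal, symbolic deontic check.

def subsymbolic_policy(situation: dict) -> str:
    # Stand-in for a neural network: scores candidate actions and
    # returns the highest-scoring one.
    scores = {"brake": 0.9 if situation["obstacle"] else 0.1, "continue": 0.5}
    return max(scores, key=scores.get)

# Deontic rule base: (prohibited action, applicability condition, reason).
PROHIBITIONS = [
    ("continue", lambda s: s["obstacle"],
     "continuing toward an obstacle is forbidden"),
]

def justify(situation: dict) -> dict:
    """Run the sub-symbolic policy, then justify its output symbolically."""
    action = subsymbolic_policy(situation)
    violated = [reason for act, cond, reason in PROHIBITIONS
                if act == action and cond(situation)]
    return {
        "action": action,
        "permitted": not violated,
        "justification": violated or ["no deontic rule prohibits this action"],
    }

print(justify({"obstacle": True}))
```

Because the symbolic layer inspects only the chosen action and the rule base, its justification is inspectable even when the underlying policy is a black box.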
Implementation Strategy
To achieve the aim of Reasonable Machines, the paper delineates several key module developments:
- Responsible Machine Architecture: Design systems where symbolic and sub-symbolic AI components coexist, supporting rational decision-making.
- Ethico-Legal Ontologies: Develop comprehensive ontologies for encoding legal and ethical rules, crucial for interpreting decisions contextually.
- Interpretable AI Systems: Implement systems capable of producing human-understandable rational arguments through symbolic logic and reasoning networks.
Further, the paper envisions a cloud-based reasoning workbench to make normative reasoning systems broadly accessible, enabling adoption and deployment in real-world applications. It also emphasizes empirical studies and use-case testing to refine these systems.
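To illustrate what an ethico-legal ontology module might look like in miniature, here is a hedged sketch (the record structure, norm identifiers, and rule texts are invented for illustration; real ontologies would use a formal representation such as OWL or a logic embedding): norms are encoded as structured records with a source, a deontic modality, and an applicability condition, so a reasoner can retrieve the rules relevant to a given context.

```python
# Hypothetical ethico-legal ontology fragment: norms as structured records.
NORMS = [
    {"id": "traffic-01", "source": "road law", "modality": "obligation",
     "applies": lambda ctx: ctx["domain"] == "driving",
     "text": "yield to pedestrians at crossings"},
    {"id": "med-03", "source": "medical ethics", "modality": "prohibition",
     "applies": lambda ctx: ctx["domain"] == "healthcare",
     "text": "do not treat without informed consent"},
]

def applicable_norms(ctx: dict) -> list:
    """Return the norms whose applicability condition holds in this context."""
    return [n for n in NORMS if n["applies"](ctx)]

for norm in applicable_norms({"domain": "driving"}):
    print(norm["modality"], "-", norm["text"])
```

Context-dependent retrieval is what makes such an ontology useful for interpreting decisions "contextually": the same system consults different norm sets in the driving and healthcare domains.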
Conclusion
The Reasonable Machines vision emphasizes the need for IAS to engage in autonomous, ethical decision-making that is accountable and transparent. Implementing such systems requires interdisciplinary cooperation, substantial resource investment, and alignment with societal norms. This research manifesto outlines a path toward sophisticated autonomous systems capable of navigating complex moral landscapes, setting the stage for enhanced human-AI interaction and moving AI toward genuine moral and legal agency.