Abstract

As ML components become increasingly integrated into software systems, the emphasis on the ethical and responsible aspects of their use has grown significantly. This includes building ML-based systems that adhere to human-centric requirements such as fairness, privacy, explainability, well-being, transparency, and human values. Meeting these requirements is essential not only for maintaining public trust but also as a key factor in the success of ML-based systems. However, because these requirements are dynamic and continually evolve, pre-deployment evaluation of these models often proves insufficient to establish and sustain trust in ML components. Runtime monitoring approaches for ML are a potentially valuable solution to this problem. Existing state-of-the-art techniques often fall short, as they seldom consider more than one human-centric requirement at a time, typically focusing on a single concern such as fairness, safety, or trust. The technical expertise and effort required to set up a monitoring system pose a further barrier. In my PhD research, I propose a novel approach to the runtime monitoring of multiple human-centric requirements, leveraging model-driven engineering to monitor ML components more comprehensively. This doctoral symposium paper outlines the motivation for my PhD work, a potential solution, progress so far, and future plans.
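The abstract stops at the proposal level and gives no implementation detail. Purely as an illustrative sketch of what runtime monitoring of one human-centric requirement could look like (the class name, metric choice, window size, and threshold below are all my assumptions, not the paper's design), a monitor might track demographic parity over a sliding window of a deployed model's decisions:

```python
from collections import deque


class FairnessMonitor:
    """Sliding-window runtime monitor for demographic parity.

    Illustrative only: the paper proposes monitoring several
    human-centric requirements; this sketch checks just one.
    """

    def __init__(self, window_size=1000, threshold=0.1):
        self.window = deque(maxlen=window_size)  # recent (group, prediction) pairs
        self.threshold = threshold               # max tolerated parity gap

    def observe(self, group, prediction):
        """Record one model decision and re-check the fairness property."""
        self.window.append((group, prediction))
        gap = self.parity_gap()
        if gap is not None and gap > self.threshold:
            self.alert(gap)

    def parity_gap(self):
        """Absolute difference in positive-prediction rates across groups."""
        rates = {}
        for g in {grp for grp, _ in self.window}:
            preds = [p for grp, p in self.window if grp == g]
            rates[g] = sum(preds) / len(preds)
        if len(rates) < 2:
            return None  # need at least two groups to compare
        return max(rates.values()) - min(rates.values())

    def alert(self, gap):
        print(f"Fairness violation: demographic parity gap {gap:.2f} "
              f"exceeds threshold {self.threshold}")


# Usage: wrap the deployed model's prediction path with the monitor.
monitor = FairnessMonitor(window_size=500, threshold=0.15)
for group, pred in [("A", 1), ("B", 0), ("A", 1), ("B", 0), ("A", 1)]:
    monitor.observe(group, pred)
```

In the setting the abstract envisions, several such monitors, one per requirement, would presumably be generated from higher-level specifications rather than hand-written, which is where the proposed use of model-driven engineering comes in.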
