
Abstract

While Machine Learning (ML) technologies are widely adopted in many mission-critical fields to support intelligent decision-making, concerns remain about system resilience against ML-specific security attacks and privacy breaches, as well as the trust that users place in these systems. In this article, we present our recent systematic and comprehensive survey of state-of-the-art ML robustness and trustworthiness from a security engineering perspective, focusing on the threat analysis, design, and evaluation problems faced in developing practical machine learning applications with respect to robustness and user trust. Accordingly, we organize the presentation of this survey to convey the body of knowledge from this angle. We then describe a metamodel we created that represents this body of knowledge in a standard and visual way. We further illustrate how to leverage the metamodel to guide a systematic threat analysis and security design process that extends and scales up the classic process. Finally, we propose future research directions motivated by our findings. Our work differs from existing surveys by (i) exploring the fundamental principles and best practices that support robust and trustworthy ML system development, and (ii) studying the interplay of robustness and user trust in the context of ML systems. We expect this survey to provide a big picture for machine learning security practitioners.
