Failure Modes in Machine Learning Systems (1911.11034v1)

Published 25 Nov 2019 in cs.LG, cs.CR, and stat.ML

Abstract: In the last two years, more than 200 papers have been written on how ML systems can fail because of adversarial attacks on the algorithms and data; this number balloons if we also incorporate papers covering non-adversarial failure modes. The spate of papers has made it difficult for ML practitioners, let alone engineers, lawyers, and policymakers, to keep up with the attacks against and defenses of ML systems. However, as these systems become more pervasive, the need to understand how they fail, whether by the hand of an adversary or due to the inherent design of a system, will only become more pressing. In order to equip software developers, security incident responders, lawyers, and policymakers with a common vernacular to talk about this problem, we developed a framework to classify failures into "Intentional failures," where the failure is caused by an active adversary attempting to subvert the system to attain her goals, and "Unintentional failures," where the failure is because an ML system produces an inherently unsafe outcome. After developing the initial version of the taxonomy last year, we worked with security and ML teams across Microsoft, 23 external partners, standards organizations, and governments to understand how stakeholders would use our framework. Throughout the paper, we attempt to highlight how machine learning failure modes are meaningfully different from traditional software failures from a technology and policy perspective.
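
The core contribution described in the abstract is a two-level classification: every failure mode is first labeled as intentional (adversary-driven) or unintentional (inherent to the system's design). A minimal sketch of how such a taxonomy could be represented in code is shown below; the names (FailureClass, FailureMode, TAXONOMY) and the four example entries are illustrative paraphrases based on the abstract, not the paper's full taxonomy.

    from enum import Enum
    from dataclasses import dataclass

    class FailureClass(Enum):
        INTENTIONAL = "intentional"      # caused by an active adversary subverting the system
        UNINTENTIONAL = "unintentional"  # the system produces an inherently unsafe outcome

    @dataclass(frozen=True)
    class FailureMode:
        name: str
        failure_class: FailureClass
        description: str

    # Illustrative entries only; the paper enumerates many more modes in each class.
    TAXONOMY = [
        FailureMode("perturbation attack", FailureClass.INTENTIONAL,
                    "adversary modifies a query to obtain a desired misclassification"),
        FailureMode("poisoning attack", FailureClass.INTENTIONAL,
                    "adversary contaminates training data to corrupt the learned model"),
        FailureMode("distributional shift", FailureClass.UNINTENTIONAL,
                    "system is deployed in an environment unlike the one it was tested in"),
        FailureMode("reward hacking", FailureClass.UNINTENTIONAL,
                    "agent exploits a misspecified objective and behaves unexpectedly"),
    ]

    def by_class(cls: FailureClass) -> list[FailureMode]:
        """Return all taxonomy entries in the given top-level failure class."""
        return [m for m in TAXONOMY if m.failure_class is cls]

A structure like this makes the framework's central distinction operational: incident responders can filter on the top-level class to decide whether to treat a failure as a security incident (intentional) or a safety/robustness defect (unintentional).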

Authors (5)
  1. Ram Shankar Siva Kumar (14 papers)
  2. David O Brien (1 paper)
  3. Kendra Albert (8 papers)
  4. Salomé Viljöen (1 paper)
  5. Jeffrey Snover (1 paper)
Citations (47)
