
Using Quality Attribute Scenarios for ML Model Test Case Generation (2406.08575v1)

Published 12 Jun 2024 in cs.SE, cs.AI, and cs.LG

Abstract: Testing of ML models is a known challenge identified by researchers and practitioners alike. Unfortunately, current practice for ML model testing prioritizes testing for model performance, while often neglecting the requirements and constraints of the ML-enabled system that integrates the model. This limited view of testing leads to failures during integration, deployment, and operations, contributing to the difficulties of moving models from development to production. This paper presents an approach based on quality attribute (QA) scenarios to elicit and define system- and model-relevant test cases for ML models. The QA-based approach described in this paper has been integrated into MLTE, a process and tool to support ML model test and evaluation. Feedback from users of MLTE highlights its effectiveness in testing beyond model performance and identifying failures early in the development process.
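To make the idea concrete, a QA scenario typically specifies a stimulus and a measurable response, which can be turned directly into an executable test case. The sketch below is purely illustrative and does not use MLTE's actual API: the scenario wording, the `predict` stand-in, and the 100 ms latency budget are all hypothetical assumptions chosen to show how a system-level concern (inference latency) can be tested alongside model performance.

```python
# Illustrative only: a hypothetical system-level test case derived from a
# quality attribute (QA) scenario, in the spirit of the paper's approach.
# Hypothetical scenario: "Under normal load, the model returns a prediction
# for a single input in under 100 ms."
import time

def predict(features):
    # Stand-in for a real model's inference call (not a real model).
    return sum(features) / len(features)

def test_single_inference_latency():
    features = [0.1] * 32
    start = time.perf_counter()
    predict(features)
    elapsed_ms = (time.perf_counter() - start) * 1000
    # Response measure from the QA scenario: latency under the 100 ms budget.
    assert elapsed_ms < 100, f"latency {elapsed_ms:.1f} ms exceeds 100 ms budget"

test_single_inference_latency()
```

A test like this captures a constraint of the ML-enabled system rather than the model alone, which is the gap the paper's QA-based approach targets.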

Authors (4)
  1. Rachel Brower-Sinning (3 papers)
  2. Grace A. Lewis (9 papers)
  3. Sebastián Echeverría (1 paper)
  4. Ipek Ozkaya (10 papers)
