Model Validation Using Mutated Training Labels: An Exploratory Study (1905.10201v4)

Published 24 May 2019 in cs.LG and stat.ML

Abstract: We introduce an exploratory study on Mutation Validation (MV), a model validation method using mutated training labels for supervised learning. MV mutates training data labels, retrains the model against the mutated data, then uses the metamorphic relation that captures the consequent training performance changes to assess model fit. It does not use a validation set or test set. The intuition underpinning MV is that overfitting models tend to fit noise in the training data. We explore 8 different learning algorithms, 18 datasets, and 5 types of hyperparameter tuning tasks. Our results demonstrate that MV is accurate in model selection: the model recommendation hit rate is 92% for MV and less than 60% for out-of-sample validation. MV also provides more stable hyperparameter tuning results than out-of-sample validation across different runs.
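The abstract describes the MV workflow only at a high level; the exact metamorphic relation and mutation degree used by the authors are defined in the paper and are not reproduced here. The sketch below is an illustrative stand-in, assuming a scikit-learn-style classifier, a hypothetical mutation_ratio parameter, and a simple score that compares training accuracy before and after label mutation. It is not the authors' implementation.

import numpy as np
from sklearn.base import clone

def mutation_validation_score(model, X_train, y_train, mutation_ratio=0.2, random_state=0):
    """Illustrative sketch of Mutation Validation (MV).

    Trains the model on the original labels and on labels where a fraction
    of training examples has been randomly relabelled, then compares the
    resulting training accuracies. The idea from the abstract: an overfitting
    model tends to fit the injected label noise, so its training accuracy
    barely drops after mutation, while a better-regularised model loses
    accuracy on the mutated examples. The paper's actual metamorphic
    relation may differ; this score is a toy proxy for demonstration.
    Assumes X_train and y_train are NumPy arrays.
    """
    rng = np.random.default_rng(random_state)
    classes = np.unique(y_train)

    # Fit on the original labels and record training accuracy.
    original_model = clone(model).fit(X_train, y_train)
    acc_original = original_model.score(X_train, y_train)

    # Mutate a fraction of the training labels to different classes.
    y_mutated = y_train.copy()
    n_mutate = int(mutation_ratio * len(y_train))
    idx = rng.choice(len(y_train), size=n_mutate, replace=False)
    for i in idx:
        other_classes = classes[classes != y_train[i]]
        y_mutated[i] = rng.choice(other_classes)

    # Retrain on the mutated labels and measure accuracy against them.
    mutated_model = clone(model).fit(X_train, y_mutated)
    acc_mutated = mutated_model.score(X_train, y_mutated)

    # Larger drop suggests the model is not simply memorising label noise.
    return acc_original - acc_mutated

Under this toy criterion, of two candidate models the one whose training accuracy drops more under label mutation would be preferred; the paper formalises the relation and uses it for model selection and hyperparameter tuning without a held-out validation or test set.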

Authors (5)
  1. Jie M. Zhang (39 papers)
  2. Mark Harman (31 papers)
  3. Benjamin Guedj (68 papers)
  4. Earl T. Barr (21 papers)
  5. John Shawe-Taylor (68 papers)
Citations (7)
