Repairing Neural Networks by Leaving the Right Past Behind (2207.04806v2)

Published 11 Jul 2022 in cs.LG

Abstract: Prediction failures of machine learning models often arise from deficiencies in training data, such as incorrect labels, outliers, and selection biases. However, the data points responsible for a given failure mode are generally not known a priori, let alone a mechanism for repairing the failure. This work draws on the Bayesian view of continual learning and develops a generic framework for both identifying the training examples that gave rise to the target failure and fixing the model by erasing information about them. The framework naturally allows recent advances in continual learning to be applied to this new problem of model repairment, while subsuming existing work on influence functions and data deletion as specific instances. Experimentally, the proposed approach outperforms the baselines at both identifying detrimental training data and fixing model failures in a generalisable manner.
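The Bayesian framing referenced in the abstract admits a compact worked identity. Under an i.i.d. likelihood, erasing a candidate set of training points C from a posterior fitted on data D amounts to dividing out their likelihood contribution. The notation below is illustrative (theta for model parameters, C for the suspected detrimental subset), and the paper's actual repair procedure relies on approximations rather than this exact update:

    % Exact Bayesian deletion of a subset C from the posterior over theta,
    % assuming an i.i.d. likelihood: the prior stays fixed, and removing C
    % just cancels its likelihood factors.
    p(\theta \mid \mathcal{D} \setminus \mathcal{C})
      \;\propto\;
      \frac{p(\theta \mid \mathcal{D})}{p(\mathcal{C} \mid \theta)}
      \;=\;
      \frac{p(\theta \mid \mathcal{D})}{\prod_{(x,y) \in \mathcal{C}} p(y \mid x, \theta)}

In this framing, identification roughly amounts to searching for the subset C whose erasure most improves the model's behaviour on the observed failure cases, and (per the abstract) influence functions and data-deletion methods arise as specific instances of such updates.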

Authors (4)
  1. Ryutaro Tanno (36 papers)
  2. Melanie F. Pradier (13 papers)
  3. Aditya Nori (22 papers)
  4. Yingzhen Li (60 papers)
Citations (31)
