
Stopping Criterion for Active Learning Based on Error Stability (2104.01836v2)

Published 5 Apr 2021 in stat.ML and cs.LG

Abstract: Active learning is a framework for supervised learning that improves predictive performance by adaptively annotating a small number of samples. To realize efficient active learning, two components must be considered: an acquisition function that selects the next sample to annotate, and a stopping criterion that determines when to stop learning. In this study, we propose a stopping criterion based on error stability, which guarantees that the change in generalization error upon adding a new sample is bounded by the annotation cost, and which can be applied to any Bayesian active learning method. We demonstrate that the proposed criterion stops active learning at an appropriate time for various learning models and real datasets.
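The loop the abstract describes, alternating an acquisition step with a stop check, can be sketched as below. This is a hypothetical toy illustration, not the paper's method: the model (a polynomial fit standing in for a Bayesian model), the farthest-point acquisition function, and the error-change proxy (mean squared shift of pool predictions between rounds, compared against an assumed annotation-cost threshold) are all assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression pool: y = sin(x) + noise
pool_x = np.linspace(0, 2 * np.pi, 200)
pool_y = np.sin(pool_x) + 0.1 * rng.standard_normal(200)

labeled = [0, 199]        # start with two annotated samples
annotation_cost = 1e-3    # stopping threshold (assumed value)

def fit_predict(idx):
    # Polynomial fit as a stand-in for a Bayesian predictive model.
    deg = min(5, len(idx) - 1)
    coef = np.polyfit(pool_x[idx], pool_y[idx], deg)
    return np.polyval(coef, pool_x)

pred = fit_predict(labeled)
for step in range(50):
    # Acquisition (assumed): query the unlabeled point farthest
    # from all labeled points (a simple diversity heuristic).
    unlabeled = [i for i in range(200) if i not in labeled]
    nxt = max(unlabeled,
              key=lambda i: min(abs(pool_x[i] - pool_x[j]) for j in labeled))
    labeled.append(nxt)

    new_pred = fit_predict(labeled)
    # Stop check: estimated change in generalization error after
    # adding one sample, compared against the annotation cost.
    err_change = float(np.mean((new_pred - pred) ** 2))
    pred = new_pred
    if err_change < annotation_cost:
        break
```

The key design point mirrored here is that the stop decision weighs how much one more annotation changes the learned predictor against what that annotation costs, rather than relying on a fixed query budget.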

Authors (2)
  1. Hideaki Ishibashi (6 papers)
  2. Hideitsu Hino (35 papers)
Citations (9)
