Quilt: Robust Data Segment Selection against Concept Drifts (2312.09691v1)
Abstract: Continuous machine learning pipelines are common in industrial settings, where models are periodically retrained on data streams. Unfortunately, data streams may exhibit concept drifts, where the joint distribution P(X, y) of the data X and label y changes over time, possibly degrading model accuracy. Existing concept drift adaptation approaches mostly focus on updating the model to the new data, possibly using ensembles of previous models, and tend to discard the drifted historical data. However, we contend that explicitly utilizing the drifted data leads to much better model accuracy and propose Quilt, a data-centric framework for identifying and selecting data segments that maximize model accuracy. To address the potential downside of efficiency, Quilt extends existing data subset selection techniques, which can reduce the training data without compromising model accuracy. These techniques cannot be used as is because they assume only virtual drifts, where the posterior probability P(y|X) does not change. In contrast, a key challenge in our setup is to also discard undesirable data segments with concept drifts. Quilt thus discards drifted data segments and selects subsets of the remaining segments holistically for accurate and efficient model training. Both operations use gradient-based scores, which incur little computational overhead. Our experiments show that Quilt outperforms state-of-the-art drift adaptation and data selection baselines on both synthetic and real datasets.
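The abstract's core idea, scoring data segments by gradients to discard those with concept drift, can be illustrated with a minimal sketch. This is not Quilt's actual algorithm or API; all function names are hypothetical. It assumes a simple logistic-regression model and scores each segment by the cosine similarity between the segment's mean gradient and the gradient on a held-out validation window: a segment whose labels follow a drifted posterior P(y|X) tends to produce an opposing gradient and is discarded.

```python
# Hypothetical sketch of gradient-based segment selection (illustrative only,
# not Quilt's actual implementation).
import math

def gradient(w, X, y):
    """Mean logistic-regression gradient of the log loss over a batch."""
    d = len(w)
    g = [0.0] * d
    for xi, yi in zip(X, y):
        z = sum(wj * xj for wj, xj in zip(w, xi))
        p = 1.0 / (1.0 + math.exp(-z))      # predicted P(y=1 | x)
        for j in range(d):
            g[j] += (p - yi) * xi[j]
    return [gj / len(X) for gj in g]

def cosine(a, b):
    """Cosine similarity between two gradient vectors."""
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    if na == 0.0 or nb == 0.0:
        return 0.0
    return sum(x * y for x, y in zip(a, b)) / (na * nb)

def select_segments(w, segments, X_val, y_val, threshold=0.0):
    """Keep segments whose mean gradient aligns with the validation gradient.

    Segments are (X, y) pairs. A drifted segment (changed P(y|X)) tends to
    pull the model in the opposite direction of the current validation
    gradient, so its alignment score falls below the threshold.
    """
    g_val = gradient(w, X_val, y_val)
    return [i for i, (Xs, ys) in enumerate(segments)
            if cosine(gradient(w, Xs, ys), g_val) > threshold]

# Segment 0 matches the validation distribution; segment 1 has flipped labels,
# simulating a real concept drift in P(y|X).
segments = [([[2.0], [-2.0]], [1, 0]),
            ([[2.0], [-2.0]], [0, 1])]
kept = select_segments([0.0], segments, [[1.0], [-1.0]], [1, 0])
# → [0]: the drifted segment is discarded
```

Because each score is a single gradient computation per segment, this style of filtering adds little overhead on top of normal training, matching the efficiency claim in the abstract.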