TIDo: Source-free Task Incremental Learning in Non-stationary Environments (2301.12055v1)

Published 28 Jan 2023 in cs.LG

Abstract: This work presents an incremental learning approach for autonomous agents to learn new tasks in a non-stationary environment. Updating a DNN-based agent to learn new target tasks typically requires storing past training data and a large labeled target-task dataset. Few-shot task incremental learning methods overcome the need for large labeled target datasets by adapting trained models to learn private target classes using a few labeled representatives and a large unlabeled target dataset. However, these methods assume that the source and target tasks are stationary. We propose a one-shot task incremental learning approach that can adapt to non-stationary source and target tasks. Our approach minimizes the adversarial discrepancy between the model's feature space and incoming incremental data to learn an updated hypothesis. We also use a distillation loss to reduce catastrophic forgetting of previously learned tasks. Finally, we use Gaussian prototypes to generate exemplar instances, eliminating the need to store past training data. Unlike current work in task incremental learning, our model can learn both source and target task updates incrementally. We evaluate our method on various problem settings for incremental object detection and disease-prediction model updates, measuring performance on both shared-class and target-private-class prediction. Our results show improved performance over existing state-of-the-art task incremental learning methods.
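The abstract names two mechanisms that replace stored training data: Gaussian prototypes that generate synthetic exemplars, and a distillation loss that anchors the updated model to the previous one. The sketch below illustrates how these two pieces could fit together in PyTorch. It is a minimal illustration, not the paper's implementation: the network, function names, and the `alpha` weighting are hypothetical, and the paper's adversarial discrepancy term is omitted.

```python
# Hedged sketch of exemplar-free incremental updating via Gaussian
# prototypes + distillation. All names here (FeatureNet, incremental_step,
# alpha) are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureNet(nn.Module):
    """Toy feature extractor with a linear classifier head."""
    def __init__(self, in_dim=32, feat_dim=16, n_classes=5):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, x):
        z = self.encoder(x)
        return self.head(z), z

def gaussian_prototypes(feats, labels, n_classes):
    """Per-class feature mean/std: a compact stand-in for raw exemplars."""
    protos = {}
    for c in range(n_classes):
        zc = feats[labels == c]
        if len(zc) > 0:
            protos[c] = (zc.mean(0), zc.std(0, unbiased=False) + 1e-4)
    return protos

def sample_exemplars(protos, n_per_class=8):
    """Draw synthetic exemplar features from each class Gaussian."""
    feats, labels = [], []
    for c, (mu, sigma) in protos.items():
        feats.append(mu + sigma * torch.randn(n_per_class, mu.shape[0]))
        labels.append(torch.full((n_per_class,), c, dtype=torch.long))
    return torch.cat(feats), torch.cat(labels)

def incremental_step(model, old_model, x_new, y_new, protos, alpha=1.0):
    """One update: task loss on incoming data + distillation on exemplars."""
    logits, _ = model(x_new)
    task_loss = F.cross_entropy(logits, y_new)

    # Replay sampled prototype features through both models; penalize
    # divergence from the frozen previous model to reduce forgetting.
    ex_feats, _ = sample_exemplars(protos)
    new_logits = model.head(ex_feats)
    with torch.no_grad():
        old_logits = old_model.head(ex_feats)
    distill_loss = F.kl_div(F.log_softmax(new_logits, dim=1),
                            F.softmax(old_logits, dim=1),
                            reduction="batchmean")
    return task_loss + alpha * distill_loss
```

In a full pipeline along the lines the abstract describes, the prototypes would be refit after each task from the current feature space, and an adversarial alignment term between the feature space and incoming data would be added to the loss; both details depend on the paper itself and are not reproduced here.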
