
Moving Towards Open Set Incremental Learning: Readily Discovering New Authors

(1910.12944)
Published Oct 28, 2019 in cs.LG, cs.CL, and stat.ML

Abstract

The classification of textual data often yields important information. Most classifiers work in a closed world setting where the classifier is trained on a known corpus and then tested on unseen examples that belong to one of the classes seen during training. Despite the usefulness of this design, there is often a need to classify unseen examples that do not belong to any of the classes on which the classifier was trained. This paper describes the open set scenario, in which examples from previously unseen classes must be handled at test time. It further examines an enhanced open set classification process in which a deep neural network discovers new classes by clustering the examples identified as belonging to unknown classes and is then retrained on the newly recognized classes. Through this process the model becomes an incremental learner that continuously finds and learns from novel classes of data that have been identified automatically. This paper also develops a new metric that measures multiple attributes of clustering open set data. Multiple experiments across two author attribution data sets demonstrate the creation of an incremental model that produces excellent results.
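To make the described pipeline concrete, below is a minimal Python sketch of one open-set/incremental round, assuming a softmax-style confidence threshold for rejecting unknowns, k-means for grouping the rejected examples, and a simple retraining step. The paper's actual network architecture, rejection criterion, clustering method, and evaluation metric are not given in this abstract, so every function name, parameter, and threshold here is illustrative rather than the authors' method.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression


def open_set_incremental_step(clf, X_train, y_train, X_new,
                              threshold=0.5, n_new_clusters=2):
    """One hypothetical round of open-set detection, clustering, and retraining.

    Examples whose maximum predicted probability falls below `threshold`
    are treated as belonging to unknown classes, grouped with k-means,
    assigned fresh class labels, and folded back into the training set.
    """
    probs = clf.predict_proba(X_new)
    max_conf = probs.max(axis=1)
    known_mask = max_conf >= threshold
    unknown = X_new[~known_mask]

    if len(unknown) == 0:
        # Nothing was rejected as unknown; the model is unchanged.
        return clf, X_train, y_train

    # Discover candidate new classes among the rejected examples.
    clusters = KMeans(n_clusters=n_new_clusters, n_init=10).fit_predict(unknown)
    new_labels = clusters + y_train.max() + 1  # assign fresh class ids

    # Retrain the classifier on the augmented label set (incremental step).
    X_train = np.vstack([X_train, unknown])
    y_train = np.concatenate([y_train, new_labels])
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    return clf, X_train, y_train
```

The confidence-threshold rejection above is only a stand-in for the open-set mechanism: raw softmax confidence is known to be over-confident on out-of-distribution inputs, which is precisely the problem open set methods address, so a deep open-set classifier would typically use a calibrated or distance-based rejection score instead.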
