Algorithms and Improved bounds for online learning under finite hypothesis class

(1903.10870)
Published Mar 24, 2019 in cs.LG and stat.ML

Abstract

Online learning is the process of answering a sequence of questions based on the correct answers to the previous questions. It is studied in many research areas, such as game theory, information theory, and machine learning. The online learning framework has two main components: the learning algorithm, also known as the learner, and the hypothesis class, which is a set of functions the learner uses to predict answers to the questions. Sometimes this class contains functions capable of answering the entire sequence of questions correctly; this is called the realizable case. When the hypothesis class contains no such function, it is called the unrealizable case. In both cases, the goal of the learner is to make as few mistakes as possible relative to the best functions in the hypothesis class over the entire sequence of questions. The performance of a learner is analysed through theoretical bounds on the number of mistakes it makes. This paper proposes three algorithms that improve the mistake bound in the unrealizable case. The proposed algorithms substantially outperform existing ones in the long run when most of the input sequences presented to the learner are likely to be realizable.
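For context on the setting the abstract describes, the classical baseline for the unrealizable case with a finite hypothesis class is the Weighted Majority algorithm, which keeps a weight for each hypothesis, predicts by weighted vote, and multiplicatively penalizes every hypothesis that errs. The sketch below is illustrative only and is not one of the paper's proposed algorithms; the binary label space, the penalty parameter beta = 0.5, and the toy threshold hypotheses are assumptions made for the example.

```python
# Minimal sketch of the Weighted Majority baseline over a finite hypothesis
# class. This is NOT one of the paper's proposed algorithms; it only
# illustrates the online learning setup described in the abstract.
# Binary labels {0, 1} and the multiplicative penalty beta are assumptions.

def weighted_majority(hypotheses, questions, answers, beta=0.5):
    """Run Weighted Majority online and return the number of mistakes made.

    hypotheses : list of callables mapping a question to a label in {0, 1}
    questions  : sequence of questions presented one at a time
    answers    : correct labels revealed after each prediction
    beta       : multiplicative penalty applied to hypotheses that err
    """
    weights = [1.0] * len(hypotheses)
    mistakes = 0
    for x, y in zip(questions, answers):
        # Predict by weighted vote over the hypothesis class.
        vote_one = sum(w for w, h in zip(weights, hypotheses) if h(x) == 1)
        vote_zero = sum(w for w, h in zip(weights, hypotheses) if h(x) == 0)
        prediction = 1 if vote_one >= vote_zero else 0
        if prediction != y:
            mistakes += 1
        # Penalize every hypothesis that answered incorrectly.
        weights = [w * beta if h(x) != y else w
                   for w, h in zip(weights, hypotheses)]
    return mistakes


if __name__ == "__main__":
    # Toy hypothesis class: integer threshold functions (illustrative only).
    hypotheses = [lambda x, t=t: int(x >= t) for t in range(5)]
    questions = [0, 1, 2, 3, 4, 3, 2, 1]
    answers = [int(q >= 2) for q in questions]  # realizable by threshold t = 2
    print("mistakes:", weighted_majority(hypotheses, questions, answers))
```

In the realizable case, the related Halving algorithm makes at most log2 of the hypothesis class size many mistakes; in the unrealizable case, Weighted Majority's mistakes are bounded in terms of the best hypothesis's mistakes plus a logarithmic term in the class size. Bounds of this kind are what the paper aims to improve when most input sequences are likely to be realizable.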

