Abstract

Hidden Markov models (HMMs) are widely used statistical models for sequential data. Estimating the parameters of an HMM from time series data is an important learning problem. The predominant methods for parameter estimation are based on local search heuristics, most notably the expectation-maximization (EM) algorithm. These methods are prone to local optima and often suffer from high computational and sample complexity. Recent years have seen the emergence of spectral methods for the parameter estimation of HMMs, based on a method-of-moments approach. Two spectral learning algorithms, proposed by Hsu, Kakade and Zhang (2012, arXiv:0811.4413) and by Anandkumar, Hsu and Kakade (2012, arXiv:1203.0683), are assessed in this work. Using experiments with synthetic data, the algorithms are compared with each other. Furthermore, the spectral methods are compared to the Baum-Welch algorithm, a well-established method that applies EM to HMMs. The spectral algorithms are found to have a much more favorable computational and sample complexity. Even though the algorithms readily handle high-dimensional observation spaces, instability issues are encountered in this regime. With a view to learning from real-world experimental data, the representation of real-valued observations for use in spectral methods is discussed and possible representations are presented.
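
To make the method-of-moments approach concrete, the following is a minimal Python sketch in the spirit of the Hsu-Kakade-Zhang spectral algorithm, assuming discrete observations and a collection of observation triples (x1, x2, x3) drawn from the HMM. The names spectral_hmm, joint_prob, triples, n_obs and n_states are illustrative placeholders, not identifiers from the thesis; this is a sketch of the general technique, not the exact experimental code.

import numpy as np

def spectral_hmm(triples, n_obs, n_states):
    """Estimate an observable representation (b1, binf, {B_x}) from observation triples."""
    # Empirical low-order moments:
    #   P1[i]        = Pr[x1 = i]
    #   P21[i, j]    = Pr[x2 = i, x1 = j]
    #   P3x1[x][i,j] = Pr[x3 = i, x2 = x, x1 = j]
    P1 = np.zeros(n_obs)
    P21 = np.zeros((n_obs, n_obs))
    P3x1 = np.zeros((n_obs, n_obs, n_obs))
    for x1, x2, x3 in triples:
        P1[x1] += 1
        P21[x2, x1] += 1
        P3x1[x2, x3, x1] += 1
    N = len(triples)
    P1 /= N
    P21 /= N
    P3x1 /= N

    # Spectral step: top-m left singular vectors of P21 (m = number of hidden states).
    U, _, _ = np.linalg.svd(P21)
    U = U[:, :n_states]

    # Observable operators in the subspace spanned by U.
    UP21_pinv = np.linalg.pinv(U.T @ P21)
    b1 = U.T @ P1
    binf = np.linalg.pinv(P21.T @ U) @ P1
    B = [U.T @ P3x1[x] @ UP21_pinv for x in range(n_obs)]
    return b1, binf, B

def joint_prob(seq, b1, binf, B):
    """Estimated joint probability Pr[x_1, ..., x_t] = binf^T B_{x_t} ... B_{x_1} b1."""
    b = b1
    for x in seq:
        b = B[x] @ b
    return float(binf @ b)

In a synthetic experiment, the joint probabilities produced by such an observable representation would be compared against the exact probabilities of the generating HMM and against a Baum-Welch fit, which is essentially the comparison described above.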
