
Stationary and Transition Probabilities in Slow Mixing, Long Memory Markov Processes (1301.6798v17)

Published 28 Jan 2013 in cs.IT and math.IT

Abstract: We observe a length-$n$ sample generated by an unknown, stationary ergodic Markov process (\emph{model}) over a finite alphabet $\mathcal{A}$. Given any string $\bf{w}$ of symbols from $\mathcal{A}$, we want estimates of the conditional probability distribution of symbols following $\bf{w}$, as well as the stationary probability of $\bf{w}$. Two distinct problems complicate estimation in this setting: (i) long memory, and (ii) \emph{slow mixing}, which can occur even with only one bit of memory. Any consistent estimator in this setting can only converge pointwise over the class of all ergodic Markov models. Namely, given any estimator and any sample size $n$, the underlying model could be such that the estimator performs poorly on a sample of size $n$ with high probability. But can we look at a length-$n$ sample and identify \emph{if} an estimate is likely to be accurate? Since the memory is unknown \emph{a priori}, a natural approach is to estimate a potentially coarser model with memory $k_n=\mathcal{O}(\log n)$. As $n$ grows, pointwise consistent estimates that hold eventually almost surely (eas) are known so long as the scaling of $k_n$ is not superlogarithmic in $n$. Here, rather than eas convergence results, we want the best answers possible from a length-$n$ sample. Combining results in universal compression with Aldous' coupling arguments, we obtain sufficient conditions on the length-$n$ sample (even for slow mixing models) to identify when naive (i) estimates of the conditional probabilities and (ii) estimates related to the stationary probabilities are accurate, and we also bound the deviations of the naive estimates from the true values.
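As an illustration (not taken from the paper), the "naive" estimates the abstract refers to can be sketched as simple empirical counts: the conditional distribution of the symbol following a context $\bf{w}$, and the empirical frequency of $\bf{w}$ itself. The chain below is a hypothetical two-state example of slow mixing: it flips state only with small probability `eps`, so its true stationary distribution is (1/2, 1/2), yet the empirical frequency of a state can still deviate noticeably even in a long sample. The function and variable names here are illustrative, not from the paper.

```python
from collections import Counter
import random

def naive_estimates(sample, w):
    """Empirical conditional distribution of the symbol following context w,
    and the empirical frequency of w (a naive stationary-probability estimate)."""
    n, k = len(sample), len(w)
    # Count which symbols follow each occurrence of w.
    follow = Counter(sample[i + k] for i in range(n - k) if sample[i:i + k] == w)
    count_w = sum(1 for i in range(n - k + 1) if sample[i:i + k] == w)
    total = sum(follow.values())
    cond = {a: c / total for a, c in follow.items()} if total else {}
    return cond, count_w / (n - k + 1)

# Hypothetical slow-mixing binary chain: stay in the current state w.p. 1 - eps.
random.seed(0)
eps = 0.01  # small eps => slow mixing; true stationary probs are (1/2, 1/2)
x, chunks = '0', []
for _ in range(200_000):
    chunks.append(x)
    if random.random() < eps:
        x = '1' if x == '0' else '0'
sample = ''.join(chunks)

cond, pw = naive_estimates(sample, '0')
# cond['0'] concentrates near 1 - eps = 0.99; pw is near 1/2 but, because the
# chain mixes slowly, it fluctuates far more than an i.i.d. sample would.
```

The point of the paper's sufficient conditions is precisely to tell, from the sample itself, when estimates like `cond` and `pw` can be trusted.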

Citations (16)
