
Masked Toeplitz covariance estimation

(arXiv:1709.09377)
Published Sep 27, 2017 in cs.IT and math.IT

Abstract

The problem of estimating the covariance matrix $\Sigma$ of a $p$-variate distribution from $n$ observations arises in many data analysis contexts. While for $n>p$ the classical sample covariance matrix $\hat{\Sigma}_n$ is a good estimator for $\Sigma$, it fails in the high-dimensional setting when $n\ll p$. In this scenario one requires prior knowledge about the structure of the covariance matrix in order to construct reasonable estimators. Under the common assumption that $\Sigma$ is sparse, a refined estimator is given by $M\cdot\hat{\Sigma}_n$, where $M$ is a suitable symmetric mask matrix indicating the nonzero entries of $\Sigma$ and $\cdot$ denotes the entrywise product of matrices. In the present work we assume that $\Sigma$ has Toeplitz structure, corresponding to stationary signals. This suggests averaging the sample covariance $\hat{\Sigma}_n$ over its diagonals in order to obtain an estimator $\tilde{\Sigma}_n$ of Toeplitz structure. Assuming in addition that $\Sigma$ is sparse suggests studying estimators of the form $M\cdot\tilde{\Sigma}_n$. For Gaussian random vectors and, more generally, random vectors satisfying the convex concentration property, our main result bounds the estimation error in terms of $n$ and $p$ and shows that accurate estimation is indeed possible when $n \ll p$. The new bound significantly generalizes previous results by Cai, Ren and Zhou and provides an alternative proof. Our analysis exploits the connection between the spectral norm of a Toeplitz matrix and the supremum norm of the corresponding spectral density function.
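The construction of the estimator $M\cdot\tilde{\Sigma}_n$ described above can be illustrated with a minimal NumPy sketch. It assumes zero-mean observations and uses a banded 0/1 mask as one possible choice of $M$; the function name, the banded mask, and the example parameters below are illustrative and not taken from the paper.

```python
import numpy as np

def masked_toeplitz_estimator(X, mask_bandwidth):
    """Sketch of a masked Toeplitz covariance estimator M * Sigma_tilde_n.

    X is an (n, p) array of n observations of a p-variate, zero-mean
    stationary signal. mask_bandwidth is a hypothetical parameter choosing
    a banded mask M that keeps only the first `mask_bandwidth` off-diagonals.
    """
    n, p = X.shape

    # Classical sample covariance \hat{Sigma}_n (zero-mean assumption).
    sample_cov = X.T @ X / n

    # Average \hat{Sigma}_n over its diagonals to get the Toeplitz
    # estimator \tilde{Sigma}_n: entry (j, k) depends only on |j - k|.
    avg_diag = np.array([np.mean(np.diagonal(sample_cov, offset=k))
                         for k in range(p)])
    idx = np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
    toeplitz_cov = avg_diag[idx]

    # Entrywise (Hadamard) product with a symmetric 0/1 mask M encoding
    # the assumed sparsity pattern; here a banded mask for illustration.
    M = (idx <= mask_bandwidth).astype(float)
    return M * toeplitz_cov

# Example in the n << p regime: n = 20 observations, p = 200 dimensions,
# with a true covariance of Toeplitz (AR(1)-like) form.
rng = np.random.default_rng(0)
p, n = 200, 20
true_cov = 0.5 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
X = rng.multivariate_normal(np.zeros(p), true_cov, size=n)
estimate = masked_toeplitz_estimator(X, mask_bandwidth=10)
print(f"spectral-norm error: {np.linalg.norm(estimate - true_cov, 2):.3f}")
```

The diagonal averaging and the banded masking together exploit exactly the two structural assumptions discussed in the abstract: stationarity (Toeplitz structure) and sparsity of $\Sigma$.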
