Density Estimation in Infinite Dimensional Exponential Families

(1312.3516)
Published Dec 12, 2013 in math.ST, stat.ME, stat.ML, and stat.TH

Abstract

In this paper, we consider an infinite dimensional exponential family $\mathcal{P}$ of probability densities, which are parametrized by functions in a reproducing kernel Hilbert space $H$, and show it to be quite rich in the sense that a broad class of densities on $\mathbb{R}^d$ can be approximated arbitrarily well in Kullback-Leibler (KL) divergence by elements in $\mathcal{P}$. The main goal of the paper is to estimate an unknown density $p_0$ through an element in $\mathcal{P}$. Standard techniques like maximum likelihood estimation (MLE) or pseudo MLE (based on the method of sieves), which are based on minimizing the KL divergence between $p_0$ and $\mathcal{P}$, do not yield practically useful estimators because of their inability to efficiently handle the log-partition function. Instead, we propose an estimator $\hat{p}_n$ based on minimizing the \emph{Fisher divergence} $J(p_0\Vert p)$ between $p_0$ and $p\in\mathcal{P}$, which involves solving a simple finite-dimensional linear system. When $p_0\in\mathcal{P}$, we show that the proposed estimator is consistent, and provide a convergence rate of $n^{-\min\left\{\frac{2}{3},\frac{2\beta+1}{2\beta+2}\right\}}$ in Fisher divergence under the smoothness assumption that $\log p_0\in\mathcal{R}(C^\beta)$ for some $\beta\ge 0$, where $C$ is a certain Hilbert-Schmidt operator on $H$ and $\mathcal{R}(C^\beta)$ denotes the image of $C^\beta$. We also investigate the misspecified case of $p_0\notin\mathcal{P}$ and show that $J(p_0\Vert\hat{p}_n)\rightarrow \inf_{p\in\mathcal{P}}J(p_0\Vert p)$ as $n\rightarrow\infty$, and provide a rate for this convergence under a similar smoothness condition. Through numerical simulations we demonstrate that the proposed estimator outperforms the non-parametric kernel density estimator, and that its advantage grows as $d$ increases.
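The "simple finite-dimensional linear system" alluded to above can be illustrated with a minimal score-matching sketch. The following is not the paper's exact estimator: it is a one-dimensional toy with a Gaussian kernel, with the empirical Fisher-divergence (score-matching) objective minimized over kernel expansion coefficients, and with a plain ridge penalty on the coefficients standing in for the RKHS-norm regularizer; the function names, bandwidth `sigma`, and regularizer `lam` are illustrative choices, not values from the paper.

```python
import numpy as np

def fit_score_matching(x, sigma=1.0, lam=1e-3):
    """Fit f(x) = sum_j alpha_j k(x_j, x), k Gaussian, by minimizing the
    empirical score-matching objective
        (1/n) sum_i [ 0.5 * f'(x_i)**2 + f''(x_i) ]
    plus a ridge penalty (lam/2)*||alpha||^2 (a simplification of the
    RKHS-norm penalty).  The minimizer solves one n x n linear system."""
    n = len(x)
    diff = x[:, None] - x[None, :]                 # diff[i, j] = x_i - x_j
    K = np.exp(-diff**2 / (2 * sigma**2))          # kernel matrix k(x_j, x_i)
    D1 = -(diff / sigma**2) * K                    # d/dx k(x_j, x) at x = x_i
    D2 = (diff**2 / sigma**4 - 1 / sigma**2) * K   # d^2/dx^2 k(x_j, x) at x_i
    G = D1.T @ D1 / n                              # quadratic term of J(alpha)
    b = D2.mean(axis=0)                            # linear term of J(alpha)
    return np.linalg.solve(G + lam * np.eye(n), -b)

def score(x_train, alpha, x_new, sigma=1.0):
    """Estimated model score f'(x) = d/dx log p_hat(x) at the points x_new."""
    diff = x_new[:, None] - x_train[None, :]
    K = np.exp(-diff**2 / (2 * sigma**2))
    return (-(diff / sigma**2) * K) @ alpha

rng = np.random.default_rng(0)
x = rng.standard_normal(500)        # samples from p0 = N(0, 1)
alpha = fit_score_matching(x)
# For N(0, 1) the true score is -x, so within the bulk of the data the
# estimate should be positive left of 0 and negative right of 0.
print(score(x, alpha, np.array([-1.0, 0.0, 1.0])))
```

Note that the log-partition function never appears: the objective depends on the model only through derivatives of $f = \log p$, which is what makes the minimization a linear solve rather than the intractable normalization required by MLE.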
