
Abstract

We improve the existing achievable rate regions for causal and for zero-delay source coding of stationary Gaussian sources under an average mean squared error (MSE) distortion measure. To begin with, we find a closed-form expression for the information-theoretic causal rate-distortion function (RDF) under such distortion measure, denoted by $R_c^{it}(D)$, for first-order Gauss-Markov processes. $R_c^{it}(D)$ is a lower bound to the optimal performance theoretically attainable (OPTA) by any causal source code, namely $R_c^{op}(D)$. We show that, for Gaussian sources, the latter can also be upper bounded as $R_c^{op}(D) \leq R_c^{it}(D) + 0.5\log_2(2\pi e)$ bits/sample. In order to analyze $R_c^{it}(D)$ for arbitrary zero-mean Gaussian stationary sources, we introduce $\bar{R}_c^{it}(D)$, the information-theoretic causal RDF when the reconstruction error is jointly stationary with the source. Based upon $\bar{R}_c^{it}(D)$, we derive three closed-form upper bounds to the additive rate loss defined as $\bar{R}_c^{it}(D) - R(D)$, where $R(D)$ denotes Shannon's RDF. Two of these bounds are strictly smaller than 0.5 bits/sample at all rates. These bounds differ from one another in their tightness and ease of evaluation; the tighter the bound, the more involved its evaluation. We then show that, for any source spectral density and any positive distortion $D \leq \sigma_x^2$, $\bar{R}_c^{it}(D)$ can be realized by an AWGN channel surrounded by a unique set of causal pre-, post-, and feedback filters. We show that finding such filters constitutes a convex optimization problem. In order to solve the latter, we propose an iterative optimization procedure that yields the optimal filters and is guaranteed to converge to $\bar{R}_c^{it}(D)$. Finally, by establishing a connection to feedback quantization we design a causal and a zero-delay coding scheme which, for Gaussian sources, achieves...
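
The additive bounds quoted above can be made concrete with a short numerical sketch. The Python snippet below is not taken from the paper; the AR(1) coefficient, innovation variance, and target distortion are arbitrary illustrative choices. It evaluates Shannon's RDF $R(D)$ for a first-order Gauss-Markov source by reverse water-filling on its power spectral density, then prints the bound $\bar{R}_c^{it}(D) < R(D) + 0.5$ bits/sample and the constant $0.5\log_2(2\pi e)$ appearing in the OPTA bound $R_c^{op}(D) \leq R_c^{it}(D) + 0.5\log_2(2\pi e)$.

    # Hypothetical sketch (not the paper's algorithm): Shannon's RDF R(D) for a
    # first-order Gauss-Markov source via reverse water-filling on its PSD,
    # plus the additive bounds quoted in the abstract.
    import numpy as np

    a = 0.9           # AR(1) coefficient: x[k+1] = a*x[k] + w[k]  (illustrative)
    sigma_w2 = 1.0    # innovation variance (illustrative)
    w = np.linspace(0.0, np.pi, 4096)                   # frequency grid on [0, pi]
    S = sigma_w2 / (1.0 + a**2 - 2.0 * a * np.cos(w))   # source PSD
    sigma_x2 = sigma_w2 / (1.0 - a**2)                  # stationary source variance

    def distortion(theta):
        """Average MSE achieved with water level theta (reverse water-filling)."""
        return np.trapz(np.minimum(theta, S), w) / np.pi

    def shannon_rdf(D, tol=1e-9):
        """Shannon's RDF R(D) in bits/sample for the Gaussian source with PSD S."""
        lo, hi = 0.0, S.max()
        while hi - lo > tol:                  # bisection on the water level
            theta = 0.5 * (lo + hi)
            if distortion(theta) < D:
                lo = theta
            else:
                hi = theta
        theta = 0.5 * (lo + hi)
        return np.trapz(np.maximum(0.0, 0.5 * np.log2(S / theta)), w) / np.pi

    D = 0.1 * sigma_x2                        # target distortion, D <= sigma_x^2
    R = shannon_rdf(D)
    print(f"R(D)                          = {R:.3f} bits/sample")
    # Per the abstract, bar(R)_c^it(D) exceeds R(D) by less than 0.5 bit/sample:
    print(f"upper bound on bar(R)_c^it(D) < {R + 0.5:.3f} bits/sample")
    # ... and R_c^op(D) <= R_c^it(D) + 0.5*log2(2*pi*e):
    print(f"0.5*log2(2*pi*e)              = {0.5 * np.log2(2 * np.pi * np.e):.3f} bits/sample")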
