The Entropy Gain of Linear Time-Invariant Filters and Some of its Implications

(arXiv:1512.03655)
Published Dec 11, 2015 in cs.IT and math.IT

Abstract

We study the increase in per-sample differential entropy rate of random sequences and processes after being passed through a non-minimum-phase (NMP) discrete-time, linear time-invariant (LTI) filter G. For such filters and random processes, it has long been established that this entropy gain, Gain(G), equals (1/2π) times the integral of log|G(e^{jω})| over ω ∈ [−π, π]. It is also known that, if the first sample of the impulse response of G has unit magnitude, then this integral equals the sum of the logarithms of the magnitudes of the non-minimum-phase zeros of G, denoted B(G). In this note, we begin by showing that existing time-domain proofs of these results, which consider finite length-n sequences and then let n tend to infinity, have neglected significant mathematical terms and are therefore inaccurate. We discuss some of the implications of this oversight when considering random processes. We then present a rigorous time-domain analysis of the entropy gain of LTI filters for random processes. In particular, we show that the entropy gain between equal-length input and output sequences is upper bounded by B(G), and that this bound is attained if and only if there exists an output additive disturbance with finite differential entropy (no matter how small) or a random initial state. By contrast, when comparing the input differential entropy to that of the entire (longer) output of G, the entropy gain equals B(G) without the need for additional exogenous random signals. We illustrate the consequences of these results in three different problems: a simple derivation of the rate-distortion function for Gaussian non-stationary sources, conditions for equality in an information inequality of importance in networked control problems, and an observation on the capacity of auto-regressive Gaussian channels with feedback.
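The equality between the frequency-domain integral and B(G) follows from Jensen's formula and is easy to verify numerically. Below is a minimal sketch (not from the paper) using NumPy: it takes a hypothetical FIR filter with impulse response (1, −2.5, 1), whose first sample has unit magnitude as the abstract requires, computes B(G) from the zeros lying outside the unit circle, and compares it against a Riemann-sum approximation of (1/2π) ∫ log|G(e^{jω})| dω. The filter coefficients and grid size are illustrative assumptions.

```python
import numpy as np

# Hypothetical FIR filter G(z) = 1 - 2.5 z^{-1} + z^{-2}; its first
# impulse-response sample is 1, matching the unit-magnitude condition.
g = np.array([1.0, -2.5, 1.0])

# B(G): sum of log-magnitudes of the non-minimum-phase zeros (|z| > 1).
# The zeros of G are the roots of z^2 - 2.5 z + 1, namely 2 and 0.5.
zeros = np.roots(g)
B = np.log(np.abs(zeros[np.abs(zeros) > 1])).sum()

# Frequency-domain side: (1/2pi) * integral of log|G(e^{jw})| dw over
# [-pi, pi], approximated as a mean over a uniform frequency grid.
w = np.linspace(-np.pi, np.pi, 2**16, endpoint=False)
# np.polyval gives e^{2jw} * G(e^{jw}); the extra factor has unit modulus,
# so the magnitude is exactly |G(e^{jw})|.
G_mag = np.abs(np.polyval(g, np.exp(1j * w)))
integral = np.log(G_mag).mean()

print(f"B(G)     = {B:.6f}")         # log(2) ~= 0.693147
print(f"integral = {integral:.6f}")  # should match B(G)
```

Because neither zero lies on the unit circle, log|G(e^{jω})| is smooth and the Riemann mean converges quickly; only the NMP zero at z = 2 contributes to B(G), giving log 2 ≈ 0.693 on both sides.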
