
Comparison-limited Vector Quantization

(2105.14464)
Published May 30, 2021 in cs.IT and math.IT

Abstract

In this paper, a variation of the classic vector quantization problem is considered. In the standard formulation, a quantizer is designed to minimize the distortion between input and output when the number of reconstruction points is fixed. We consider, instead, the scenario in which the number of comparators used in quantization is fixed. More precisely, we study the case in which a vector quantizer of dimension d consists of k comparators, each receiving a linear combination of the inputs and producing the output value one/zero if this linear combination is above/below a certain threshold. In reconstruction, the comparators' output pattern is mapped to a reconstruction point, selected so as to minimize a given distortion measure between the quantizer input and its reconstruction. The Comparison-Limited Vector Quantization (CLVQ) problem is then defined as the problem of optimally designing the configuration of the comparators and the choice of reconstruction points so as to minimize the given distortion. In this paper, we design a numerical optimization algorithm for the CLVQ problem. This algorithm leverages notions from combinatorial geometry to describe the hyperplane arrangement induced by the configuration of the comparators. It also relies on a genetic metaheuristic to improve the selection of the quantizer initialization and to avoid local minima encountered during optimization. We numerically evaluate the performance of our algorithm for i.i.d. uniform and Gaussian sources compressed under quadratic distortion and compare it to the classic Linde-Buzo-Gray (LBG) algorithm.
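To make the encoder/decoder structure described in the abstract concrete, the sketch below implements a comparison-limited quantizer in the simplest way: k comparators threshold linear combinations of the d-dimensional input to produce a k-bit label, and each occupied label is mapped to a reconstruction point. The variable names (A, t, codebook) and the conditional-mean fitting step are illustrative assumptions on our part; the paper's actual design algorithm, which optimizes the comparator configuration via hyperplane arrangements and a genetic metaheuristic, is not reproduced here.

```python
import numpy as np

def encode(x, A, t):
    """Map input x (shape (d,)) to a k-bit label via k comparators.

    Comparator j outputs 1 if <A[j], x> > t[j], else 0.
    """
    bits = (A @ x > t).astype(int)          # shape (k,)
    return int("".join(map(str, bits)), 2)  # pack the bits into an integer label

def decode(label, codebook):
    """Return the reconstruction point associated with a comparator pattern."""
    return codebook[label]

def fit_codebook(samples, A, t):
    """Illustrative Lloyd-style step: under quadratic distortion, the best
    reconstruction point for each occupied cell of the hyperplane arrangement
    is the conditional mean of the training samples falling in that cell."""
    labels = np.array([encode(x, A, t) for x in samples])
    return {label: samples[labels == label].mean(axis=0)
            for label in np.unique(labels)}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, k, n = 2, 3, 10_000
    A = rng.standard_normal((k, d))         # comparator weights (random here, optimized in the paper)
    t = np.zeros(k)                         # comparator thresholds
    samples = rng.standard_normal((n, d))   # i.i.d. Gaussian source
    codebook = fit_codebook(samples, A, t)
    x = samples[0]
    x_hat = decode(encode(x, A, t), codebook)
    print("quadratic distortion for one sample:", np.sum((x - x_hat) ** 2))
```

Note that with k comparators the label space has at most 2^k entries, but only the cells of the hyperplane arrangement that actually intersect the source support receive a reconstruction point, which is why the codebook is built from the observed labels.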
