Learning Better Lossless Compression Using Lossy Compression

(2003.10184)
Published Mar 23, 2020 in cs.CV , cs.LG , and eess.IV

Abstract

We leverage the powerful lossy image compression algorithm BPG to build a lossless image compression system. Specifically, the original image is first decomposed into the lossy reconstruction obtained after compressing it with BPG and the corresponding residual. We then model the distribution of the residual with a convolutional neural network-based probabilistic model that is conditioned on the BPG reconstruction, and combine it with entropy coding to losslessly encode the residual. Finally, the image is stored using the concatenation of the bitstreams produced by BPG and the learned residual coder. The resulting compression system achieves state-of-the-art performance in learned lossless full-resolution image compression, outperforming previous learned approaches as well as PNG, WebP, and JPEG2000.

Overview

  • The paper introduces a novel approach to lossless image compression that utilizes a powerful lossy image compression algorithm, BPG, alongside a convolutional neural network (CNN) based probabilistic model to enhance performance.

  • The proposed method involves compressing the original image with BPG, modeling the residual using a CNN, and then using entropy coding to losslessly encode the residual, resulting in a highly efficient compressed image.

  • The system achieves state-of-the-art performance across multiple datasets, significantly outperforming traditional algorithms like PNG, WebP, and JPEG2000, and even other learned lossless compression methods such as L3C.

Leveraging Lossy Compression for Improved Lossless Image Compression

The paper "Learning Better Lossless Compression Using Lossy Compression" presents a novel approach to lossless image compression that leverages the efficiency of a powerful lossy image compression algorithm, BPG (Better Portable Graphics). The technique departs significantly from traditional lossless image compression methods by pairing a lossy compressor with a convolutional neural network (CNN) based probabilistic model to achieve state-of-the-art performance.

The methodology begins by compressing the original image using BPG, which generates a lossy reconstruction. The residual, which is the difference between the original image and the lossy reconstruction, is then modeled using a CNN-based approach. This approach predicts the probability distribution of the residual conditioned on the lossy reconstruction. By combining this probabilistic model with entropy coding, the residual can be losslessly encoded. The final compressed image consists of the concatenated bitstreams produced by the BPG and the learned residual coder.
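The lossless decomposition at the heart of this pipeline can be sketched in a few lines. This is a minimal NumPy sketch, not the authors' implementation: the BPG encoder/decoder is replaced by a hypothetical coarse-quantization stand-in, and the entropy-coding stage is omitted.

```python
import numpy as np

def decompose(image: np.ndarray, lossy_reconstruction: np.ndarray):
    """Split an image into (lossy reconstruction, residual).

    The residual is stored as int16 because differences of uint8
    pixel values lie in [-255, 255].
    """
    residual = image.astype(np.int16) - lossy_reconstruction.astype(np.int16)
    return lossy_reconstruction, residual

def reconstruct(lossy_reconstruction: np.ndarray, residual: np.ndarray):
    """Recombine the two streams; exact by construction."""
    return (lossy_reconstruction.astype(np.int16) + residual).astype(np.uint8)

# Hypothetical stand-in for BPG: coarse quantization of pixel values.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(4, 4, 3), dtype=np.uint8)
lossy = (image // 16) * 16

recon, res = decompose(image, lossy)
assert np.array_equal(reconstruct(recon, res), image)  # lossless round trip
```

In the actual system, `lossy` would come from decoding the BPG bitstream, and `res` would be entropy-coded under the CNN-predicted distribution rather than stored raw.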

This method demonstrates impressive results, outperforming previous state-of-the-art learned lossless compression methods and traditional algorithms such as PNG, WebP, and JPEG2000 across several datasets.

Contributions and Techniques

The paper's primary contributions are as follows:

  1. Integration of BPG: By leveraging BPG, a highly efficient lossy compression algorithm, the authors obtain a robust initial compression. BPG is known for its high Peak Signal-to-Noise Ratio (PSNR) and its ability to handle high-frequency image components effectively.
  2. Residual Compression Using CNNs: The use of a convolutional neural network (CNN) to model the residual distribution is a key innovation. The CNN is conditioned on the BPG output and predicts the necessary parameters for a discrete mixture of logistic distributions, which are then used in entropy coding.
  3. Per-Image Optimization: The system includes an optimization step that fine-tunes the compression parameters on a per-image basis. This involves tuning the certainty (temperature) of the predicted residual distribution, which helps reduce the overall bitrate.
  4. Q-Classifier: The method proposes a lightweight classifier that predicts the quantization parameter for BPG, effectively optimizing the trade-off between the bits allocated to the lossy and residual streams. The classifier is trained to predict an optimal quantization parameter for each image, enhancing overall compression performance.
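The discretized mixture-of-logistics likelihood from contribution 2 can be illustrated with a small sketch. The mixture parameters below are hypothetical stand-ins for what the CNN would predict at a single pixel; the probability of each integer residual value is obtained by integrating the logistic density over a unit-width bin.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def discretized_logistic_mixture_pmf(x, weights, means, log_scales, bin_size=1.0):
    """Probability of integer residual value x under a K-component
    mixture of discretized logistic distributions."""
    scales = np.exp(log_scales)
    cdf_plus = sigmoid((x + bin_size / 2 - means) / scales)   # CDF at upper bin edge
    cdf_minus = sigmoid((x - bin_size / 2 - means) / scales)  # CDF at lower bin edge
    return float(np.sum(weights * (cdf_plus - cdf_minus)))

# Hypothetical per-pixel parameters, K = 3 components.
w = np.array([0.6, 0.3, 0.1])        # mixture weights (sum to 1)
mu = np.array([0.0, -2.0, 3.0])      # component means
log_s = np.array([0.0, 0.5, 0.2])    # component log-scales

p = discretized_logistic_mixture_pmf(0, w, mu, log_s)
bits = -np.log2(p)  # ideal code length for this symbol under entropy coding
```

The per-image temperature tuning from contribution 3 would, in a sketch like this, amount to adding a scalar offset to `log_s` and keeping whichever value minimizes the total code length over the image.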

Numerical Results and Performance

The system exhibits state-of-the-art performance across multiple datasets:

  • Open Images: 2.790 bits per subpixel (bpsp)
  • CLIC.mobile: 2.538 bpsp
  • CLIC.pro: 2.933 bpsp
  • DIV2K: 3.079 bpsp
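Bits per subpixel (bpsp) is the total compressed size in bits divided by the number of subpixels (height × width × channels). A minimal helper, with a hypothetical example image size:

```python
def bits_per_subpixel(total_bytes: int, height: int, width: int, channels: int = 3) -> float:
    """Compressed size in bits divided by the number of subpixels."""
    return total_bytes * 8 / (height * width * channels)

# Example: a hypothetical 512x768 RGB image compressed to 393,216 bytes.
bpsp = bits_per_subpixel(393216, 512, 768)  # ~2.667 bpsp
```

For comparison, an uncompressed 8-bit image is exactly 8 bpsp, so the ~2.5-3.1 bpsp figures above correspond to roughly a 2.6-3.2x lossless size reduction.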

These results show substantial improvements over other methods. For instance, the proposed system outperforms L3C, a contemporary learned lossless compression method, by margins varying from 0.4% to 7.2% across different datasets. Traditional approaches such as PNG, WebP, and JPEG2000 were outperformed by even larger margins.

Practical Implications and Future Directions

Practically, this method allows for efficient storage and transmission of high-fidelity images. Given that BPG was initially created for high-efficiency lossy compression, repurposing it for lossless compression enhances its utility without additional hardware requirements. The lightweight nature of the CNN used for residual compression means that this technique can be efficiently deployed even on mobile and other resource-constrained devices.

Theoretically, this research bridges the gap between lossy and lossless compression paradigms, highlighting the synergy that can be achieved by combining the two. Future work could extend this method by exploring other lossy compressors or employing more complex network architectures for better probabilistic modeling of residuals. Additionally, integrating learning-based adaptive entropy coding mechanisms could further improve compression efficiency.

Moreover, exploring the fine-tuning of compression parameters as a part of a meta-learning framework could yield even more significant reductions in bitrate, especially when applied to varied image datasets.

Conclusion

The paper demonstrates an effective and innovative approach to lossless image compression by leveraging the strengths of lossy compression algorithms and neural network-based probabilistic models. The impressive performance on diverse datasets and detailed ablation studies underscore the method's robustness and practical utility. This research provides a foundational step toward more integrated and adaptive image compression systems, with significant implications for both the industry and academic research in image processing and machine learning.
