ProxIQA: A Proxy Approach to Perceptual Optimization of Learned Image Compression (1910.08845v2)

Published 19 Oct 2019 in eess.IV, cs.CV, and cs.LG

Abstract: The use of $\ell_p$ $(p=1,2)$ norms has largely dominated the measurement of loss in neural networks due to their simplicity and analytical properties. However, when used to assess the loss of visual information, these simple norms are not very consistent with human perception. Here, we describe a different "proximal" approach to optimize image analysis networks against quantitative perceptual models. Specifically, we construct a proxy network, broadly termed ProxIQA, which mimics the perceptual model while serving as a loss layer of the network. We experimentally demonstrate how this optimization framework can be applied to train an end-to-end optimized image compression network. By building on top of an existing deep image compression model, we are able to demonstrate a bitrate reduction of as much as $31\%$ over MSE optimization, given a specified perceptual quality (VMAF) level.
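To make the idea concrete, the sketch below illustrates the general pattern of a proxy perceptual loss in PyTorch: a small network is fit to a non-differentiable quality metric (the paper uses VMAF) and then used as a differentiable loss layer for the compression network. The architecture, the alternating update scheme, and all names (`ProxyIQA`, `proxy_update`, `codec_loss`, `lam`) are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class ProxyIQA(nn.Module):
    """Toy proxy network regressing a perceptual score from a (reference, distorted) pair.
    Placeholder architecture, not the paper's design."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)

    def forward(self, ref, dist):
        x = torch.cat([ref, dist], dim=1)          # stack reference and distorted images
        return self.head(self.features(x).flatten(1))  # predicted quality score

proxy = ProxyIQA()

def proxy_update(ref, recon, metric_score, opt_proxy):
    # Step 1: fit the proxy to the true (non-differentiable) perceptual metric
    # computed offline on the current reconstructions.
    pred = proxy(ref, recon.detach())
    loss = nn.functional.mse_loss(pred, metric_score)
    opt_proxy.zero_grad(); loss.backward(); opt_proxy.step()

def codec_loss(ref, recon, bitrate, lam=0.01):
    # Step 2: use the proxy as a differentiable distortion term in the
    # rate-distortion objective, in place of an MSE distortion.
    perceptual = -proxy(ref, recon).mean()   # higher predicted quality -> lower loss
    return bitrate + lam * perceptual
```

In practice the two steps alternate: the proxy tracks the perceptual metric on the codec's latest outputs, and the codec is optimized against the current proxy, which is what allows a metric like VMAF to steer training even though it is not itself differentiable.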

Authors (5)
  1. Li-Heng Chen (9 papers)
  2. Christos G. Bampis (17 papers)
  3. Zhi Li (275 papers)
  4. Andrey Norkin (5 papers)
  5. Alan C. Bovik (84 papers)
Citations (59)