
Learning-Based Video Coding with Joint Deep Compression and Enhancement

(arXiv:2111.14474)
Published Nov 29, 2021 in eess.IV and cs.CV

Abstract

End-to-end learning-based video compression has attracted substantial attention as an alternative way to compress video signals as stacked visual features. This paper proposes an efficient end-to-end deep video codec with jointly optimized compression and enhancement modules (JCEVC). First, we propose a dual-path generative adversarial network (DPEG) to reconstruct video details after compression. An $\alpha$-path facilitates the reconstruction of structure information with a large receptive field and multi-frame references, while a $\beta$-path facilitates the reconstruction of local textures. The two paths are fused and co-trained within a generative-adversarial process. Second, we reuse the DPEG network in both the motion compensation and quality enhancement modules, which are combined with other necessary modules to form the JCEVC framework. Third, we jointly train the deep video compression and enhancement stages, which further improves the rate-distortion (RD) performance. Compared with the x265 LDP very fast mode, JCEVC reduces the average bits per pixel (bpp) by 39.39%/54.92% at the same PSNR/MS-SSIM, outperforming state-of-the-art deep video codecs by a considerable margin.
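To make the dual-path idea concrete, below is a minimal PyTorch sketch of a two-path enhancement generator in the spirit of DPEG. It is an illustration under stated assumptions, not the paper's architecture: the channel sizes, the use of dilated convolutions for the large-receptive-field $\alpha$-path, the two-reference-frame input, and the 1x1 fusion are all assumptions, and the adversarial discriminator and training loop are omitted.

```python
# Hypothetical sketch of a dual-path enhancement generator (DPEG-style).
# All module names, channel sizes, and the fusion scheme are illustrative
# assumptions, not the paper's exact design.
import torch
import torch.nn as nn

class AlphaPath(nn.Module):
    """Structure path: large receptive field over multi-frame references."""
    def __init__(self, in_ch, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            # Dilated convolutions enlarge the receptive field cheaply.
            nn.Conv2d(ch, ch, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=4, dilation=4), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)

class BetaPath(nn.Module):
    """Texture path: small kernels focused on local detail."""
    def __init__(self, in_ch, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)

class DualPathGenerator(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        # Assumption: the alpha-path sees the current frame plus two
        # reference frames (9 channels); the beta-path sees only the
        # current frame (3 channels).
        self.alpha = AlphaPath(in_ch=9, ch=ch)
        self.beta = BetaPath(in_ch=3, ch=ch)
        self.fuse = nn.Conv2d(2 * ch, 3, 1)  # 1x1 fusion to an RGB residual

    def forward(self, cur, refs):
        a = self.alpha(torch.cat([cur, *refs], dim=1))
        b = self.beta(cur)
        # Fuse both paths and predict a residual on top of the decoded frame.
        return cur + self.fuse(torch.cat([a, b], dim=1))

# Usage: enhance a decoded frame given two previously decoded references.
gen = DualPathGenerator()
cur = torch.randn(1, 3, 64, 64)
refs = [torch.randn(1, 3, 64, 64) for _ in range(2)]
out = gen(cur, refs)
print(out.shape)  # torch.Size([1, 3, 64, 64])
```

Because the generator is a self-contained module, it can in principle be instantiated twice, once for motion compensation and once for quality enhancement, which mirrors the reuse the abstract describes; in a full codec both copies would be co-trained with a discriminator and the rate-distortion objective.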

