
Abstract

In this paper, we propose an adaptive panoramic video semantic transmission (APVST) network built on the deep joint source-channel coding (Deep JSCC) structure for efficient end-to-end transmission of panoramic videos. The proposed APVST network adaptively extracts semantic features of panoramic frames and performs semantic feature encoding. To achieve high spectral efficiency and save bandwidth, we propose a transmission rate control mechanism for the APVST based on an entropy model and a latitude adaptive model. In addition, we adopt weighted-to-spherically-uniform peak signal-to-noise ratio (WS-PSNR) and weighted-to-spherically-uniform structural similarity (WS-SSIM) as distortion evaluation metrics, and propose a weight attention module that fuses these weights with the semantic features to achieve a better quality of immersive experience. Finally, we evaluate the proposed scheme on a panoramic video dataset containing 208 panoramic videos. The simulation results show that the APVST saves up to 20% and 50% in channel bandwidth cost compared with other semantic communication-based and traditional video transmission schemes, respectively.
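WS-PSNR, one of the distortion metrics named above, weights each pixel's squared error by the cosine of its latitude so that an equirectangular frame is scored as if sampled uniformly on the sphere. The sketch below is a minimal NumPy implementation of that standard formulation; it is an illustration of the metric itself, not code from the APVST paper, and the function name and signature are our own.

```python
import numpy as np

def ws_psnr(ref, dist, max_val=255.0):
    """Weighted-to-spherically-uniform PSNR for an equirectangular frame.

    Each pixel row j is weighted by cos((j + 0.5 - H/2) * pi / H), which
    down-weights the oversampled polar regions of the projection.
    """
    ref = np.asarray(ref, dtype=np.float64)
    dist = np.asarray(dist, dtype=np.float64)
    h, w = ref.shape[:2]
    rows = np.arange(h)
    row_weights = np.cos((rows + 0.5 - h / 2.0) * np.pi / h)
    # Expand the per-row weights to the full frame (and channels, if any).
    weight_map = np.broadcast_to(row_weights[:, None], (h, w))
    if ref.ndim == 3:
        weight_map = np.repeat(weight_map[:, :, None], ref.shape[2], axis=2)
    wmse = np.sum(weight_map * (ref - dist) ** 2) / np.sum(weight_map)
    if wmse == 0.0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(max_val ** 2 / wmse)
```

For a uniform per-pixel error the weights cancel and WS-PSNR reduces to ordinary PSNR, which is a convenient sanity check; WS-SSIM applies the same spherical weighting to the SSIM map instead of the squared error.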
