Wavelet-Based Fast Decoding of 360-Degree Videos (2208.10859v2)

Published 23 Aug 2022 in cs.GR

Abstract: In this paper, we propose a wavelet-based video codec specifically designed for VR displays that enables real-time playback of high-resolution 360° videos. Our codec exploits the fact that only a fraction of the full 360° video frame is visible on the display at any time. To load and decode the video viewport-dependently in real time, we make use of the wavelet transform for intra- as well as inter-frame coding. Thereby, the relevant content is directly streamed from the drive, without the need to hold the entire frames in memory. With an average of 193 frames per second at 8192×8192-pixel full-frame resolution, the conducted evaluation demonstrates that our codec's decoding performance is up to 272% higher than that of the state-of-the-art video codecs H.265 and AV1 for typical VR displays. By means of a perceptual study, we further illustrate the necessity of high frame rates for a better VR experience. Finally, we demonstrate how our wavelet-based codec can also directly be used in conjunction with foveation for further performance increase.
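
For intuition, here is a minimal sketch (Python/NumPy) of the viewport-dependent idea the abstract describes: decompose a frame once with a 2-D wavelet transform, then reconstruct only the coefficient blocks whose support overlaps the current viewport. The unnormalized Haar filter, level count, array layout, and all function names below are illustrative assumptions, not the paper's codec, which additionally covers inter-frame coding, entropy coding, and streaming coefficients directly from the drive.

```python
import numpy as np

def haar2d(img):
    """One level of an (unnormalized) 2-D Haar transform.
    Returns the approximation band LL and the detail bands (LH, HL, HH)."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    return (a + b + c + d) / 4, ((a + b - c - d) / 4,
                                 (a - b + c - d) / 4,
                                 (a - b - c + d) / 4)

def ihaar2d(ll, bands):
    """Inverse of haar2d: rebuilds the 2x-resolution image from LL + details."""
    lh, hl, hh = bands
    out = np.empty((2 * ll.shape[0], 2 * ll.shape[1]))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll + lh - hl - hh
    out[1::2, 0::2] = ll - lh + hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

def encode(frame, levels):
    """Full-frame multi-level decomposition (done once, offline)."""
    ll, details = frame.astype(np.float64), []
    for _ in range(levels):
        ll, d = haar2d(ll)
        details.append(d)
    return ll, details[::-1]          # coarsest detail level first

def decode_viewport(ll_top, detail_levels, x0, y0, w, h):
    """Reconstruct only frame[y0:y0+h, x0:x0+w].
    Only coefficient blocks whose support overlaps the viewport are read and
    inverse-transformed; the rest of the frame is never touched."""
    L = len(detail_levels)
    ceil_div = lambda a, s: -(-a // s)
    # Footprint of the viewport inside the coarsest approximation band.
    rx0, ry0 = x0 >> L, y0 >> L
    rx1, ry1 = ceil_div(x0 + w, 1 << L), ceil_div(y0 + h, 1 << L)
    patch = ll_top[ry0:ry1, rx0:rx1]
    for lvl, bands in enumerate(detail_levels):      # coarse -> fine
        scale = 1 << (L - 1 - lvl)                   # resolution after this step
        cropped = tuple(b[ry0:ry1, rx0:rx1] for b in bands)
        patch = ihaar2d(patch, cropped)              # footprint doubles
        px0, py0 = 2 * rx0, 2 * ry0                  # origin of the new patch
        # Tighten to exactly what the viewport needs at the finer resolution.
        rx0, ry0 = x0 // scale, y0 // scale
        rx1, ry1 = ceil_div(x0 + w, scale), ceil_div(y0 + h, scale)
        patch = patch[ry0 - py0:ry1 - py0, rx0 - px0:rx1 - px0]
    return patch                                     # exactly the viewport

# Usage: a 1024x1024 stand-in frame; the paper targets 8192x8192 full frames.
frame = np.random.rand(1024, 1024)
ll, details = encode(frame, levels=4)
view = decode_viewport(ll, details, x0=300, y0=200, w=256, h=128)
assert np.allclose(view, frame[200:328, 300:556])
```

Because each wavelet coefficient influences only a small, fixed pixel footprint, the amount of data touched scales with the viewport size rather than with the 8192×8192 full frame, which is what makes real-time viewport-dependent decoding feasible.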

Citations (2)
