
Variable-Length Lossy Compression Allowing Positive Overflow and Excess Distortion Probabilities (1701.01800v2)

Published 7 Jan 2017 in cs.IT and math.IT

Abstract: This paper investigates variable-length lossy source coding that allows both a positive excess distortion probability and a positive overflow probability of codeword lengths. Novel one-shot achievability and converse bounds on the optimal rate are established in terms of a new quantity based on the smooth max entropy (the smooth Rényi entropy of order zero). To derive the achievability bounds, we give an explicit code construction based on a distortion ball rather than the random coding argument; the basic idea of the construction parallels the optimal code construction in variable-length lossless source coding. Our achievability bounds differ slightly depending on whether the encoder is stochastic or deterministic. The one-shot results yield a general formula for the optimal rate at blocklength $n$. This general formula is then applied to the asymptotic analysis of a stationary memoryless source, yielding a single-letter characterization of the optimal rate in terms of the rate-distortion and rate-dispersion functions.
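For orientation, the smooth max entropy in its standard (lossless) form is, for a discrete distribution $P$ and smoothing parameter $\epsilon \in [0,1)$,

$H_0^{\epsilon}(P) = \min \{ \log_2 |A| \, : \, A \subseteq \mathrm{supp}(P),\ P(A) \ge 1 - \epsilon \},$

i.e., the log-cardinality of the smallest event that retains probability at least $1-\epsilon$. The paper's bounds use a distortion-ball analogue of this quantity; the sketch below illustrates only the standard lossless version. The minimum is attained by keeping the highest-probability symbols, so a greedy pass suffices. The `smooth_max_entropy` helper and the toy source are our own illustrative assumptions, not code from the paper; exact rational arithmetic keeps the threshold comparison exact.

```python
from fractions import Fraction
from math import log2

def smooth_max_entropy(probs, eps):
    """Smooth Renyi entropy of order zero, H_0^eps, in bits.

    Greedy: keep the most probable symbols until the retained mass
    reaches 1 - eps; the result is log2 of the retained support size.
    """
    p = sorted((q for q in probs if q > 0), reverse=True)
    mass, support = Fraction(0), 0
    for q in p:
        mass += q
        support += 1
        if mass >= 1 - eps:
            return log2(support)
    raise ValueError("probabilities sum to less than 1 - eps")

# Toy 4-symbol source; Fractions avoid floating-point ties at the threshold.
P = [Fraction(7, 10), Fraction(2, 10), Fraction(1, 20), Fraction(1, 20)]
for eps in (Fraction(0), Fraction(1, 20), Fraction(1, 10), Fraction(3, 10)):
    print(f"eps = {float(eps):.2f}  H_0^eps = {smooth_max_entropy(P, eps):.3f} bits")
```

As $\epsilon$ grows, more low-probability symbols can be discarded and $H_0^{\epsilon}$ decreases; this trade of overflow probability against rate is exactly the mechanism in variable-length lossless coding that the abstract says the construction mirrors, with the paper's distortion-ball variant playing the analogous role when both an excess distortion probability and an overflow probability are permitted.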

Citations (3)
