Kolmogorov complexity version of Slepian-Wolf coding (1511.03602v3)

Published 11 Nov 2015 in cs.IT and math.IT

Abstract: Alice and Bob are given two correlated n-bit strings x_1 and x_2, respectively, which they want to losslessly compress and send to Zack. They can either collaborate by sharing their strings, or work separately. We show that there is no disadvantage in the second scenario: Alice and Bob, without knowing the other party's string, can achieve almost optimal compression in the sense of Kolmogorov complexity. Furthermore, compression takes polynomial time and can be done at any combination of lengths that satisfies the necessary conditions (modulo additive polylog terms). More precisely, there exist probabilistic algorithms E_1, E_2, and a deterministic algorithm D, with E_1 and E_2 running in polynomial time, having the following behavior: if n_1, n_2 are two integers satisfying n_1 + n_2 \geq C(x_1, x_2), n_1 \geq C(x_1 \mid x_2), n_2 \geq C(x_2 \mid x_1), then for i \in {1,2}, E_i on input x_i and n_i outputs a string of length n_i + polylog(n) such that D on input E_1(x_1), E_2(x_2) reconstructs (x_1, x_2) with high probability (where C(x) denotes the plain Kolmogorov complexity of x, and C(x \mid y) is the complexity of x conditioned on y). Our main result is more general, as it deals with the compression of any constant number of correlated strings. It is an analog, in the framework of algorithmic information theory, of the classic Slepian-Wolf Theorem, a fundamental result in network information theory, in which x_1 and x_2 are realizations of two discrete random variables formed by drawing independently n times from a joint distribution. Moreover, in the classical result the decompressor needs to know the joint distribution of the sources, whereas in our result no type of independence is assumed and the decompressor has no a priori information about the sources being compressed; even so, distributed compression is on a par with centralized compression.
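The three length conditions in the abstract delimit a region that mirrors the classical Slepian-Wolf rate region, with plain Kolmogorov complexity C playing the role that Shannon entropy H plays for i.i.d. sources. A side-by-side restatement of the two-source case (the Kolmogorov-complexity conditions hold up to the additive polylog(n) slack stated above):

```latex
% Classical Slepian-Wolf rate region (rates per symbol, i.i.d. sources):
%   R_1 \ge H(X_1 \mid X_2), \quad
%   R_2 \ge H(X_2 \mid X_1), \quad
%   R_1 + R_2 \ge H(X_1, X_2).
%
% Kolmogorov-complexity analogue for individual strings
% (message lengths in bits, up to additive polylog(n) terms):
\begin{align*}
  n_1       &\ge C(x_1 \mid x_2),\\
  n_2       &\ge C(x_2 \mid x_1),\\
  n_1 + n_2 &\ge C(x_1, x_2).
\end{align*}
```

To see why a short message can suffice even though Bob never sees x_1, the toy Python sketch below realizes one corner point of this region (Alice sends x_1 in full; Bob sends only a short hash) under an artificial correlation model in which x_2 differs from x_1 in at most a few bit positions. Random hashing with a brute-force decoder is a standard pedagogical device for Slepian-Wolf-style arguments; it is not the paper's polynomial-time construction, whose decompressor assumes no correlation model at all, and every name and parameter below is illustrative.

```python
import itertools
import random

# Corner point n_1 = |x_1|, n_2 = HASH_BITS: Alice transmits x_1 verbatim,
# Bob transmits a short seeded hash of x_2, and the decoder recovers x_2
# by searching the Hamming ball around x_1 for a string with a matching
# hash. The bounded-Hamming-distance correlation model is an assumption
# made only for this toy example.

N = 16          # length of each bit-string
MAX_FLIPS = 2   # toy correlation: x_2 is within Hamming distance 2 of x_1
HASH_BITS = 24  # Bob's message length; collisions are unlikely at this size

def h(seed: int, s: str) -> int:
    """Seeded hash mapping the bit-string s to HASH_BITS bits."""
    return random.Random(f"{seed}:{s}").getrandbits(HASH_BITS)

def E2(x2: str, seed: int) -> int:
    """Bob's encoder: sees only x_2 and outputs a HASH_BITS-bit message."""
    return h(seed, x2)

def D(x1: str, msg2: int, seed: int) -> str:
    """Decoder: given Alice's full string and Bob's hash, recovers x_2 by
    enumerating every string within MAX_FLIPS bit-flips of x_1."""
    for radius in range(MAX_FLIPS + 1):
        for flips in itertools.combinations(range(N), radius):
            cand = list(x1)
            for i in flips:
                cand[i] = "1" if cand[i] == "0" else "0"
            candidate = "".join(cand)
            if h(seed, candidate) == msg2:
                return candidate
    raise ValueError("no candidate in the Hamming ball matched the hash")

rng = random.Random(0)
x1 = "".join(rng.choice("01") for _ in range(N))
flipped = list(x1)
flipped[3] = "1" if flipped[3] == "0" else "0"  # correlate x_2 with x_1
x2 = "".join(flipped)

seed = 42  # public randomness shared by encoder and decoder
assert D(x1, E2(x2, seed), seed) == x2
print(f"recovered x_2 from x_1 plus a {HASH_BITS}-bit message")
```

In the paper's setting, by contrast, the encoders' only inputs are x_i and the target length n_i, decoding succeeds with high probability whenever the lengths satisfy the region above, and no independence or correlation model is assumed.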

Citations (13)

Authors (1)
