
Sampling Random Colorings of Sparse Random Graphs

(1707.03796)
Published Jul 12, 2017 in cs.DM and math.CO

Abstract

We study the mixing properties of the single-site Markov chain known as the Glauber dynamics for sampling $k$-colorings of a sparse random graph $G(n,d/n)$ for constant $d$. The best known rapid mixing results for general graphs are in terms of the maximum degree $\Delta$ of the input graph $G$ and hold when $k>11\Delta/6$ for all $G$. Improved results hold when $k>\alpha\Delta$ for graphs with girth $\geq 5$ and $\Delta$ sufficiently large, where $\alpha\approx 1.7632\ldots$ is the root of $\alpha=\exp(1/\alpha)$; further improvements on the constant $\alpha$ hold under stronger girth and maximum degree assumptions. For sparse random graphs the maximum degree is a function of $n$, and the goal is to obtain results in terms of the expected degree $d$. The following rapid mixing results for $G(n,d/n)$ hold with high probability over the choice of the random graph for sufficiently large constant $d$. Mossel and Sly (2009) proved rapid mixing for constant $k$, and Efthymiou (2014) improved this to $k$ linear in $d$. The condition was improved to $k>3d$ by Yin and Zhang (2016) using non-MCMC methods. Here we prove rapid mixing when $k>\alpha d$, where $\alpha\approx 1.7632\ldots$ is the same constant as above. Moreover, we obtain $O(n^{3})$ mixing time of the Glauber dynamics, whereas in previous rapid mixing results the exponent was an increasing function of $d$. As in previous results for random graphs, our proof analyzes an appropriately defined block dynamics to "hide" high-degree vertices. One new aspect of our improved approach is the use of so-called local uniformity properties in the analysis of the block dynamics. To analyze the "burn-in" phase we prove a concentration inequality for the number of disagreements propagating in large blocks.
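
For concreteness, here is a minimal Python sketch of the single-site (heat-bath) Glauber dynamics on proper $k$-colorings of $G(n,d/n)$ that the abstract refers to. It is not the block dynamics analyzed in the paper, and every parameter choice (n, d, the run length) and helper name (`sample_gnp`, `greedy_coloring`, `glauber_step`) is illustrative rather than taken from the paper. Note that this plain sketch needs $k$ larger than the observed maximum degree just to initialize greedily, whereas the paper's threshold $k>\alpha d$ is stated in terms of the expected degree $d$.

```python
import random

def sample_gnp(n, p, rng):
    """Sample an Erdos-Renyi random graph G(n, p) as an adjacency list."""
    adj = [[] for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].append(v)
                adj[v].append(u)
    return adj

def greedy_coloring(adj, k):
    """Greedy proper coloring used only as a valid starting state (needs k > max degree)."""
    colors = [-1] * len(adj)
    for v in range(len(adj)):
        used = {colors[u] for u in adj[v] if colors[u] != -1}
        for c in range(k):
            if c not in used:
                colors[v] = c
                break
        else:
            raise ValueError("greedy initialization failed; increase k")
    return colors

def glauber_step(adj, colors, k, rng):
    """One heat-bath Glauber update: pick a uniform vertex and recolor it
    uniformly among the colors not used by any of its neighbors."""
    v = rng.randrange(len(adj))
    blocked = {colors[u] for u in adj[v]}
    available = [c for c in range(k) if c not in blocked]
    if available:  # nonempty whenever k > deg(v)
        colors[v] = rng.choice(available)

if __name__ == "__main__":
    rng = random.Random(0)
    n, d = 500, 5.0                      # illustrative sizes, not from the paper
    adj = sample_gnp(n, d / n, rng)
    alpha = 1.7633                       # approx. root of alpha = exp(1/alpha)
    max_deg = max(len(nbrs) for nbrs in adj)
    # The paper's threshold is k > alpha*d (expected degree d); this plain sketch
    # also needs k > max degree so that the greedy start and every update succeed.
    k = max(int(alpha * d) + 1, max_deg + 1)
    colors = greedy_coloring(adj, k)
    for _ in range(20 * n):              # illustrative run length; the paper proves O(n^3) mixing
        glauber_step(adj, colors, k, rng)
    assert all(colors[u] != colors[v] for u in range(n) for v in adj[u])
    print(f"maintained a proper {k}-coloring of G({n}, {d}/{n})")
```

The update shown is the heat-bath version (recolor with a uniformly random color not appearing on any neighbor); the Metropolis variant, which proposes a uniform color from $[k]$ and rejects improper moves, is a common alternative in the literature on this chain.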
