
Recursive Error Reduction for Regular Branching Programs

(2309.04551)
Published Sep 8, 2023 in cs.CC and cs.DS

Abstract

In a recent work, Chen, Hoza, Lyu, Tal and Wu (FOCS 2023) showed an improved error reduction framework for the derandomization of regular read-once branching programs (ROBPs). Their result is based on a clever modification to the inverse Laplacian perspective of space-bounded derandomization, which was originally introduced by Ahmadinejad, Kelner, Murtagh, Peebles, Sidford and Vadhan (FOCS 2020). In this work, we give an alternative error reduction framework for regular ROBPs. Our new framework is based on a binary recursive formula from the work of Chattopadhyay and Liao (CCC 2020), which they used to construct weighted pseudorandom generators (WPRGs) for general ROBPs. Based on our new error reduction framework, we give alternative proofs to the following results for regular ROBPs of length $n$ and width $w$, both of which were proved in the work of Chen et al. using their error reduction:

- There is a WPRG with error $\varepsilon$ that has seed length $\tilde{O}(\log(n)(\sqrt{\log(1/\varepsilon)}+\log(w))+\log(1/\varepsilon))$.
- There is a (non-black-box) deterministic algorithm which estimates the expectation of any such program within error $\pm\varepsilon$ with space complexity $\tilde{O}(\log(nw)\cdot\log\log(1/\varepsilon))$. (This was first proved in the work of Ahmadinejad et al., but the proof by Chen et al. is simpler.)

Because of the binary recursive nature of our new framework, both of our proofs are based on a straightforward induction that is arguably simpler than the Laplacian-based proof in the work of Chen et al.
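For context on the "inverse Laplacian perspective" referenced above, the sketch below shows the standard preconditioned-Richardson identity that underlies such error reduction frameworks: a coarse approximation of the inverse Laplacian can be boosted so that the error decays geometrically with the number of iterations. This is only an illustrative, textbook-style identity, not the paper's specific construction; the paper's framework is instead built on the binary recursive formula of Chattopadhyay and Liao.

```latex
% Illustrative sketch only (not the paper's binary recursion):
% preconditioned Richardson iteration for error reduction.
% Suppose a coarse approximation A_0 of L^{-1} satisfies
\[
  \lVert I - L A_0 \rVert \;\le\; \delta .
\]
% Then the boosted approximation
\[
  A_k \;=\; \sum_{t=0}^{k} A_0\, (I - L A_0)^{t}
\]
% satisfies, by telescoping L A_k = I - (I - L A_0)^{k+1},
\[
  \lVert I - L A_k \rVert \;=\; \bigl\lVert (I - L A_0)^{k+1} \bigr\rVert \;\le\; \delta^{\,k+1},
\]
% i.e. the error drops from \delta to \delta^{k+1} after k boosting steps.
```

In the space-bounded setting, each extra factor of $(I - L A_0)$ can be charged to a modest amount of additional seed or space, which is why such identities yield the improved dependence on $\log(1/\varepsilon)$ mentioned in the abstract.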
