
Deterministically Maintaining a $(2+\epsilon)$-Approximate Minimum Vertex Cover in $O(1/\epsilon^2)$ Amortized Update Time (1805.03498v2)

Published 9 May 2018 in cs.DS

Abstract: We consider the problem of maintaining an (approximately) minimum vertex cover in an $n$-node graph $G = (V, E)$ that is getting updated dynamically via a sequence of edge insertions/deletions. We show how to maintain a $(2+\epsilon)$-approximate minimum vertex cover, "deterministically", in this setting in $O(1/\epsilon^2)$ amortized update time. Prior to our work, the best known deterministic algorithm for maintaining a $(2+\epsilon)$-approximate minimum vertex cover was due to Bhattacharya, Henzinger and Italiano [SODA 2015]. Their algorithm has an update time of $O(\log n/\epsilon^2)$. Recently, Bhattacharya, Chakrabarty, Henzinger [IPCO 2017] and Gupta, Krishnaswamy, Kumar, Panigrahi [STOC 2017] showed how to maintain an $O(1)$-approximation in $O(1)$-amortized update time for the same problem. Our result gives an "exponential" improvement over the update time of Bhattacharya et al. [SODA 2015], and nearly matches the performance of the "randomized" algorithm of Solomon [FOCS 2016] who gets an approximation ratio of $2$ and an expected amortized update time of $O(1)$. We derive our result by analyzing, via a novel technique, a variant of the algorithm by Bhattacharya et al. We consider an idealized setting where the update time of an algorithm can take any arbitrary fractional value, and use insights from this setting to come up with an appropriate potential function. Conceptually, this framework mimics the idea of an LP-relaxation for an optimization problem. The difference is that instead of relaxing an integral objective function, we relax the update time of an algorithm itself. We believe that this technique will find further applications in the analysis of dynamic algorithms.

Citations (38)
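To make the problem setting concrete, the following is a minimal Python sketch of the classic baseline the abstract measures against: maintaining a maximal matching under edge updates and taking the matched endpoints as a vertex cover, which is at most twice the optimum. This is an illustrative assumption-laden toy, not the paper's algorithm — the deletion handler here rescans neighborhoods and so costs $O(\mathrm{deg})$ per update, whereas the paper achieves $O(1/\epsilon^2)$ amortized via a potential-function analysis of a fractional-relaxation-inspired scheme. All class and method names below are invented for illustration.

```python
class DynamicMatchingCover:
    """Toy dynamic maximal matching on an undirected graph.
    The matched endpoints form a 2-approximate vertex cover,
    since every edge must touch a matched vertex (else the
    matching would not be maximal)."""

    def __init__(self):
        self.adj = {}   # vertex -> set of current neighbors
        self.mate = {}  # vertex -> its matched partner

    def _try_match(self, u):
        # Greedily re-match u to any currently unmatched neighbor.
        if u in self.mate:
            return
        for w in self.adj.get(u, ()):
            if w not in self.mate:
                self.mate[u] = w
                self.mate[w] = u
                return

    def insert(self, u, v):
        self.adj.setdefault(u, set()).add(v)
        self.adj.setdefault(v, set()).add(u)
        # Keep the matching maximal: match the new edge if possible.
        if u not in self.mate and v not in self.mate:
            self.mate[u] = v
            self.mate[v] = u

    def delete(self, u, v):
        self.adj[u].discard(v)
        self.adj[v].discard(u)
        if self.mate.get(u) == v:
            # The matched edge vanished; try to repair maximality.
            # This neighbor scan is the O(deg) cost the paper avoids.
            del self.mate[u], self.mate[v]
            self._try_match(u)
            self._try_match(v)

    def vertex_cover(self):
        # Endpoints of matched edges cover every edge.
        return set(self.mate)
```

For example, inserting the three edges of a triangle matches one edge and yields a cover of size 2 (optimal for a triangle is 2), and deleting the matched edge triggers a re-match so the invariant survives. The paper's contribution is precisely to get such repairs to cost $O(1/\epsilon^2)$ amortized, deterministically, at the price of a $(2+\epsilon)$ rather than $2$ approximation.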
