First-Order Model-Checking in Random Graphs and Complex Networks (2006.14488v1)

Published 25 Jun 2020 in cs.DM

Abstract: Complex networks are everywhere. They appear for example in the form of biological networks, social networks, or computer networks and have been studied extensively. Efficient algorithms to solve problems on complex networks play a central role in today's society. Algorithmic meta-theorems show that many problems can be solved efficiently. Since logic is a powerful tool to model problems, it has been used to obtain very general meta-theorems. In this work, we consider all problems definable in first-order logic and analyze which properties of complex networks allow them to be solved efficiently. The mathematical tools used to describe complex networks are random graph models. We define a property of random graph models called $\alpha$-power-law-boundedness. Roughly speaking, a random graph is $\alpha$-power-law-bounded if it does not admit strong clustering and its degree sequence is bounded by a power-law distribution with exponent at least $\alpha$ (i.e. the fraction of vertices with degree $k$ is roughly $O(k^{-\alpha})$). We solve the first-order model-checking problem (parameterized by the length of the formula) in almost linear FPT time on random graph models satisfying this property with $\alpha \ge 3$. This means in particular that one can solve every problem expressible in first-order logic in almost linear expected time on these random graph models. This includes, for example, preferential attachment graphs, Chung-Lu graphs, configuration graphs, and sparse Erdős-Rényi graphs. Our results match known hardness results and generalize previous tractability results on this topic.
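As a rough, self-contained illustration of the degree condition quoted above, the Python sketch below samples a sparse Erdős-Rényi graph $G(n, c/n)$ and checks empirically whether its degree frequencies stay below $C \cdot k^{-\alpha}$ for $\alpha = 3$. The helper names, the constant $C$, and the parameter choices are illustrative assumptions; the paper's notion of $\alpha$-power-law-boundedness is defined for random graph models (distributions over graphs), not for a single sampled graph.

```python
import random
from collections import Counter

# Illustrative sketch only: the paper defines alpha-power-law-boundedness for
# random graph *models*; here we merely eyeball the degree condition on one
# sample of a sparse Erdos-Renyi graph G(n, c/n). The constant C and the
# parameters n, c are made-up tolerances, not quantities from the paper.

def sample_gnp_degrees(n: int, c: float, rng: random.Random) -> list[int]:
    """Return the degree sequence of one sample of G(n, p) with p = c / n."""
    p = c / n
    deg = [0] * n
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                deg[u] += 1
                deg[v] += 1
    return deg

def degrees_power_law_bounded(degrees: list[int], alpha: float, C: float) -> bool:
    """Check that the fraction of vertices of degree k is at most C * k**(-alpha) for every k >= 1."""
    n = len(degrees)
    counts = Counter(degrees)
    return all(counts[k] / n <= C * k ** (-alpha) for k in counts if k >= 1)

rng = random.Random(0)
degrees = sample_gnp_degrees(n=2000, c=2.0, rng=rng)
print(degrees_power_law_bounded(degrees, alpha=3.0, C=10.0))  # True for this sparse sample
```

For a sparse $G(n, c/n)$ graph the degrees are approximately Poisson distributed, whose tail decays faster than any polynomial, so a bound of the form $C \cdot k^{-3}$ holds comfortably; heavier-tailed models such as preferential attachment or Chung-Lu graphs sit closer to the power-law boundary the paper works with.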

Citations (2)
