Relating modularity maximization and stochastic block models in multilayer networks (1804.01964v2)

Published 5 Apr 2018 in cs.SI, math.PR, physics.data-an, and physics.soc-ph

Abstract: Characterizing large-scale organization in networks, including multilayer networks, is one of the most prominent topics in network science and is important for many applications. One type of mesoscale feature is community structure, in which sets of nodes are densely connected internally but sparsely connected to other dense sets of nodes. Two of the most popular approaches for community detection are to maximize an objective function called "modularity" and to perform statistical inference using stochastic block models. Generalizing work by Newman on monolayer networks (Physical Review E 94, 052315), we show in multilayer networks that maximizing modularity is equivalent, under certain conditions, to maximizing the posterior probability of community assignments under a suitably chosen stochastic block model. We derive versions of this equivalence for various types of multilayer structure, including temporal, multiplex, and multilevel networks. We consider cases in which the key parameters are constant, as well as ones in which they vary across layers; in the latter case, this yields a novel, layer-weighted version of the modularity function. Our results also help address a longstanding difficulty of multilayer modularity-maximization algorithms, which require the specification of two sets of tuning parameters that have been difficult to choose in practice. We show how to perform this parameter selection in a statistically grounded way, and we demonstrate the effectiveness of our approach on both synthetic and empirical networks.
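
For orientation, the monolayer equivalence that the paper generalizes (Newman, Physical Review E 94, 052315) can be stated compactly. The display below is a sketch of that known result, not a reproduction of the paper's multilayer derivation: maximum a posteriori inference under a degree-corrected planted-partition stochastic block model with within-group rate ω_in and between-group rate ω_out selects the same partitions as maximizing the generalized modularity

    Q(\gamma) \;=\; \frac{1}{2m} \sum_{ij}
        \left( A_{ij} - \gamma\,\frac{k_i k_j}{2m} \right) \delta(g_i, g_j),
    \qquad
    \gamma \;=\; \frac{\omega_{\mathrm{in}} - \omega_{\mathrm{out}}}
                      {\log \omega_{\mathrm{in}} - \log \omega_{\mathrm{out}}} .

The multilayer variants derived in the paper add an interlayer coupling parameter ω and, when the rates vary across layers, per-layer resolutions γ_s, which is the layer-weighted modularity the abstract mentions.

A minimal sketch of the corresponding per-layer resolution update is below, assuming the standard maximum-likelihood rate estimates for a degree-corrected planted partition. The function name and the networkx-based interface are illustrative choices, not the paper's code, and the update for the interlayer coupling ω is omitted:

    import numpy as np
    import networkx as nx

    def resolution_from_partition(G, labels):
        """Fit a degree-corrected planted-partition SBM to one layer's
        partition and return the resolution at which modularity
        maximization matches MAP inference:
        gamma = (w_in - w_out) / (log w_in - log w_out)."""
        m = G.number_of_edges()
        deg = dict(G.degree())
        # kappa_c: total degree inside each community c.
        kappa = {}
        for node, c in labels.items():
            kappa[c] = kappa.get(c, 0) + deg[node]
        # Observed number of within-community edges.
        m_in = sum(1 for u, v in G.edges() if labels[u] == labels[v])
        # Expected within-community edges under the configuration-model null.
        null_in = sum(k * k for k in kappa.values()) / (4.0 * m)
        w_in = m_in / null_in               # within-group rate estimate
        w_out = (m - m_in) / (m - null_in)  # between-group rate estimate
        return (w_in - w_out) / (np.log(w_in) - np.log(w_out))

    # Example: the resolution implied by the two-faction split of the
    # karate club graph (a standard monolayer test case).
    G = nx.karate_club_graph()
    labels = {v: G.nodes[v]["club"] for v in G}
    print(resolution_from_partition(G, labels))

In practice one would alternate a step like this with a multilayer modularity heuristic (e.g., a GenLouvain-style algorithm): maximize modularity at the current (γ, ω), re-fit the block-model rates from the resulting partition, update the parameters, and repeat until they stabilize. That alternating structure is an assumption consistent with the abstract's description of statistically grounded parameter selection, not a verbatim account of the paper's algorithm.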

Citations (32)

