
Composite Community-Aware Diversified Influence Maximization with Efficient Approximation (2209.03176v2)

Published 7 Sep 2022 in cs.SI and math.CO

Abstract: Influence Maximization (IM) is a well-studied problem in mobile networks and social computing that aims to find a small subset of users maximizing the influence spread of an online information cascade. Recently, researchers have turned their attention to the diversity of information dissemination, especially community-aware diversity, and formulated the diversified IM problem. Diversity arises in many real-world applications, but existing formulations all assume a single given community structure. In social networks, however, the same group of users can be partitioned into heterogeneous community structures according to different metrics, so how to quantify diversity across multiple community structures is an interesting question. In this paper, we propose the Composite Community-Aware Diversified IM (CC-DIM) problem, which aims to select a seed set maximizing both the influence spread and the composite diversity over all community structures under consideration. To address the NP-hardness of the CC-DIM problem, we adopt the technique of reverse influence sampling and design a random Generalized Reverse Reachable (G-RR) set to estimate the objective function. Because a random G-RR set is much more complex than the RR set used for the standard IM problem, traditional sampling-based approximation algorithms become inefficient. We therefore propose a two-stage algorithm, Generalized HIST (G-HIST). It not only returns a $(1-1/e-\varepsilon)$-approximate solution with probability at least $1-\delta$, but also improves sampling efficiency and eases the search by significantly reducing the average size of G-RR sets. Finally, we evaluate G-HIST on real datasets against existing algorithms. The experimental results show the effectiveness of our proposed algorithm and its superiority over the baseline algorithms.
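The paper's G-RR sets generalize the plain reverse reachable (RR) sets used in reverse influence sampling. As background for readers unfamiliar with that framework, here is a minimal sketch of plain RR-set sampling under the independent cascade model, followed by the greedy max-cover seed selection that yields the familiar $(1-1/e)$-style guarantee on the samples. This is an illustration of the underlying RIS idea only, not the paper's G-HIST algorithm or its composite-diversity objective; all function names and the uniform edge probability `p` are assumptions for the sketch.

```python
import random

def random_rr_set(nodes, in_neighbors, p):
    """Sample one reverse reachable (RR) set: start from a uniformly
    random root and walk incoming edges, keeping each edge with
    probability p (independent cascade with uniform probability)."""
    root = random.choice(nodes)
    rr, frontier = {root}, [root]
    while frontier:
        v = frontier.pop()
        for u in in_neighbors.get(v, []):
            if u not in rr and random.random() < p:
                rr.add(u)
                frontier.append(u)
    return rr

def greedy_seed_selection(rr_sets, k):
    """Pick up to k seeds greedily, each time choosing the node that
    covers the most not-yet-covered RR sets (standard max-cover
    greedy, giving a (1 - 1/e) guarantee on the sampled objective)."""
    covered = [False] * len(rr_sets)
    seeds = []
    for _ in range(k):
        gain = {}
        for i, rr in enumerate(rr_sets):
            if not covered[i]:
                for u in rr:
                    gain[u] = gain.get(u, 0) + 1
        if not gain:
            break  # every RR set already covered
        best = max(gain, key=gain.get)
        seeds.append(best)
        for i, rr in enumerate(rr_sets):
            if not covered[i] and best in rr:
                covered[i] = True
    return seeds

# Usage: a small star graph where node 0 influences nodes 1..5.
random.seed(0)
nodes = list(range(6))
in_neighbors = {v: [0] for v in range(1, 6)}  # edge 0 -> v for each v
rr_sets = [random_rr_set(nodes, in_neighbors, p=1.0) for _ in range(200)]
seeds = greedy_seed_selection(rr_sets, k=1)
```

The fraction of RR sets covered by the chosen seeds is an unbiased estimator (up to normalization) of their expected influence spread; the paper's contribution is that its G-RR sets extend this estimator to the composite-diversity objective, where set sizes grow and G-HIST is needed to keep sampling efficient.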

Citations (1)
