GENIUS: A Novel Solution for Subteam Replacement with Clustering-based Graph Neural Network (2211.04100v2)

Published 8 Nov 2022 in cs.SI, cs.AI, and cs.IR

Abstract: Subteam replacement is the problem of finding the optimal candidate set of people who can best stand in for an unavailable subset of members (i.e., a subteam) that must leave for certain reasons (e.g., conflicts of interest, employee churn), given a team of people embedded in a social network and working on the same task. Prior work on this problem adopts graph kernels as the criterion for measuring the similarity between the newly optimized team and the original team. However, increasingly large and feature-rich social networks reveal fundamental limitations of existing methods: (1) graph kernel-based approaches cannot capture the key intrinsic correlations among node features, (2) they generally search the entire network for every member to be replaced, which becomes extremely inefficient as the network grows, and (3) the requirement of an equal-sized replacement for the unavailable subteam can be impractical under a limited hiring budget. In this work, we address these limitations of the state of the art in subteam replacement by (1) proposing GENIUS, a novel clustering-based graph neural network (GNN) framework that captures team network knowledge for flexible subteam replacement, and (2) equipping GENIUS with a self-supervised positive team contrasting training scheme to improve team-level representation learning, together with unsupervised node clustering to prune candidates for fast computation. Through extensive empirical evaluations, we demonstrate the efficacy of the proposed method in terms of (1) effectiveness: it selects better candidate members that significantly increase the similarity between the optimized and original teams, and (2) efficiency: it achieves a more than 600-fold speed-up in average running time.
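
For concreteness, below is a minimal, hypothetical sketch of the pipeline the abstract outlines: a GNN encoder that pools member embeddings into a team-level representation, an InfoNCE-style contrastive loss standing in for the "positive team contrasting" scheme, and KMeans clustering to prune the candidate search. All class names, dimensions, and design choices here are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the components the abstract names; not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.cluster import KMeans

class TeamGNN(nn.Module):
    """Two-layer GCN-style encoder followed by mean pooling over team members."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim)
        self.w2 = nn.Linear(hid_dim, hid_dim)

    def node_embeddings(self, x, adj_norm):
        # x: [N, in_dim] node features; adj_norm: [N, N] normalized adjacency (dense for brevity)
        h = F.relu(adj_norm @ self.w1(x))
        return adj_norm @ self.w2(h)

    def team_embedding(self, x, adj_norm, member_idx):
        # Pool the embeddings of the team's members into a team-level representation.
        h = self.node_embeddings(x, adj_norm)
        return h[member_idx].mean(dim=0)

def contrastive_team_loss(anchor, positive, negatives, tau=0.5):
    """InfoNCE-style loss: pull a team embedding toward a 'positive' variant
    (e.g. the team with a member swapped for a similar one) and push it away
    from randomly corrupted teams."""
    pos = F.cosine_similarity(anchor, positive, dim=0) / tau
    neg = torch.stack([F.cosine_similarity(anchor, n, dim=0) / tau for n in negatives])
    return -pos + torch.logsumexp(torch.cat([pos.view(1), neg]), dim=0)

def prune_candidates(node_emb, leaving_idx, n_clusters=20):
    """Restrict the candidate search to nodes whose embedding cluster matches
    that of the departing subteam, instead of scanning the whole network."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(node_emb.detach().cpu().numpy())
    keep = set(km.labels_[leaving_idx])
    return [i for i, c in enumerate(km.labels_) if c in keep]
```

In the paper's actual framework these pieces would be trained and combined jointly; this sketch only illustrates, under the stated assumptions, the moving parts the abstract describes.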
