Determinable and interpretable network representation for link prediction

(2206.05589)
Published Jun 11, 2022 in cs.SI and physics.data-an

Abstract

As intuitive descriptions of complex physical, social, or brain systems, complex networks have fascinated scientists for decades. Recently, network representation, which maps a network or its substructures (such as nodes) into a low-dimensional vector space, has become a focal point for abstracting a network's structural and dynamical attributes. Since current methods are mostly based on machine learning, a black box of input-output data fitting, the dimension of the space is generally indeterminable and its elements are uninterpretable. Although massive efforts have been made to cope with this issue, including automated machine learning by computer scientists and computational theory by mathematicians, the root causes remain unresolved. Given that, this article proposes two determinable and interpretable node representation methods from a physical perspective. To evaluate their effectiveness and generalization, it further proposes Adaptive and Interpretable ProbS (AIProbS), a network-based model that can utilize node representations for link prediction. Experimental results show that AIProbS can reach state-of-the-art precision beyond baseline models and, by and large, achieves a good trade-off with machine learning-based models on precision, determinacy, and interpretability, indicating that physical methods could also play a large role in the study of network representation.
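The abstract does not spell out how ProbS works, but as background, the classical ProbS (probabilistic spreading, also known as mass diffusion) algorithm on a bipartite user-object network is the network-based baseline that AIProbS extends. The sketch below is a minimal NumPy illustration of that classical baseline, not of AIProbS itself; the function name probs_scores and the toy adjacency matrix are hypothetical.

```python
import numpy as np

def probs_scores(A):
    """Classical ProbS (probabilistic spreading / mass diffusion) on a
    bipartite adjacency matrix A (users x objects).

    Each object collected by a user starts with one unit of resource; the
    resource is split equally over the object's users, then over each
    user's objects. The resulting scores rank candidate links per user.
    """
    user_deg = A.sum(axis=1, keepdims=True)  # k(u): objects per user
    obj_deg = A.sum(axis=0, keepdims=True)   # k(o): users per object

    # Guard against isolated nodes (zero degree).
    user_deg[user_deg == 0] = 1
    obj_deg[obj_deg == 0] = 1

    # W[a, b]: fraction of object b's resource that ends up on object a
    # after the object -> user -> object diffusion.
    W = (A / user_deg).T @ (A / obj_deg)

    # Row i gives the final resource on every object for user i.
    return A @ W.T

# Toy usage: 3 users, 4 objects.
A = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]], dtype=float)
scores = probs_scores(A)
scores[A > 0] = -np.inf               # exclude links that already exist
print(np.argsort(-scores, axis=1))    # ranked candidate objects per user
```

The paper's contribution, as described in the abstract, is to make such a diffusion process adaptive and interpretable by feeding it determinable node representations; the details of that coupling are not given here.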
