Recurrent Event Network: Autoregressive Structure Inference over Temporal Knowledge Graphs (1904.05530v4)

Published 11 Apr 2019 in cs.LG, cs.AI, cs.CL, and stat.ML

Abstract: Knowledge graph reasoning is a critical task in natural language processing. The task becomes more challenging on temporal knowledge graphs, where each fact is associated with a timestamp. Most existing methods focus on reasoning at past timestamps and they are not able to predict facts happening in the future. This paper proposes Recurrent Event Network (RE-NET), a novel autoregressive architecture for predicting future interactions. The occurrence of a fact (event) is modeled as a probability distribution conditioned on temporal sequences of past knowledge graphs. Specifically, our RE-NET employs a recurrent event encoder to encode past facts and uses a neighborhood aggregator to model the connection of facts at the same timestamp. Future facts can then be inferred in a sequential manner based on the two modules. We evaluate our proposed method via link prediction at future times on five public datasets. Through extensive experiments, we demonstrate the strength of RE-NET, especially on multi-step inference over future timestamps, and achieve state-of-the-art performance on all five datasets. Code and data can be found at https://github.com/INK-USC/RE-Net.

Citations (57)

Summary

  • The paper introduces RE-Net, an autoregressive model that predicts future events in temporal knowledge graphs by encoding past event sequences.
  • It features a recurrent event encoder and neighborhood aggregator that effectively capture temporal and structural dependencies.
  • Experiments on five datasets demonstrate state-of-the-art multi-step temporal link prediction performance, measured by MRR and Hits@10.

Recurrent Event Network: Autoregressive Structure Inference over Temporal Knowledge Graphs

The paper "Recurrent Event Network: Autoregressive Structure Inference over Temporal Knowledge Graphs" introduces a novel method for reasoning over temporal knowledge graphs (TKGs). Unlike traditional knowledge graphs, TKGs associate each fact with a timestamp, adding complexity to the task of knowledge graph reasoning. This paper makes significant strides in predicting future events in TKGs, a capability that has been underexplored.

Key Contributions

  1. Introduction of RE-Net: The proposed Recurrent Event Network (RE-Net) uses an autoregressive framework to predict future facts in TKGs. The model encodes past events and their interactions within the graph and formulates future predictions from the learned temporal patterns.
  2. Recurrent Event Encoder and Neighborhood Aggregator: RE-Net’s architecture pairs a recurrent event encoder with a neighborhood aggregator. Together they encode past event sequences and model concurrent events, capturing temporal and structural dependencies effectively (a minimal illustrative sketch follows this list).
  3. Sequential Inference: The model supports multi-step inference over future timestamps. It can predict events without requiring ground-truth facts for the preceding timestamps, a feature that distinguishes it from existing models such as Know-Evolve and DyRep.
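
As referenced in the list above, the following minimal sketch (not the authors' implementation) shows how a neighborhood aggregator and a recurrent event encoder can be combined for this kind of prediction. The embedding sizes, the mean-pooling aggregator, and the linear decoder are illustrative assumptions; the paper explores several aggregator variants.

```python
# Minimal sketch of the two components named above (illustrative, not the authors' code).
import torch
import torch.nn as nn


class RENetSketch(nn.Module):
    def __init__(self, num_entities: int, num_relations: int, dim: int = 200):
        super().__init__()
        self.ent_emb = nn.Embedding(num_entities, dim)
        self.rel_emb = nn.Embedding(num_relations, dim)
        # Recurrent event encoder: summarizes the subject's aggregated history.
        self.encoder = nn.GRU(input_size=2 * dim, hidden_size=dim, batch_first=True)
        # Simple decoder scoring every candidate object entity (assumption).
        self.decoder = nn.Linear(3 * dim, num_entities)

    def aggregate_neighborhood(self, subject, events_at_t):
        """Mean-pool (relation, object) embeddings of the subject's concurrent events."""
        pairs = [(r, o) for s, r, o in events_at_t if s == subject]
        if not pairs:
            return torch.zeros(2 * self.ent_emb.embedding_dim)
        rels = self.rel_emb(torch.tensor([r for r, _ in pairs]))
        objs = self.ent_emb(torch.tensor([o for _, o in pairs]))
        return torch.cat([rels, objs], dim=-1).mean(dim=0)

    def forward(self, subject, relation, history):
        """Score candidate objects for (subject, relation, ?) given past graph slices.

        history: non-empty list of graph slices, each a list of (s, r, o) triples, oldest first.
        """
        # One aggregated vector per past timestamp -> input sequence for the GRU.
        steps = torch.stack([self.aggregate_neighborhood(subject, g) for g in history])
        _, h = self.encoder(steps.unsqueeze(0))  # h: (1, 1, dim)
        s = self.ent_emb(torch.tensor([subject])).squeeze(0)
        r = self.rel_emb(torch.tensor([relation])).squeeze(0)
        return self.decoder(torch.cat([s, r, h.view(-1)], dim=-1))  # logits over candidate objects
```

For multi-step inference, the facts predicted at one timestamp can be appended to the history and the procedure repeated, mirroring the sequential inference described in point 3.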

Experimental Evaluation

The authors evaluated RE-Net on five public datasets drawn from varied sources, including ICEWS and GDELT, focusing on temporal link prediction at future timestamps. The model consistently achieved state-of-the-art performance, even in multi-step inference scenarios, significantly outperforming existing methods on Mean Reciprocal Rank (MRR) and Hits@10.
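
For reference, the two reported metrics have standard definitions over the 1-based ranks assigned to the ground-truth entities. The short helpers below follow those standard definitions; raw versus filtered ranking conventions are as described in the paper and not reproduced here.

```python
# Standard link-prediction metrics over 1-based ranks of the true entity.
def mrr(ranks):
    """Mean reciprocal rank."""
    return sum(1.0 / r for r in ranks) / len(ranks)

def hits_at_k(ranks, k=10):
    """Fraction of queries whose true entity is ranked within the top k."""
    return sum(1 for r in ranks if r <= k) / len(ranks)

# Example: three test queries whose true entities are ranked 1, 4, and 20.
print(mrr([1, 4, 20]), hits_at_k([1, 4, 20], k=10))
```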

Implications and Future Directions

The introduction of RE-Net has substantial implications, both practical and theoretical. Practically, its ability to model temporal dynamics in knowledge graphs supports applications across numerous domains, from event prediction to complex temporal reasoning tasks. Theoretically, RE-Net’s autoregressive approach provides a framework for exploring more sophisticated temporal and relational dependencies.

Looking forward, further work could improve computational efficiency and extend the model to handle events with durations rather than only discrete occurrences. The modularity of the RE-Net architecture also offers opportunities for integrating additional forms of data or hybrid approaches, potentially broadening its applicability across diverse AI contexts.

Conclusion

This paper presents a significant advance in temporal knowledge graph reasoning by introducing RE-Net. By combining autoregressive modeling with a robust architectural framework, RE-Net sets a new benchmark for predicting future interactions in temporal knowledge graphs.
