
Efficient RDF Graph Storage based on Reinforcement Learning (2010.11538v1)

Published 22 Oct 2020 in cs.DB

Abstract: Knowledge graphs are an important cornerstone of artificial intelligence. The construction and release of large-scale knowledge graphs in various domains pose new challenges for knowledge graph data management. Owing to their maturity and stability, relational databases are also suitable for RDF data storage. However, the complex structure of RDF graphs makes it challenging to design a storage structure for them in a relational database. To address this problem, this paper adopts reinforcement learning (RL) to optimize the storage partitioning of RDF graphs in a relational database. We cast graph storage design as a Markov decision process and develop a reinforcement learning algorithm for it. To make RL-based storage design effective, we propose a data-feature extraction method for RDF tables and a query-rewriting priority policy applied during model training. Extensive experimental results demonstrate that our approach outperforms existing RDF storage design methods.
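The abstract's core idea, casting relational storage layout for an RDF graph as a Markov decision process, can be illustrated with a toy sketch. Everything below is an assumption for illustration, not the authors' actual design: the predicates, the workload, the cost model (tables touched plus their widths per query), and the use of plain tabular Q-learning where the state is a predicate-to-table assignment and an action moves one predicate to another table.

```python
import random
from itertools import product

# Hypothetical toy setup: four RDF predicates and a query workload,
# where each query touches a subset of predicates.
PREDICATES = ["type", "name", "knows", "worksFor"]
WORKLOAD = [{"type", "name"}, {"knows", "worksFor"}, {"type", "name"}]

def cost(state):
    """Illustrative workload cost: for each query, sum the widths of the
    tables it must read (co-locating a query's predicates is cheap;
    dragging in unrelated predicates is expensive)."""
    tables = {}
    for p, t in state.items():
        tables.setdefault(t, set()).add(p)
    total = 0
    for q in WORKLOAD:
        touched = {state[p] for p in q}
        total += sum(len(tables[t]) for t in touched)
    return total

def step(state, action):
    """MDP transition: action = (predicate index, target table id).
    Reward is the negative workload cost of the resulting layout."""
    p, table = action
    nxt = dict(state)
    nxt[PREDICATES[p]] = table
    return nxt, -cost(nxt)

def train(episodes=300, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning over layouts with two candidate tables."""
    rng = random.Random(seed)
    actions = list(product(range(len(PREDICATES)), range(2)))
    Q = {}
    for _ in range(episodes):
        state = {p: rng.randrange(2) for p in PREDICATES}
        for _ in range(10):  # short episodes suffice for this toy space
            key = tuple(sorted(state.items()))
            if rng.random() < eps:
                a = rng.choice(actions)  # explore
            else:
                a = max(actions, key=lambda x: Q.get((key, x), 0.0))
            nxt, r = step(state, a)
            nkey = tuple(sorted(nxt.items()))
            best = max(Q.get((nkey, x), 0.0) for x in actions)
            Q[(key, a)] = Q.get((key, a), 0.0) + alpha * (
                r + gamma * best - Q.get((key, a), 0.0))
            state = nxt
    return Q

def greedy_layout(Q, steps=10):
    """Roll out the greedy policy from an all-in-one-table layout."""
    actions = list(product(range(len(PREDICATES)), range(2)))
    state = {p: 0 for p in PREDICATES}
    for _ in range(steps):
        key = tuple(sorted(state.items()))
        a = max(actions, key=lambda x: Q.get((key, x), 0.0))
        state, _ = step(state, a)
    return state

layout = greedy_layout(train())
```

The paper's actual method additionally extracts data features from RDF tables and uses a query-rewriting priority policy during training; this sketch only shows the MDP framing with a hand-written cost model standing in for real query execution feedback.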

Citations (1)