Abstract

We study online graph queries that retrieve nearby nodes of a query node from a large network. To answer such queries with high throughput and low latency, we partition the graph and process the data in parallel across a cluster of servers. State-of-the-art distributed graph querying systems place each graph partition on a separate server, where query answering over that partition takes place. This design has two major disadvantages. First, the router must maintain a fixed routing table, which makes these systems less flexible with respect to query routing, fault tolerance, and graph updates. Second, the graph must be partitioned so that the workload across the servers is balanced and inter-machine communication is minimized; moreover, the existing partitions must be updated as the workload over graph nodes changes. However, graph partitioning, online monitoring of workloads, and dynamically updating the graph partitions are all expensive. In this work, we mitigate both problems by decoupling graph storage from query processors, and by developing smart routing strategies that improve cache locality in the query processors. Since a query processor is no longer assigned any fixed part of the graph, it is equally capable of handling any request, which facilitates load balancing and fault tolerance. At the same time, our smart routing strategies allow query processors to effectively leverage their cache contents, reducing the overall impact of how the graph is partitioned across storage servers. A detailed experimental evaluation with several real-world, large graph datasets demonstrates that our proposed framework, gRouting, achieves up to an order of magnitude higher query throughput than existing graph querying systems that employ expensive graph partitioning and re-partitioning strategies, even though gRouting uses only simple hash partitioning of the data.
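The core architectural idea, stateless query processors behind a locality-aware router, can be made concrete with a short sketch. The abstract does not describe gRouting's actual routing strategies, so everything below is an illustrative assumption: the class name LocalityAwareRouter, the hash-affinity rule, and the spill threshold are hypothetical stand-ins, not the authors' implementation.

import hashlib

class LocalityAwareRouter:
    """Illustrative router for a decoupled design (assumed, not gRouting's
    actual code): any query processor can serve any request, but queries
    about the same graph region are steered to the same processor so that
    its cache stays warm."""

    def __init__(self, processors, max_outstanding=32):
        self.processors = list(processors)               # query-processor endpoints (hypothetical)
        self.pending = {p: 0 for p in self.processors}   # outstanding queries per processor
        self.max_outstanding = max_outstanding           # spill threshold for load balancing

    def route(self, query_node_id):
        # Affinity rule (an assumption): a stable hash of the query node
        # picks a preferred processor, so repeated queries around that
        # node reuse the graph data already cached there.
        h = int(hashlib.md5(str(query_node_id).encode()).hexdigest(), 16)
        preferred = self.processors[h % len(self.processors)]
        if self.pending[preferred] < self.max_outstanding:
            self.pending[preferred] += 1
            return preferred
        # Because no processor owns a fixed partition, an overloaded
        # preferred processor can simply be bypassed: spill to the
        # least-loaded one, trading some cache locality for load balance.
        spill = min(self.processors, key=self.pending.get)
        self.pending[spill] += 1
        return spill

    def complete(self, processor):
        # Invoked when a processor finishes a query.
        self.pending[processor] -= 1

For example, router = LocalityAwareRouter(["qp-0", "qp-1", "qp-2"]) sends repeated queries on the same node id to the same processor until that processor backs up, after which any other processor can take over. The same property is what makes fault tolerance simpler in a decoupled design: a failed processor's share of queries can be absorbed by any survivor, with no repartitioning of the graph.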
