Scaling Out ACID Applications with Operation Partitioning (1804.01942v1)

Published 5 Apr 2018 in cs.DC and cs.DB

Abstract: OLTP applications with high workloads that cannot be served by a single server need to scale out to multiple servers. Typically, scaling out entails assigning a different partition of the application state to each server. But data partitioning is at odds with preserving the strong consistency guarantees of ACID transactions, a fundamental building block of many OLTP applications. The more we scale out and spread data across multiple servers, the more frequent distributed transactions accessing data at different servers become. With a large number of servers, the high cost of distributed transactions makes scaling out ineffective or even detrimental. In this paper we propose Operation Partitioning, a novel paradigm to scale out OLTP applications that require ACID guarantees. Operation Partitioning indirectly partitions data across servers by partitioning the application's operations through static analysis. This partitioning of operations yields a lock-free Conveyor Belt protocol for distributed coordination, which can scale out unmodified applications running on top of unmodified database management systems. We implement the protocol in a system called Elia and use it to scale out two applications, TPC-W and RUBiS. Our experiments show that Elia can increase maximum throughput by up to 4.2x and reduce latency by up to 58.6x compared to MySQL Cluster while at the same time providing a stronger isolation guarantee (serializability instead of read committed).
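To make the core idea concrete, here is a minimal, hypothetical sketch of operation partitioning as described in the abstract: whole operations are statically assigned to servers, so data ends up partitioned indirectly by the operations that touch it. The operation names, the partition map, and the `Server` class are illustrative assumptions, and the paper's Conveyor Belt coordination protocol is deliberately omitted.

```python
from dataclasses import dataclass

# Assumed static operation-to-server map (e.g., derived offline by static
# analysis of which data each operation accesses); names are illustrative.
OPERATION_PARTITION = {
    "place_order": 0,     # all order writes handled by server 0
    "update_stock": 1,    # all stock updates handled by server 1
    "browse_catalog": 1,  # reads co-located with the data they need
}

@dataclass
class Operation:
    name: str
    args: dict

class Server:
    """Stand-in for an unmodified DBMS instance owning one partition."""
    def __init__(self, sid: int):
        self.sid = sid
        self.local_db: dict[str, list[dict]] = {}

    def execute(self, op: Operation) -> str:
        # Each operation runs as a purely local transaction here;
        # cross-partition coordination (the Conveyor Belt protocol) is omitted.
        self.local_db.setdefault(op.name, []).append(op.args)
        return f"server {self.sid} executed {op.name}"

def route(op: Operation, servers: list[Server]) -> str:
    # Send the whole operation to the server that owns its partition,
    # instead of splitting its data accesses across servers.
    return servers[OPERATION_PARTITION[op.name]].execute(op)

servers = [Server(0), Server(1)]
print(route(Operation("place_order", {"item": 42}), servers))
print(route(Operation("update_stock", {"item": 42, "delta": -1}), servers))
```

Because every operation of a given type lands on the same server, the data that type of operation touches stays local to that server, which is what avoids distributed transactions for partitionable workloads.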

Authors (3)
  1. Habib Saissi (3 papers)
  2. Marco Serafini (17 papers)
  3. Neeraj Suri (18 papers)
