Emergent Mind

Abstract

Byte-addressable non-volatile main memory (NVM) demands transactional mechanisms to access and manipulate data on NVM atomically. These transactional mechanisms often employ logging (undo logging or redo logging). However, logging can introduce large runtime overhead (8%-49% in our evaluation), and 41%-78% of that overhead is attributable to frequent cache-line flushing. Such large overhead significantly diminishes the performance benefits offered by NVM. In this paper, we introduce a new method to reduce the overhead of cache-line flushing for logging-based transactions. Unlike the traditional approach, which works at the program level and leverages program semantics to reduce logging overhead, we introduce architecture awareness. In particular, we do not flush certain cache blocks, as long as they are estimated to be evicted from the cache by the hardware caching mechanism (e.g., the cache replacement algorithm). Furthermore, we coalesce cache blocks with low dirtiness to improve the efficiency of cache-line flushing. We implement Archapt, an architecture-aware, high-performance transaction runtime system for persistent memory. Our results show that, compared with undo logging (PMDK) and redo logging (Mnemosyne), Archapt reduces cache-line flushing by 66% and improves system throughput by 19% on average (42% at most). Our crash tests with four hardware caching policies show that Archapt provides a strong crash-consistency guarantee.
