Verifying and Optimizing Compact NUMA-Aware Locks on Weak Memory Models (2111.15240v2)

Published 30 Nov 2021 in cs.OS

Abstract: Developing concurrent software is challenging, especially if it has to run on modern architectures with Weak Memory Models (WMMs) such as ARMv8, Power, or RISC-V. For the sake of performance, WMMs allow hardware and compilers to aggressively reorder memory accesses. To guarantee correctness, developers have to carefully place memory barriers in the code to enforce ordering among critical memory operations. While WMM architectures are growing in popularity, identifying the necessary and sufficient barriers of complex synchronization primitives is notoriously difficult. Unfortunately, publications often consider barriers to be just implementation details and omit them. In this technical note, we report our efforts in verifying the correctness of the Compact NUMA-Aware (CNA) lock algorithm on WMMs. The CNA lock is of special interest because it has been proposed as a new slowpath for Linux qspinlock, the main spinlock in Linux. Besides determining a correct and efficient set of barriers for the original CNA algorithm on WMMs, we investigate the correctness of Linux qspinlock and the latest Linux CNA patch (v15) on the memory models LKMM, ARMv8, and Power. Surprisingly, we have found that Linux qspinlock and, consequently, Linux CNA are incorrect according to LKMM, but are still correct when compiled to ARMv8 or Power.
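To give a flavor of the barrier placement the abstract refers to, the sketch below shows a minimal test-and-set spinlock written with C11 atomics. It is not the CNA or qspinlock algorithm studied in the paper, and the `spinlock_t`, `spin_lock`, and `spin_unlock` names are illustrative only; it merely shows the acquire/release ordering a correct lock must enforce on weak memory models such as ARMv8, Power, or RISC-V.

```c
/* Minimal sketch of barrier placement in a spinlock, using C11 atomics.
 * This is NOT the CNA or qspinlock algorithm from the paper; it only
 * illustrates why acquire/release ordering matters on weak memory models. */
#include <stdatomic.h>
#include <stdbool.h>

typedef struct {
    atomic_bool locked;
} spinlock_t;

static void spin_lock(spinlock_t *l)
{
    /* Acquire ordering on the successful exchange prevents loads and
     * stores of the critical section from being reordered before the
     * lock is taken; a relaxed exchange here would be incorrect on
     * ARMv8, Power, or RISC-V. */
    while (atomic_exchange_explicit(&l->locked, true, memory_order_acquire))
        ;  /* spin until the lock is free */
}

static void spin_unlock(spinlock_t *l)
{
    /* Release ordering keeps critical-section accesses from leaking
     * past the unlocking store. */
    atomic_store_explicit(&l->locked, false, memory_order_release);
}
```

Queue-based locks such as CNA and Linux qspinlock need many more such ordering decisions (hand-offs between queue nodes, stealing, NUMA-aware reordering of waiters), which is exactly why the paper verifies the necessary and sufficient barriers rather than relying on intuition.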

Authors (7)
  1. Antonio Paolillo (19 papers)
  2. Hernán Ponce-de-León (5 papers)
  3. Thomas Haas (2 papers)
  4. Diogo Behrens (2 papers)
  5. Rafael Chehab (1 paper)
  6. Ming Fu (7 papers)
  7. Roland Meyer (41 papers)
Citations (2)
