Declarative Linearizability Proofs for Descriptor-Based Concurrent Helping Algorithms (2307.04653v1)

Published 10 Jul 2023 in cs.LO and cs.DC

Abstract: Linearizability is a standard correctness criterion for concurrent algorithms, typically proved by establishing the algorithms' linearization points. However, relying on linearization points leads to proofs that are implementation-dependent, and thus hinder abstraction and reuse. In this paper we show that one can develop more declarative proofs by foregoing linearization points and instead relying on a technique of axiomatization of visibility relations. While visibility relations have been considered before, ours is the first study where the challenge is to formalize the helping nature of the algorithms. In particular, we show that by axiomatizing the properties of separation between events that contain bunches of help requests, we can extract what is common for high-level understanding of several descriptor-based helping algorithms of Harris et al. (RDCSS, MCAS, and optimizations), and produce novel proofs of their linearizability that share significant components.
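To make the "descriptor-based helping" pattern referenced in the abstract concrete, the sketch below shows an RDCSS-style operation in Java. It is a minimal illustration of the general technique from Harris et al., not the paper's formalization: all class and field names (Descriptor, control, data, rdcss, complete) are illustrative choices, and the visibility-relation axiomatization the paper develops is not reflected in the code.

```java
import java.util.concurrent.atomic.AtomicReference;

// Sketch of an RDCSS-style (restricted double-compare single-swap) operation.
// A descriptor bundles the pending operation so that any thread that reads it
// can "help" drive the operation to completion.
final class Rdcss {

    static final class Descriptor {
        final AtomicReference<Object> control; // a1: only compared, never written
        final Object expectedControl;          // o1
        final AtomicReference<Object> data;    // a2: swapped on success
        final Object expectedData;             // o2
        final Object newData;                  // n2

        Descriptor(AtomicReference<Object> control, Object expectedControl,
                   AtomicReference<Object> data, Object expectedData, Object newData) {
            this.control = control; this.expectedControl = expectedControl;
            this.data = data; this.expectedData = expectedData; this.newData = newData;
        }
    }

    // Attempts the double-compare single-swap; returns the value read from `data`.
    static Object rdcss(Descriptor d) {
        while (true) {
            Object cur = d.data.get();
            if (cur instanceof Descriptor) {
                complete((Descriptor) cur);      // another operation is pending: help it
            } else if (cur != d.expectedData) {
                return cur;                      // data word changed: the operation fails
            } else if (d.data.compareAndSet(d.expectedData, d)) {
                complete(d);                     // descriptor installed: finish our own operation
                return d.expectedData;
            }
            // CAS lost a race: retry
        }
    }

    // Completion is idempotent, so any number of helpers may run it concurrently.
    static void complete(Descriptor d) {
        Object outcome =
            (d.control.get() == d.expectedControl) ? d.newData : d.expectedData;
        d.data.compareAndSet(d, outcome);
    }
}
```

The point relevant to the paper is that the effect of an operation may be performed by a helper thread rather than the thread that started it, which is what makes fixed linearization points awkward and motivates the declarative, visibility-relation-based proofs described above.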
