
Taming Modal Impredicativity: Superlazy Reduction (0810.2891v2)

Published 16 Oct 2008 in cs.LO

Abstract: Pure, or type-free, Linear Logic proof nets are Turing complete once cut-elimination is considered as computation. We introduce modal impredicativity as a new form of impredicativity that makes the cost of cut-elimination problematic from a complexity point of view. Modal impredicativity occurs when, during reduction, the conclusion of a residual of a box b interacts with a node that belongs to the proof net inside another residual of b. Technically speaking, superlazy reduction is a new notion of reduction that allows one to control modal impredicativity. More specifically, superlazy reduction replicates a box only when all of its copies are opened. This makes the overall cost of reducing a proof net finite and predictable. Specifically, superlazy reduction applied to any pure proof net takes primitive recursive time. Moreover, any primitive recursive function can be computed by a pure proof net via superlazy reduction.
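As a rough intuition for the replication condition described in the abstract (not the paper's actual proof-net machinery), the following Python toy duplicates a shared box only when every residual copy has been opened. The names Box, Consumer, and try_duplicate are invented purely for this sketch.

    from dataclasses import dataclass, field
    from typing import List, Optional


    @dataclass
    class Consumer:
        """Holds one residual copy of a box; `opened` marks that the copy was opened."""
        name: str
        opened: bool = False
        copy: Optional[str] = None


    @dataclass
    class Box:
        """A box whose contents are shared by several residual copies."""
        contents: str
        consumers: List[Consumer] = field(default_factory=list)


    def try_duplicate(box: Box) -> bool:
        """Superlazy step (sketch): replicate the box into its consumers only
        when *all* residual copies are opened; otherwise postpone and return False."""
        if not all(c.opened for c in box.consumers):
            return False              # duplication is postponed
        for c in box.consumers:
            c.copy = box.contents     # every copy is opened, so replication fires
        return True


    if __name__ == "__main__":
        b = Box("payload", [Consumer("u"), Consumer("v")])
        print(try_duplicate(b))       # False: "u" and "v" have not opened their copies
        b.consumers[0].opened = True
        b.consumers[1].opened = True
        print(try_duplicate(b))       # True: all copies opened, box is replicated

The point of the sketch is only that duplication is gated on all copies being opened, which is what keeps the number of replications, and hence the reduction cost, under control in the paper's setting.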

Citations (3)
