Abstracting Gradual Typing Moving Forward: Precise and Space-Efficient (Technical Report) (2010.14094v2)

Published 27 Oct 2020 in cs.PL

Abstract: Abstracting Gradual Typing (AGT) is a systematic approach to designing gradually-typed languages. Languages developed using AGT automatically satisfy the formal semantic criteria for gradual languages identified by Siek et al. [2015]. Nonetheless, vanilla AGT semantics can still have important shortcomings. First, a gradual language's runtime checks should preserve the space-efficiency guarantees inherent to the underlying static and dynamic languages. To the contrary, the default operational semantics of AGT break proper tail calls. Second, a gradual language's runtime checks should enforce basic modular type-based invariants expected from the static type discipline. To the contrary, the default operational semantics of AGT may fail to enforce some invariants in surprising ways. We demonstrate this in the $\text{GTFL}_\lesssim$ language of Garcia et al. [2016]. This paper addresses both problems at once by refining the theory underlying AGT's dynamic checks. Garcia et al. [2016] observe that AGT involves two abstractions of static types: one for the static semantics and one for the dynamic semantics. We recast the latter as an abstract interpretation of subtyping itself, while gradual types still abstract static types. Then we show how forward-completeness [Giacobazzi and Quintarelli 2001] is key to supporting both space-efficient execution and reliable runtime type enforcement.

Citations (14)
