High-Level Representation of Benchmark Families for Petri Games (1904.05621v1)

Published 11 Apr 2019 in cs.GT and cs.LO

Abstract: Petri games have been introduced as a multi-player game model with causal memory to address the synthesis of distributed systems. For Petri games with one environment player and an arbitrary bounded number of system players, deciding the existence of a safety strategy is EXPTIME-complete. This result forms the basis of the tool ADAM, which implements an algorithm for the synthesis of distributed controllers from Petri games. To evaluate the tool, it has been run on a series of parameterized benchmarks from manufacturing and workflow scenarios. In this paper, we introduce a new way to represent benchmark families for the distributed synthesis problem modeled with Petri games: the user can specify an entire benchmark family as one parameterized high-level net. We describe example benchmark families as high-level versions of a Petri game and exhibit an instantiation yielding a concrete 1-bounded Petri game. By examining the high-level Petri games, we identify improvements regarding either the size or the functionality of the benchmark families.
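
The abstract's central idea, specifying a benchmark family once as a parameterized high-level net and instantiating it into a concrete low-level net, can be illustrated with a minimal sketch. The Python code below is a hypothetical illustration only: it is not the paper's construction and not ADAM's input format, and the names HLPlace, HLTransition, and instantiate are assumptions. It expands a toy high-level net, parameterized by a colour domain of size n, into low-level places and transitions by enumerating all colour bindings (the standard unfolding idea); it deliberately omits the game-specific structure such as the environment/system player distinction and strategies.

# Minimal sketch (assumed names, not the ADAM tool): unfolding a parameterized
# high-level net into a concrete low-level net by enumerating colour bindings.
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class HLPlace:
    name: str
    colours: tuple          # colour domain of the place, e.g. workpiece indices

@dataclass(frozen=True)
class HLTransition:
    name: str
    variables: tuple        # variable names occurring on surrounding arcs
    domains: tuple          # one colour domain per variable
    pre: tuple              # (place name, variable) pairs consumed
    post: tuple             # (place name, variable) pairs produced

def instantiate(places, transitions):
    """Expand a high-level net into low-level places and transitions."""
    # One low-level place per (high-level place, colour) pair.
    ll_places = {(p.name, c) for p in places for c in p.colours}
    ll_transitions = []
    for t in transitions:
        # One low-level transition per binding of the variables to colours.
        for binding in product(*t.domains):
            env = dict(zip(t.variables, binding))
            pre = tuple((pl, env[v]) for pl, v in t.pre)
            post = tuple((pl, env[v]) for pl, v in t.post)
            ll_transitions.append((f"{t.name}{binding}", pre, post))
    return ll_places, ll_transitions

# Toy family parameterized by n: changing n yields another family member.
n = 3
colours = tuple(range(n))
places = [HLPlace("Env", colours), HLPlace("Sys", colours)]
transitions = [HLTransition("deliver", ("x",), (colours,),
                            pre=(("Env", "x"),), post=(("Sys", "x"),))]
ll_p, ll_t = instantiate(places, transitions)
print(len(ll_p), "low-level places,", len(ll_t), "low-level transitions")

Varying the parameter n in this sketch produces the different members of the family from one description, which is the convenience the high-level representation is meant to provide; the paper's actual high-level Petri games additionally carry the game semantics needed for synthesis.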

Citations (4)
