
Mercenary punishment in structured populations (2111.04480v2)

Published 8 Nov 2021 in physics.soc-ph and cs.GT

Abstract: Punishing those who refuse to participate in common efforts is a well-known and intensively studied way to maintain cooperation among self-interested agents. But this act is costly; hence punishers, who are generally also engaged in the original joint venture, become vulnerable, which jeopardizes the effectiveness of this incentive. As an alternative, we may hire special players whose only duty is to watch the population and punish defectors. Such police-like, or mercenary, punishment can be maintained by a tax-based fund. If this tax is negligible, a cyclic dominance may emerge among the different strategies. When the tax is non-negligible, this solution disappears. In the latter case, the fine level becomes a significant factor that determines whether punisher players coexist with cooperators or, alternatively, with defectors. The maximal average outcome is reached at an intermediate punishment cost. Our observations highlight that special care should be taken when this kind of punishment and the accompanying tax are introduced to reach a collective goal.

Citations (34)
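
The abstract describes a public goods game in which dedicated "mercenary" punishers are financed by a tax on the remaining players. The paper's exact payoff definitions are not reproduced on this page, so the sketch below only illustrates one plausible payoff structure for a single mixed group; the parameter names (r, cost, fine, punish_cost, tax) and the funding rule are assumptions for illustration, not the authors' model.

```python
# A minimal sketch (not the paper's exact model) of payoffs in a public goods
# game with tax-funded "mercenary" punishers. All parameters and the payoff
# structure below are illustrative assumptions.

def group_payoffs(n_coop, n_def, n_pun, r=3.5, cost=1.0,
                  fine=0.5, punish_cost=0.3, tax=0.1):
    """Return (payoff_C, payoff_D, payoff_P) for one mixed group.

    Assumed rules (hypothetical):
      * only cooperators contribute `cost` to the common pool,
      * the pool is multiplied by `r` and shared among all group members,
      * each punisher imposes `fine` on every defector,
      * punishers pay `punish_cost` per punished defector, but are supported
        by a fund collected as `tax` from cooperators and defectors.
    """
    group_size = n_coop + n_def + n_pun
    if group_size == 0:
        return 0.0, 0.0, 0.0

    pool_share = r * cost * n_coop / group_size

    payoff_C = pool_share - cost - tax
    payoff_D = pool_share - tax - fine * n_pun
    fund_per_punisher = tax * (n_coop + n_def) / max(n_pun, 1)
    payoff_P = fund_per_punisher - punish_cost * n_def

    return payoff_C, payoff_D, payoff_P


if __name__ == "__main__":
    # Example: a group of 3 cooperators, 1 defector and 1 mercenary punisher.
    pc, pd, pp = group_payoffs(n_coop=3, n_def=1, n_pun=1)
    print(f"cooperator: {pc:.2f}, defector: {pd:.2f}, punisher: {pp:.2f}")
```

In the paper's structured-population setting such payoffs would feed a strategy-imitation dynamic on a lattice; the abstract's findings (cyclic dominance under negligible tax, fine-dependent coexistence otherwise) refer to the outcome of that dynamic, which this sketch does not attempt to reproduce.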
