
Motivating Time-Inconsistent Agents: A Computational Approach (1601.00479v1)

Published 4 Jan 2016 in cs.CC and cs.DS

Abstract: In this paper we investigate the computational complexity of motivating time-inconsistent agents to complete long-term projects. We resort to an elegant graph-theoretic model, introduced by Kleinberg and Oren, which consists of a task graph $G$ with $n$ vertices, including a source $s$ and target $t$, and an agent that incrementally constructs a path from $s$ to $t$ in order to collect rewards. The twist is that the agent is present-biased and discounts future costs and rewards by a factor $\beta\in [0,1]$. Our design objective is to ensure that the agent reaches $t$, i.e., completes the project, for as little reward as possible. Such graphs are called motivating. We consider two strategies. First, we place a single reward $r$ at $t$ and try to guide the agent by removing edges from $G$. We prove that deciding the existence of such motivating subgraphs is NP-complete if $r$ is fixed. More importantly, we generalize our reduction to a hardness of approximation result for computing the minimum $r$ that admits a motivating subgraph. In particular, we show that no polynomial-time approximation to within a ratio of $\sqrt{n}/4$ or less is possible, unless ${\rm P}={\rm NP}$. Furthermore, we develop a $(1+\sqrt{n})$-approximation algorithm and thus settle the approximability of computing motivating subgraphs. Second, we study motivating reward configurations, where non-negative rewards $r(v)$ may be placed on arbitrary vertices $v$ of $G$. The agent only receives the rewards of visited vertices. Again we give an NP-completeness result for deciding the existence of a motivating reward configuration within a fixed budget $b$. This result even holds if $b=0$, which in turn implies that no efficient approximation of a minimum $b$ to within any ratio greater than or equal to $1$ is possible, unless ${\rm P}={\rm NP}$.
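The present-biased traversal described in the abstract can be sketched in code. In the Kleinberg–Oren model, an agent at vertex $v$ perceives the cost of continuing via edge $(v, w)$ as the full cost of that edge plus $\beta$ times the cheapest remaining cost from $w$ to $t$, and the reward $r$ at $t$ as $\beta r$; it proceeds only while the best perceived cost does not exceed the perceived reward. The following is a minimal sketch under that reading; the graph representation, function names, and the example instance are illustrative assumptions, not artifacts from the paper.

```python
# Sketch of a present-biased agent walking a task graph (Kleinberg-Oren model).
# graph: dict mapping each vertex to {neighbor: edge_cost}.
import heapq

def shortest_dist(graph, t):
    """Dijkstra from t over reversed edges: d[v] = cheapest cost of a v -> t path."""
    rev = {}
    for u, nbrs in graph.items():
        for w, c in nbrs.items():
            rev.setdefault(w, {})[u] = c
    dist = {t: 0.0}
    pq = [(0.0, t)]
    while pq:
        d, v = heapq.heappop(pq)
        if d > dist.get(v, float("inf")):
            continue
        for u, c in rev.get(v, {}).items():
            nd = d + c
            if nd < dist.get(u, float("inf")):
                dist[u] = nd
                heapq.heappush(pq, (nd, u))
    return dist

def agent_walk(graph, s, t, beta, r):
    """Simulate the agent; return its path if it reaches t, else None (abandonment)."""
    d = shortest_dist(graph, t)
    path, v = [s], s
    while v != t:
        # Perceived cost of leaving via (v, w): full first-edge cost c,
        # plus the discounted cheapest remainder beta * d(w).
        options = [(c + beta * d[w], w) for w, c in graph.get(v, {}).items() if w in d]
        if not options:
            return None
        cost, w = min(options)
        if cost > beta * r:   # the reward at t is also discounted by beta
            return None       # agent gives up mid-project
        path.append(w)
        v = w
    return path
```

For example, on a path $s \to a \to t$ with edge costs 1 and 3 and $\beta = 0.5$, the agent at $s$ perceives cost $1 + 0.5 \cdot 3 = 2.5$; with $r = 10$ the perceived reward $\beta r = 5$ suffices at every step and the agent finishes, while with $r = 4$ it abandons immediately since $2.5 > 2$. The agent re-evaluates at every vertex, which is exactly what makes edge removal a useful commitment device: pruning tempting detours can lower the reward needed to keep each step's perceived cost below $\beta r$.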

Citations (20)

