LTL-Based Non-Markovian Inverse Reinforcement Learning (2110.13616v2)

Published 26 Oct 2021 in cs.FL

Abstract: The successes of reinforcement learning in recent years are underpinned by the characterization of suitable reward functions. However, in settings where such rewards are non-intuitive, difficult to define, or otherwise error-prone in their definition, it is useful to instead learn the reward signal from expert demonstrations. This is the crux of inverse reinforcement learning (IRL). While eliciting learning requirements in the form of scalar reward signals has been shown to be effective, such representations lack explainability and lead to opaque learning. We aim to mitigate this situation by presenting a novel IRL method for eliciting declarative learning requirements in the form of a popular formal logic -- Linear Temporal Logic (LTL) -- from a set of traces given by the expert policy. A key novelty of the proposed approach is a quantitative semantics for the satisfaction of an LTL formula by a word that, following Occam's razor, incentivizes simpler explanations. Given a sample $S=(P,N)$ consisting of positive traces $P$ and negative traces $N$, the proposed algorithms automate the search for a formula $\varphi$ which provides the simplest explanation (in the $GF$ fragment of LTL) of the sample. We have implemented this approach as an open-source tool QuantLearn to perform logic-based non-Markovian IRL. Our results demonstrate the feasibility of the proposed approach in eliciting intuitive LTL-based reward signals from noisy data.
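
As a rough illustration of the search described in the abstract, the sketch below enumerates candidate formulas of the form $GF(a_1 \wedge \dots \wedge a_k)$ by increasing size and scores each against the positive and negative traces, returning the smallest one that separates the sample well enough. This is a minimal sketch only: the finite-trace reading of $GF$, the scoring rule, and names such as `learn_simplest_GF` are assumptions made for illustration and do not reflect QuantLearn's actual API or the paper's quantitative semantics.

```python
# Minimal sketch (not QuantLearn): search for the simplest GF-fragment formula
# GF(a1 & ... & ak) explaining a sample S = (P, N) of positive/negative traces.
# Traces are lists of sets of atomic propositions. The finite-trace reading of
# GF(phi) used here ("phi holds in the final state") is an assumption.
from itertools import combinations

def satisfies_gf_conjunction(trace, atoms):
    # Assumed finite-trace reading: GF(a1 & ... & ak) holds iff every atom
    # of the conjunction is true in the last state of the trace.
    return bool(trace) and all(a in trace[-1] for a in atoms)

def score(atoms, positives, negatives):
    # Fraction of correctly classified traces; a score below 1.0 can be
    # tolerated to cope with noisy samples.
    correct = sum(satisfies_gf_conjunction(t, atoms) for t in positives)
    correct += sum(not satisfies_gf_conjunction(t, atoms) for t in negatives)
    return correct / (len(positives) + len(negatives))

def learn_simplest_GF(positives, negatives, alphabet, min_score=1.0):
    # Enumerate conjunctions by increasing size (Occam's razor): the first
    # candidate reaching the required score is the simplest explanation.
    for size in range(1, len(alphabet) + 1):
        for atoms in combinations(sorted(alphabet), size):
            if score(atoms, positives, negatives) >= min_score:
                return atoms  # read as GF(a1 & ... & ak)
    return None

# Toy usage over propositions {"p", "q"}.
P = [[{"p"}, {"p", "q"}], [{"q"}, {"p"}]]   # positive traces
N = [[{"p"}, set()], [set(), {"q"}]]        # negative traces
print(learn_simplest_GF(P, N, {"p", "q"}))  # ('p',), i.e. GF(p)
```

Lowering `min_score` below 1.0 mimics, in a crude way, the tolerance to noisy demonstrations that the paper's quantitative semantics is designed to provide.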

Citations (1)