Intervention Efficient Algorithm for Two-Stage Causal MDPs (2111.00886v1)

Published 1 Nov 2021 in cs.LG and cs.AI

Abstract: We study Markov Decision Processes (MDPs) wherein states correspond to causal graphs that stochastically generate rewards. In this setup, the learner's goal is to identify atomic interventions that lead to high rewards by intervening on variables at each state. Generalizing the recent causal-bandit framework, the current work develops (simple) regret minimization guarantees for two-stage causal MDPs with a parallel causal graph at each state. We propose an algorithm that achieves an instance-dependent regret bound. A key feature of our algorithm is that it utilizes convex optimization to address the exploration problem. We identify classes of instances wherein our regret guarantee is essentially tight, and experimentally validate our theoretical results.
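
The abstract notes that the algorithm uses convex optimization to drive exploration. As a rough illustration only, the sketch below (in Python, assuming the cvxpy package) solves a hypothetical budget-allocation problem: it splits an exploration budget across candidate atomic interventions so as to minimize a worst-case variance proxy. The number of interventions `K`, the weights `w`, and the objective are illustrative assumptions, not the authors' actual formulation.

```python
# Hypothetical sketch: allocate an exploration budget over atomic interventions
# by solving a convex program (illustrative only; not the paper's formulation).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
K = 6                         # number of candidate atomic interventions (assumed)
w = rng.uniform(0.5, 2.0, K)  # per-intervention variance proxies (assumed)

eta = cp.Variable(K, nonneg=True)   # fraction of the budget given to each intervention
objective = cp.Minimize(cp.max(cp.multiply(w, cp.inv_pos(eta))))
problem = cp.Problem(objective, [cp.sum(eta) == 1])
problem.solve()

print("allocation:", np.round(eta.value, 3))
```

For this particular toy objective the optimum equalizes the terms w_i / eta_i, so the resulting allocation is proportional to w; interventions with noisier estimates receive more of the exploration budget.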

Authors (4)
  1. Rahul Madhavan (12 papers)
  2. Aurghya Maiti (7 papers)
  3. Gaurav Sinha (18 papers)
  4. Siddharth Barman (65 papers)
Citations (1)
