
Optimization Methods for Interpretable Differentiable Decision Trees in Reinforcement Learning (1903.09338v5)

Published 22 Mar 2019 in cs.LG and stat.ML

Abstract: Decision trees are ubiquitous in machine learning for their ease of use and interpretability. Yet, these models are not typically employed in reinforcement learning as they cannot be updated online via stochastic gradient descent. We overcome this limitation by allowing for a gradient update over the entire tree that improves sample complexity and affords interpretable policy extraction. First, we provide theoretical motivation for policy-gradient learning by examining the properties of gradient descent over differentiable decision trees. Second, we demonstrate that our approach equals or outperforms a neural network on all domains and can learn discrete decision trees online with average rewards up to 7x higher than a batch-trained decision tree. Third, we conduct a user study to quantify the interpretability of a decision tree, rule list, and a neural network, with statistically significant results ($p < 0.001$).
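The core idea the abstract describes, making a decision tree differentiable so it can be updated by gradient descent, can be illustrated with a soft decision node: a sigmoid gate routes the input fractionally to both child leaves instead of a hard if/else split, so the resulting action distribution is differentiable with respect to the split and leaf parameters. The sketch below is a minimal, hypothetical illustration of this general technique, not the paper's exact architecture or training procedure; all names and values are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

def soft_node_policy(x, w, b, leaf_left_logits, leaf_right_logits):
    """One differentiable decision node with two leaves.

    A hard tree would test (w @ x + b > 0) and pick a single leaf;
    here the sigmoid routing probability blends both leaves'
    action distributions, so gradients flow through w, b, and the
    leaf logits (enabling policy-gradient updates).
    """
    p_left = sigmoid(w @ x + b)                 # soft routing probability
    pi_left = softmax(leaf_left_logits)         # action dist at left leaf
    pi_right = softmax(leaf_right_logits)       # action dist at right leaf
    return p_left * pi_left + (1.0 - p_left) * pi_right

# Illustrative 2-feature state and 2-action leaves (hypothetical values)
x = np.array([0.5, -1.0])
w = np.array([1.0, 2.0])
pi = soft_node_policy(x, w, b=0.1,
                      leaf_left_logits=np.array([2.0, 0.0]),
                      leaf_right_logits=np.array([0.0, 2.0]))
# pi is a valid action distribution: entries are positive and sum to 1
```

A discrete, interpretable tree can then be read off after training by hardening each gate (route left iff `w @ x + b > 0`) and taking the argmax action at each leaf, which is one common way such "policy extraction" is done.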

Authors (5)
  1. Andrew Silva (18 papers)
  2. Taylor Killian (9 papers)
  3. Ivan Dario Jimenez Rodriguez (8 papers)
  4. Sung-Hyun Son (5 papers)
  5. Matthew Gombolay (61 papers)
Citations (12)
