
Particle Value Functions (1703.05820v1)

Published 16 Mar 2017 in cs.LG and cs.AI

Abstract: The policy gradients of the expected return objective can react slowly to rare rewards. Yet, in some cases agents may wish to emphasize the low or high returns regardless of their probability. Borrowing from the economics and control literature, we review the risk-sensitive value function that arises from an exponential utility and illustrate its effects on an example. This risk-sensitive value function is not always applicable to reinforcement learning problems, so we introduce the particle value function defined by a particle filter over the distributions of an agent's experience, which bounds the risk-sensitive one. We illustrate the benefit of the policy gradients of this objective in Cliffworld.
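For intuition, here is a minimal sketch (not from the paper) of the exponential-utility, risk-sensitive value the abstract refers to, of the form (1/β) · log E[exp(β · R)], compared against the plain expected return on a toy return distribution. The `sample_return` helper and its reward values are hypothetical illustrations, not the paper's environment.

```python
import numpy as np

# Illustrative sketch only: a plain Monte Carlo estimate of the expected
# return versus a risk-sensitive (exponential-utility) estimate
# (1/beta) * log E[exp(beta * R)]. `sample_return` is a hypothetical
# stand-in for rolling out a policy and summing rewards.

def sample_return(rng):
    # Hypothetical return distribution: mostly small rewards, rare large ones.
    return 10.0 if rng.random() < 0.01 else 0.1

def expected_return(returns):
    return float(np.mean(returns))

def risk_sensitive_value(returns, beta):
    # (1/beta) * log mean(exp(beta * R)), computed in a logsumexp-style
    # form for numerical stability. beta > 0 emphasizes high returns,
    # beta < 0 emphasizes low returns.
    r = beta * np.asarray(returns)
    m = np.max(r)
    return (m + np.log(np.mean(np.exp(r - m)))) / beta

rng = np.random.default_rng(0)
returns = [sample_return(rng) for _ in range(10_000)]
print("expected return:       ", expected_return(returns))
print("risk-seeking (beta=+1):", risk_sensitive_value(returns, beta=1.0))
print("risk-averse  (beta=-1):", risk_sensitive_value(returns, beta=-1.0))
```

The paper's particle value function, as described in the abstract, replaces this kind of exact exponential objective with an estimate built from a particle filter over the agent's experience, whose expectation bounds the risk-sensitive value; the sketch above only illustrates the underlying risk-sensitive quantity, not that estimator.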

Authors (7)
  1. Chris J. Maddison (47 papers)
  2. Dieterich Lawson (12 papers)
  3. George Tucker (45 papers)
  4. Nicolas Heess (139 papers)
  5. Arnaud Doucet (161 papers)
  6. Andriy Mnih (25 papers)
  7. Yee Whye Teh (162 papers)
Citations (15)
