Platelet Inventory Management with Approximate Dynamic Programming (2307.09395v2)

Published 18 Jul 2023 in eess.SY and cs.SY

Abstract: We study a stochastic perishable inventory control problem with endogenous (decision-dependent) uncertainty in shelf-life of units. Our primary motivation is determining ordering policies for blood platelets. Determining optimal ordering quantities is a challenging task due to the short maximum shelf-life of platelets (3-5 days after testing) and high uncertainty in daily demand. We formulate the problem as an infinite-horizon discounted Markov Decision Process (MDP). The model captures salient features observed in our data from a network of Canadian hospitals and allows for fixed ordering costs. We show that with uncertainty in shelf-life, the value function of the MDP is non-convex and key structural properties valid under deterministic shelf-life no longer hold. Hence, we propose an Approximate Dynamic Programming (ADP) algorithm to find approximate policies. We approximate the value function using a linear combination of basis functions and tune the parameters using a simulation-based policy iteration algorithm. We evaluate the performance of the proposed policy using extensive numerical experiments in parameter regimes relevant to the platelet inventory management problem. We further leverage the ADP algorithm to evaluate the impact of ignoring shelf-life uncertainty. Finally, we evaluate the out-of-sample performance of the ADP algorithm in a case study using real data and compare it to the historical hospital performance and other benchmarks. The ADP policy can be computed online in a few minutes and results in more than 50% lower expiry and shortage rates compared to the historical performance. In addition, it performs as well as or better than an exact policy that ignores uncertainty in shelf-life and becomes hard to compute for larger instances of the problem.
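The abstract describes approximating the MDP value function with a linear combination of basis functions and tuning the weights via simulation-based policy iteration. The sketch below illustrates that general idea for a perishable-inventory state tracked by remaining shelf life. The basis functions, demand distribution, cost parameters, and shelf-life model here are illustrative assumptions, not the paper's actual choices.

```python
import numpy as np

# Hypothetical setup: state = vector of on-hand platelet units by remaining
# shelf life (oldest first); action = order quantity. All parameters below
# are illustrative assumptions, not values from the paper.
MAX_SHELF_LIFE = 5
MAX_ORDER = 20
GAMMA = 0.95                                     # discount factor
HOLDING, SHORTAGE, EXPIRY, FIXED = 1.0, 20.0, 5.0, 10.0

def basis(state):
    """Example basis functions: constant, total inventory, inventory by age."""
    return np.concatenate(([1.0, state.sum()], state.astype(float)))

def simulate_step(state, order, rng):
    """One-day transition: demand is met oldest-first (FIFO), leftover units
    age by one day, and ordered units arrive with a random remaining shelf life
    (the endogenous shelf-life uncertainty in the abstract)."""
    demand = rng.poisson(8)                      # assumed demand distribution
    inv = state.copy()
    shortage = max(demand - inv.sum(), 0)
    for age in range(MAX_SHELF_LIFE):            # issue oldest units first
        used = min(inv[age], demand)
        inv[age] -= used
        demand -= used
    expired = inv[0]                             # unused units at end of life expire
    inv = np.roll(inv, -1)
    inv[-1] = 0                                  # remaining units age one day
    arrival_life = rng.integers(3, MAX_SHELF_LIFE + 1)   # 3-5 days remaining
    inv[arrival_life - 1] += order
    cost = (HOLDING * inv.sum() + SHORTAGE * shortage
            + EXPIRY * expired + FIXED * (order > 0))
    return inv, cost

def greedy_order(state, theta, rng, n_samples=30):
    """One-step greedy policy with respect to the approximate value function
    V(s) ~ theta . basis(s), estimated by Monte Carlo over sampled transitions."""
    best_a, best_q = 0, np.inf
    for a in range(MAX_ORDER + 1):
        q = 0.0
        for _ in range(n_samples):
            nxt, cost = simulate_step(state, a, rng)
            q += cost + GAMMA * theta @ basis(nxt)
        q /= n_samples
        if q < best_q:
            best_a, best_q = a, q
    return best_a
```

A simulation-based policy iteration in this spirit would alternate between rolling out `greedy_order` to collect state and cost-to-go samples and refitting `theta` (e.g., by least squares); the paper's actual basis functions, transition model, and tuning procedure may differ.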
