Monitoring with Limited Information

Publication Type:
Article
Authors:
Iancu, Dan Andrei; Trichakis, Nikolaos; Yoon, Do Young
Affiliations:
Stanford University; INSEAD Business School; Massachusetts Institute of Technology (MIT); Massachusetts Institute of Technology (MIT)
Journal:
MANAGEMENT SCIENCE
ISSN:
0025-1909
DOI:
10.1287/mnsc.2020.3736
Publication Date:
2021
Pages:
4233-4251
Keywords:
robust optimization; monitoring; optimal stopping problem
Abstract:
We consider a system with an evolving state that can be stopped at any time by a decision maker (DM), yielding a state-dependent reward. The DM does not observe the state except at a limited number of monitoring times, which he must choose, in conjunction with a suitable stopping policy, to maximize his reward. Dealing with these types of stopping problems, which arise in a variety of applications from healthcare to finance, often requires excessive amounts of data for calibration purposes and prohibitive computational resources. To overcome these challenges, we propose a robust optimization approach, whereby adaptive uncertainty sets capture the information acquired through monitoring. We consider two versions of the problem, static and dynamic, depending on how the monitoring times are chosen. We show that, under certain conditions, the same worst-case reward is achievable under either static or dynamic monitoring. This allows recovering the optimal dynamic monitoring policy by re-solving static versions of the problem. We discuss cases when the static problem becomes tractable and highlight conditions when monitoring at equidistant times is optimal. Lastly, we showcase our framework in the context of a healthcare problem (monitoring heart-transplant patients for cardiac allograft vasculopathy), where we design optimal monitoring policies that substantially improve over the status quo recommendations.
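To make the static worst-case viewpoint concrete, the following Python sketch works through a deliberately simplified toy, not the paper's model: an adversary may move the state across a critical threshold at any time in the horizon, the DM detects the crossing only at the next monitoring time, and the worst-case "detection delay" of a static schedule is its largest gap between consecutive checks. Under this stylized objective, equidistant monitoring is worst-case optimal, loosely echoing the equidistant-monitoring condition mentioned in the abstract. All parameter names and values here are hypothetical choices for illustration.

```python
# Toy illustration of static monitoring under worst-case analysis.
# Not the paper's model: the objective below (maximum undetected interval)
# is a stand-in chosen so the worst case is easy to evaluate by hand.
from itertools import combinations

T = 12   # horizon (periods); hypothetical value
K = 3    # number of monitoring times available to the DM; hypothetical value

def worst_case_delay(schedule):
    """Largest interval during which a threshold crossing could go unnoticed."""
    checkpoints = [0] + sorted(schedule) + [T]
    return max(b - a for a, b in zip(checkpoints, checkpoints[1:]))

# Enumerate all static schedules of K monitoring times in {1, ..., T-1}
# and keep the one with the smallest worst-case delay.
best = min(combinations(range(1, T), K), key=worst_case_delay)

# Equidistant schedule for comparison.
equi = tuple(round(i * T / (K + 1)) for i in range(1, K + 1))

print("optimal static schedule:", best, "worst-case delay:", worst_case_delay(best))
print("equidistant schedule:   ", equi, "worst-case delay:", worst_case_delay(equi))
```

In this toy, both schedules achieve the same minimal worst-case delay (T / (K + 1) when it is an integer), which is the sense in which equidistant monitoring is optimal here; the paper establishes analogous structure for its far richer reward and uncertainty-set model.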