Optimistic Monte Carlo Tree Search with Sampled Information Relaxation Dual Bounds
Document type:
Article
Authors:
Jiang, Daniel R.; Al-Kanj, Lina; Powell, Warren B.
Affiliations:
Pennsylvania Commonwealth System of Higher Education (PCSHE); University of Pittsburgh; Princeton University
Journal:
OPERATIONS RESEARCH
ISSN/ISBN:
0030-364X
DOI:
10.1287/opre.2019.1939
Publication date:
2020
Pages:
1678-1697
Keywords:
Monte Carlo tree search
dynamic programming
information relaxation
Abstract:
Monte Carlo tree search (MCTS), most famously used in game-play artificial intelligence (e.g., the game of Go), is a well-known strategy for constructing approximate solutions to sequential decision problems. Its primary innovation is the use of a heuristic, known as a default policy, to obtain Monte Carlo estimates of downstream values for states in a decision tree. This information is used to iteratively expand the tree toward regions of states and actions that an optimal policy might visit. However, to guarantee convergence to the optimal action, MCTS requires the entire tree to be expanded asymptotically. In this paper, we propose a new optimistic tree search technique called primal-dual MCTS that uses sampled information relaxation upper bounds on potential actions to make tree expansion decisions, creating the possibility of ignoring parts of the tree that stem from highly suboptimal choices. The core contribution of this paper is to prove that despite converging to a partial decision tree in the limit, the recommended action from primal-dual MCTS is optimal. The new approach shows promise when used to optimize the behavior of a single driver navigating a graph while operating on a ride-sharing platform. Numerical experiments on a real data set of taxi trips in New Jersey suggest that primal-dual MCTS improves on standard MCTS (upper confidence trees) and other policies while exhibiting a reduced sensitivity to the size of the action space.
Source URL:
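
The abstract describes a tree-expansion rule driven by sampled information-relaxation (dual) upper bounds. The sketch below is a minimal, hedged illustration of that idea, not the authors' implementation: the toy finite-horizon MDP, the random default policy, the zero-penalty perfect-information bound, and all function and variable names are assumptions introduced only for illustration.

```python
"""Illustrative sketch of the primal-dual MCTS expansion idea: a UCT-style
tree search in which an unexpanded action is added to the tree only if a
sampled information-relaxation (dual) upper bound for it beats the best
primal estimate among already-expanded actions.

Everything here (toy MDP, rollout policy, zero-penalty bound, names) is an
assumption for illustration, not the paper's algorithmic details or data."""
import math
import random

HORIZON = 4                      # decision epochs in the toy problem
ACTIONS = list(range(6))         # action set at every state

def reward(state, action, noise):
    """Toy stochastic reward: smooth in (state, action) plus additive noise."""
    return math.sin(0.7 * state + action) + noise

def transition(state, action):
    """Toy deterministic dynamics."""
    return state + action + 1

def rollout(state, depth, rng):
    """Default policy: uniformly random actions until the horizon."""
    total = 0.0
    for _ in range(depth, HORIZON):
        a = rng.choice(ACTIONS)
        total += reward(state, a, rng.gauss(0.0, 0.3))
        state = transition(state, a)
    return total

def perfect_info_value(state, depth, noises):
    """Deterministic DP along one revealed noise path (zero-penalty relaxation)."""
    if depth == HORIZON:
        return 0.0
    return max(reward(state, a, noises[depth])
               + perfect_info_value(transition(state, a), depth + 1, noises)
               for a in ACTIONS)

def sampled_dual_bound(state, action, depth, rng):
    """One sample of an information-relaxation upper bound on the value of
    taking `action` now and then acting with the remaining noise revealed."""
    noises = [rng.gauss(0.0, 0.3) for _ in range(HORIZON)]
    return (reward(state, action, noises[depth])
            + perfect_info_value(transition(state, action), depth + 1, noises))

class Node:
    def __init__(self, state, depth):
        self.state, self.depth = state, depth
        self.children = {}           # action -> (child node, visits, mean value)
        self.unexpanded = list(ACTIONS)
        self.visits = 0

def ucb_action(node, c=1.4):
    """UCT selection among already-expanded actions."""
    def score(a):
        _, n, q = node.children[a]
        return q + c * math.sqrt(math.log(node.visits + 1) / n)
    return max(node.children, key=score)

def maybe_expand(node, rng):
    """Primal-dual expansion: expand the unexpanded action with the largest
    sampled dual bound, but only if that bound exceeds the best primal
    estimate among expanded actions (otherwise leave the tree partial)."""
    if not node.unexpanded:
        return None
    bounds = {a: sampled_dual_bound(node.state, a, node.depth, rng)
              for a in node.unexpanded}
    best_a = max(bounds, key=bounds.get)
    best_primal = max((q for _, _, q in node.children.values()),
                      default=-math.inf)
    if node.children and bounds[best_a] <= best_primal:
        return None                  # dual bound says: not worth expanding yet
    node.unexpanded.remove(best_a)
    node.children[best_a] = (Node(transition(node.state, best_a),
                                  node.depth + 1), 0, 0.0)
    return best_a

def simulate(node, rng):
    """One tree-search iteration from `node`; returns the sampled return."""
    node.visits += 1
    if node.depth == HORIZON:
        return 0.0
    a = maybe_expand(node, rng)
    if a is None:
        a = ucb_action(node)
    child, n, q = node.children[a]
    g = reward(node.state, a, rng.gauss(0.0, 0.3))
    g += rollout(child.state, child.depth, rng) if n == 0 else simulate(child, rng)
    node.children[a] = (child, n + 1, q + (g - q) / (n + 1))
    return g

if __name__ == "__main__":
    rng = random.Random(1)
    root = Node(state=0, depth=0)
    for _ in range(500):
        simulate(root, rng)
    best = max(root.children, key=lambda a: root.children[a][2])
    print("expanded root actions:", sorted(root.children), "recommended:", best)
```

Running the sketch typically leaves some root actions unexpanded, mirroring the paper's claim that the search can converge to a partial decision tree while still recommending a good action; the specific bound construction here (sampling one noise path and solving the revealed deterministic problem) is only a stand-in for the penalized dual bounds analyzed in the article.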