On the Convergence Rate of MCTS for the Optimal Value Estimation in Markov Decision Processes
Publication Type:
Article
Author:
Chang, Hyeong Soo
Affiliation:
Sogang University
Journal:
IEEE TRANSACTIONS ON AUTOMATIC CONTROL
ISSN/ISBN:
0018-9286
DOI:
10.1109/TAC.2025.3538807
Publication Date:
2025
Pages:
4788-4793
Keywords:
Convergence
Complexity theory
Training
Search problems
Data mining
Computational modeling
Artificial intelligence
Approximation algorithms
Upper bound
Uncertainty
Markov decision process (MDP)
Monte-Carlo tree search (MCTS)
Multiarmed bandit (MAB)
Upper confidence bound 1 (UCB1)
Upper confidence bound applied to trees (UCT)
Abstract:
A recent theoretical analysis of a Monte-Carlo tree search (MCTS) method, properly modified from the upper confidence bound applied to trees (UCT) algorithm, established a result that is surprising in light of the many empirical successes reported in the literature for heuristic uses of UCT with domain-specific adjustments: the expected absolute error in estimating the optimal value at an initial state of a finite-horizon Markov decision process (MDP) converges to zero at the rate O(1/√n), where n is the number of simulations. We strengthen this dispiritingly slow convergence result by arguing, within a simpler algorithmic framework from the MDP perspective rather than the usual MCTS description, that the simpler strategy known as upper confidence bound 1 (UCB1) for multiarmed bandit problems, when employed as an instance of MCTS by setting UCB1's arm set to be the policy set of the underlying MDP, achieves an asymptotically faster convergence rate of O(ln n / n). We also point out that UCT-based MCTS in general has worst-case time and space complexities that depend on the size of the state space, which contradicts the original design spirit of MCTS. Unless used heuristically, UCT-based MCTS still lacks theoretical support for its applicability.
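A minimal sketch of the idea described in the abstract — running UCB1 with the arm set taken to be a (finite) policy set of the MDP and reading off a value estimate — might look as follows. This is an illustrative reconstruction, not the paper's algorithm verbatim: the names (ucb1_value_estimate, simulate), the assumption that episode returns lie in [0, 1], and the choice of reporting the empirical mean of the most-simulated policy as the estimate of the optimal value are all assumptions made here for concreteness.

```python
import math

def ucb1_value_estimate(policies, simulate, n):
    """Run UCB1 with arms = policies; return an optimal-value estimate.

    Assumption (not from the paper): simulate(pi) runs one episode of
    the finite-horizon MDP under policy pi from the initial state and
    returns the total reward, scaled to lie in [0, 1].
    """
    k = len(policies)
    counts = [0] * k    # number of simulations per policy (arm pulls)
    means = [0.0] * k   # running mean return per policy

    # Initialization: simulate every policy once.
    for i, pi in enumerate(policies):
        means[i] = simulate(pi)
        counts[i] = 1

    # Remaining rounds of UCB1.
    for t in range(k + 1, n + 1):
        # UCB1 index: empirical mean plus exploration bonus.
        i = max(range(k),
                key=lambda j: means[j]
                + math.sqrt(2.0 * math.log(t) / counts[j]))
        r = simulate(policies[i])
        counts[i] += 1
        means[i] += (r - means[i]) / counts[i]  # incremental mean update

    # Report the empirical mean of the most-simulated policy as the
    # estimate of the optimal value at the initial state (one common
    # choice of estimator; the paper's exact estimator may differ).
    best = max(range(k), key=lambda j: counts[j])
    return means[best]
```

The intuition for the O(ln n / n) rate, following the standard UCB1 analysis of Auer et al. (2002), is that suboptimal arms are pulled only O(ln n) times out of n in total, so the error they induce in the estimate washes out at rate ln n / n; the abstract's claim is that this carries over when each arm is an entire policy of the underlying MDP.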