Beyond Exact Gradients: Convergence of Stochastic Soft-Max Policy Gradient Methods With Entropy Regularization
Publication Type:
Article
Authors:
Ding, Yuhao; Zhang, Junzi; Lee, Hyunin; Lavaei, Javad
Affiliations:
University of California System; University of California Berkeley; University of California System; University of California Berkeley
Journal:
IEEE TRANSACTIONS ON AUTOMATIC CONTROL
ISSN:
0018-9286
DOI:
10.1109/TAC.2025.3540965
Publication Date:
2025
Pages:
5129-5144
Keywords:
Entropy
Convergence
Complexity theory
Stochastic processes
Gradient methods
Trajectory
Probability distribution
Mirrors
Electronic mail
Approximation algorithms
Policy gradient (PG)
Reinforcement learning (RL)
Stochastic approximation
Abstract:
Entropy regularization is an efficient technique for encouraging exploration and preventing premature convergence of (vanilla) policy gradient (PG) methods in reinforcement learning (RL). However, the theoretical understanding of entropy-regularized RL algorithms has been limited. In this article, we revisit the classical entropy-regularized PG methods with the soft-max policy parametrization, whose convergence has so far only been established assuming access to exact gradient oracles. To go beyond this scenario, we propose the first set of (nearly) unbiased stochastic PG estimators with trajectory-level entropy regularization: one is an unbiased visitation-measure-based estimator, and the other is a nearly unbiased yet more practical trajectory-based estimator. We prove that although the estimators themselves are unbounded in general, due to the additional logarithmic policy rewards introduced by the entropy term, their variances are uniformly bounded. We then propose a two-phase stochastic PG algorithm that uses a large batch size in the first phase to overcome the challenge of stochastic approximation on the noncoercive landscape, and a small batch size in the second phase by leveraging the curvature information around the optimal policy. We establish a global optimality convergence result and a sample complexity of Õ(1/ε²) for the proposed algorithm. These are the first global convergence and sample complexity results for the stochastic entropy-regularized vanilla PG method.
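The core object in the abstract, the trajectory-level entropy-regularized score-function gradient estimator, can be illustrated on a one-state problem (a multi-armed bandit), where a single step is the whole trajectory. This is a minimal sketch for intuition only, not the paper's two-phase algorithm; the function name, hyperparameters, and the bandit setting are illustrative assumptions. For a soft-max policy π_θ, the regularized objective J(θ) = Σ_a π_θ(a)(r(a) − τ log π_θ(a)) has gradient E_{a∼π_θ}[∇log π_θ(a)·(r(a) − τ log π_θ(a))], so sampling actions yields an unbiased estimator; the log π term is unbounded, yet the estimator's variance stays bounded, mirroring the phenomenon the abstract describes.

```python
import numpy as np

def softmax(theta):
    # Numerically stable soft-max parametrization of the policy.
    z = theta - theta.max()
    p = np.exp(z)
    return p / p.sum()

def stochastic_entropy_pg(r, tau=0.5, lr=0.05, steps=30000, batch=8, seed=0):
    """Stochastic entropy-regularized PG on a one-state bandit (illustrative).

    Maximizes J(theta) = sum_a pi(a) * (r[a] - tau * log pi(a)) using the
    score-function estimator grad log pi(a) * (r[a] - tau * log pi(a)), a ~ pi.
    Hyperparameter values are assumptions chosen for this toy problem.
    """
    rng = np.random.default_rng(seed)
    n = len(r)
    theta = np.zeros(n)
    for _ in range(steps):
        pi = softmax(theta)
        grad = np.zeros(n)
        for _ in range(batch):
            a = rng.choice(n, p=pi)
            # Score function for soft-max: grad_theta log pi(a) = e_a - pi.
            score = -pi.copy()
            score[a] += 1.0
            # Entropy-regularized "reward": the log-policy term makes the
            # estimator unbounded in general, but its variance stays bounded.
            grad += score * (r[a] - tau * np.log(pi[a]))
        theta += lr * grad / batch  # stochastic gradient ascent step
    return softmax(theta)
```

For τ = 0.5 and rewards r = (1.0, 0.5, 0.0), the regularized optimum is the Boltzmann policy π*(a) ∝ exp(r(a)/τ) ≈ (0.665, 0.245, 0.090), and the iterates settle near it; as τ → 0 the optimum sharpens toward the greedy policy, which is why entropy regularization keeps exploration alive.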