Sample-Efficient Reinforcement Learning With Temporal Logic Objectives: Leveraging the Task Specification to Guide Exploration

Publication Type:
Article
Authors:
Kantaros, Yiannis; Wang, Jun
Affiliation:
Washington University (WUSTL)
Journal:
IEEE TRANSACTIONS ON AUTOMATIC CONTROL
ISSN:
0018-9286
DOI:
10.1109/TAC.2024.3484290
Publication Year:
2025
Pages:
2873-2888
Keywords:
Modeling; Computational modeling; Robots; Probabilistic logic; Markov decision processes; Heuristic algorithms; Complexity theory; Uncertainty; Learning automata; Stochastic processes; Reinforcement learning (RL); Stochastic systems; Temporal logic planning
Abstract:
In this article, we address the problem of learning optimal control policies for systems with uncertain dynamics and high-level control objectives specified as linear temporal logic (LTL) formulas. Uncertainty is considered in the workspace structure and in the outcomes of control decisions, giving rise to an unknown Markov decision process (MDP). Existing reinforcement learning (RL) algorithms for LTL tasks typically rely on exploring the product MDP state space uniformly (using, e.g., an $\epsilon$-greedy policy), which compromises sample efficiency. This issue becomes more pronounced as the rewards get sparser and the MDP size or task complexity increases. We propose an accelerated RL algorithm that can learn control policies significantly faster than competing approaches. Its sample efficiency relies on a novel task-driven exploration strategy that biases exploration toward directions that may contribute to task satisfaction. We provide theoretical analysis and extensive comparative experiments demonstrating the sample efficiency of the proposed method. The benefit of our method becomes more evident as the task complexity or the MDP size increases.
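To illustrate the exploration idea described in the abstract, below is a minimal, hypothetical Python sketch of a task-biased epsilon-greedy rule over a product MDP state (an MDP state paired with an automaton state). All names here (`q_table`, `dist_to_acceptance`, `transition_guess`, the bias parameter `delta`) are illustrative assumptions, not the authors' implementation.

```python
import random

def biased_epsilon_greedy(state, actions, q_table, dist_to_acceptance,
                          transition_guess, epsilon=0.1, delta=0.8):
    """Pick an action in product-MDP `state` = (mdp_state, automaton_state).

    With prob. 1 - epsilon: exploit the greedy action from q_table.
    With prob. epsilon:     explore; with prob. delta the draw is biased toward
                            actions whose estimated successor automaton state
                            is closer to an accepting state.
    """
    if random.random() > epsilon:
        # Exploitation: greedy with respect to current Q-value estimates.
        return max(actions, key=lambda a: q_table.get((state, a), 0.0))

    if random.random() < delta:
        # Biased exploration: score each action by the (estimated) distance of
        # its successor automaton state to acceptance; smaller is better.
        scored = [(dist_to_acceptance(transition_guess(state, a)), a)
                  for a in actions]
        best = min(d for d, _ in scored)
        return random.choice([a for d, a in scored if d == best])

    # Unbiased exploration: uniform over all actions.
    return random.choice(actions)
```

In this sketch, standard epsilon-greedy is recovered when `delta = 0`; increasing `delta` steers more exploratory steps toward transitions that appear to make progress in the task automaton, which is the intuition behind the task-driven bias the article proposes.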