Federated Offline Reinforcement Learning
Publication Type:
Article
Authors:
Zhou, Doudou; Zhang, Yufeng; Sonabend-W, Aaron; Wang, Zhaoran; Lu, Junwei; Cai, Tianxi
Affiliations:
Harvard University; Harvard T.H. Chan School of Public Health; Northwestern University; Harvard University; Harvard Medical School
Journal:
JOURNAL OF THE AMERICAN STATISTICAL ASSOCIATION
ISSN/ISBN:
0162-1459
DOI:
10.1080/01621459.2024.2310287
Publication Date:
2024
Pages:
3152-3163
Keywords:
guidelines
Abstract:
Evidence-based, data-driven dynamic treatment regimes are essential for personalized medicine and can benefit from offline reinforcement learning (RL). Although massive healthcare data are available across medical institutions, privacy constraints prohibit sharing them, and the data are heterogeneous across sites. Federated offline RL algorithms are therefore necessary and promising for addressing these problems. In this article, we propose a multi-site Markov decision process model that allows for both homogeneous and heterogeneous effects across sites and makes analysis of site-level features possible. We design the first federated policy optimization algorithm for offline RL with a sample complexity guarantee. The algorithm is communication-efficient, requiring only a single round of communication in which sites exchange summary statistics. We provide a theoretical guarantee showing that the suboptimality of the learned policies is comparable to the rate attainable if the data were not distributed. Extensive simulations demonstrate the effectiveness of the proposed algorithm, and the method is applied to a multi-site sepsis dataset to illustrate its use in clinical settings. Supplementary materials for this article are available online, including a standardized description of the materials needed to reproduce the work.
Source URL:
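
The single-round communication pattern described in the abstract can be illustrated with a minimal sketch. This is not the authors' algorithm: it assumes a simplified one-step linear setting with a known feature map, and the function names (site_summary, federated_pessimistic_policy), the ridge penalty lam, and the form of the pessimism bonus beta are illustrative assumptions. It shows how each site shares only low-dimensional summary statistics (a Gram matrix and a moment vector) rather than raw trajectories, and how a pessimistic greedy policy is built from the pooled statistics.

```python
import numpy as np

def site_summary(phi_sa, rewards):
    """Summary statistics one site shares: a Gram matrix and a moment
    vector. Raw patient-level trajectories never leave the site."""
    return phi_sa.T @ phi_sa, phi_sa.T @ rewards

def federated_pessimistic_policy(summaries, d, lam=1.0, beta=1.0):
    """Single communication round: pool the site summaries, fit Q-weights
    by ridge regression, and act greedily on a pessimistic Q-estimate."""
    gram = lam * np.eye(d) + sum(g for g, _ in summaries)
    moment = sum(m for _, m in summaries)
    gram_inv = np.linalg.inv(gram)
    w = gram_inv @ moment  # pooled least-squares Q-weight estimate

    def policy(phi_actions):
        """phi_actions: (num_actions, d) feature rows for candidate actions."""
        q_hat = phi_actions @ w
        # Uncertainty bonus subtracted for pessimism under limited coverage.
        bonus = beta * np.sqrt(np.einsum("ad,de,ae->a",
                                         phi_actions, gram_inv, phi_actions))
        return int(np.argmax(q_hat - bonus))  # pessimistic greedy action

    return policy

# Demo with synthetic data from three "sites".
rng = np.random.default_rng(0)
d, sites = 5, 3
w_true = rng.normal(size=d)
summaries = []
for _ in range(sites):
    phi = rng.normal(size=(200, d))
    r = phi @ w_true + 0.1 * rng.normal(size=200)
    summaries.append(site_summary(phi, r))

policy = federated_pessimistic_policy(summaries, d, beta=0.5)
candidate_actions = rng.normal(size=(4, d))
print("chosen action:", policy(candidate_actions))
```

In the paper's multi-site MDP setting the exchanged statistics and the pessimism construction are more involved, but the communication pattern is the same: one round of low-dimensional summaries in place of raw data.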