Deep Reinforcement Learning for Online Assortment Customization: A Data-Driven Approach

Publication Type:
Article; Early Access
Authors:
Li, Tao; Wang, Chenhao; Wang, Yao; Tang, Shaojie; Chen, Ningyuan
Affiliations:
Xi'an Jiaotong University; Hong Kong University of Science & Technology; Tongji University; Xi'an Jiaotong University; State University of New York (SUNY) System; University at Buffalo, SUNY; University of Toronto
Journal:
PRODUCTION AND OPERATIONS MANAGEMENT
ISSN/ISBN:
1059-1478
DOI:
10.1177/10591478251351737
Publication Date:
2025
Keywords:
Online Assortment Customization; Deep Reinforcement Learning; Simulation; Reusable Products
摘要:
When a platform has limited inventory, it is important to offer each arriving customer a suitable variety of products while managing the remaining stock. To maximize long-term revenue, the assortment policy must account for the complex purchasing behavior of customers whose arrival order and preferences may be unknown. We propose a data-driven approach to dynamic assortment planning that leverages historical customer arrivals and transaction data. We formulate the online assortment customization problem as a Markov decision process and, given the computational challenge of solving it exactly, employ a model-free deep reinforcement learning (DRL) approach to learn the online assortment policy. Our method uses a specially designed deep neural network (DNN) model to construct assortments that respect the inventory constraints, and an advantage actor-critic algorithm to update the parameters of the DNN model, aided by a simulator built from the historical transaction data. To evaluate the effectiveness of our approach, we conduct simulations on both a synthetic data set generated from a pre-determined customer type distribution and a ground-truth choice model, and a real-world data set. Extensive experiments demonstrate that our approach produces significantly higher long-term revenue than several existing methods and remains robust under various practical conditions. We also show that our approach can be easily adapted to a more general problem involving reusable products, where customers may return purchased items; in this setting, our approach performs well under various usage time distributions.
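
The abstract describes training an assortment policy with an advantage actor-critic algorithm while enforcing inventory constraints through the DNN that constructs assortments. The following is a minimal, hypothetical sketch of that idea in PyTorch; the network sizes, the per-product Bernoulli inclusion head, the inventory mask, and the one-step advantage estimate are illustrative assumptions, not the authors' exact architecture or training procedure.

```python
# Hypothetical sketch: advantage actor-critic (A2C) update for an assortment
# policy whose per-product inclusion logits are masked by remaining inventory.
# This is NOT the paper's exact model; sizes and heads are assumed for illustration.
import torch
import torch.nn as nn


class AssortmentActorCritic(nn.Module):
    def __init__(self, state_dim: int, n_products: int):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU())
        self.actor = nn.Linear(64, n_products)  # score for including each product
        self.critic = nn.Linear(64, 1)          # state-value estimate

    def forward(self, state: torch.Tensor, inventory: torch.Tensor):
        h = self.shared(state)
        logits = self.actor(h)
        # Mask out-of-stock products so any sampled assortment respects inventory.
        logits = logits.masked_fill(inventory <= 0, -1e9)
        return logits, self.critic(h).squeeze(-1)


def a2c_step(model, optimizer, state, inventory, reward, next_value, gamma=0.99):
    """One illustrative actor-critic update from a single observed transition."""
    logits, value = model(state, inventory)
    dist = torch.distributions.Bernoulli(logits=logits)  # include/exclude each product
    assortment = dist.sample()
    log_prob = dist.log_prob(assortment).sum(-1)
    advantage = reward + gamma * next_value - value       # one-step advantage estimate
    actor_loss = -(log_prob * advantage.detach()).mean()
    critic_loss = advantage.pow(2).mean()
    optimizer.zero_grad()
    (actor_loss + 0.5 * critic_loss).backward()
    optimizer.step()
    return assortment


# Example usage with random data standing in for a transaction-based simulator.
model = AssortmentActorCritic(state_dim=16, n_products=10)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
state = torch.randn(1, 16)                # e.g., encoded inventory levels and time
inventory = torch.randint(0, 3, (1, 10))  # remaining stock per product
a2c_step(model, optimizer, state, inventory,
         reward=torch.tensor(1.0), next_value=torch.tensor(0.0))
```

In the paper's setting, the simulator built from historical customer arrivals and transactions would supply the states, customer choices, and rewards that the random data above merely stands in for.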