Data-Driven Newsvendor Problems Regularized by a Profit Risk Constraint
Publication type:
Article
Authors:
Lin, Shaochong; Chen, Youhua (Frank); Li, Yanzhi; Shen, Zuo-Jun Max
Affiliations:
City University of Hong Kong; City University of Hong Kong; University of Hong Kong; University of Hong Kong; University of California System; University of California Berkeley
Journal:
PRODUCTION AND OPERATIONS MANAGEMENT
ISSN/ISBN:
1059-1478
DOI:
10.1111/poms.13635
Publication date:
2022
Pages:
1630-1644
Keywords:
risk-averse newsvendor
data-driven newsvendor
value-at-risk constraint
machine learning
Abstract:
We study a risk-averse newsvendor problem in which the demand distribution is unknown. The focal product is new, and only historical demand information for related products is available. The newsvendor aims to maximize its expected profit subject to a profit risk constraint. We develop a model with a value-at-risk constraint and propose a data-driven approximation to the theoretical risk-averse newsvendor model. Specifically, we use machine learning methods to weigh the similarity between the new product and previous ones based on covariates. These sample-dependent weights are then embedded to approximate both the expected profit and the profit risk constraint. We show that the data-driven risk-averse newsvendor solution has a closed-form quantile structure and can be computed efficiently. Finally, we prove that this data-driven solution is asymptotically optimal. Experiments on real and synthetic data demonstrate the effectiveness of our approach. We observe that under data-driven decision-making, the average realized profit may benefit from stronger risk aversion, contrary to the theoretical risk-averse newsvendor model. In fact, even a risk-neutral newsvendor can benefit from incorporating a risk constraint under data-driven decision-making. This is because the value-at-risk constraint effectively plays a regularizing role (by reducing the variance of order quantities), mitigating issues of data-driven decision-making such as sampling error and model misspecification. However, these effects diminish as the training data set grows, as the asymptotic optimality result implies.
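The abstract describes embedding machine-learning sample weights, computed from covariates, into both the expected-profit objective and the value-at-risk constraint, with an optimal order quantity of quantile form. The following is a minimal illustrative sketch of that idea, not the paper's implementation: the k-nearest-neighbor weighting scheme, the linear profit function, the chance-constraint form of the risk restriction, and all parameter names (price, cost, eta, alpha) are assumptions made for illustration.

```python
import numpy as np

def knn_weights(X_train, x_new, k=10):
    """Uniform k-nearest-neighbor weights over historical samples, based on
    covariate distance to the new product (one possible ML weighting scheme)."""
    dist = np.linalg.norm(X_train - x_new, axis=1)
    idx = np.argsort(dist)[:k]
    w = np.zeros(len(X_train))
    w[idx] = 1.0 / k
    return w

def risk_constrained_newsvendor(demands, weights, price, cost, eta, alpha):
    """Choose the order quantity maximizing the weight-approximated expected
    profit, subject to an approximated value-at-risk constraint: the weighted
    probability that profit falls below -eta must not exceed alpha.
    Candidates are restricted to observed demand values (plus zero), since the
    weighted solution is supported on the sample quantiles."""
    best_q, best_profit = 0.0, -np.inf
    for q in np.concatenate(([0.0], np.unique(demands))):
        profit = price * np.minimum(q, demands) - cost * q      # per-sample profit
        exp_profit = np.dot(weights, profit)                    # weighted expectation
        shortfall_prob = np.dot(weights, (profit < -eta).astype(float))
        if shortfall_prob <= alpha and exp_profit > best_profit:
            best_q, best_profit = q, exp_profit
    return best_q, best_profit

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))                               # covariates of related products
    D = np.exp(1.0 + X[:, 0] + 0.3 * rng.normal(size=200))      # their historical demands
    x0 = rng.normal(size=3)                                     # covariates of the new product
    w = knn_weights(X, x0, k=20)
    q, v = risk_constrained_newsvendor(D, w, price=10.0, cost=6.0, eta=5.0, alpha=0.1)
    print(f"order quantity: {q:.2f}, approx expected profit: {v:.2f}")
```

As the abstract notes, tightening the risk constraint (smaller alpha or eta) shrinks the feasible set of order quantities, which can reduce the variance of the data-driven decision and thereby act as a regularizer when the sample is small.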
Source URL: