MNL-Bandit: A Dynamic Learning Approach to Assortment Selection

Publication Type:
Article
Authors:
Agrawal, Shipra; Avadhanula, Vashist; Goyal, Vineet; Zeevi, Assaf
Affiliations:
Columbia University
Journal:
OPERATIONS RESEARCH
ISSN/ISBN:
0030-364X
DOI:
10.1287/opre.2018.1832
Publication Date:
2019
Pages:
1453-1485
Keywords:
choice model; optimization
Abstract:
We consider a dynamic assortment selection problem in which, in every round, the retailer offers a subset (assortment) of N substitutable products to a consumer, who selects one of these products according to a multinomial logit (MNL) choice model. The retailer observes this choice, and the objective is to dynamically learn the model parameters while optimizing cumulative revenues over a selling horizon of length T. We refer to this exploration-exploitation formulation as the MNL-Bandit problem. Existing methods for this problem follow an explore-then-exploit approach, which estimates the parameters to a desired accuracy and then, treating these estimates as the true parameter values, offers the assortment that is optimal under them. These approaches require a priori knowledge of separability, determined by the true parameters of the underlying MNL model, which in turn is critical in determining the length of the exploration period. (Separability refers to the distinguishability of the true optimal assortment from the suboptimal alternatives.) In this paper, we give an efficient algorithm that simultaneously explores and exploits, without a priori knowledge of any problem parameters. Furthermore, the algorithm is adaptive in the sense that its performance is near optimal both in the well-separated case and in the general parameter setting where this separation need not hold.
Source URL:
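
For context, the following is a minimal Python sketch (not from the paper) of the consumer choice mechanism the abstract describes: under an MNL model with attraction parameters v_i, and the no-purchase option normalized to attraction 1, a consumer offered assortment S buys product i in S with probability v_i / (1 + sum of v_j over j in S). The function name and parameter values below are illustrative assumptions.

```python
import numpy as np

def mnl_choice(assortment, v, rng):
    """Sample one consumer choice under an MNL model.

    assortment: indices of the offered products (a subset of 0..N-1)
    v: attraction parameter of each product; the no-purchase option
       is normalized to attraction 1
    Returns the index of the purchased product, or None if the
    consumer leaves without purchasing.
    """
    # Choice weights: slot 0 is the no-purchase option.
    weights = np.array([1.0] + [v[i] for i in assortment])
    # P(choose i) = v_i / (1 + sum of v_j over the assortment).
    probs = weights / weights.sum()
    k = rng.choice(len(weights), p=probs)
    return None if k == 0 else assortment[k - 1]

# Illustrative use: offer products 0, 2, and 3 out of N = 4.
rng = np.random.default_rng(0)
v = np.array([0.8, 0.5, 0.3, 0.9])  # hypothetical true parameters
print(mnl_choice([0, 2, 3], v, rng))
```

An explore-then-exploit method would use such choice observations in a dedicated exploration phase to estimate each v_i before committing to an assortment; per the abstract, the paper's algorithm instead updates its estimates and its offered assortment continually, without a separate exploration period.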