Adaptive Data Acquisition for Personalized Recommendations with Optimality Guarantees on Short-Form Video Platforms
Publication type:
Article; Early Access
Authors:
Cao, Tunyu; Leng, Yan
Affiliations:
University of Texas System; University of Texas Austin
Journal:
MANAGEMENT SCIENCE
ISSN:
0025-1909
DOI:
10.1287/mnsc.2022.01130
Publication date:
2025
Keywords:
pure exploration
information acquisition
transductive learning
short-form video platforms
recommender systems
Abstract:
The recent surge in the popularity of short-form video (SFV) on digital platforms has led to massive numbers of videos and ever-evolving topics. As a result, the task of making personalized recommendations has become increasingly challenging. We introduce a new pure exploration problem on SFV platforms: finding a (K, EH, EL)-optimal set that includes all recommendations within the EL-optimality gap and excludes those beyond the EH-optimality gap relative to the best arm, subject to a capacity limit of K. To solve this problem, we propose an algorithm called adaptive acquisition tree (AAT). AAT jointly accounts for user preference heterogeneity and high-dimensional product characteristics. It adaptively segments users and then learns a personalized transductive policy that can be applied to partially observed or even unobserved card types to accommodate the dynamic trends on SFV platforms. We derive the sample complexity required to identify a (K, EH, EL)-optimal set. Our method's efficiency is validated through numerical tests using data from the NetEase platform. Our results reveal that the proposed policy performs significantly better than several state-of-the-art benchmarks across four transductive scenarios for both spotlight recommendations (i.e., best-arm identification) and (K, EH, EL)-optimal set recommendations. Compared with the best benchmarks for the best card and (K, EH, EL)-optimal set recommendations, our approach elevates the average rewards (measured by view time) by 30% (to 100%) and 43% (to 56%), respectively. Given the increasing popularity and uniqueness of SFVs and, more broadly, user-generated content, our method offers significant academic and practical merit.
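As a reading aid, the (K, EH, EL)-optimal set defined in the abstract can be sketched as a selection rule over estimated arm rewards: arms within EL of the best arm must be included, arms more than EH below the best must be excluded, and the set size is capped at K. The function and variable names below are hypothetical illustrations, not the paper's implementation, and the sketch assumes reward estimates are already available (the paper's contribution is the adaptive acquisition of those estimates).

```python
# Hypothetical sketch of a (K, eps_H, eps_L)-optimal set selection rule,
# based on the abstract's definition. Not the paper's AAT algorithm:
# AAT's contribution is *how* to acquire data to estimate rewards.

def optimal_set(rewards, K, eps_H, eps_L):
    """Select up to K arms: all arms within eps_L of the best are
    required; arms more than eps_H below the best are excluded;
    arms in between may fill remaining capacity."""
    best = max(rewards.values())
    # Arms that must be in the set (within the eps_L-optimality gap).
    required = [a for a, r in rewards.items() if r >= best - eps_L]
    # Borderline arms: inside eps_H gap but outside eps_L gap.
    borderline = [a for a, r in rewards.items()
                  if best - eps_H <= r < best - eps_L]
    chosen = sorted(required, key=rewards.get, reverse=True)
    for a in sorted(borderline, key=rewards.get, reverse=True):
        if len(chosen) >= K:
            break
        chosen.append(a)
    return chosen[:K]

# Toy example with four arms (illustrative reward values only).
rewards = {"a": 0.9, "b": 0.85, "c": 0.6, "d": 0.2}
print(optimal_set(rewards, K=3, eps_H=0.5, eps_L=0.1))
```

Here arms "a" and "b" are within the 0.1 gap of the best reward and are required; "c" falls between the two gaps and fills the remaining capacity; "d" is beyond the 0.5 gap and is excluded.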