Targeted Automation and Sustaining Human-AI Learning

Document Type:
Article; Early Access
Authors:
Imdahl, Christina; Schmidt, William; Hoberg, Kai
Affiliations:
Eindhoven University of Technology; Emory University; Kühne Logistics University
Journal:
PRODUCTION AND OPERATIONS MANAGEMENT
ISSN/ISBN:
1059-1478
DOI:
10.1177/10591478251381899
Publication Date:
2025
Keywords:
Targeted Automation; Human-AI Learning; Human-AI Teaming; Empirical Research; Inventory Management
Abstract:
In many decision processes, a decision maker or planner must review and optionally adjust the recommendations generated by a decision support system (DSS). When the DSS is well-tuned to its task, adjustments by a planner can be rare and may even degrade the DSS's performance. Targeted automation could address these inefficiencies by predicting whether a planner will adjust a recommendation and improve the system's performance; the remaining recommendations can be automated. However, as more recommendations are automated, fewer receive planner input, which may starve the prediction model of the observations it needs for retraining. To maintain predictive performance, we must therefore address the loss that automation imposes on the model's ability to learn from a planner's decisions over time. Using four years of procurement ordering data from our research partner, a large materials handling equipment manufacturer, we develop and train a series of machine learning classifiers that predict the individual instances in which a planner will improve a DSS-generated procurement order decision. We mitigate the performance erosion that automation engenders by structuring the selection of the model's classification threshold as a newsvendor-style problem, accounting for the value of learning and balancing the costs and benefits of under- or over-automating. In our setting, this approach automates around 84% of all DSS recommendations while retaining three times more planner improvements than random automation. The models maintain their predictive performance over time, despite the loss of automated outcomes for retraining and substantial dataset shift. Our research contributes to a broader debate on the allocation of decision authority between humans and algorithms and creates a framework for targeted automation in an operational setting that balances the net benefits of automation against the long-term benefits of algorithmic learning.
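
Note: The record does not include the authors' formulation, but the newsvendor-style threshold selection described in the abstract can be illustrated with a minimal sketch. All names and values below (routing_threshold, cost_missed_improvement, cost_review, the example probabilities) are hypothetical placeholders, not taken from the paper: a recommendation is routed to the planner when the classifier's predicted probability of a planner improvement exceeds a critical-ratio threshold, and automated otherwise.

import numpy as np

def routing_threshold(cost_missed_improvement: float, cost_review: float) -> float:
    """Newsvendor-style critical ratio for targeted automation.

    cost_missed_improvement: cost of automating a case the planner would have
        improved (lost improvement plus lost training signal).
    cost_review: cost of routing a case that needed no adjustment (review effort).
    """
    return cost_review / (cost_missed_improvement + cost_review)

def route_to_planner(p_improve: np.ndarray, threshold: float) -> np.ndarray:
    """Boolean mask over predicted improvement probabilities:
    True = send to planner for review, False = automate."""
    return p_improve >= threshold

# Hypothetical costs: a missed planner improvement is five times as costly as
# an unnecessary manual review, giving a threshold of 1/6.
tau = routing_threshold(cost_missed_improvement=5.0, cost_review=1.0)
p_improve = np.array([0.02, 0.10, 0.30, 0.75])   # illustrative classifier outputs
print(route_to_planner(p_improve, tau))          # [False False  True  True]

In this sketch, raising cost_missed_improvement, for example to reflect the value of the training signal that automation would forgo, lowers the threshold and routes more recommendations to the planner, which mirrors the abstract's point about accounting for the value of learning when setting the classification threshold.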