Eliciting Human Judgment for Prediction Algorithms
Publication type:
Article
Authors:
Ibrahim, Rouba; Kim, Song-Hee; Tong, Jordan
Affiliations:
University of London; University College London; University of Southern California; University of Wisconsin System; University of Wisconsin-Madison
Journal:
MANAGEMENT SCIENCE
ISSN:
0025-1909
DOI:
10.1287/mnsc.2020.3856
Publication date:
2021
Pages:
2314-2325
Keywords:
laboratory experiments
behavioral operations
random error
elicitation
forecasting
prediction
discretion
expert input
private information
judgment
aggregation
Abstract:
Even when human point forecasts are less accurate than data-based algorithmic predictions, they can still boost performance when used as algorithm inputs. Assuming human judgment is used indirectly in this manner, we propose changing the elicitation question from the traditional direct forecast (DF) to what we call the private information adjustment (PIA): how much the human thinks the algorithm should adjust its forecast to account for information the human has that the algorithm does not use. Using stylized models with and without random error, we prove theoretically that human random error makes eliciting the PIA yield more accurate predictions than eliciting the DF. However, this DF-PIA gap does not exist for perfectly consistent forecasters. The DF-PIA gap is increasing in the random error that people make while incorporating public information (data that the algorithm uses) but decreasing in the random error that people make while incorporating private information (data that only the human can use). In controlled experiments with students and Amazon Mechanical Turk workers, we find support for these hypotheses.
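To illustrate the intuition behind the DF-PIA gap, the following is a minimal simulation sketch of a stylized additive model. It is a hypothetical construction for illustration, not the paper's exact formulation: the outcome is the sum of a public signal, which the algorithm also observes, and a private signal, which only the human observes, and the human adds independent random error when processing each signal. Eliciting the DF makes the human re-process the public signal and thus reintroduces that error, whereas eliciting the PIA keeps the algorithm's public-data forecast intact. All variable names and noise levels (sigma_pub, sigma_priv) are assumptions of this sketch.

import numpy as np

rng = np.random.default_rng(seed=0)
n = 100_000

# Hypothetical stylized model: outcome = public signal + private signal.
public = rng.normal(0.0, 1.0, n)    # data the algorithm also uses
private = rng.normal(0.0, 1.0, n)   # data only the human can use
outcome = public + private

algo = public  # the algorithm's forecast, built from public data alone

sigma_pub = 0.8   # human random error when incorporating public information
sigma_priv = 0.3  # human random error when incorporating private information
err_pub = rng.normal(0.0, sigma_pub, n)
err_priv = rng.normal(0.0, sigma_priv, n)

# Direct forecast (DF): the human processes both signals, adding error to each.
df = (public + err_pub) + (private + err_priv)

# Private information adjustment (PIA): the human reports only the adjustment
# for private information, which is added to the algorithm's forecast.
pia = algo + (private + err_priv)

mse = lambda f: np.mean((f - outcome) ** 2)
print(f"MSE, eliciting DF : {mse(df):.3f}")   # about sigma_pub**2 + sigma_priv**2
print(f"MSE, eliciting PIA: {mse(pia):.3f}")  # about sigma_priv**2

In this toy model the DF-PIA gap in mean squared error equals sigma_pub**2, so it grows with the public-information error and vanishes for a perfectly consistent forecaster (both sigmas zero), matching the abstract's first two claims; reproducing the comparative static in the private-information error requires the richer models developed in the paper.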