Algorithmic Transparency with Strategic Users
Publication type:
Article
Authors:
Wang, Qiaochu; Huang, Yan; Jasin, Stefanus; Singh, Param Vir
Affiliations:
Carnegie Mellon University; University of Michigan System; University of Michigan
Journal:
MANAGEMENT SCIENCE
ISSN/ISBN:
0025-1909
DOI:
10.1287/mnsc.2022.4475
Publication date:
2023
Pages:
2297-2317
Keywords:
algorithmic transparency
game theory
machine learning
strategic classification
signaling game
Abstract:
Should firms that apply machine learning algorithms in their decision making make their algorithms transparent to the users they affect? Despite the growing calls for algorithmic transparency, most firms keep their algorithms opaque, citing potential gaming by users that may negatively affect the algorithm's predictive power. In this paper, we develop an analytical model to compare firm and user surplus with and without algorithmic transparency in the presence of strategic users and present novel insights. We identify a broad set of conditions under which making the algorithm transparent actually benefits the firm. We show that, in some cases, even the predictive power of the algorithm can increase if the firm makes the algorithm transparent. By contrast, users may not always be better off under algorithmic transparency. These results hold even when the predictive power of the opaque algorithm comes largely from correlational features and the cost for users to improve them is minimal. We show that these insights are robust under several extensions of the main model. Overall, our results show that firms should not always view manipulation by users as bad. Rather, they should use algorithmic transparency as a lever to motivate users to invest in more desirable features.
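The following is a minimal, illustrative sketch of the "transparency as a lever" intuition from the abstract, not the paper's signaling-game model. All quantities (the feature distributions, score weights, threshold tau, user benefit, and improvement cost) are hypothetical assumptions chosen only to show how a firm's payoff under an opaque rule can be compared with its payoff under a transparent rule that rewards the causal feature.

```python
# Illustrative sketch (hypothetical parameters), not the paper's model:
# compare a firm's per-user payoff when its scoring rule is opaque vs.
# transparent, with users who can strategically improve a causal feature.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Each user has a causal feature x_c (drives true quality) and a
# correlational feature x_r (predictive, but improving it adds no quality).
x_c = rng.normal(0.0, 1.0, n)
x_r = 0.8 * x_c + rng.normal(0.0, 0.6, n)

tau = 1.0       # acceptance threshold on the firm's score (assumed)
benefit = 1.0   # user's benefit from being accepted (assumed)
cost_c = 0.6    # user's cost per unit improvement of x_c (assumed)

def firm_payoff(quality, accepted):
    # Firm harvests the true quality (x_c) of accepted users, per capita.
    return np.where(accepted, quality, 0.0).mean()

# Opaque algorithm: firm scores on both features; users cannot adapt.
accepted_opaque = (x_c + x_r) >= tau
payoff_opaque = firm_payoff(x_c, accepted_opaque)

# Transparent algorithm: firm announces a rule based only on the causal
# feature; a user improves x_c just enough to pass whenever the cost of
# doing so is below the benefit of acceptance.
gap = np.maximum(tau - x_c, 0.0)
improves = (gap > 0) & (cost_c * gap <= benefit)
x_c_new = np.where(improves, tau, x_c)
accepted_transparent = x_c_new >= tau
payoff_transparent = firm_payoff(x_c_new, accepted_transparent)

print(f"Firm payoff per user, opaque:      {payoff_opaque:.3f}")
print(f"Firm payoff per user, transparent: {payoff_transparent:.3f}")
```

In this toy setting, transparency raises the firm's payoff because users respond by investing in the causal feature, which raises their true quality; under a different assumed cost structure, or if users could only game the correlational feature, the comparison could flip.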