Explaining Why the Computer Says No: Algorithmic Transparency Affects the Perceived Trustworthiness of Automated Decision-Making
Type:
Article
Author:
Grimmelikhuijsen, Stephan
Affiliation:
Utrecht University
Journal:
PUBLIC ADMINISTRATION REVIEW
ISSN:
0033-3352
DOI:
10.1111/puar.13483
Publication Date:
2023
Pages:
241-262
Keywords:
artificial-intelligence
procedural justice
BLACK-BOX
GOVERNMENT
DISCRETION
big
Abstract:
Algorithms based on Artificial Intelligence technologies are slowly transforming street-level bureaucracies, yet a lack of algorithmic transparency may jeopardize citizen trust. Based on procedural fairness theory, this article hypothesizes that two core elements of algorithmic transparency (accessibility and explainability) are crucial to strengthening the perceived trustworthiness of street-level decision-making. This is tested in one experimental scenario with low discretion (a denied visa application) and one scenario with high discretion (a suspicion of welfare fraud). The results show that: (1) explainability has a more pronounced effect on trust than the accessibility of the algorithm; (2) the effect of algorithmic transparency pertains not only to trust in the algorithm itself but also, partially, to trust in the human decision-maker; (3) the effects of algorithmic transparency are not robust across decision contexts. These findings imply that transparency-as-accessibility is insufficient to foster citizen trust. Algorithmic explainability must be addressed to maintain and foster the trustworthiness of algorithmic decision-making.