Relationship Between Trust in the Artificial Intelligence Creator and Trust in Artificial Intelligence Systems: The Crucial Role of Artificial Intelligence Alignment and Steerability

Publication Type:
Article
Authors:
Saffarizadeh, Kambiz; Keil, Mark; Maruping, Likoebe
Affiliations:
University of Texas System; University of Texas Arlington; University System of Georgia; Georgia State University
Journal:
JOURNAL OF MANAGEMENT INFORMATION SYSTEMS
ISSN/ISBN:
0742-1222
DOI:
10.1080/07421222.2024.2376382
Publication Date:
2024
Pages:
645-681
Keywords:
recommendation agents; moderating role; AI; distrust; machines; context; people; satisfaction; perspective; performance
Abstract:
This paper offers a novel perspective on trust in artificial intelligence (AI) systems, focusing on the transfer of user trust in AI creators to trust in AI systems. Using the agentic information systems (IS) framework, we investigate the role of AI alignment and steerability in trust transference. Through four randomized experiments, we probe three key alignment-related attributes of AI systems: creator-based steerability, user-based steerability, and autonomy. Results indicate that creator-based steerability amplifies trust transference from AI creator to AI system, while user-based steerability and autonomy diminish it. Our findings suggest that AI alignment efforts should consider which entity's goals and values the AI should be aligned with, and they highlight the need for research to theorize from a triadic view encompassing the user, the AI system, and its creator. Given the diversity in individual goals and values, we recommend that developers move beyond the prevailing one-size-fits-all alignment strategy. Our findings contribute to trust transference theory by highlighting the boundary conditions under which trust transference holds or breaks down in the emerging human-AI environment.