People who share encounters with racism are silenced online by humans and machines, but a guideline-reframing intervention holds promise
Publication type:
Article
Authors:
Lee, Cinoo; Gligoric, Kristina; Kalluri, Pratyusha Ria; Harrington, Maggie; Durmus, Esin; Sanchez, Kiara L.; San, Nay; Tse, Danny; Zhao, Xuan; Hamedani, MarYam G.; Markus, Hazel Rose; Jurafsky, Dan; Eberhardt, Jennifer L.
Affiliations:
Stanford University; Stanford University; Stanford University; Dartmouth College; Stanford University; Stanford University
Journal:
PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA
ISSN:
0027-8424
DOI:
10.1073/pnas.2322764121
Publication date:
2024-09-17
Keywords:
self-disclosure
intergroup
DISCRIMINATION
PSYCHOLOGY
stereotype
IDENTITY
distance
manage
BIAS
Abstract:
Are members of marginalized communities silenced on social media when they share personal experiences of racism? Here, we investigate the role of algorithms, humans, and platform guidelines in suppressing disclosures of racial discrimination. In a field study of actual posts from a neighborhood-based social media platform, we find that when users talk about their experiences as targets of racism, their posts are disproportionately flagged for removal as toxic by five widely used moderation algorithms from major online platforms, including the most recent large language models. We show that human users disproportionately flag these disclosures for removal as well. Next, in a follow-up experiment, we demonstrate that merely witnessing such suppression negatively influences how Black Americans view the community and their place in it. Finally, to address these challenges to equity and inclusion in online spaces, we introduce a mitigation strategy: a guideline-reframing intervention that is effective at reducing silencing behavior across the political spectrum.