Debiasing Watermarks for Large Language Models via Maximal Coupling

Publication Type:
Article; Early Access
Authors:
Xie, Yangxinyu; Li, Xiang; Mallick, Tanwi; Su, Weijie; Zhang, Ruixun
Affiliations:
University of Pennsylvania; United States Department of Energy (DOE); Argonne National Laboratory; Peking University; University of Oxford
Journal:
JOURNAL OF THE AMERICAN STATISTICAL ASSOCIATION
ISSN/ISBN:
0162-1459
DOI:
10.1080/01621459.2025.2520455
Publication Date:
2025
Keywords:
higher criticism
Abstract:
Watermarking language models is essential for distinguishing between human- and machine-generated text and thus for maintaining the integrity and trustworthiness of digital communication. We present a novel green/red list watermarking approach that partitions the token set into green and red lists, subtly increasing the generation probability for green tokens. To correct the resulting bias in the token distribution, our method employs maximal coupling, using a uniform coin flip to decide whether to apply bias correction, with the result embedded as a pseudorandom watermark signal. Theoretical analysis confirms that the approach is unbiased and has robust detection capabilities. Experimental results show that it outperforms prior techniques by preserving text quality while maintaining high detectability, and that it is resilient to targeted modifications aimed at improving text quality. This research provides a promising watermarking solution for language models, balancing effective detection with minimal impact on text quality. Supplementary materials for this article are available online, including a standardized description of the materials available for reproducing the work.
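The abstract describes the generation step only at a high level. Below is a minimal, illustrative sketch of one way a green/red-list sampler can keep the token marginal exactly equal to the original next-token distribution while embedding a key-seeded pseudorandom coin as the watermark signal; the function names, hashing scheme, and the list-fraction parameter gamma are assumptions for illustration, and the authors' actual maximal-coupling construction and their detector may differ.

```python
# Minimal sketch (not the authors' code): sample a token whose marginal law is
# exactly the original distribution p, while a pseudorandom coin decides whether
# the token is drawn from the green or the red part of the vocabulary.
import hashlib

import numpy as np


def prf_uniform(context_tokens, key, tag):
    """Pseudorandom uniform in [0, 1) derived from the secret key and recent context."""
    msg = f"{key}|{tag}|" + ",".join(map(str, context_tokens))
    digest = hashlib.sha256(msg.encode()).hexdigest()
    return int(digest[:16], 16) / 16**16


def green_mask(context_tokens, key, vocab_size, gamma=0.5):
    """Pseudorandomly mark a gamma-fraction of the vocabulary as 'green'."""
    seed = int(hashlib.sha256(f"{key}|mask|{context_tokens}".encode()).hexdigest()[:8], 16)
    rng = np.random.default_rng(seed)
    mask = np.zeros(vocab_size, dtype=bool)
    mask[rng.permutation(vocab_size)[: int(gamma * vocab_size)]] = True
    return mask


def sample_next_token(p, context_tokens, key):
    """Sample one token with marginal distribution exactly p (unbiased).

    Decompose p = p_G * p(.|green) + (1 - p_G) * p(.|red) and let a pseudorandom
    coin zeta pick the branch: the token is green iff zeta < p_G, which is the
    detectable signal, while the mixture reproduces p exactly.
    """
    p = np.asarray(p, dtype=float)
    mask = green_mask(context_tokens, key, len(p))
    p_green = p[mask].sum()
    zeta = prf_uniform(context_tokens, key, tag="coin")  # shared with the detector
    if zeta < p_green:                                   # green branch
        cond = np.where(mask, p, 0.0)
    else:                                                # red branch
        cond = np.where(mask, 0.0, p)
    cond = cond / cond.sum()                             # renormalize the chosen branch
    return int(np.random.default_rng().choice(len(p), p=cond))


# Toy usage: a 10-token vocabulary with an arbitrary next-token distribution.
p = [0.20, 0.15, 0.12, 0.10, 0.10, 0.08, 0.08, 0.07, 0.05, 0.05]
print(sample_next_token(p, context_tokens=[3, 1, 4], key="secret-key"))
```

A detector holding the same key could recompute the green list and the coin at each position and test whether the observed token colors agree with the coins more often than chance; a test in the spirit of the higher-criticism statistic named in the keywords would be one natural choice, though the paper's actual detection procedure should be consulted for details.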