LEARN THEN TEST: CALIBRATING PREDICTIVE ALGORITHMS TO ACHIEVE RISK CONTROL
Publication type:
Article
Authors:
Angelopoulos, Anastasios N.; Bates, Stephen; Candes, Emmanuel J.; Jordan, Michael I.; Lei, Lihua
Affiliations:
University of California System; University of California, Berkeley; Massachusetts Institute of Technology (MIT); Stanford University; Stanford University
Journal:
ANNALS OF APPLIED STATISTICS
ISSN:
1932-6157
DOI:
10.1214/24-AOAS1998
Publication date:
2025
Pages:
1641-1662
Keywords:
sums
Abstract:
We introduce a framework for calibrating machine learning models to satisfy finite-sample statistical guarantees. Our calibration algorithms work with any model and (unknown) data-generating distribution and do not require retraining. The algorithms address, among other examples, false discovery rate control in multilabel classification, intersection-over-union control in instance segmentation, and simultaneous control of the type-1 outlier error and confidence set coverage in classification or regression. Our main insight is to reframe risk control as multiple hypothesis testing, enabling different mathematical arguments. We demonstrate our algorithms with detailed worked examples in computer vision and tabular medical data. The computer vision experiments demonstrate the utility of our approach in calibrating state-of-the-art predictive architectures that have been deployed widely, such as the detectron2 object detection system.
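The abstract's key idea, calibration recast as multiple hypothesis testing, can be sketched in a few lines. The following is a minimal illustration (not the paper's implementation): for each candidate threshold λ we test the null hypothesis "risk(λ) > α" using a Hoeffding-bound p-value on losses bounded in [0, 1], and use fixed-sequence testing (which controls the family-wise error rate under the assumed ordering) to return the set of thresholds certified as risk-controlling. The function names, the grid, and the monotone-risk assumption are illustrative choices, not taken from the paper.

```python
import numpy as np

def hoeffding_pvalue(risk_hat, alpha, n):
    # p-value for H0: risk(lambda) > alpha, given an empirical risk
    # estimate from n i.i.d. losses bounded in [0, 1] (Hoeffding's inequality)
    return float(np.exp(-2.0 * n * max(0.0, alpha - risk_hat) ** 2))

def learn_then_test(losses, lambdas, alpha=0.1, delta=0.1):
    """Illustrative calibration sketch.

    losses:  (n, m) array; losses[i, j] is the loss of calibration
             example i under threshold lambdas[j].
    lambdas: candidate thresholds, ordered so that risk is (assumed)
             non-decreasing along the sequence -- this monotonicity is
             what makes fixed-sequence testing valid here.
    Returns the thresholds whose nulls are rejected, i.e. those
    certified to have risk <= alpha with probability >= 1 - delta.
    """
    n = losses.shape[0]
    certified = []
    for j, lam in enumerate(lambdas):
        p = hoeffding_pvalue(losses[:, j].mean(), alpha, n)
        if p <= delta:
            certified.append(lam)
        else:
            break  # fixed sequence: stop at the first non-rejection
    return certified
```

For example, with 1,000 calibration points and empirical risks of 0.02, 0.05, and 0.50 at three thresholds, the first two nulls are rejected at α = δ = 0.1 and the third is not, so only the first two thresholds are certified.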
Source URL: