Classification with imperfect training labels
Publication type:
Article
Authors:
Cannings, Timothy I.; Fan, Yingying; Samworth, Richard J.
Affiliations:
University of Edinburgh; University of Southern California; University of Cambridge
Journal:
Biometrika
ISSN:
0006-3444
DOI:
10.1093/biomet/asaa011
Publication date:
2020
Pages:
311–330
Keywords:
Support vector machines
Discriminant analysis
Initial samples
Noise
Consistency
Rates
Convergence
Abstract:
We study the effect of imperfect training data labels on the performance of classification methods. In a general setting, where the probability that an observation in the training dataset is mislabelled may depend on both the feature vector and the true label, we bound the excess risk of an arbitrary classifier trained with imperfect labels in terms of its excess risk for predicting a noisy label. This reveals conditions under which a classifier trained with imperfect labels remains consistent for classifying uncorrupted test data points. Furthermore, under stronger conditions, we derive detailed asymptotic properties for the popular k-nearest neighbour, support vector machine and linear discriminant analysis classifiers. One consequence of these results is that the k-nearest neighbour and support vector machine classifiers are robust to imperfect training labels, in the sense that the rate of convergence of the excess risk of these classifiers remains unchanged; in fact, our theoretical and empirical results even show that in some cases, imperfect labels may improve the performance of these methods. The linear discriminant analysis classifier is shown to be typically inconsistent in the presence of label noise unless the prior probabilities of the classes are equal. Our theoretical results are supported by a simulation study.
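To make the setting concrete, here is a minimal simulation sketch (assuming scikit-learn and NumPy; not the authors' code, nor their simulation study): each classifier is fitted once on clean training labels and once on independently flipped labels, then evaluated on uncorrupted test points. The Gaussian class-conditional distributions, the class prior of 0.75, the flip probability of 0.3 and the sample sizes are illustrative choices, not taken from the paper.

```python
# Illustrative sketch: train k-NN and LDA on clean vs. noisily flipped labels,
# then evaluate on clean test data. All distributional choices are assumptions.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

def sample(n, prior=0.75, shift=1.5):
    """Two Gaussian classes in R^2 with unequal prior probabilities."""
    y = (rng.random(n) < prior).astype(int)           # P(Y = 1) = prior
    x = rng.normal(size=(n, 2)) + shift * y[:, None]  # class-1 mean shifted
    return x, y

def flip(y, rho=0.3):
    """Flip each training label independently with probability rho (both classes)."""
    return np.where(rng.random(len(y)) < rho, 1 - y, y)

x_train, y_train = sample(5000)
x_test, y_test = sample(50000)
y_noisy = flip(y_train)

for name, clf in [("kNN (k=25)", KNeighborsClassifier(n_neighbors=25)),
                  ("LDA", LinearDiscriminantAnalysis())]:
    acc_clean = clf.fit(x_train, y_train).score(x_test, y_test)
    acc_noisy = clf.fit(x_train, y_noisy).score(x_test, y_test)
    print(f"{name}: clean-label accuracy {acc_clean:.3f}, "
          f"noisy-label accuracy {acc_noisy:.3f}")
```

With equal flip rates in the two classes, the Bayes decision boundary of the noisy problem coincides with that of the clean problem, which is the kind of regime in which the k-nearest neighbour classifier can remain consistent; the linear discriminant analysis fit, by contrast, relies on class means and prior estimates that are biased by the label noise when the class priors are unequal, in line with the abstract's statement. Effect sizes in such a toy setup depend heavily on the chosen parameters.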