ROBUST SENSIBLE ADVERSARIAL LEARNING OF DEEP NEURAL NETWORKS FOR IMAGE CLASSIFICATION

Document type:
Article
Authors:
Kim, Jungeum; Wang, Xiao
Affiliations:
Purdue University System; Purdue University
Journal:
ANNALS OF APPLIED STATISTICS
ISSN:
1932-6157
DOI:
10.1214/22-AOAS1637
Publication date:
2023
Pages:
961-984
Keywords:
Abstract:
The idea of robustness is central and critical to modern statistical analysis. Many studies have shown that DNNs are vulnerable to adversarial attacks. Making imperceptible changes to an image can cause DNN models to make the wrong classification with high confidence, such as classifying a benign mole as a malignant tumor and a stop sign as a speed limit sign. A tradeoff between robustness and standard accuracy is common for DNN models. In this paper we introduce sensible adversarial learning and demonstrate the synergistic effect between pursuits of standard natural accuracy and robustness. Specifically, we define a sensible adversary, which is useful for learning a robust model while keeping high natural accuracy. We theoretically establish that the Bayes classifier is the most robust multiclass classifier under the 0-1 loss in sensible adversarial learning. We propose a novel and efficient algorithm that trains a robust model using implicit loss truncation. We apply sensible adversarial learning to large-scale image classification on a handwritten digit image dataset, MNIST, and an object recognition colored image dataset, CIFAR10. We have performed an extensive comparative study against other competitive methods. Our experiments empirically demonstrate that our method is not sensitive to its hyperparameter and does not collapse even with a small model capacity, while promoting robustness against various attacks and keeping high natural accuracy. The sensible adversarial learning software is available as a Python package at https://github.com/JungeumKim/SENSE.
Source URL:
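The abstract's central idea, attacking the model only where doing so does not destroy natural accuracy, can be illustrated with a toy sketch. This is not the paper's SENSE algorithm: the logistic-regression model, the FGSM-style one-step attack, and the rule of perturbing only examples the model already classifies correctly on clean data are all simplifying assumptions standing in for the paper's sensible adversary and implicit loss truncation.

```python
import numpy as np

# Toy two-class data: two well-separated Gaussian clusters in 2-D.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 0.5, (50, 2)), rng.normal(1.0, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(2)
b = 0.0
eps, lr = 0.3, 0.1  # attack budget and learning rate (illustrative values)

for epoch in range(200):
    margin = X @ w + b
    p = sigmoid(margin)
    # Gradient of the logistic loss w.r.t. the *input*: (p - y) * w.
    grad_x = (p - y)[:, None] * w[None, :]
    # FGSM-style one-step adversary within an L-infinity ball of radius eps.
    X_adv = X + eps * np.sign(grad_x)
    # "Sensible" heuristic (assumption): attack only examples the model
    # currently gets right on clean data, so hopeless examples do not
    # drag the decision boundary around (a crude stand-in for truncation).
    correct = (margin > 0) == (y == 1)
    X_train = np.where(correct[:, None], X_adv, X)
    # One gradient step on the (partially adversarial) batch.
    p_t = sigmoid(X_train @ w + b)
    w -= lr * (X_train.T @ (p_t - y)) / len(y)
    b -= lr * np.mean(p_t - y)

# Natural (clean) accuracy of the adversarially trained model.
acc = np.mean(((X @ w + b) > 0) == (y == 1))
```

On this separable toy problem the clean accuracy stays high even though most training inputs are perturbed, which is the qualitative behavior the abstract claims for sensible adversarial learning at scale.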