Prevalence of simplex compression in adversarial deep neural networks
Publication Type:
Article
Authors:
Cao, Yang; Chen, Yanbo; Liu, Weiwei
Affiliation:
Wuhan University
Journal:
PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA
ISSN:
0027-8424
DOI:
10.1073/pnas.2421593122
Publication Date:
2025-04-25
Keywords:
Abstract:
Neural collapse (NC) reveals that the last layer of a network captures data representations in which outputs for examples within the same class become similar, while outputs for examples from different classes form a simplex equiangular tight frame (ETF) structure. This phenomenon has garnered significant attention due to its implications for the intrinsic properties of neural networks. Interestingly, we observe a simplex compression phenomenon in NC: the geometric size of the simplex ETF shrinks under adversarial training, and the degree of compression increases as the perturbation radius grows. We provide empirical evidence for the existence of simplex compression across a wide range of models and datasets. Furthermore, we establish a rigorous theoretical framework that explains our experimental observations, offering insights into NC under adversarial conditions.
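The simplex ETF referenced in the abstract has a standard closed form: for K classes, the class-mean directions can be written as the columns of M = sqrt(K/(K-1)) (I_K - (1/K) 1 1^T), unit-norm vectors whose pairwise cosine similarity is exactly -1/(K-1). The sketch below (not from the paper; K = 4 is an arbitrary choice) constructs this frame and verifies its equiangular geometry numerically:

```python
import numpy as np

K = 4  # number of classes (illustrative choice)

# Standard simplex-ETF construction: scaled centering projector.
M = np.sqrt(K / (K - 1)) * (np.eye(K) - np.ones((K, K)) / K)

# Gram matrix of the K frame vectors (columns of M).
G = M.T @ M

# Each vector has unit norm...
print(np.allclose(np.diag(G), 1.0))            # True
# ...and every pair meets at the same maximal angle, cos = -1/(K-1).
off_diag = G[~np.eye(K, dtype=bool)]
print(np.allclose(off_diag, -1.0 / (K - 1)))   # True
```

Under the simplex compression the paper reports, adversarial training would shrink the overall scale of this configuration (e.g., a global factor multiplying M) while the equiangular structure itself persists.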