Hamiltonian Deep Neural Networks Guaranteeing Nonvanishing Gradients by Design

Publication Type:
Article
Authors:
Galimberti, Clara Lucia; Furieri, Luca; Xu, Liang; Ferrari-Trecate, Giancarlo
Affiliations:
Swiss Federal Institutes of Technology Domain; Ecole Polytechnique Federale de Lausanne; Shanghai University
Journal:
IEEE TRANSACTIONS ON AUTOMATIC CONTROL
ISSN/ISBN:
0018-9286
DOI:
10.1109/TAC.2023.3239430
Publication Date:
2023
Pages:
3155-3162
Keywords:
Computer architecture; Neural networks; Mathematical models; Training; Iron; Deep learning; Benchmark testing; Deep neural networks (DNNs); Hamiltonian systems; Ordinary differential equations (ODE); Discretization
Abstract:
Training deep neural networks (DNNs) can be difficult due to vanishing and exploding gradients that arise during weight optimization through backpropagation. To address this problem, we propose a general class of Hamiltonian DNNs (H-DNNs) that stem from the discretization of continuous-time Hamiltonian systems and include several existing DNN architectures based on ordinary differential equations. Our main result is that a broad set of H-DNNs ensures nonvanishing gradients by design for an arbitrary network depth. This is obtained by proving that, under a semi-implicit Euler discretization scheme, the backward sensitivity matrices involved in gradient computations are symplectic. We also provide an upper bound on the magnitude of the sensitivity matrices and show that exploding gradients can be controlled through regularization. The good performance of H-DNNs is demonstrated on benchmark classification problems, including image classification with the MNIST dataset.
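To make the mechanism described in the abstract concrete, the sketch below shows a forward pass in which each layer is one semi-implicit Euler step of a Hamiltonian system with state split into (p, q). It is a minimal illustration only: the tanh activation, the per-layer weights Ks and biases bs, and the function name hdnn_forward are assumptions for this example, not the paper's exact H-DNN parameterization.

```python
# Illustrative sketch of a Hamiltonian-network-style forward pass.
# Each layer performs one semi-implicit (symplectic) Euler step: p is updated
# with the current q, then q is updated with the *new* p. The abstract credits
# this discretization with making the backward sensitivity matrices symplectic.
import numpy as np

def sigma(x):
    # Activation function (its antiderivative defines the layer Hamiltonian).
    return np.tanh(x)

def hdnn_forward(p, q, Ks, bs, h=0.1):
    """Propagate the split state (p, q) through len(Ks) layers of step size h."""
    for K, b in zip(Ks, bs):
        p = p - h * K.T @ sigma(K @ q + b)   # half of the step uses the old q
        q = q + h * K.T @ sigma(K @ p + b)   # the other half uses the updated p
    return p, q

# Hypothetical toy usage: a 4-dimensional input split into p, q of size 2, 8 layers.
rng = np.random.default_rng(0)
p0, q0 = rng.standard_normal(2), rng.standard_normal(2)
Ks = [0.5 * rng.standard_normal((2, 2)) for _ in range(8)]
bs = [np.zeros(2) for _ in range(8)]
pT, qT = hdnn_forward(p0, q0, Ks, bs)
print(pT, qT)
```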