
Robust learning with implicit residual networks

Dr. Viktor Reshniak

Abstract: In this effort, we propose a new deep architecture utilizing residual blocks inspired by implicit discretization schemes. Unlike standard feed-forward networks, the outputs of the proposed implicit residual blocks are defined as the fixed points of appropriately chosen nonlinear transformations. We show that this choice leads to improved stability of both the forward and backward propagations, has a favorable impact on generalization, and allows the robustness of the network to be controlled with only a few hyperparameters. In addition, the proposed reformulation of ResNet does not introduce new parameters and can potentially reduce the number of required layers due to improved forward stability. Finally, we derive a memory-efficient training algorithm, propose a stochastic regularization technique, and provide numerical results in support of our findings.
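To make the fixed-point formulation concrete, the following is a minimal illustrative sketch (in PyTorch) of a residual block whose output y is defined implicitly as a fixed point of y = x + f(y). The choice of solver (plain Picard iteration), the small tanh MLP used for f, and the iteration and tolerance parameters are assumptions made here for illustration only; they are not taken from the talk, and the sketch simply backpropagates through the solver iterations rather than using the memory-efficient training algorithm mentioned in the abstract.

```python
import torch
import torch.nn as nn

class ImplicitResidualBlock(nn.Module):
    """Illustrative residual block whose output y satisfies y = x + f(y).

    The fixed point is found by naive Picard iteration; the network f, the
    iteration count, and the tolerance are placeholder choices, not the
    scheme from the talk.
    """

    def __init__(self, dim, hidden=64, n_iter=20, tol=1e-4):
        super().__init__()
        self.f = nn.Sequential(
            nn.Linear(dim, hidden),
            nn.Tanh(),
            nn.Linear(hidden, dim),
        )
        self.n_iter = n_iter
        self.tol = tol

    def forward(self, x):
        y = x  # initial guess: the block input
        for _ in range(self.n_iter):
            y_next = x + self.f(y)          # one fixed-point update
            if torch.norm(y_next - y) < self.tol:
                return y_next               # converged
            y = y_next
        return y

# Example usage with a batch of 4 vectors of dimension 8.
block = ImplicitResidualBlock(dim=8)
out = block(torch.randn(4, 8))
```

In this toy version the backward pass differentiates through every solver iteration; an actual implicit formulation would instead differentiate the fixed-point condition itself, which is what makes a memory-efficient training algorithm possible.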

Speaker’s Bio: Viktor Reshniak is a Staff Mathematician in the Computational and Applied Mathematics (CAM) group at ORNL. He received his Ph.D. in Computational Science from Middle Tennessee State University (MTSU) under the supervision of professors Yuri Melnikov and Abdul Khaliq. His work at MTSU was in the field of computational partial differential equations and the numerical integration of stiff stochastic systems. After graduating from MTSU in 2017, he started a postdoctoral position in the CAM group, where he worked with Clayton Webster on several projects in compressed sensing, image processing, and machine learning. His current research at ORNL is primarily focused on the construction of new robust neural network architectures for scientific applications and the design of efficient image and data processing algorithms.
