Project

A stochastic optimal control framework for quantifying and reducing uncertainties in deep learning

Project Status: Active

We propose to develop a stochastic optimal control framework for quantifying and reducing uncertainties in deep learning by exploiting the connection between probabilistic network architectures and the optimal control of stochastic dynamical systems. Although neural networks have achieved impressive results in many machine learning tasks, current network models often produce unrealistic decisions because existing uncertainty quantification (UQ) methods are computationally intractable for measuring the uncertainties of very deep networks. Because UQ is increasingly important for the safe use of deep learning in decision making for scientific applications, the computing capability developed in this effort will significantly advance the reliability of machine-learning-assisted scientific predictions for DOE applications.
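As a rough illustration of the connection mentioned above (a minimal sketch of ours, not the project's code, with all function and parameter names below being hypothetical): a deep residual network can be read as an Euler-Maruyama discretization of a stochastic dynamical system, where each layer is one time step and the injected layer noise induces a predictive distribution that can be sampled for uncertainty estimates.

```python
import numpy as np

# Toy sketch: view a residual network as an Euler-Maruyama discretization of
#   dx_t = f(x_t, theta_t) dt + sigma dW_t,
# so repeated stochastic forward passes give a Monte Carlo estimate of
# predictive uncertainty. This is illustrative only, not the project's method.

rng = np.random.default_rng(0)

def stochastic_resnet(x, weights, dt=0.1, sigma=0.05):
    """Propagate x through stochastic residual layers (one layer = one SDE step)."""
    for W in weights:
        drift = np.tanh(W @ x)                                  # f(x, theta): deterministic layer update
        noise = sigma * np.sqrt(dt) * rng.standard_normal(x.shape)  # sigma * dW
        x = x + dt * drift + noise                              # Euler-Maruyama step
    return x

# Hypothetical 4-layer network acting on a 3-dimensional state.
dim, depth = 3, 4
weights = [rng.standard_normal((dim, dim)) / np.sqrt(dim) for _ in range(depth)]
x0 = np.ones(dim)

# Sampling the layer noise yields a crude predictive mean and uncertainty.
samples = np.stack([stochastic_resnet(x0, weights) for _ in range(1000)])
print("predictive mean:", samples.mean(axis=0))
print("predictive std: ", samples.std(axis=0))
```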

Research Team: Guannan Zhang (CSMD, ORNL, PI), Jiaxin Zhang (CSMD, ORNL), Hoang Tran (CSMD, ORNL), Miroslav Stoyanov (CSMD, ORNL), Sirui Bi (CSED, ORNL), Alan Tennant (MSTD, ORNL)

Funding Period: Sept. 2019 -- Aug. 2021

Publications:

  • Jiaxin Zhang, Sirui Bi, and Guannan Zhang, A stochastic approximate gradient ascent method for Bayesian experimental design with implicit models, submitted.
  • Jiaxin Zhang, Sirui Bi, and Guannan Zhang, A hybrid gradient method to designing Bayesian experiments for implicit models, NeurIPS Workshop on Machine Learning and the Physical Sciences, 2020.
  • Jiaxin Zhang, Sirui Bi, and Guannan Zhang, Scalable deep-learning-accelerated topology optimization for additively manufactured materials, NeurIPS Workshop on Machine Learning for Engineering Modeling, Simulation and Design, 2020. 
  • J. Zhang, X. Liu, S. Bi, J. Yin, G. Zhang, and M. Eisenbach, Robust data-driven approach for predicting the configurational energy of high entropy alloys, Materials & Design, 185, 108247, 2020.
  • Guannan Zhang, Jiaxin Zhang and Jacob Hinkle, Learning nonlinear level sets for dimensionality reduction in function approximation, Advances in Neural Information Processing Systems (NeurIPS), 32, pp. 13199-13208, 2019.
  • Xuping Xie, Guannan Zhang and Clayton Webster, Non-Intrusive Inference Reduced Order Model for Fluids Using Deep Multistep Neural Network, Mathematics, 7(8), pp. 757, 2019.

Activities:

  • Sirui Bi presented our work on "Scalable deep-learning-accelerated topology optimization for additively manufactured materials" at the ORNL AI Expo in July 2019.
  • Guannan Zhang and Jiaxin Zhang attended the 2019 Conference on Neural Information Processing Systems to present our paper "Learning nonlinear level sets for dimensionality reduction in function approximation".

Last Updated: November 6, 2020 - 11:17 am