**Project Status:** Active

We propose to develop a stochastic optimal control framework for quantifying and reducing uncertainty in deep learning by exploiting the connection between probabilistic network architectures and optimal control of stochastic dynamical systems. Although neural networks achieve impressive results in many machine learning tasks, current network models often produce unreliable decisions because existing uncertainty quantification (UQ) methods are computationally intractable for very deep networks. As UQ is increasingly important to the safe use of deep learning in decision making for scientific applications, the computing capability developed in this effort will significantly advance the reliability of machine-learning-assisted scientific predictions for DOE applications.

**Principal Investigator**: Guannan Zhang (CSMD, ORNL)

**Senior Investigators**: Jiaxin Zhang (CSMD, ORNL), Hoang Tran (CSMD, ORNL), Miroslav Stoyanov (CSMD, ORNL), Sirui Bi (CSED, ORNL), Alan Tennant (MSTD, ORNL), Pei Zhang (CSED, ORNL)

**Funding Period**: Sept. 2019 -- Aug. 2021

**Publications**:

- Jiaxin Zhang, Sirui Bi, and Guannan Zhang, A stochastic approximate gradient ascent method for Bayesian experimental design with implicit models, *International Conference on Artificial Intelligence and Statistics (AISTATS)*, 2021.
- Jiaxin Zhang, Sirui Bi, and Guannan Zhang, A hybrid gradient method to designing Bayesian experiments for implicit models, *NeurIPS Workshop on Machine Learning and the Physical Sciences*, 2020.
- Jiaxin Zhang, Sirui Bi, and Guannan Zhang, Scalable deep-learning-accelerated topology optimization for additively manufactured materials, *NeurIPS Workshop on Machine Learning for Engineering Modeling, Simulation and Design*, 2020.
- J. Zhang, X. Liu, S. Bi, J. Yi, G. Zhang, and M. Eisenbach, Robust data-driven approach for predicting the configurational energy of high entropy alloys, *Materials & Design*, 185 (5), pp. 108247, 2020.
- Guannan Zhang, Jiaxin Zhang, and Jacob Hinkle, Learning nonlinear level sets for dimensionality reduction in function approximation, *Advances in Neural Information Processing Systems (NeurIPS)*, 32, pp. 13199-13208, 2019.
- Xuping Xie, Guannan Zhang, and Clayton Webster, Non-intrusive inference reduced order model for fluids using deep multistep neural network, *Mathematics*, 7(8), pp. 757, 2019.

**Activities**:

- In December 2020, Sirui Bi gave a presentation on our work "Accelerating Topology Optimization using Scalable Machine Learning" at **The Conference on Machine Learning in Science and Engineering (MLSE 2020)**.
- In December 2020, Jiaxin Zhang gave a presentation on our work "A hybrid gradient method to designing Bayesian experiments for implicit models" at the **NeurIPS 2020 Workshop on Machine Learning and the Physical Sciences**.
- In December 2020, Sirui Bi gave a presentation on our work "Scalable deep-learning-accelerated topology optimization for additively manufactured materials" at the **NeurIPS 2020 Workshop on Machine Learning for Engineering Modeling, Simulation and Design**.
- In December 2019, Guannan Zhang and Jiaxin Zhang attended the **2019 Conference on Neural Information Processing Systems (NeurIPS 2019)** to present our paper "Learning nonlinear level sets for dimensionality reduction in function approximation".
- In July 2019, Sirui Bi presented our work on "Scalable deep-learning-accelerated topology optimization for additively manufactured materials" at the ORNL AI Expo.

Last Updated: March 3, 2021 - 2:40 pm