A stochastic optimal control framework for quantifying and reducing uncertainties in deep learning

Project Status: Active

We propose to develop a stochastic optimal control framework for quantifying and reducing uncertainties in deep learning by exploiting the connection between probabilistic network architectures and the optimal control of stochastic dynamical systems. Although neural networks have achieved impressive results in many machine learning tasks, current network models often produce unrealistic decisions because existing uncertainty quantification (UQ) methods are computationally intractable for very deep networks. As UQ becomes increasingly important to the safe use of deep learning in scientific decision making, the computing capability developed in this effort will significantly advance the reliability of machine-learning-assisted scientific predictions for DOE applications.
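
To make this connection concrete, here is the standard viewpoint from the broader literature (an illustrative sketch only, not necessarily the exact formulation developed in this project): a residual layer with injected Gaussian noise is one Euler–Maruyama step of a stochastic differential equation (SDE), so training the network weights becomes a stochastic optimal control problem. A noisy residual layer reads

  x_{k+1} = x_k + h f(x_k, θ_k) + √h σ(x_k, θ_k) ξ_k,   ξ_k ~ N(0, I),

which, as the step size h → 0, approximates the controlled SDE

  dX_t = f(X_t, θ_t) dt + σ(X_t, θ_t) dW_t,

so that choosing the weights θ to minimize the expected loss,

  min over θ of E[ Φ(X_T) ],

where Φ is the loss evaluated at the network output X_T, is exactly a stochastic optimal control problem for the state X_t.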

Principal Investigator: Guannan Zhang (CSMD, ORNL)

Senior Investigators: Jiaxin Zhang (CSMD, ORNL), Hoang Tran (CSMD, ORNL), Miroslav Stoyanov (CSMD, ORNL), Sirui Bi (CSED, ORNL), Alan Tennant (MSTD, ORNL), Pei Zhang (CSED, ORNL)

Funding Period: Sept. 2019 -- Aug. 2021

Publications:

  • Y. Teng, Z. Wang, L. Ju, A. Gruber, and G. Zhang, Level set learning with pseudo-reversible neural networks for nonlinear dimension reduction in function approximation, SIAM Journal on Scientific Computing, submitted.
  • P. Zhang, S. Liu, D. Lu, R. Sankaran, and G. Zhang, A prediction interval method for uncertainty quantification of regression models, Proceedings of the ICLR Workshop on Deep Learning for Simulation, 2021.
  • S. Bi, J. Zhang, and G. Zhang, Towards efficient uncertainty estimation in deep learning for robust energy prediction in materials chemistry, Proceedings of the ICLR Workshop on Deep Learning for Simulation, 2021.
  • J. Zhang, S. Bi, and G. Zhang, A Scalable Gradient Free Method for Bayesian Experimental Design with Implicit Models, Proceedings of the 24th International Conference on Artificial Intelligence and Statistics (AISTATS), PMLR 130:3745-3753, 2021.
  • J. Zhang, S. Bi, and G. Zhang, A hybrid gradient method to designing Bayesian experiments for implicit models, Proceedings of the NeurIPS Workshop on Machine Learning and the Physical Sciences, 2020.
  • J. Zhang, S. Bi, and G. Zhang, Scalable deep-learning-accelerated topology optimization for additively manufactured materials, Proceedings of the NeurIPS Workshop on Machine Learning for Engineering Modeling, Simulation and Design, 2020.
  • J. Zhang, X. Liu, S. Bi, J. Yi, G. Zhang, and M. Eisenbach, Robust data-driven approach for predicting the configurational energy of high entropy alloys, Materials & Design, 185, 108247, 2020.
  • G. Zhang, J. Zhang, and J. Hinkle, Learning nonlinear level sets for dimensionality reduction in function approximation, Advances in Neural Information Processing Systems (NeurIPS), 32, pp. 13199-13208, 2019.
  • X. Xie, G. Zhang, and C. Webster, Non-Intrusive Inference Reduced Order Model for Fluids Using Deep Multistep Neural Network, Mathematics, 7(8), 757, 2019.

Activities:

  • In May 2021, Pei Zhang gave a presentation on our work "A prediction interval method for uncertainty quantification of regression models" at the ICLR Workshop on Deep Learning for Simulation.
  • In April 2021, Jiaxin Zhang gave a presentation on our work "A Scalable Gradient Free Method for Bayesian Experimental Design with Implicit Models" at the 24th International Conference on Artificial Intelligence and Statistics (AISTATS 2021).
  • In December 2020, Sirui Bi gave a presentation on our work "Accelerating Topology Optimization using Scalable Machine Learning" at the Conference on Machine Learning in Science and Engineering (MLSE 2020).
  • In December 2020, Jiaxin Zhang gave a presentation on our work "A hybrid gradient method to designing Bayesian experiments for implicit models" at the NeurIPS 2020 Workshop on Machine Learning and the Physical Sciences. [Download our poster]
  • In December 2020, Sirui Bi gave a presentation on our work "Scalable deep-learning-accelerated topology optimization for additively manufactured materials" at the NeurIPS 2020 Workshop on Machine Learning for Engineering Modeling, Simulation and Design. [Download our poster] [A short video presentation]
  • In December 2019, Guannan Zhang and Jiaxin Zhang attended the 2019 Conference on Neural Information Processing Systems (NeurIPS 2019) to present our paper "Learning nonlinear level sets for dimensionality reduction in function approximation". [Download our poster]
  • In July 2019, Sirui Bi presented our work on "Scalable deep-learning-accelerated topology optimization for additively manufactured materials" at the ORNL AI Expo.

Last Updated: November 28, 2022 - 12:57 pm