- We propose to develop a stochastic optimal control framework for quantifying and reducing uncertainties in deep learning by exploiting the connection between probabilistic network architectures and…
- We propose to develop a scalable black-box training framework for scientific machine learning (SciML) models that are non-trainable with existing automatic differentiation-based… (a hedged sketch of one generic derivative-free training approach appears after this list)
- The Sensei project is led by Wes Bethel of Lawrence Berkeley National Laboratory and involves participants from multiple laboratories and industry partners. This project takes aim at a set of…
- Programming NVM as Persistent, High-Performance Main Memory
- The FASTMath SciDAC Institute develops and deploys scalable mathematical algorithms and software tools for reliable simulation of complex physical phenomena and collaborates with application…
- The Toolkit for Adaptive Stochastic Modeling and Non-Intrusive ApproximatioN (Tasmanian) is a robust library for high-dimensional integration and interpolation, as well as parameter calibration. The code consists… (an illustrative interpolation sketch appears after this list)
- Deep learning is a sub-field of machine learning that focuses on learning features from data through multiple layers of abstraction. These features are learned with little human domain knowledge… (a minimal layered-network sketch appears after this list)
- The ADIOS (Adaptable I/O System) project.
- US Department of Energy (DOE) leadership computing facilities are in the process of deploying extreme-scale high-performance computing (HPC) systems with the long-range goal of building exascale…
- Developing predictive tools to understand the behavior of plasma-facing components in fusion reactors. CSMD contributions include HPC implementation, uncertainty quantification, and data analysis and…
- Extreme-scale, high-performance computing (HPC) significantly advances discovery of fundamental scientific processes by enabling multiscale simulations that range from the very small, on quantum and…
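The black-box SciML training entry above is truncated before it names the project's actual method, so the following is only a minimal sketch of one generic derivative-free approach: simultaneous perturbation stochastic approximation (SPSA) applied to a toy model whose quantized output defeats automatic differentiation. The names (`loss`, `spsa`) and the toy data are hypothetical illustrations, not the project's code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the model output is quantized with round(), so automatic
# differentiation would see zero gradients almost everywhere.
X = rng.normal(size=(200, 3))
w_true = np.array([1.5, -2.0, 0.5])
y = np.round(X @ w_true)

def loss(theta):
    """Mean squared error of the quantized model; only loss values are available."""
    return np.mean((np.round(X @ theta) - y) ** 2)

def spsa(theta, iters=3000, a=0.02, c=0.1):
    """Simultaneous perturbation stochastic approximation: estimate a
    descent direction from just two loss evaluations per iteration."""
    for _ in range(iters):
        delta = rng.choice([-1.0, 1.0], size=theta.shape)  # random +/-1 perturbation
        ghat = (loss(theta + c * delta) - loss(theta - c * delta)) / (2 * c) * delta
        theta = theta - a * ghat
    return theta

theta_hat = spsa(np.zeros(3))
print("estimate:", theta_hat, "loss:", loss(theta_hat))
```

SPSA needs only two function evaluations per step regardless of dimension, which is why methods of this family are attractive when a simulator or model cannot be differentiated.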
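The Tasmanian entry above does not show the library's interface, so rather than guess at its API, here is a small NumPy illustration of the underlying idea of structured high-dimensional interpolation: build a surrogate from function values on a tensor grid of Chebyshev nodes. Sparse-grid libraries such as Tasmanian generalize this to many dimensions with far fewer points; the target `f` and all sizes here are arbitrary toy choices.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def f(x, y):
    """Arbitrary smooth target on [-1, 1]^2, sampled on a structured grid."""
    return np.exp(-x**2) * np.cos(np.pi * y)

n = 12
nodes = np.cos((2 * np.arange(1, n + 1) - 1) * np.pi / (2 * n))  # Chebyshev nodes
XX, YY = np.meshgrid(nodes, nodes, indexing="ij")
F = f(XX, YY)  # function values on the n x n tensor grid

# Tensor-product Chebyshev expansion: F = T @ coef @ T.T at the nodes,
# where T[i, k] = T_k(node_i); solve for coef one dimension at a time.
T = C.chebvander(nodes, n - 1)
coef = np.linalg.solve(T, np.linalg.solve(T, F.T).T)

def surrogate(x, y):
    """Evaluate the interpolant at a single off-grid point."""
    return (C.chebvander(np.atleast_1d(x), n - 1) @ coef
            @ C.chebvander(np.atleast_1d(y), n - 1).T).item()

print(surrogate(0.3, -0.7), "vs exact", f(0.3, -0.7))
```

A full tensor grid needs n^d points in d dimensions; the sparse-grid constructions that Tasmanian implements keep comparable accuracy for smooth functions while avoiding that exponential growth.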
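To make the "multiple layers of abstraction" point in the deep learning entry concrete, here is a self-contained toy example (not drawn from any of the projects above): a two-layer network trained by gradient descent on XOR, a task no linear model on the raw inputs can fit. The hidden layer learns intermediate features on its own; the architecture and hyperparameters are arbitrary.

```python
import numpy as np

# XOR is not linearly separable: a model with no hidden layer cannot fit it,
# but one hidden layer of learned features can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)  # layer 1: inputs -> learned features
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)  # layer 2: features -> prediction

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(8000):
    h = np.tanh(X @ W1 + b1)         # forward: layer 1 re-represents the input
    p = sigmoid(h @ W2 + b2)         # forward: layer 2 predicts from features
    dp = p - y                       # cross-entropy gradient wrt pre-sigmoid output
    dW2, db2 = h.T @ dp, dp.sum(axis=0)
    dh = (dp @ W2.T) * (1.0 - h**2)  # chain rule back through tanh
    dW1, db1 = X.T @ dh, dh.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(np.round(p, 3).ravel())  # converges toward [0, 1, 1, 0]
```

No hand-crafted features are supplied: the hidden activations that make XOR separable emerge from training alone, which is the sense in which deep models learn "with little human domain knowledge."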