
Physics-Informed Neural Networks and Neural Operator Networks: Methods and Applications

Dr. Ameya Jagtap

Abstract: Traditional methods in scientific computation have made significant strides, yet they grapple with stringent requirements. These include precise knowledge of the underlying physical laws, an accurate specification of boundary and/or initial conditions, and time-consuming workflows such as mesh generation and lengthy simulations. Furthermore, these approaches struggle with high-dimensional problems governed by parameterized partial differential equations (PDEs), which can make them impractical in such settings.

Physics-informed machine learning (PIML) has emerged as a promising way to address these challenges. In this presentation, we delve into a specific PIML approach known as physics-informed neural networks (PINNs). We provide an overview of the current capabilities and limitations of PINNs, highlighting where they are effective relative to traditional methods across diverse applications. Additionally, we explore extensions of the standard PINN method, such as conservative PINNs and extended PINNs, designed for handling big data and/or large models. The discussion also covers adaptive activation functions that can expedite the convergence of deep physics-informed neural networks.
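To make the PINN idea concrete, the sketch below is a minimal, illustrative example rather than material from the talk: it assumes PyTorch as the framework and a toy problem, u''(x) = -sin(x) on [0, π] with u(0) = u(π) = 0 (exact solution u(x) = sin(x)), and it includes a single trainable activation slope in the spirit of the adaptive activation functions mentioned above.

```python
# Minimal PINN sketch (assumed framework: PyTorch; toy problem chosen
# for illustration). The loss penalizes the PDE residual at interior
# collocation points plus the boundary conditions.
import torch

torch.manual_seed(0)

class PINN(torch.nn.Module):
    def __init__(self, width=32):
        super().__init__()
        self.l1 = torch.nn.Linear(1, width)
        self.l2 = torch.nn.Linear(width, width)
        self.l3 = torch.nn.Linear(width, 1)
        # Trainable slope `a` with a fixed scaling factor `n`, in the
        # spirit of adaptive activation functions (illustrative form).
        self.a = torch.nn.Parameter(torch.tensor(0.1))
        self.n = 10.0

    def forward(self, x):
        act = lambda z: torch.tanh(self.n * self.a * z)
        return self.l3(act(self.l2(act(self.l1(x)))))

model = PINN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Interior collocation points (need gradients for the PDE residual)
x_f = torch.linspace(0.0, torch.pi, 100).reshape(-1, 1).requires_grad_(True)
x_b = torch.tensor([[0.0], [torch.pi]])  # boundary points

for step in range(5000):
    opt.zero_grad()
    u = model(x_f)
    u_x = torch.autograd.grad(u, x_f, torch.ones_like(u), create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x_f, torch.ones_like(u_x), create_graph=True)[0]
    # Residual of u'' + sin(x) = 0, plus boundary penalty u(0) = u(pi) = 0
    loss = ((u_xx + torch.sin(x_f)) ** 2).mean() + (model(x_b) ** 2).mean()
    loss.backward()
    opt.step()
```

Domain-decomposition variants such as conservative and extended PINNs train sketches like this one on subdomains and stitch them together through interface conditions, which is what enables parallelization over large domains and models.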

We also discuss recent advances in deep operator networks (a type of neural operator), which learn mappings between infinite-dimensional function spaces. Traditional PDE solvers can be both time-consuming and computationally intensive, particularly for complex systems. Neural operators, by contrast, have been shown to outperform existing machine learning methodologies in solving PDEs while significantly reducing computational time compared to traditional numerical solvers. To this end, we explore the use of deep operator networks for stiff chemical kinetics problems and employ a novel architecture that learns data-driven basis functions for mapping between discontinuous solutions.
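For readers unfamiliar with the operator-learning setup, the following is a minimal DeepONet sketch, an illustrative assumption rather than the architecture used in the talk: a branch network encodes the input function sampled at fixed sensor locations, a trunk network encodes a query coordinate, and their inner product approximates the operator output G(u)(y). All widths and the sensor count are arbitrary choices for illustration.

```python
# Minimal DeepONet sketch (assumed framework: PyTorch; sizes are
# illustrative). The network maps a sampled input function and a query
# coordinate to the operator output at that coordinate.
import torch

class DeepONet(torch.nn.Module):
    def __init__(self, m=100, p=40):
        super().__init__()
        # Branch net: encodes u sampled at m fixed sensor locations
        self.branch = torch.nn.Sequential(
            torch.nn.Linear(m, 64), torch.nn.Tanh(), torch.nn.Linear(64, p))
        # Trunk net: encodes the query coordinate y
        self.trunk = torch.nn.Sequential(
            torch.nn.Linear(1, 64), torch.nn.Tanh(), torch.nn.Linear(64, p))

    def forward(self, u_sensors, y):
        b = self.branch(u_sensors)   # (batch, p)
        t = self.trunk(y)            # (batch, p)
        # Inner product of branch and trunk features approximates G(u)(y)
        return (b * t).sum(dim=-1, keepdim=True)

# Usage: one query point per input function.
net = DeepONet()
u = torch.randn(8, 100)  # 8 input functions sampled at 100 sensors
y = torch.rand(8, 1)     # one query coordinate per function
out = net(u, y)          # shape (8, 1)
```

Because the trunk net takes the query coordinate as a continuous input, a trained network of this kind can be evaluated at arbitrary points without re-solving the PDE, which is the source of the speed advantage over traditional solvers noted above.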

Speaker’s Bio: Ameya Jagtap is an Assistant Professor of Applied Mathematics (Research) at Brown University. He obtained his PhD and master’s degrees in aerospace engineering from the Indian Institute of Science. He then served as a postdoctoral research fellow at the Tata Institute of Fundamental Research - Centre for Applicable Mathematics before joining Brown University for postdoctoral research in applied mathematics.

With an interdisciplinary background spanning mechanical/aerospace engineering, applied mathematics, and computation, his research focuses on developing data- and physics-driven scientific machine learning algorithms for diverse problems in computational physics. He specializes in Scientific Machine Learning, Deep Learning, Data/Physics-driven Deep Learning Techniques, Multi-scale/Multi-physics Simulations, Spectral/Finite Element Methods, WENO/DG Schemes, and Domain Decomposition Techniques. He also has a keen interest in deep generative models, innovative artificial neural network architectures (including graph and quantum neural networks), natural language processing (foundation models and large language models), and brain-inspired computing.
