**Abstract**: Algorithms for the Fast Fourier Transform (FFT) have been cornerstones of scientific computing since the 1960s, when hallmark achievements such as the Cooley–Tukey method reduced the computational cost of analyzing data from $O(N^2)$ to $O(N \log N)$ operations. However, the algorithm requires roughly as many memory operations as floating-point operations, so on modern computing hardware its performance is memory bound. This presents numerous challenges when the data is distributed across the nodes of a supercomputer, a common situation in large-scale scientific simulations. We will explore the latest advancements in methods for distributed FFTs.
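To illustrate the complexity reduction mentioned in the abstract, here is a minimal sketch (not part of the talk) of the radix-2 Cooley–Tukey recursion alongside a direct $O(N^2)$ DFT; the function names `fft` and `dft` are chosen for illustration only:

```python
import cmath

def dft(x):
    """Direct DFT: O(N^2) operations."""
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * j * k / n) for j in range(n))
            for k in range(n)]

def fft(x):
    """Radix-2 Cooley-Tukey FFT: O(N log N) operations.

    Assumes len(x) is a power of two; splits into even- and
    odd-indexed halves and combines them with twiddle factors.
    """
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])
    odd = fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out
```

Both routines compute the same transform; the recursion halves the problem at each level, which is the source of the $O(N \log N)$ cost. Note that each butterfly in the inner loop performs one complex multiply-add but touches several memory locations, hinting at the memory-bound behavior the abstract describes.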

**Speaker’s Bio**: Dr. Miroslav Stoyanov received his Ph.D. from Virginia Tech in 2009. After a postdoctoral appointment at Florida State University, he joined Oak Ridge National Laboratory (ORNL) in 2012. His research interests include surrogate modeling, uncertainty quantification, high-dimensional approximation, and supercomputing. Dr. Stoyanov is also the lead developer of the ORNL Toolkit for Adaptive Stochastic Modeling and Non-Intrusive Approximation (Tasmanian) and of the library for Highly Efficient Fast Fourier Transforms at the Exascale (heFFTe).

Last Updated: March 7, 2023 - 2:55 pm