**Abstract**: Over the past fifteen years, there has been tremendous growth in research on randomized (sketching) algorithms for large-scale computations such as matrix multiplication, least-squares, singular value decomposition, and low-rank approximation. However, when these algorithms are applied, the user rarely knows much about the error of the output. One source of this difficulty is that the literature has mostly studied error analysis in terms of worst-case theoretical bounds. Because these bounds often involve unknown parameters or conservative constants, they are seldom useful at a numerical level. In this talk, I will discuss a fundamentally different approach, based on the statistical technique of bootstrapping, that makes it possible to numerically estimate the errors of randomized algorithms in a variety of settings. The talk will give an overview of my research on this topic from the last few years, and I will review the background statistical concepts so that the talk is accessible to non-specialists.
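For intuition, the general idea can be illustrated with a minimal, hypothetical sketch (this is an assumption-laden toy example, not the speaker's actual method): a randomized matrix-multiplication estimate is built from a small number of sampled outer products, and the fluctuation of that estimate is gauged by bootstrapping the same sampled terms, with no access to the exact product.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem (all sizes are illustrative assumptions): approximate the
# product A @ B by summing t of the n rescaled outer products a_k b_k^T,
# a standard randomized matrix-multiplication sketch.
n, t = 2000, 200
A = rng.standard_normal((50, n))
B = rng.standard_normal((n, 30))

# Uniform sampling with replacement; rescaling by n/t keeps the estimate
# unbiased, since each outer product has expectation (1/n) * A @ B.
idx = rng.integers(0, n, size=t)
terms = np.einsum('ik,kj->kij', A[:, idx], B[idx, :])  # t outer products
sketch = (n / t) * terms.sum(axis=0)

# Bootstrap: resample the t sampled terms (with replacement) and measure
# how much the recomputed sketch moves. The spread of these bootstrap
# errors mimics the spread of the sketch around the true product A @ B,
# so a quantile of it serves as a numerical error estimate.
boot_errors = []
for _ in range(500):
    b = rng.integers(0, t, size=t)
    boot_sketch = (n / t) * terms[b].sum(axis=0)
    boot_errors.append(np.linalg.norm(boot_sketch - sketch))
error_estimate = np.quantile(boot_errors, 0.95)

# For reference only -- in practice A @ B is too expensive to form,
# which is exactly why the bootstrap estimate is useful.
true_error = np.linalg.norm(sketch - A @ B)
```

The key point is that the bootstrap loop touches only the `t` sampled terms already computed for the sketch, so the error estimate costs far less than computing `A @ B` exactly.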

**Speaker’s Bio**: Miles Lopes is an Associate Professor at the University of California, Davis (UC Davis), where he is a member of the Department of Statistics and the Graduate Group in Applied Mathematics. His main research interests are in high-dimensional inference and randomized algorithms, with an emphasis on bootstrap methods. Prior to joining UC Davis, he received his Ph.D. in Statistics at UC Berkeley, where he was a Department of Energy Computational Science Graduate Fellow.

Last Updated: November 14, 2022 - 10:38 am