Abstract: We discuss mathematical aspects of how to digitally represent information. Redundancy, or oversampling, is an important ingredient in many types of signal representations. For example, in the classical Shannon sampling theorem, redundancy provides design flexibility and robustness against noisy measurements. We discuss analog-to-digital conversion for redundant signal representations and show error bounds that quantify how well different quantization methods, such as consistent reconstruction and Sigma-Delta quantization, utilize redundancy. Lastly, we discuss the problem of training neural networks with low-bit weights; we consider an approach based on stochastic Markov gradient descent (SMGD) and prove that the method performs well both theoretically and numerically.
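To make the abstract's mention of Sigma-Delta quantization concrete, here is a minimal sketch of a standard first-order Sigma-Delta scheme with a 1-bit alphabet {-1, +1}; this is an illustrative textbook construction, not the specific algorithms analyzed in the talk. The quantizer tracks a running state variable u and feeds the quantization error back into the next step, so the average of the output bits tracks the average of the input samples.

```python
def sigma_delta(samples):
    """First-order Sigma-Delta quantization to the 1-bit alphabet {-1, +1}.

    At each step the bit q_n is the sign of (u_{n-1} + x_n), and the
    state update u_n = u_{n-1} + x_n - q_n carries the quantization
    error forward. For inputs with |x_n| <= 1 the state stays bounded
    (|u_n| <= 1 by induction), which yields an O(1/N) error for the
    averaged reconstruction of oversampled signals.
    """
    u = 0.0
    bits = []
    for x in samples:
        q = 1.0 if u + x >= 0 else -1.0
        u = u + x - q
        bits.append(q)
    return bits
```

For a constant input x_n = c with |c| <= 1, summing the state update over N steps gives sum(q_n) = N*c - u_N, so the bit average approximates c with error |u_N|/N <= 1/N, illustrating how the reconstruction error decays as the oversampling (here, the number of bits N) grows.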
Bio: Alex Powell is Professor of Mathematics at Vanderbilt University. He received his PhD in Mathematics from the University of Maryland and his BS in Electrical Engineering from Rutgers University, and he spent two years as a postdoc in the Program for Applied and Computational Mathematics at Princeton University.
Last Updated: April 15, 2021 - 11:54 am