
A Dictionary Learning Algorithm with Bayesian Sparsity and Class Sparsity Priors

Dr. Alberto Bocchinfuso

Abstract: Dictionary learning algorithms solve inverse problems by means of a dictionary that stores the known possible outcomes of an experiment. Given the measured result of an experiment, the algorithm aims to identify the dictionary entries that best match the measurements. In this framework, the algorithm is required to identify only a few elements that represent the measured experiment, while dictionary compression mechanisms seek to reduce the complexity of the matching problem.
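To make the setup concrete, the following is a minimal NumPy sketch of sparse dictionary matching by greedy, orthogonal-matching-pursuit-style selection. It illustrates the general problem only, not the speaker's algorithm; the dictionary D, measurement y, and sparsity level k are invented for the example.

    import numpy as np

    def sparse_match(D, y, k):
        """Greedy sparse matching: pick at most k dictionary columns that best
        explain the measurement y (orthogonal matching pursuit, illustrative)."""
        n = D.shape[1]
        support, residual = [], y.copy()
        for _ in range(k):
            j = int(np.argmax(np.abs(D.T @ residual)))     # best-correlated column
            if j not in support:
                support.append(j)
            coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
            residual = y - D[:, support] @ coef
        x = np.zeros(n)
        x[support] = coef
        return x

    rng = np.random.default_rng(0)
    D = rng.standard_normal((50, 200))
    D /= np.linalg.norm(D, axis=0)                  # unit-norm dictionary atoms
    x_true = np.zeros(200)
    x_true[[3, 77, 150]] = [1.0, -2.0, 0.5]
    y = D @ x_true + 0.01 * rng.standard_normal(50)
    print(np.nonzero(sparse_match(D, y, 3))[0])     # typically recovers 3, 77, 150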

This talk will outline a new algorithm that takes advantage of several Bayesian inverse problem techniques to solve the dictionary learning problem efficiently. First, the dictionary entries are divided into several classes and the sub-dictionary of each class is compressed individually; the error introduced by the compression is treated as a modeling error within the Bayesian framework. After this pre-processing phase, the algorithm first solves a compressed, reduced sub-problem at the class level and subsequently a deflated sub-problem to find the solution.
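As a rough illustration of the pre-processing step, the sketch below compresses one class's sub-dictionary with a truncated SVD and reports the size of the discarded part. Treating that quantity as the modeling-error scale in the Bayesian noise model is an assumption made for the example, and the class dictionaries are synthetic.

    import numpy as np

    def compress_class(D_c, energy=0.95):
        """Replace a class sub-dictionary with a low-rank basis capturing the
        requested fraction of its energy; the discarded singular directions
        give a modeling-error scale (illustrative, not the speaker's scheme)."""
        U, s, Vt = np.linalg.svd(D_c, full_matrices=False)
        cum_energy = np.cumsum(s**2) / np.sum(s**2)
        r = int(np.searchsorted(cum_energy, energy)) + 1   # smallest rank reaching `energy`
        basis = U[:, :r] * s[:r]                           # compressed class representation
        model_error = s[r] if r < s.size else 0.0          # largest discarded singular value
        return basis, model_error

    rng = np.random.default_rng(1)
    classes = {"classA": rng.standard_normal((50, 40)),
               "classB": rng.standard_normal((50, 40))}
    for name, D_c in classes.items():
        basis, err = compress_class(D_c)
        print(name, "compressed to", basis.shape[1], "columns,",
              "modeling-error scale ~", round(err, 2))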

The class identification step uses a group sparsity algorithm to identify a small set of classes that are relevant to the solution and to discard those that are not. A deflated dictionary is then composed of only the relevant classes, and the solution is found with a sparsity-promoting Bayesian algorithm applied to the deflated dictionary. The combined use of compression and deflation can yield a substantial performance advantage over a dictionary-matching algorithm that considers the full dictionary, and practical examples show that the algorithm is suitable both for classification and for solving the dictionary learning problem.
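The class-selection and deflation steps might look roughly like the sketch below. The class scoring used here is a crude stand-in for the group-sparsity prior, and the final least-squares solve is only a placeholder for the sparsity-promoting Bayesian solver described in the talk; all names and dimensions are invented.

    import numpy as np

    def select_classes(class_dicts, y, n_keep=2):
        """Stand-in for the group-sparsity step: score each class by how much
        of y its sub-dictionary can explain and keep only the best few."""
        scores = {}
        for name, D_c in class_dicts.items():
            Q, _ = np.linalg.qr(D_c)                  # orthonormal basis for the class
            scores[name] = np.linalg.norm(Q.T @ y)    # energy of y captured by the class
        return sorted(scores, key=scores.get, reverse=True)[:n_keep]

    rng = np.random.default_rng(2)
    class_dicts = {f"class{i}": rng.standard_normal((60, 20)) for i in range(5)}
    y = class_dicts["class3"] @ rng.standard_normal(20)       # signal lives in class3

    keep = select_classes(class_dicts, y)
    D_deflated = np.hstack([class_dicts[c] for c in keep])    # deflated dictionary
    coef, *_ = np.linalg.lstsq(D_deflated, y, rcond=None)     # placeholder for the
    print("kept classes:", keep)                              # sparsity-promoting solve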

Speaker’s Bio: Alberto Bocchinfuso is a postdoctoral research associate in the Multiscale Methods and Dynamics group. He received his Ph.D. in Applied Mathematics from Case Western Reserve University in 2023. During his graduate studies, his research focused on reduced models and inverse problems, particularly using Bayesian techniques. At Oak Ridge National Laboratory, he is working on the construction and improvement of surrogate models to simulate tokamak operations using data science and statistical tools.
