Interpreting Neural Network Models through Graph Representation Learning

Dr. Seung-Hwan Lim

Abstract: In this seminar, we present a learning-based approach that uses interpretable graph representations to predict whether a model is worth training, using spiking neural network models for reinforcement learning in autonomous vehicle applications as an example. Among neural network variants, spiking neural networks are widely used in neuromorphic computing and allow more flexible connectivity between neurons, which makes them particularly suitable for a graph-based approach. In this study, we represent neural network models as graphs, use the Weisfeiler-Lehman graph kernel to map the graph-represented models into a vector space, and classify the graphs in that space. Our approach is: 1) interpretable -- it can identify influential motifs or substructures of neural network models for achieving the desired level of model performance; 2) intuitive -- it supports visual and interactive analysis for organizing neural network model instances; and 3) task-independent -- it allows a flexible choice of downstream machine learning algorithms to further analyze model instances on top of their vector representations. In an evaluation with 5,000 spiking neural network models, we achieved 82% test accuracy in classifying whether an unseen model has the potential to reach the desired fitness score, based on the connectivity of its neurons and their hyperparameters. This study points to a novel direction for interpreting black-box neural network models beyond traditional interpretability methods such as feature importance analysis, allowing us to systematically compose new models beyond current neural architecture search and hyperparameter optimization techniques.
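The pipeline the abstract describes (represent each model as a labeled graph, embed it with the Weisfeiler-Lehman subtree kernel, then classify in the resulting feature space) can be illustrated with a short, self-contained Python sketch. This is a generic, hand-rolled WL kernel fed to scikit-learn's SVC, not the speaker's actual code; the helper names (wl_features, wl_kernel), the toy graphs, the node labels, and the iteration count are all illustrative assumptions.

from collections import Counter
from sklearn.svm import SVC

def wl_features(adjacency, labels, iterations=3):
    # Weisfeiler-Lehman subtree features for one graph.
    # adjacency: dict node -> list of neighbor nodes
    # labels:    dict node -> initial node label (a string), e.g. a
    #            discretized neuron hyperparameter (illustrative only)
    feats = Counter(labels.values())   # histogram of initial labels
    current = dict(labels)
    for _ in range(iterations):
        refined = {}
        for v, nbrs in adjacency.items():
            # One round of WL label refinement: compress a node's label
            # with the sorted multiset of its neighbors' labels.
            refined[v] = current[v] + "|" + ".".join(sorted(current[u] for u in nbrs))
        current = refined
        feats.update(current.values())  # accumulate refined labels
    return feats

def wl_kernel(graphs, iterations=3):
    # Gram matrix of dot products between WL feature histograms.
    feats = [wl_features(adj, lab, iterations) for adj, lab in graphs]
    return [[sum(f[k] * g[k] for k in f) for g in feats] for f in feats]

# Two toy "architectures" (a triangle and a path) with node labels
# standing in for neuron hyperparameters -- purely hypothetical data.
tri  = ({0: [1, 2], 1: [0, 2], 2: [0, 1]}, {0: "a", 1: "a", 2: "b"})
path = ({0: [1], 1: [0, 2], 2: [1]},       {0: "a", 1: "b", 2: "a"})
graphs, y = [tri, path, tri, path], [1, 0, 1, 0]  # 1 = "worth training"

K = wl_kernel(graphs)
clf = SVC(kernel="precomputed").fit(K, y)  # classify in the WL feature space
print(clf.predict(K))

Because the WL features are explicit histograms over compressed substructure labels, the same representation can be inspected to see which substructures drive the classifier, which is the interpretability angle the abstract highlights.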

Speaker’s Bio: Seung-Hwan Lim has been a staff computer scientist in the Discrete Algorithms group at Oak Ridge National Laboratory since 2012. His research interests center on machine learning on graph data, graph representation learning, and high-performance computing systems for applications including neurodynamics, biomedical knowledge graphs, materials science, and computing system operation. He has published at top venues across multiple domains relevant to data science: machine learning algorithms at the Conference on Neural Information Processing Systems (NeurIPS), the IEEE International Conference on Data Mining (ICDM), and the International Joint Conference on Neural Networks (IJCNN); machine learning applications at IEEE BigData, Information Visualization, and Expert Systems with Applications; and computing systems at Supercomputing (SC), the IEEE International Parallel and Distributed Processing Symposium (IPDPS), Programming Language Design and Implementation (PLDI), Machine Learning and Systems (MLSys), SIGMETRICS, and IEEE MASCOTS. He was also part of three Gordon Bell Prize finalist teams and two R&D 100 Award-winning projects.
