Highlight

Bayesian-based Hyperparameter Optimization for Spiking Neuromorphic Systems

Figure: Comparing grid search with HP optimization. (a) Grid search: 100 runs for each of the 240 valid HP sets. (b) Bayesian-based HP optimization: 10 runs for each of 40 selected HP combinations. Both techniques report the same optimum HP set.
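The run counts in the figure caption imply a large reduction in evaluation budget; a quick back-of-the-envelope check:

```python
# Evaluation budgets implied by the figure caption.
grid_runs = 240 * 100   # grid search: 100 runs for each of 240 valid HP sets
bayes_runs = 40 * 10    # Bayesian optimization: 10 runs for 40 HP combinations
speedup = grid_runs // bayes_runs
print(grid_runs, bayes_runs, speedup)  # 24000 400 60
```

That is roughly a 60x reduction in total training runs while arriving at the same optimal hyperparameter set.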

Achievement

A team of researchers from Oak Ridge National Laboratory (ORNL), together with a graduate student intern and a professor from Purdue University, developed a Bayesian-based hyperparameter optimization approach for spiking neuromorphic systems. They showed how this optimization framework can lead to significant improvements in designing accurate neuromorphic computing systems. In particular, they showed that this hyperparameter optimization approach can discover the same optimal hyperparameter set for input encoding as a grid search, but with far fewer evaluations and far less time.

Significance and Impact

Designing a neuromorphic computing system involves selecting several hyperparameters that affect not only the accuracy of the framework, but also the energy efficiency and speed of inference and training. These hyperparameters might be inherent to the training of the spiking neural network (SNN), the input/output encoding of real-world data to spikes, or the underlying neuromorphic hardware. The presented framework provides a way to automatically discover an optimal hyperparameter set with significantly fewer evaluations than a grid search. The researchers also showed the impact of hardware-specific hyperparameters on the performance of the system, and demonstrated that by optimizing these hyperparameters, significantly better application performance can be achieved.
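To illustrate the general idea, here is a minimal, pure-Python sketch of Bayesian optimization with a Gaussian-process surrogate and an upper-confidence-bound (UCB) acquisition function. The objective `accuracy` is a hypothetical stand-in for SNN test accuracy as a function of a single input-encoding hyperparameter; the paper's actual objective, search space, and surrogate configuration differ.

```python
import math

def rbf(a, b, length=0.3):
    """Squared-exponential kernel on scalar hyperparameter values."""
    return math.exp(-((a - b) ** 2) / (2 * length ** 2))

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gp_posterior(xs, ys, x, noise=1e-6):
    """GP posterior mean and variance at x given observations (xs, ys)."""
    K = [[rbf(a, b) + (noise if i == j else 0.0)
          for j, b in enumerate(xs)] for i, a in enumerate(xs)]
    k = [rbf(a, x) for a in xs]
    mean = sum(ki * ai for ki, ai in zip(k, solve(K, ys)))
    var = rbf(x, x) - sum(ki * vi for ki, vi in zip(k, solve(K, k)))
    return mean, max(var, 0.0)

def accuracy(h):
    # Hypothetical smooth accuracy response, peaking at h = 0.62
    # (illustration only; a real SNN evaluation is expensive and noisy).
    return 0.9 - (h - 0.62) ** 2

candidates = [i / 100 for i in range(101)]  # grid search would try all 101
xs = [0.1, 0.9]                             # two initial evaluations
ys = [accuracy(x) for x in xs]
for _ in range(8):                          # 10 evaluations in total
    def ucb(x):
        m, v = gp_posterior(xs, ys, x)
        return m + 2.0 * math.sqrt(v)       # optimism drives exploration
    nxt = max((x for x in candidates if x not in xs), key=ucb)
    xs.append(nxt)
    ys.append(accuracy(nxt))

best = xs[max(range(len(xs)), key=ys.__getitem__)]
print(f"best hyperparameter ~ {best}, accuracy ~ {max(ys):.4f}")
```

The key design point, as in the paper's comparison, is that the surrogate model steers the next evaluation toward promising or uncertain regions, so only a small fraction of the grid is ever evaluated directly.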

Research Details

  • The research team proposed a simple, effective, and generalizable hyperparameter optimization algorithm that can be applied to any type of spiking neuromorphic algorithm or architecture.
  • They performed a sensitivity analysis on spiking neuromorphic system hyperparameters, discussing the strategic role that certain sets of hyperparameters play in the system's final performance.
  • They showed that hyperparameters of a resilient training framework for spiking neuromorphic systems, such as EONS, have the least impact on the final performance of the system compared to the input encoding or hardware-specific hyperparameters.
  • To the best of their knowledge, this work was the first in the scientific literature to develop a hyperparameter optimization technique for spiking neuromorphic computing systems and to analyze the effect of different types of hyperparameters on the overall performance of the system.

Publication

Maryam Parsa, J. Parker Mitchell, Catherine D. Schuman, Robert M. Patton, Kaushik Roy, and Thomas E. Potok.  “Bayesian-based Hyperparameter Optimization for Spiking Neuromorphic Systems.” IEEE Big Data, Workshop on Energy-Efficient Machine Learning and Big Data Analytics

Overview

Designing a neuromorphic computing system involves selecting several hyperparameters that affect not only the accuracy of the framework, but also the energy efficiency and speed of inference and training. These hyperparameters might be inherent to the training of the spiking neural network (SNN), the input/output encoding of real-world data to spikes, or the underlying neuromorphic hardware. In this work, the researchers presented a Bayesian-based hyperparameter optimization approach for spiking neuromorphic systems and showed how this optimization framework can lead to significant improvements in designing accurate neuromorphic computing systems. They showed that this hyperparameter optimization approach can discover the same optimal hyperparameter set for input encoding as a grid search, but with far fewer evaluations and far less time. They also showed the impact of hardware-specific hyperparameters on the performance of the system, and demonstrated that by optimizing these hyperparameters, significantly better application performance can be achieved.