Highlight

New ORNL Method Shows Promise for Optimizing AI Algorithms

The DGS-ES method (red arrow) points in the direction of the optimal solution for a given problem, whereas other techniques point in the wrong direction (blue arrow).

Achievement

ORNL researchers developed a novel optimization method called the Directional Gaussian Smoothing-Evolutionary Strategy (DGS-ES) to enable machine learning and artificial intelligence algorithms to reach optimal solutions. The method can be scaled up efficiently on ORNL’s Summit supercomputer and provides scientists with a state-of-the-art resource for training reinforcement learning algorithms, which learn from experience to maximize rewards and minimize negative outcomes.

Significance and Impact

Scientific optimization problems in fields such as materials science and quantum computing are often impossible to solve with other methods because of their high dimensionality, but the problems have characteristics that make them particularly amenable to the DGS-ES approach, which dramatically reduces the time to solution in higher dimensions.

Research Details

  • Defined a nonlocal gradient operator based on directional Gaussian smoothing
  • Approximated the gradient operator with a Gauss-Hermite quadrature rule
  • Provided theoretical analysis verifying the scalability of the DGS-ES method, i.e., that the number of iterations needed for convergence is independent of the dimension for strongly convex functions
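The two steps above can be sketched in code. This is a minimal illustration, not the ORNL implementation: the function and parameter names are hypothetical, a fixed identity basis is used for the smoothing directions, and NumPy's built-in Gauss-Hermite nodes stand in for the quadrature rule.

```python
import numpy as np

def dgs_gradient(f, x, sigma=0.5, n_quad=7):
    """Estimate a nonlocal gradient of f at x via directional Gaussian smoothing.

    Along each direction e_i, the derivative of the 1D Gaussian-smoothed slice
    y -> E_{v~N(0,1)}[f(x + (y + sigma*v) e_i)] at y = 0 is approximated with
    an n_quad-point Gauss-Hermite quadrature rule.
    """
    d = len(x)
    nodes, weights = np.polynomial.hermite.hermgauss(n_quad)
    basis = np.eye(d)  # fixed orthonormal directions (illustrative choice)
    grad = np.zeros(d)
    for i in range(d):
        # quadrature estimate of (1/sigma) * E[ v * f(x + sigma*v*e_i) ]
        vals = np.array([f(x + np.sqrt(2.0) * sigma * t * basis[i]) for t in nodes])
        grad[i] = np.sum(weights * np.sqrt(2.0) * nodes * vals) / (sigma * np.sqrt(np.pi))
    return grad

def dgs_es_minimize(f, x0, lr=0.1, sigma=0.5, steps=100):
    """Simple descent loop driven by the smoothed nonlocal gradient."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x - lr * dgs_gradient(f, x, sigma)
    return x
```

Because the smoothing radius `sigma` can be kept large, the estimated gradient reflects the nonlocal landscape of the objective rather than a single local slope, which is what helps the method escape local minima; the per-direction quadratures are also independent, which is what makes the scheme parallelize well on a machine like Summit.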

Overview

Standard optimization methods, such as evolution strategies, tend to have low accuracy and other limitations. The ORNL researchers therefore designed the scalable DGS-ES method to better train machine learning and artificial intelligence algorithms: it optimizes black-box processes and maintains dimension independence, so that, when scaled on a powerful high-performance computing resource such as the Summit supercomputer, it achieves faster times to solution even on the most complex problems. These characteristics help the method reach a globally optimal solution instead of a merely locally optimal one.

The team introduced an entirely new type of evolution strategy, verified the strong scalability of the new method, and demonstrated its performance on high-dimensional benchmark optimization problems and a real-world material design problem concerning rocket-shell manufacturing. After comparing the performance of the DGS-ES method with that of other evolution strategies, the team is now applying it to reinforcement learning problems, a subset of machine learning in which an algorithm learns from experience and makes decisions without a human in the loop.

Last Updated: April 15, 2020 - 1:53 pm