We developed an Evolution Strategy with Directional Gaussian Smoothing (DGS-ES) that exploits nonlocal searching to maximize/minimize high-dimensional non-convex blackbox functions. The main contributions of this effort are: (i) development of a new DGS-gradient operator and its Gauss-Hermite estimator, which introduces, for the first time, an accurate nonlocal searching technique into the family of Evolution Strategies (ES); (ii) theoretical analysis verifying that the scalability of the DGS-ES method, i.e., the number of iterations needed for convergence, is independent of the dimension for convex functions; (iii) demonstration of the DGS-ES method on high-dimensional non-convex benchmark optimization problems as well as a real-world material design problem for rocket-shell manufacture; and (iv) massive parallelization: the DGS-ES method is well suited to scaling up to a large number of parallel workers. All function evaluations within each iteration can be performed fully in parallel, and each worker only needs to return a scalar to the master, so the communication cost among workers is minimal.
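To illustrate the idea, here is a minimal sketch of a DGS-style gradient estimator and update loop in NumPy. It smooths the objective with a 1D Gaussian along each coordinate direction and differentiates the smoothed function with Gauss-Hermite quadrature; the function names, parameter choices (smoothing radius `sigma`, number of quadrature points `n_quad`, learning rate `lr`), and the use of the coordinate basis are simplifying assumptions for illustration, not the authors' exact implementation.

```python
import numpy as np

def dgs_gradient(f, x, sigma=0.5, n_quad=7):
    """Directional-Gaussian-smoothing gradient estimate of f at x.

    Component i differentiates the 1D Gaussian smoothing of f along
    the coordinate direction e_i, using n_quad-point Gauss-Hermite
    quadrature for the expectation over the Gaussian.
    """
    # Nodes/weights for integrals of the form  ∫ e^{-t^2} g(t) dt.
    nodes, weights = np.polynomial.hermite.hermgauss(n_quad)
    d = len(x)
    grad = np.zeros(d)
    for i in range(d):
        e_i = np.zeros(d)
        e_i[i] = 1.0
        # Function samples along direction e_i. These evaluations are
        # independent, which is what makes the method embarrassingly
        # parallel: each worker returns a single scalar f-value.
        vals = np.array([f(x + np.sqrt(2.0) * sigma * t * e_i) for t in nodes])
        # Derivative of the Gaussian-smoothed 1D slice at 0.
        grad[i] = np.sum(weights * np.sqrt(2.0) * nodes * vals) / (sigma * np.sqrt(np.pi))
    return grad

def dgs_es_minimize(f, x0, lr=0.1, sigma=1.0, iters=100):
    """Plain gradient-descent loop driven by the DGS gradient."""
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        x = x - lr * dgs_gradient(f, x, sigma=sigma)
    return x
```

Because a large `sigma` averages the objective over a wide neighborhood, the estimated gradient points toward broad trends of the landscape rather than the nearest local slope, which is the nonlocal-search behavior described above; for smooth (e.g., quadratic) objectives the Gauss-Hermite rule recovers the smoothed derivative exactly.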
J. Zhang, H. Tran, D. Lu, and G. Zhang, Enabling long-range exploration in minimization of multimodal functions, Proceedings of the 37th Conference on Uncertainty in Artificial Intelligence (UAI), 2021 (https://arxiv.org/abs/2002.03001).
Significance and Impact
Scientific optimization problems in fields such as materials science and quantum computing are often intractable for other methods because of their high dimensionality, yet they have characteristics that make them particularly amenable to the DGS-ES approach, which dramatically reduces the time to solution in high dimensions.
Last Updated: May 26, 2021 - 8:20 pm