Publication

Scalable Deep-Learning-Accelerated Topology Optimization for Additively Manufactured Materials

SDL-TO is a framework that accelerates TO by learning from iterative history data the mapping between a given design and its gradient. Using a local sampling strategy with distributed computing and training across multiple CPUs and GPUs, the learned surrogate gradient enables an online update scheme in place of expensive computation by a high-fidelity simulator or solver.

Citation

Bi, Sirui, Jiaxin Zhang, and Guannan Zhang. "Scalable Deep-Learning-Accelerated Topology Optimization for Additively Manufactured Materials." NeurIPS 2020 Workshop on Machine Learning for Engineering Modeling, Simulation and Design, arXiv preprint arXiv:2011.14177 (2020).

Abstract

Topology optimization (TO) is a popular and powerful computational approach for designing novel structures, materials, and devices. Two computational challenges have limited the applicability of TO to a variety of industrial applications. First, a TO problem often involves a large number of design variables to guarantee sufficient expressive power. Second, many TO problems require a large number of expensive physical model simulations, and those simulations cannot be parallelized. To address these issues, we propose a general scalable deep-learning (DL) based TO framework, referred to as SDL-TO, which utilizes parallel schemes in high performance computing (HPC) to accelerate the TO process for designing additively manufactured (AM) materials. Unlike the existing studies of DL for TO, our framework accelerates TO by learning the iterative history data and simultaneously training on the mapping between the given design and its gradient. The surrogate gradient is learned by utilizing parallel computing on multiple CPUs combined with distributed DL training on multiple GPUs. The learned TO gradient enables a fast online update scheme instead of an expensive update based on the physical simulator or solver. Using a local sampling strategy, we reduce the intrinsic high dimensionality of the design space and improve the training accuracy and the scalability of the SDL-TO framework. The method is demonstrated by benchmark examples and AM materials design for heat conduction. The proposed SDL-TO framework shows competitive performance compared to the baseline methods while significantly reducing the computational cost, achieving a speedup of around 8.6x over the standard TO implementation.
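The abstract describes a three-stage idea: gather (design, gradient) pairs from expensive solver calls using local sampling, fit a surrogate for the gradient, then run cheap online design updates with the learned gradient. The sketch below is a minimal, hypothetical illustration of that loop, not the paper's implementation: a toy quadratic objective stands in for the physics solver, and a linear least-squares map stands in for the deep network trained on GPUs.

```python
import numpy as np

# Toy stand-in objective: a convex quadratic over n design variables.
# In SDL-TO the gradient would come from an expensive physics solver;
# here a cheap analytic gradient plays that role so the sketch runs.
rng = np.random.default_rng(0)
n = 20
A = np.diag(np.linspace(1.0, 5.0, n))   # SPD matrix -> convex objective
b = rng.normal(size=n)

def objective(x):
    return 0.5 * x @ A @ x - b @ x

def solver_gradient(x):
    """Expensive high-fidelity gradient (placeholder for the real solver)."""
    return A @ x - b

# Phase 1: collect (design, gradient) pairs with LOCAL sampling around the
# current design, mimicking the paper's local sampling strategy. In SDL-TO
# these solver evaluations run in parallel across multiple CPUs.
x = rng.normal(size=n)
radius = 0.5
X = x + radius * rng.normal(size=(200, n))
G = np.array([solver_gradient(xi) for xi in X])

# Phase 2: fit a surrogate map design -> gradient. The paper trains a deep
# network with distributed GPU training; an affine least-squares fit is the
# simplest stand-in for this sketch.
Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias column
W, *_ = np.linalg.lstsq(Xb, G, rcond=None)

def surrogate_gradient(x):
    return np.append(x, 1.0) @ W

# Phase 3: cheap ONLINE updates using the learned gradient -- no further
# solver calls are needed during these iterations.
f0 = objective(x)
for _ in range(50):
    x = x - 0.1 * surrogate_gradient(x)
f1 = objective(x)
print(f"objective: {f0:.3f} -> {f1:.3f}")
```

Because the toy gradient is affine, the least-squares surrogate recovers it exactly and the online updates drive the objective down; with a real solver, the surrogate would be retrained periodically as the design leaves the locally sampled neighborhood.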


Last Updated: December 5, 2020 - 3:22 pm