Computational Physics Group

Karel Matous




Computational Homogenization at Extreme Scales


M. Mosby and K. Matous

Department of Aerospace and Mechanical Engineering
University of Notre Dame
Notre Dame, IN 46556, USA.


Abstract

   
    Multi-scale simulations at extreme scales, in terms of both physical length scales and computational resources, are presented. In this letter, we introduce a hierarchically parallel computational homogenization solver that employs hundreds of thousands of computing cores and resolves O(10^5) in material length scales (from O(cm) to O(100 nm)). Simulations of this kind are essential to understanding the multi-scale nature of many natural and synthetic materials. Thus, we present a simulation consisting of 53.8 billion finite elements with 28.1 billion nonlinear equations that is solved on 393,216 computing cores (786,432 threads). The excellent parallel performance of the computational homogenization solver is demonstrated by a strong-scaling test from 4,096 to 262,144 cores. A fully coupled multi-scale damage simulation shows a complex crack profile at the micro-scale and the macroscopic crack-tunneling phenomenon. Such large and predictive simulations are an important step toward Virtual Materials Testing and can aid in the development of new material formulations with extreme properties. Furthermore, the high computational efficiency of our computational homogenization solver holds great promise for the next generation of exascale parallel computing platforms, which are expected to accelerate computations through orders-of-magnitude increases in parallelism rather than in the speed of each processor.
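    As a quick arithmetic check on the scales quoted above (a reader-side illustration, not material from the paper): the span from O(cm) down to O(100 nm) covers five orders of magnitude, and the strong-scaling test grows the core count 64-fold. The short Python sketch below shows how that separation and the usual strong-scaling speedup and efficiency figures are computed; all wall-clock times in it are hypothetical placeholders, not the paper's measurements.

        import math

        # Length scales resolved by the solver: from O(cm) down to O(100 nm).
        macro_scale_m = 1e-2        # O(cm) in meters
        micro_scale_m = 100e-9      # O(100 nm) in meters
        separation = macro_scale_m / micro_scale_m
        print(f"resolved scale separation: O(10^{round(math.log10(separation))})")
        # -> resolved scale separation: O(10^5)

        # Strong scaling: fixed total problem size, growing core counts.
        # All timings below are HYPOTHETICAL placeholders for illustration only.
        base_cores, base_time_s = 4_096, 1000.0
        runs_s = {16_384: 260.0, 65_536: 70.0, 262_144: 20.0}

        for cores, time_s in runs_s.items():
            speedup = base_time_s / time_s     # relative to the 4,096-core run
            ideal = cores / base_cores         # perfect linear speedup
            efficiency = speedup / ideal       # fraction of ideal achieved
            print(f"{cores:>7,} cores: speedup {speedup:6.1f}x of ideal "
                  f"{ideal:5.1f}x -> efficiency {efficiency:5.1%}")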
    

Conclusions

    
   We have presented extreme-scale computations, in terms of both physical scales and computing resources, containing ∼54 billion finite elements and over 28 billion nonlinear DOFs, executed on ∼400 thousand computing cores. Such large and detailed simulations are necessary for a better understanding of complex (i.e., rate-dependent) multi-scale material behavior under nontrivial loading conditions. Moreover, with co-designed experiments and properly validated constitutive models, such large predictive simulations can form the basis of “Virtual Materials Testing” standards and aid in the development of new material formulations with extreme properties. These large simulations were enabled by a hierarchically parallel computational homogenization (CH) solver with excellent computational scaling behavior. The high scalability of the CH method makes it a promising approach for efficiently utilizing future massively parallel exascale platforms.
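   To make the structure of the method concrete for readers unfamiliar with computational homogenization: each macro-scale quadrature point owns an independent micro-scale RVE problem whose volume-averaged stress and tangent replace a phenomenological constitutive law, and the independence of those micro solves is what the hierarchical parallelism exploits. Below is a minimal serial Python sketch of that two-scale coupling under deliberately crude assumptions (1D, linear micro response, uniform-strain Taylor averaging); all names are illustrative and do not come from the authors' solver.

        import numpy as np

        def micro_solve(macro_strain, rve_moduli):
            """Toy RVE solve under a uniform-strain (Taylor) assumption: every
            micro element sees the macro strain. Returns the volume-averaged
            stress and tangent. A real CH solver resolves a full nonlinear
            micro-scale boundary value problem here."""
            micro_stress = rve_moduli * macro_strain       # per-element stress
            # Volume averages, assuming equal element volumes.
            return micro_stress.mean(), rve_moduli.mean()

        # Toy two-phase RVE: stiff particles (E = 10) in a compliant matrix (E = 1).
        rve_moduli = np.array([10.0, 1.0, 1.0, 10.0, 1.0, 1.0, 1.0, 10.0])

        # Macro loop: in a hierarchically parallel solver, each micro_solve call
        # below is an independent task farmed out to its own group of cores.
        for gp, eps in enumerate(np.linspace(0.0, 0.01, 5)):
            sigma_bar, c_eff = micro_solve(eps, rve_moduli)
            print(f"macro GP {gp}: strain {eps:.4f} -> avg stress "
                  f"{sigma_bar:.5f}, effective tangent {c_eff:.3f}")

   In the full method the micro problems are nonlinear and must be re-solved at every macro Newton iteration, so their mutual independence dominates the cost profile; this is why the approach can scale to hundreds of thousands of cores.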

Acknowledgments


This work was supported by the Department of Energy, National Nuclear Security Administration, under award no. DE-NA0002377. We would also like to acknowledge computational resources from the 2015 ASCR Leadership Computing Challenge (ALCC) under project number CSC188.


Download the paper here
© 2015 Notre Dame and Dr. Karel Matous