Computational Physics Group
Hierarchically parallel coupled finite strain multiscale solver for modeling heterogeneous layers
M. Mosby and K. Matous
Department of Aerospace and Mechanical Engineering
University of Notre Dame
Notre Dame, IN, 46556, USA.
We develop a three-dimensional, hierarchically parallel, finite strain multiscale solver capable of computing nonlinear multiscale solutions with over one billion finite elements and over 574 million nonlinear equations on only 1,552 computing cores. In the vein of FE², we use a nested iterative procedure and devote the solver to multiscale cohesive modeling of heterogeneous hyperelastic layers. The hierarchically parallel multiscale solver takes advantage of a client-server non-blocking communication matrix that limits latency, starvation, and overhead by overlapping computations at different scales. We perform simulations of real-scale engineering devices and bridge O(10⁶) in length scales, spanning from O(10¹) mm to O(10¹) nm in spatial resolution. Verification of the hierarchically parallel solver is provided together with a mesh convergence study. Moreover, we report on the scaling performance.
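The nested iterative procedure at the heart of FE² can be illustrated with a deliberately reduced sketch: at each macro-scale Newton iteration, the micro scale is queried for an effective stress and tangent. The toy scalar `micro_solve` below stands in for a full nonlinear finite element solve on a representative unit cell; the function names and the cubic constitutive law are illustrative assumptions, not the paper's formulation.

```python
# Minimal sketch of the nested macro-micro iteration in the vein of FE^2.
# A scalar toy model replaces the micro-scale RVE boundary-value problem.

def micro_solve(macro_strain):
    """Toy micro-scale response: effective stress and consistent tangent.

    In the actual FE^2 scheme this would be a full nonlinear finite
    element solve on a unit cell, driven by the macro deformation at
    one quadrature point.
    """
    c1, c3 = 2.0, 0.5                      # toy hyperelastic moduli (assumed)
    stress = c1 * macro_strain + c3 * macro_strain**3
    tangent = c1 + 3.0 * c3 * macro_strain**2
    return stress, tangent

def macro_newton(traction, strain=0.0, tol=1e-10, max_iter=25):
    """Macro-scale Newton loop; every iteration queries the micro scale."""
    for _ in range(max_iter):
        stress, tangent = micro_solve(strain)
        residual = stress - traction
        if abs(residual) < tol:
            return strain
        strain -= residual / tangent       # Newton update with micro tangent
    raise RuntimeError("macro Newton did not converge")

equilibrium_strain = macro_newton(traction=3.0)
```

In the full solver, one such micro solve runs at every macro quadrature point, which is what makes the hierarchical parallelization and communication overlap essential.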
We have developed a fully coupled, multiscale cohesive solver for modeling heterogeneous layers. The hierarchically parallel multiscale solver is based on a point-to-point non-blocking client-server communication scheme and is scalable to many thousands of distributed computing cores. We show that the well-established FE² method can be practical for accurately simulating engineering-scale domains with resolution from O(10¹) nm to O(10¹) mm (O(10⁶) in spatial scales). Engineering devices of this size can easily be tested experimentally to provide validation data for the new hierarchically parallel FE² multiscale framework. Such a validated framework lends itself to predictive scientific studies and Virtual Materials Testing in order to develop reduced-order models for application in industry-ready modeling and simulation tools. A rigorous verification and mesh refinement study is performed and shows excellent agreement between direct numerical modeling (DNM) and FE² solutions. Using the verified multiscale framework, we solve an example with over one billion finite elements and over 574 million nonlinear equations on only 1,552 computing cores. The solver provides the nonlinear solution to ~370,241 unknowns per processing core. Additionally, the proposed high-performance solver shows excellent strong scaling performance. With this parallel multiscale solver in hand, future studies can now be performed to investigate the influence of boundary conditions as well as microstructure morphology on the overall behavior of bonded systems.
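The client-server overlap idea can be sketched in miniature: the macro-scale "server" dispatches all micro-scale solves without blocking, then consumes results as they arrive, so macro-side work overlaps with micro solves still in flight. The real solver uses point-to-point non-blocking MPI communication; this stand-in uses a Python thread pool, and `micro_job` is a hypothetical placeholder for a unit-cell solve.

```python
# Sketch of non-blocking dispatch and overlapped consumption of micro-scale
# solves, assuming a thread pool in place of the solver's MPI communication
# matrix. Function names are illustrative, not the paper's API.
from concurrent.futures import ThreadPoolExecutor, as_completed

def micro_job(quad_point_strain):
    """Stand-in for a micro-scale unit-cell solve at one quadrature point."""
    return 2.0 * quad_point_strain          # toy effective stress

def macro_step(strains):
    """Dispatch all micro solves at once, then gather results as they finish."""
    with ThreadPoolExecutor() as pool:
        # Non-blocking dispatch: every micro solve is queued immediately.
        futures = {pool.submit(micro_job, e): i for i, e in enumerate(strains)}
        stresses = [0.0] * len(strains)
        # Results are consumed in completion order, so slower micro solves
        # do not stall the processing of faster ones.
        for fut in as_completed(futures):
            stresses[futures[fut]] = fut.result()
    return stresses
```

The completion-order gather is the essence of limiting latency and starvation: no client waits on a specific peer, so computation at one scale proceeds while communication for the other is pending.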