SC²S Colloquium - November 27, 2012

Date: November 27, 2012
Room: 02.09.023
Time: 3 pm, s.t.

Jens Zudrop, InvasIC invited talk: Massively parallel simulations on Octree based meshes

Applied Supercomputing in Engineering, German Research School for Simulation Sciences

The Spectral Discontinuous Galerkin method on Octrees:

During the last decade, numerical simulation techniques have evolved from mathematical playgrounds into practical tools for scientists and engineers. Clearly, this evolution is also a consequence of increasing computational power. In recent years, however, the growth in compute power has come mainly from growing parallelism, so numerical simulation tools necessarily have to be well suited to such massively parallel settings.

Most numerical methods are based on a triangulation of the simulation domain, and the numerical solver relies on basic mesh operations such as neighbor lookup. On very large systems with more than 100 thousand cores, even such a simple mesh operation can become very time consuming. In the first part of the talk we present an Octree-based mesh framework that circumvents these problems and allows for a fully parallel mesh setup with a minimum amount of synchronization. In our approach, we linearize the Octree-based mesh by a space-filling curve and decompose the domain by means of this linearized element list. With this approach, mesh operations like neighbor lookup become completely local. We show that the framework scales up to complete, state-of-the-art HPC systems and achieves very high efficiency.
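
This linearization can be illustrated with a Morton (Z-order) curve, one common space-filling curve for octrees. The following Python sketch is a generic illustration under that assumption, not code from the presented framework, and the function names are hypothetical: each element is identified by interleaving the bits of its integer coordinates, and a neighbor lookup reduces to local integer arithmetic on these codes.

```python
# Hypothetical sketch: Morton (Z-order) linearization of one octree level
# and a purely local neighbor lookup. Not code from the presented framework.

def morton_encode(x: int, y: int, z: int, level: int) -> int:
    """Interleave the bits of the integer element coordinates."""
    code = 0
    for bit in range(level):
        code |= ((x >> bit) & 1) << (3 * bit)
        code |= ((y >> bit) & 1) << (3 * bit + 1)
        code |= ((z >> bit) & 1) << (3 * bit + 2)
    return code

def morton_decode(code: int, level: int):
    x = y = z = 0
    for bit in range(level):
        x |= ((code >> (3 * bit)) & 1) << bit
        y |= ((code >> (3 * bit + 1)) & 1) << bit
        z |= ((code >> (3 * bit + 2)) & 1) << bit
    return x, y, z

def neighbor(code: int, offset, level: int):
    """Neighbor lookup: decode, shift the coordinates, re-encode.
    Only local integer arithmetic, no search through a global mesh."""
    x, y, z = morton_decode(code, level)
    nx, ny, nz = x + offset[0], y + offset[1], z + offset[2]
    n = 1 << level
    if not (0 <= nx < n and 0 <= ny < n and 0 <= nz < n):
        return None  # neighbor lies outside the domain
    return morton_encode(nx, ny, nz, level)

# Elements sorted by their Morton code follow the space-filling curve;
# a parallel decomposition assigns contiguous chunks of this sorted
# element list to the processes.
level = 3                                   # 8**3 = 512 elements on this level
code = morton_encode(2, 5, 1, level)
right = neighbor(code, (1, 0, 0), level)    # code of the element at (3, 5, 1)
```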

In the second part of the presentation we focus on numerical methods built on our Octree framework. The Discontinuous Galerkin method is a prominent example of such a technique and works well for hyperbolic conservation laws. One of its strengths is its ability to reach high orders and to handle non-conforming element refinement. Furthermore, we use the equations of electrodynamics and of inviscid, compressible fluid flow as examples of linear and nonlinear conservation laws and discuss the corresponding implementation aspects. Finally, we present scalability and performance results of such a solver on state-of-the-art HPC systems.
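
As a minimal illustration of the element-local structure of such a scheme, the sketch below applies a piecewise-linear modal Discontinuous Galerkin discretization with an upwind numerical flux to 1D linear advection, the simplest linear conservation law. It is a generic textbook example in Python, not the speaker's solver, and all parameter values are assumptions.

```python
import numpy as np

# Modal DG (P1 Legendre basis) for u_t + a u_x = 0 on a periodic 1D mesh,
# upwind numerical flux, SSP-RK2 time stepping. Generic sketch, not the
# presented solver.
a, n = 1.0, 64
h = 1.0 / n
x = (np.arange(n) + 0.5) * h          # element midpoints
u0 = np.sin(2 * np.pi * x)            # mean value per element (P0 coefficient)
u1 = np.zeros(n)                      # slope per element      (P1 coefficient)

def rhs(u0, u1):
    # Upwind flux at the right face of each element (a > 0: take left state).
    f_right = a * (u0 + u1)
    f_left = np.roll(f_right, 1)      # periodic boundaries
    du0 = -(f_right - f_left) / h
    du1 = (2.0 * a * u0 - (f_right + f_left)) * 3.0 / h
    return du0, du1

dt, t = 0.1 * h / a, 0.0
while t < 0.5:
    k0, k1 = rhs(u0, u1)              # stage 1
    v0, v1 = u0 + dt * k0, u1 + dt * k1
    k0, k1 = rhs(v0, v1)              # stage 2
    u0, u1 = 0.5 * (u0 + v0 + dt * k0), 0.5 * (u1 + v1 + dt * k1)
    t += dt
```

Elements couple only through the interface fluxes, which is what makes the method a natural fit for the Octree meshes described above.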

Simon Zimny, InvasIC invited talk: Efficiency and scalability on SuperMUC: Lattice Boltzmann Methods on complex geometries

Applied Supercomputing in Engineering, German Research School for Simulation Sciences

Simulating blood flow in patient-specific intracranial aneurysms:

The Lattice Boltzmann Method (LBM) is a promising alternative to the classical approaches for solving the Navier-Stokes equations numerically. Due to its explicit and local nature it is highly suitable for massive parallelization, and thus also for simulating fluid flows in extremely complex geometries such as blood vessels.
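
This locality can be seen in a textbook D2Q9 BGK update, sketched below in Python; this is a generic illustration with assumed parameters, not code from the solver discussed in the talk. Collision touches only a single lattice node, and streaming exchanges data only with direct neighbors, which is what makes the method easy to parallelize.

```python
import numpy as np

# Generic D2Q9 lattice Boltzmann BGK step on a periodic grid (textbook sketch).
nx, ny, tau = 128, 64, 0.6
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])        # lattice velocities
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)              # lattice weights

def equilibrium(rho, ux, uy):
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    usq = ux * ux + uy * uy
    return w[:, None, None] * rho * (1 + 3 * cu + 4.5 * cu**2 - 1.5 * usq)

rho = np.ones((nx, ny))
rho[nx // 2, ny // 2] += 0.01          # small density perturbation, fluid at rest
f = equilibrium(rho, np.zeros((nx, ny)), np.zeros((nx, ny)))

for step in range(100):
    rho = f.sum(axis=0)                                    # macroscopic density
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho       # macroscopic velocity
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f += (equilibrium(rho, ux, uy) - f) / tau              # collision: node-local
    for i in range(9):                                     # streaming: neighbors only
        f[i] = np.roll(f[i], shift=(c[i, 0], c[i, 1]), axis=(0, 1))
```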

In recent years the treatment of intracranial aneurysms (IA) has been studied at length and has improved significantly. The use of stents to change the flow properties of the blood and thereby trigger the occlusion of an IA is a promising approach. As a prerequisite for a coupled simulation of thrombus formation in IAs, the blood flow, and especially the change in flow patterns due to the insertion of one or multiple stents, has to be analysed in detail. To resolve the highly complex geometry of an IA and the individual struts of a stent, the mesh resolution has to be sufficiently high, resulting in a huge number of elements (on the order of 10^7 to 10^9). Simulating blood flow in such geometries within a reasonable time requires large computational resources. Although an efficiently parallelizable numerical method like the LBM, combined with techniques such as local grid refinement, reduces the required computational effort tremendously, the use of HPC systems such as SuperMUC is still necessary. The Adaptable Poly-Engineering Simulator (Apes) framework provides the full toolchain from mesh creation (Seeder) over the blood flow simulation with the LBM (Musubi) to the postprocessing (Harvester). The LBM solver, Musubi, is highly efficient, scales to more than 100 thousand cores, and is therefore well suited for fluid flow simulations in complex geometries.
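
To put the quoted element counts into perspective, a rough back-of-the-envelope estimate (with assumed parameters: D3Q19 lattice, double precision, two state buffers; these are not figures from the talk) already indicates why such runs need an HPC system:

```python
# Assumed: D3Q19 lattice, 8-byte doubles, two copies of the distribution state.
bytes_per_elem = 19 * 8 * 2
for n_elem in (1e7, 1e8, 1e9):
    gib = n_elem * bytes_per_elem / 2**30
    print(f"{n_elem:.0e} elements -> ~{gib:.0f} GiB of state")
```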

The talk will be split into two parts. First, the Musubi solver is introduced in the context of the Apes simulation framework, including implemented techniques such as local grid refinement and other features. Then the performance of the Musubi code on SuperMUC is discussed, based on a simple test case and on a full 3D simulation of flow through highly complex patient-specific geometries such as the IA described above.