SC²S Colloquium - November 27, 2012

Date: November 27, 2012
Room: 02.09.023
Time: 3 pm, s.t.


Simon Zimny: Efficiency and scalability on SuperMUC: Lattice Boltzmann Methods on complex geometries

Applied Supercomputing in Engineering, German Research School for Simulation Sciences

Simulating blood flow in patient-specific intracranial aneurysms:

The Lattice Boltzmann Method (LBM) is a promising alternative to the classical approaches of solving the Navier-Stokes equations numerically. Due to its explicit and local nature it is highly suitable for massive parallelization, and thus also for simulating fluid flows in extremely complex geometries such as blood vessels. In recent years the treatment of intracranial aneurysms (IA) has been studied at length and has improved significantly. The use of stents to change the flow properties of the blood in order to trigger the occlusion of an IA is a promising approach. As a prerequisite for implementing a coupled simulation of thrombus formation in IA, the flow of blood, and especially the change of flow patterns due to the insertion of one or multiple stents, has to be analysed in detail.

In order to resolve the highly complex geometry of an IA and the individual struts of a stent in detail, the mesh resolution has to be sufficiently high, resulting in a huge number of elements (on the order of 10⁷ to 10⁹). Simulating the blood flow in such geometries in a reasonable time requires large computational power. Although an efficiently parallelizable numerical method like the LBM, combined with techniques like local grid refinement, reduces the required computational effort tremendously, the use of HPC systems such as SuperMUC is still necessary.

The Adaptable Poly-Engineering Simulator (Apes) framework provides the full toolchain, from mesh creation (Seeder) over blood flow simulation using the LBM (Musubi) to postprocessing (Harvester). The LBM solver, Musubi, is highly efficient and scales to more than 100,000 cores, making it well suited for fluid flow simulations in complex geometries.

The talk is split into two parts. First, the Musubi solver is introduced in the context of the Apes simulation framework, including implemented techniques like local grid refinement and other features. Afterwards, the performance of the Musubi code on SuperMUC is discussed, based on a simple test case and a full 3D simulation of the flow through highly complex patient-specific geometries such as the IA described above.
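For readers unfamiliar with the method, the sketch below illustrates why the LBM lends itself to massive parallelization: each time step consists of a purely local collision followed by a nearest-neighbour streaming step. This is a minimal two-dimensional D2Q9 BGK example in Python; it is not the Musubi implementation, and the lattice size, relaxation time, and initial state are illustrative assumptions.

    # Minimal D2Q9 lattice Boltzmann (BGK) sketch -- illustrative only,
    # not the Musubi implementation. Size and tau are assumed values.
    import numpy as np

    nx, ny = 64, 32   # lattice dimensions (assumed)
    tau = 0.8         # BGK relaxation time (assumed)

    # D2Q9 discrete velocity set and the corresponding weights
    c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                  [1, 1], [-1, 1], [-1, -1], [1, -1]])
    w = np.array([4/9] + [1/9]*4 + [1/36]*4)

    def equilibrium(rho, ux, uy):
        """Equilibrium distribution, second order in the velocity."""
        cu = c[:, 0, None, None]*ux + c[:, 1, None, None]*uy
        usq = ux**2 + uy**2
        return rho * w[:, None, None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

    # start from a fluid at rest with unit density
    f = equilibrium(np.ones((nx, ny)), np.zeros((nx, ny)),
                    np.zeros((nx, ny)))

    for step in range(100):
        # macroscopic moments: local reductions per cell
        rho = f.sum(axis=0)
        ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
        uy = (f * c[:, 1, None, None]).sum(axis=0) / rho

        # collision: purely local, no communication between cells
        f += -(f - equilibrium(rho, ux, uy)) / tau

        # streaming: shift each population to its neighbour cell;
        # on a distributed mesh only this step needs halo exchange
        for i in range(9):
            f[i] = np.roll(f[i], shift=tuple(c[i]), axis=(0, 1))

In a production solver the same collide-and-stream structure is distributed over many processes; only the streaming step requires communication, and that communication stays nearest-neighbour, which is what allows scaling to core counts like those reported for Musubi.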