Running Research and Development Projects
= Excellence Initiative: IGSSE =
  
== Distributed stochastic simulation for the hydroelastic analysis of very large floating structures ==
 
  
{| class="wikitable"
|-
| '''Project type''' || IGSSE Project Team
|-
| '''Funded by''' || Excellence Initiative of the German federal and state governments
|-
| '''Begin''' || October 2008
|-
| '''End''' || September 2011
|-
| '''Leader''' || [[Univ.-Prof. Dr. Hans-Joachim Bungartz]], [[Dr. rer. nat. Miriam Mehl]]
|-
| '''Staff''' || [[Bernhard Gatzhammer, M.Sc]], [[Dipl.-Inf. Marion Bendig]]
|-
| '''Contact person''' || [[Dr. rer. nat. Miriam Mehl]]
|-
| '''Co-operation partner''' || Prof. Dr. Ernst Rank, Dr. Ralf-Peter Mundani, PD Dr. Alexander Düster, Prof. PhD Chien Ming Wang (Singapore), SOFiSTiK AG (Oberschleißheim)
|}
 
 
 
'''Brief description'''<br><br>
 
 
 
Very large floating structures (VLFS) are increasingly employed by a number of countries to create land space from the ocean. These "swimming islands" are of pontoon type and benefit from high stability, low manufacturing costs, and easy maintenance. Owing to their much larger dimensions in length than in depth, VLFS are relatively flexible and thus have to be robustly designed against wave-induced deformations and stresses. As such a reliability analysis involves many uncertainties, efficient methods have to be developed that allow for both the modelling of uncertain behaviour and the handling of the computational complexity. The main objective of this project is the development and implementation of a prototype for the hydroelastic analysis of VLFS. Stochastic finite elements are the method of choice for the planned reliability analysis over huge sets of different structural properties, while sophisticated techniques of modern grid computing tackle the computational burden of such complex parameter studies.
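The reliability analysis sketched above can be illustrated with a plain Monte Carlo experiment. The following toy example is not project code: the distributions, parameters, and the limit-state function are all hypothetical, and it only shows how a failure probability is estimated from sampled uncertain structural properties.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(42)
n_samples = 100_000

# Hypothetical uncertain inputs: bending stiffness and wave load amplitude.
stiffness = rng.lognormal(mean=np.log(4.0e11), sigma=0.1, size=n_samples)  # [N m^2]
wave_load = rng.weibull(a=2.0, size=n_samples) * 2.0e6                     # [N/m]

# Hypothetical limit-state function: the deflection of a beam-like pontoon
# section must stay below an admissible threshold.
deflection = wave_load * 300.0**4 / (8.0 * stiffness)   # [m], cantilever-type estimate
failure = deflection > 1.5                              # admissible deflection: 1.5 m

print(f"estimated failure probability: {failure.mean():.4f}")
</syntaxhighlight>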
 
 
 
= Excellence Initiative: IAS =
 
 
 
The [http://www.ias.tum.de Institute for Advanced Study (IAS)] of Technische Universität München is the centerpiece of [http://portal.mytum.de/tum/exzellenzinitiative/zukunftskonzept/index_html TUM's institutional strategy] to promote top-level research within the Excellence Initiative of the German federal and state governments.
 
 
 
== HPC - Tackling the Multi-Challenge ==
 
  
 
{| class="wikitable"
|-
| '''Project type''' || IAS focus group
|-
| '''Funded by''' || Excellence Initiative of the German federal and state governments
|-
| '''Begin''' || 2010
|-
| '''End''' || 2013
|-
| '''Leader''' || [[Univ.-Prof. Dr. Hans-Joachim Bungartz]]
|-
| '''Staff''' || [[Dr. rer. nat. habil. Miriam Mehl]], [[Dr. rer. nat. Dirk Pflüger]], [[Christoph Kowitz, M.Sc.]]
|-
| '''Contact person''' || [[Dr. rer. nat. habil. Miriam Mehl]]
|-
| '''Co-operation partner''' || Prof. George Biros (Georgia, USA), Prof. Markus Hegland (Canberra, Australia)
|}
  
'''Brief description'''<br><br>
High-performance computing (HPC) is a thriving cross-sectional research field of utmost relevance in science and engineering. Scientific progress increasingly depends on insight gained by computational research. With the increased technological potential, however, the requirements grow as well, leading to several computational challenges, all related to some "multi-X" notion: multi-disciplinary, multi-physics, multi-scale, multi-dimensional, multi-level, multi-core. This focus group primarily addresses the three topics multi-physics (mp), multi-dimensional (md), and multi-core (mc).
<br>
The interplay of these three subtopics is straightforward: both mp and md are among the usual suspects that need and, thus, drive HPC technology and mc; mp frequently appears in the context of optimisation or parameter identification and estimation, thriving topics of current md research; and present as well as future mc technology is inspired by algorithmic patterns as provided by mp and md. Hence, it is not only reasonable to address mp, md, and mc in an integral way; it is essential, and this IAS focus group offers the unique chance of doing so at a very high international level.
 
 
 
 
 
= Bayern Excellent: MAC@IGSSE =
 
  
 
The [http://www.mac.tum.de Munich Centre of Advanced Computing (MAC)] is a research consortium established at TUM to bundle research activities related to computational science and engineering (CSE) as well as high-performance computing (HPC) - across disciplines, across departments, and across institutions. In MAC, seven of TUM's departments and other Munich research institutions (Ludwig-Maximilians-Universität, Max Planck institutes, the Leibniz Supercomputing Centre of the Bavarian Academy of Sciences and Humanities) as well as TUM's international partners, such as KAUST, the King Abdullah University of Science and Technology, join forces to ensure the sustainable usage of current and future HPC architectures for the most relevant and most challenging CSE applications.
== A Scalable Infrastructure for Computational Steering ==

{| class="wikitable"
|-
| '''Funded by''' || Bavarian state government, Technische Universität München
|-
| '''Begin''' || 2009
|-
| '''End''' || 2013
|-
| '''Leader''' || [http://wwwwestermann.in.tum.de/people/Westermann Prof. Dr. Rüdiger Westermann]<br>subproject: [[Univ.-Prof. Dr. Hans-Joachim Bungartz]]
|}

'''Brief description'''<br><br>

The goal of this project is to design and prototype a scalable infrastructure for computational steering. It is targeted at the computational engineering domain, which makes it possible to leverage existing cooperative developments as a starting point and to use real-world data that is representative in size, modality, and structure of what is available in other scientific areas such as geology or biology. The infrastructure implements a processing pipeline ranging from scalable data processing workflows to interactive visualisation and human-computer interaction in virtual and augmented reality environments.
  
= TUM-KAUST Strategic Partnership: MAC@KAUST =
  
== Simulation of CO2 Sequestration ==
 
  
{| class="wikitable"
|-
| '''Project type''' || Strategic Partnership with the King Abdullah University of Science and Technology (KAUST)
|-
| '''Funded by''' || KAUST
|-
| '''Begin''' || 2009
|-
| '''End''' || 2013
|-
| '''Leader''' || [[Univ.-Prof. Dr. Hans-Joachim Bungartz]]
|-
| '''Staff''' || [http://www.mac.tum.de see Munich Centre of Advanced Computing]
|-
| '''Contact person''' || [http://www5.in.tum.de/wiki/index.php/Dr._rer._nat._Tobias_Weinzierl Tobias Weinzierl]
|-
| '''Co-operation partner''' || Prof. Dr. Dr.-Ing. habil. Arndt Bode (Computer Architecture), Prof. Dr. Martin Brokate (Numerical Mathematics and Control Theory), Prof. Dr. Drs. h.c. Karl-Heinz Hoffmann (Numerical Mathematics and Control Theory), Prof. Dr.-Ing. Michael Manhart (Hydromechanics), Prof. Dr. Michael Ulbrich (Mathematical Optimisation)
|}
 
 
 
'''Brief description'''<br><br>
 
 
 
The goal of this project is to design and investigate novel approaches to the modelling and simulation of CO2 sequestration processes, in particular in the context of enhanced oil recovery. The project involves both fine-grain simulations - with all related aspects from multi-phase schemes via numerical algorithmics to high-performance computing issues - and homogenization approaches to efficiently capture the fine-grain effects on the macro-scale. For this, groups with expertise in flow physics, mathematical modelling, numerical analysis, numerical algorithmics, optimisation and inverse problems, and high-performance computing and HPC systems join forces. Topics addressed cover multi-scale modelling and homogenisation, fully resolved pore-scale simulation, constrained optimisation of the sequestration process, enhanced numerics and parallelisation, and HPC implementation.
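As a toy illustration of the homogenisation idea (this is not the project's actual multi-scale scheme, and the layer data is hypothetical), already for a simple layered medium the effective permeability entering a macro-scale model depends on the flow direction relative to the fine-scale structure:

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical fine-scale permeabilities of equally thick layers [mD].
k = np.array([250.0, 10.0, 80.0, 400.0, 5.0])

# Flow parallel to the layers: arithmetic mean (layers act in parallel).
k_parallel = k.mean()

# Flow perpendicular to the layers: harmonic mean (layers act in series).
k_perpendicular = len(k) / np.sum(1.0 / k)

print(f"effective permeability parallel:      {k_parallel:8.2f} mD")
print(f"effective permeability perpendicular: {k_perpendicular:8.2f} mD")
</syntaxhighlight>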
 
 
 
 
 
==  Virtual Arabia ==
 
  
 
{| class="wikitable"
|-
| '''Project type''' || Strategic Partnership with the King Abdullah University of Science and Technology (KAUST)
|-
| '''Funded by''' || KAUST
|-
| '''Begin''' || 2009
|-
| '''End''' || 2013
|-
| '''Leader''' || [http://www5.in.tum.de/wiki/index.php/Dr._rer._nat._Tobias_Weinzierl Tobias Weinzierl]
|-
| '''Staff''' || [http://www.mac.tum.de see Munich Centre of Advanced Computing]
|-
| '''Contact person''' || [[Univ.-Prof. Dr. Hans-Joachim Bungartz]]
|-
| '''Co-operation partner''' || Prof. Dr. Dr.-Ing. habil. Arndt Bode (Computer Architecture), Prof. Gudrun Klinker, Ph.D. (Augmented Reality), Prof. Dr. Ernst Rank (Computation in Engineering), Prof. Dr. Rüdiger Westermann (Computer Graphics & Visualization)
|}
  
 
'''Brief description'''<br><br>

The goal of this project is to develop a virtual environment for the interactive visual exploration of Saudi Arabia. In contrast to virtual globe viewers like Google Earth, this environment will allow the user to look both above and underneath the earth's surface in an integrated way. It will thus provide interactive means for the visual exploration of 3D geological structures and dynamic seismic processes, as well as atmospheric processes and effects, or built and planned infrastructure. The specific techniques required to support such functionality will be integrated into a generic infrastructure for visual computing. The project cooperates with the KAUST 3D Modelling and Visualisation Centre and the KAUST Computational Earth Sciences Centre.

== High Performance Visual Computing ==

{| class="wikitable"
|-
| '''Project type''' || Strategic Partnership with the King Abdullah University of Science and Technology (KAUST)
|-
| '''Funded by''' || KAUST
|-
| '''Begin''' || 2012
|-
| '''End''' || 2015
|-
| '''Leader''' || [[Philipp Neumann]], [http://www5.in.tum.de/wiki/index.php/Dr._rer._nat._Tobias_Weinzierl Tobias Weinzierl]
|-
| '''Staff''' || [http://www.mac.tum.de see Munich Centre of Advanced Computing]
|-
| '''Contact person''' || [[Univ.-Prof. Dr. Hans-Joachim Bungartz]]
|-
| '''Co-operation partner''' || Prof. Dr. Ernst Rank (Computation in Engineering), Prof. Shuyu Sun (KAUST)
|}

'''Brief description'''<br><br>

The project combines fundamental methodological research in the field of high-performance computing (HPC) with data exploration on HPC devices, and addresses the question of how results can cross-fertilize seamlessly into applications used at KAUST and TUM to obtain new insight from supercomputing - today and in the upcoming exascale age.

It comprises three major goals. First, we ensure the sustainability of work conducted under the umbrella of "Simulation of CO2 Sequestration", as codes stemming from KAUST faculty but extended by TUM project members and associates are prepared for the upcoming generation of supercomputers beyond the KAUST facilities. Second, we combine visualization techniques and supercomputing, paving the way to interactive, immersive simulation and computational steering. This endeavour brings together insights from "Virtual Arabia" and researchers with a supercomputing and algorithmic affinity, and it uses synergies between KAUST's visualization and supercomputing laboratories. Such an endeavour will pay off for future research at both KAUST and TUM, when insight is no longer obtained in batch mode, as it is today, but problems and phenomena are studied interactively. Third, we integrate research results obtained with TUM codes into KAUST applications as well as into codes from the "Virtual Arabia" project, i.e. we demonstrate the broader applicability of work done under the umbrella of the KAUST-TUM strategic partnership projects.

= G8-Initiative: Nuclear Fusion Simulations at Exascale ([http://www.nu-fuse.com/ Nu-FuSe]) =
  
{| class="wikitable"
|-
| '''Project type''' || G8 Research Councils Initiative on Multilateral Research Funding
|-
| '''Funded by''' || G8 group of leading industrial nations
|-
| '''Begin''' || July 2011
|-
| '''End''' || April 2015
|-
| '''Leader''' || [[Univ.-Prof. Dr. Hans-Joachim Bungartz]]
|-
| '''Staff''' || [[Dr. rer. nat. Tobias Neckel]]
|-
| '''Contact person''' || [[Dr. rer. nat. Tobias Neckel]]
|-
| '''Co-operation partner''' || [http://www.ipp.mpg.de/~fsj/ Prof. Frank Jenko] (Max-Planck-Institut für Plasmaphysik, IPP)
|}
 
 
 
'''Brief description'''<br><br>
 
The G8 project Nu-FuSe is an international project aiming to significantly improve computational modelling capabilities to the level required by the new generation of fusion reactors. The focus is on three specific scientific areas: fusion plasma, the materials from which fusion reactors are built, and the physics of the plasma edge. This will require computing at the "exascale" level across a range of simulation codes, working together towards fully integrated fusion tokamak modelling.

Exploiting upcoming exascale systems effectively for fusion modelling creates significant challenges around scaling, resiliency, result validation, and programmability. The project focuses on meeting these challenges by improving the performance and scaling of community modelling codes to enable simulations orders of magnitude larger than those currently undertaken.
  
= [http://www.ipp.mpg.de/ippcms/eng/job/hepp/index.html HEPP]: International Helmholtz Graduate School for Plasma Physics =

{| class="wikitable"
|-
| '''Project type''' || Helmholtz Graduate School Scholarship
|-
| '''Funded by''' || Helmholtz Gemeinschaft
|-
| '''Begin''' || November 2011
|-
| '''End''' || October 2014
|-
| '''Leader''' || [[Univ.-Prof. Dr. Hans-Joachim Bungartz]]
|-
| '''Staff''' || [[Dr. rer. nat. Tobias Neckel]]
|-
| '''Contact person''' || [[Dr. rer. nat. Tobias Neckel]]
|-
| '''Co-operation partner''' || [http://www.ipp.mpg.de/~fsj/ Prof. Frank Jenko] (Max-Planck-Institut für Plasmaphysik, IPP)
|}

'''Brief description'''<br><br>

The fundamental equations used to understand and predict various phenomena in plasma physics share one important feature: they are all nonlinear. This implies that analytical techniques - although very important - are limited in practice, calling for a numerical approach. Fortunately, the capabilities of modern supercomputers have reached a level which allows tackling some outstanding open issues in theoretical plasma physics, including, e.g., turbulence, nonlinear magnetohydrodynamics, and plasma-wall interaction.

Given the multiscale nature of most problems of interest, advanced algorithms and efficient implementations on massively parallel platforms are usually required to tackle them. In this context, a close collaboration of theoretical plasma physicists with applied mathematicians and computer scientists can be of great benefit. Thus, state-of-the-art numerical techniques, hardware-aware implementation strategies, and scalable parallelization approaches are explored in terms of their potential to minimize the overall computational requirements and to maximize the reliability and robustness of the simulations.

= DFG: German Research Foundation =

== Research Software Sustainability ==

=== preDOM – Domestication of the Coupling Library preCICE ===

{| class="wikitable"
|-
| '''Funded by''' || DFG
|-
| '''Begin''' || 2018
|-
| '''End''' || 2021
|-
| '''Leader''' || [[Dr. rer. nat. Benjamin Uekermann]], [[Univ.-Prof. Dr. Hans-Joachim Bungartz]]
|-
| '''Contact person''' || [[Dr. rer. nat. Benjamin Uekermann]]
|}

'''Brief description'''<br><br>

The purpose of the project is to domesticate preCICE - to make preCICE usable without support by the developer team. To achieve this goal, the usability and documentation of preCICE have to be improved significantly. Marketing and sustainability strategies are required to build up awareness of and trust in the software in the community. In addition, best practices on how to make a scientific software prototype usable for a wide academic audience can be derived and shall be applied to similar software projects.

Reference: [http://www.precice.org/ preCICE Webpage], [https://github.com/precice preCICE Source Code]
  
=== SeisSol-CoCoReCS – SeisSol as a Community Code for Reproducible Computational Seismology ===

{| class="wikitable"
|-
| '''Funded by''' || DFG
|-
| '''Begin''' || 2018
|-
| '''End''' || 2021
|-
| '''Leader''' || [[Univ.-Prof. Dr. Michael Bader]], Dr. Anton Frank ([http://www.lrz.de/ LRZ]), [https://www.geophysik.uni-muenchen.de/Members/gabriel Dr. Alice-Agnes Gabriel (LMU)]
|-
| '''Staff''' || [[Ravil Dorozhinskii, M.Sc.]], [[Lukas Krenz, M.Sc.]], [[Carsten Uphoff]]
|-
| '''Contact person''' || [[Univ.-Prof. Dr. Michael Bader]]
|}

'''Brief description'''<br><br>

The project is funded as part of DFG's initiative to support sustainable research software. In the CoCoReCS project, we address several issues that impede a wider adoption of the earthquake simulation software [http://www.seissol.org/ SeisSol]. This includes improvements to the workflows for CAD and meshing, better training and introductory material, and the setup of an infrastructure to reproduce test cases, benchmarks, and user-provided simulation scenarios.

== Numerical Aspects of the Simulation of Quantum Many-body Systems ==

{| class="wikitable"
|-
| '''Project type''' || QCCC project
|-
| '''Funded by''' || Quantum Computing, Control and Communication (QCCC)
|-
| '''Begin''' || January 2008
|-
| '''End''' || December 2012
|-
| '''Leader''' || [[Univ.-Prof. Dr. Thomas Huckle]]
|-
| '''Staff''' || [[Dipl.-Math. Konrad Waldherr]]
|-
| '''Contact person''' || [[Univ.-Prof. Dr. Thomas Huckle]]
|-
| '''Co-operation partner''' || Dr. Thomas Schulte-Herbrueggen (Chemistry, TUM)
|}

'''Brief description'''<br><br>
  
In recent years, growing attention has been dedicated to many-body quantum systems from the point of view of quantum information. After the initial investigation of simple systems such as single or two qubits, understanding the characteristics of a realistic quantum information device necessarily leads to the study of many-body quantum systems. These studies are also driven by the very fast development of experiments, which in recent years have reached the goal of coherent control of a few qubits (ion traps, charge qubits, etc.), with a roadmap for further scaling and improvement of coherent control and manipulation techniques. Also, new paradigms for performing quantum information tasks, such as quantum information transfer or quantum cloning, without direct control of the whole quantum system, but using our knowledge of it, have increased the need for tools to understand in detail the behaviour of many-body quantum systems as we find them in nature. These new goals of the quantum information community lead to an unavoidable exchange of knowledge with other communities that already have the know-how and insight to address such problems, for example the condensed matter, computational physics, and quantum chaos communities. Applying known techniques and developing new ones from a quantum information perspective has already produced fast and unexpected developments in these fields. The comprehension of many-body quantum systems, ranging from a few qubits to the thermodynamic limit, is thus needed and welcome not only to develop useful quantum information devices; it will also lead us to a better understanding of the quantum world.
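As a small, self-contained illustration of the tensor structure behind such computations (see also the reference below), the following sketch assembles the Hamiltonian of a short Ising-type spin chain from Kronecker products and diagonalises it densely. The exponential growth of the dimension (2^N) is precisely what tensor-network methods are designed to avoid; the model and parameters here are chosen purely for illustration.

<syntaxhighlight lang="python">
import numpy as np

# Pauli matrices.
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
id2 = np.eye(2)

def chain_term(ops, n):
    """Kronecker product of the given 2x2 operators, padded with identities to n sites."""
    mats = list(ops) + [id2] * (n - len(ops))
    out = np.array([[1.0]])
    for m in mats:
        out = np.kron(out, m)
    return out

def ising_hamiltonian(n, j=1.0, h=0.5):
    """H = -J sum_i sz_i sz_{i+1} - h sum_i sx_i for an open chain of n spins."""
    dim = 2**n
    ham = np.zeros((dim, dim))
    for i in range(n - 1):
        ham -= j * chain_term([id2] * i + [sz, sz], n)
    for i in range(n):
        ham -= h * chain_term([id2] * i + [sx], n)
    return ham

energies = np.linalg.eigvalsh(ising_hamiltonian(8))
print("ground-state energy of an 8-spin chain:", energies[0])
</syntaxhighlight>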
Reference: [http://www5.in.tum.de/pub/CompQuantTensorNetwork.pdf Computations in Quantum Tensor Networks]

== Priority Program 1648 SPPEXA - Software for Exascale Computing ==
 
  
 
=== Coordination Project ===

{| class="wikitable"
|-
| '''Funded by''' || DFG
|-
| '''Begin''' || 2012
|-
| '''End''' || 2020
|-
| '''Leader''' || [[Univ.-Prof. Dr. Hans-Joachim Bungartz]]
|-
| '''Staff''' || [[Severin Reiz]]
|-
| '''Contact person''' || [[Univ.-Prof. Dr. Hans-Joachim Bungartz]]
|}

'''Brief description'''<br><br>

The Priority Programme (SPP) SPPEXA differs from other SPPs with respect to its genesis, its volume, its funding via DFG's Strategy Fund, the range of disciplines involved, and its clear strategic orientation towards a set of time-critical objectives. Therefore, despite its distributed structure, SPPEXA also resembles a Collaborative Research Centre to a large extent. Its successful implementation and evolution will require both more and more intense structural measures. The Coordination Project comprises all intended SPPEXA-wide activities, including steering and coordination, internal and international collaboration and networking, and educational activities.

Reference: [http://www.sppexa.de Priority Program 1648 SPPEXA - Software for Exascale Computing]
  
=== ExaFSA - Exascale Simulation of Fluid-Structure-Acoustics Interaction ===

{| class="wikitable"
|-
| '''Funded by''' || DFG
|-
| '''Begin''' || 2012
|-
| '''End''' || 2019
|-
| '''Leader''' || [https://www.ipvs.uni-stuttgart.de/abteilungen/sgs/abteilung/mitarbeiter/Miriam.Mehl Prof. Dr. Miriam Mehl]
|-
| '''Staff''' || [[Dr. rer. nat. Benjamin Uekermann]], [[Benjamin Rüth]]
|-
| '''Contact person''' || [https://www.ipvs.uni-stuttgart.de/abteilungen/sgs/abteilung/mitarbeiter/Miriam.Mehl Prof. Dr. Miriam Mehl]
|}

'''Brief description'''<br><br>

In scientific computing, an increasing need for ever more detailed insights and optimization leads to improved models, often including several physical effects described by different types of equations. The complexity of the corresponding solver algorithms and implementations typically leads to coupled simulations that reuse existing software codes for different physical phenomena (multi-physics simulations) or for different parts of the simulation pipeline, such as grid handling, matrix assembly, system solvers, and visualization. Accuracy requirements can only be met with a high spatial and temporal resolution, making exascale computing a necessary technology to address runtime constraints for realistic scenarios. However, running a multi-component simulation efficiently on massively parallel architectures is far more challenging than parallelizing a single simulation code. Open questions range from suitable load balancing strategies over bottleneck-avoiding communication and interactive visualization for online analysis of results to the synchronization of several components and parallel numerical coupling schemes. We tackle these challenges for fluid-structure-acoustics interactions, which are extremely costly due to the large range of scales involved. Specifically, this requires innovative surface and volume coupling numerics between the different solvers as well as sophisticated dynamic load balancing and in-situ coupling and visualization methods.

Reference: [https://ipvs.informatik.uni-stuttgart.de/SGS/EXAFSA/ ExaFSA Webpage], [http://www.precice.org/ preCICE Webpage], [https://github.com/precice preCICE Source Code]
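The partitioned coupling pattern at the heart of such simulations can be sketched schematically. The code below is not the preCICE API: both "solvers" are hypothetical one-line stand-ins, and only the basic pattern is shown - an implicit coupling of one time step via an under-relaxed fixed-point iteration over interface data.

<syntaxhighlight lang="python">
def fluid_solver(displacement):
    """Hypothetical stand-in: maps an interface displacement to interface forces."""
    return -1000.0 * displacement + 2.0

def structure_solver(force):
    """Hypothetical stand-in: maps interface forces to an interface displacement."""
    return force / 5000.0

# Implicit coupling of one time step: iterate both solvers to a consistent
# interface state, damping the update with an under-relaxation factor.
d = 0.0
omega = 0.5
for k in range(50):
    f = fluid_solver(d)
    d_new = structure_solver(f)
    if abs(d_new - d) < 1e-10:
        break
    d = d + omega * (d_new - d)

print(f"converged interface displacement after {k + 1} iterations: {d:.6e}")
</syntaxhighlight>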
  
=== EXAHD - An Exa-Scalable Two-Level Sparse Grid Approach for Higher-Dimensional Problems in Plasma Physics and Beyond ===

{| class="wikitable"
|-
| '''Funded by''' || DFG
|-
| '''Begin''' || 2012
|-
| '''End''' || 2020
|-
| '''Leader''' || [[Univ.-Prof. Dr. Hans-Joachim Bungartz]]
|-
| '''Staff''' || [[Michael_Obersteiner,_M.Sc.|Michael Obersteiner]]
|-
| '''Contact person''' || [[Univ.-Prof. Dr. Hans-Joachim Bungartz]]
|}

'''Brief description'''<br><br>

Higher-dimensional problems (i.e., beyond four dimensions) appear in medicine, finance, and plasma physics, posing a challenge for tomorrow's HPC. As an example application, we consider turbulence simulations for plasma fusion with one of the leading codes, GENE, which promises to advance science on the way to carbon-free energy production. While higher-dimensional applications involve a huge number of degrees of freedom, such that exascale computing becomes necessary, mere domain decomposition approaches to their parallelization are infeasible, since communication explodes with increasing dimensionality. Thus, to ensure high scalability beyond domain decomposition, a second major level of parallelism has to be provided. To this end, we propose to employ the sparse grid combination technique, a model reduction approach for higher-dimensional problems. It computes the desired solution via a combination of smaller, anisotropic and independent simulations, and thus provides this extra level of parallelization. In its randomized asynchronous and iterative version, it will break the communication bottleneck in exascale computing, achieving full scalability. Our two-level methodology enables novel approaches to scalability (ultra-scalable due to numerically decoupled subtasks), resilience (fault and outlier detection and even compensation without the need for recomputation), and load balancing (high-level compensation for insufficiencies on the application level).

Reference: [http://www.sppexa.de Priority Program 1648 SPPEXA - Software for Exascale Computing]
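A minimal sketch of the combination technique in two dimensions (the project targets much higher-dimensional settings, with iterative and fault-tolerant variants): the solution is combined from small anisotropic grids that can be computed independently. Here the component "solver" is simply the sampling of a test function, and evaluation is done at a point shared by all component grids, so the combination coefficients telescope to the exact value.

<syntaxhighlight lang="python">
import numpy as np

def component_solution(l1, l2, f):
    """Hypothetical component 'solver': sample f on an anisotropic 2^l1 x 2^l2 grid."""
    x = np.linspace(0.0, 1.0, 2**l1 + 1)
    y = np.linspace(0.0, 1.0, 2**l2 + 1)
    return f(*np.meshgrid(x, y, indexing="ij"))

def combine_at_node(px, py, n, f):
    """2D combination technique:
    u_n = sum_{l1+l2=n} u_{l1,l2} - sum_{l1+l2=n-1} u_{l1,l2},
    evaluated at a dyadic point (px, py) that is a node of every component grid."""
    total = 0.0
    for q, coeff in ((n, 1.0), (n - 1, -1.0)):
        for l1 in range(1, q):                    # all grids with l1 + l2 = q, l1, l2 >= 1
            l2 = q - l1
            u = component_solution(l1, l2, f)     # small, independent, anisotropic
            i, j = int(px * 2**l1), int(py * 2**l2)
            total += coeff * u[i, j]
    return total

f = lambda x, y: np.sin(np.pi * x) * np.exp(-y)
print(combine_at_node(0.5, 0.5, 5, f), "vs exact", f(0.5, 0.5))
</syntaxhighlight>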
 
 
 
== SFB-TRR 89: Invasive Computing ==

{| class="wikitable"
|-
| '''Funded by''' || DFG
|-
| '''Begin''' || Mid 2010
|-
| '''End''' || 3rd phase in mid 2022
|-
| '''Leader''' || [[Univ.-Prof. Dr. Hans-Joachim Bungartz]] (D3), [[Univ.-Prof. Dr. Michael Bader]] (A4)
|-
| '''Staff''' || [[Santiago Narvaez, M.Sc.]], [[Emily Mo-Hellenbrand, M.Sc.]], [[Alexander Pöppl, M.Sc.]], [[Dr. rer. nat. Tobias Neckel]], [[Dr. rer. nat. Philipp Neumann]]; former staff: [[Dr. rer. nat. Martin Schreiber]]
|-
| '''Contact person''' || [[Univ.-Prof. Dr. Hans-Joachim Bungartz]] (D3), [[Univ.-Prof. Dr. Michael Bader]] (A4)
|}

'''Brief description'''<br><br>

In the CRC/Transregio "Invasive Computing", we investigate a novel paradigm for designing and programming future parallel computing systems - called invasive computing. The main idea and novelty of invasive computing is to introduce resource-aware programming support in the sense that a given program gets the ability to explore and dynamically spread its computations to neighbouring processors, similar to a phase of invasion, and then to execute code of a high degree of parallelism in parallel, based on the available (invasible) region of a given multi-processor architecture. Afterwards, once the program terminates or if the degree of parallelism should become lower again, the program may enter a retreat phase, deallocate resources, and resume execution, for example, sequentially on a single processor (see the schematic sketch below). To support this idea of self-adaptive and resource-aware programming, not only new programming concepts, languages, compilers, and operating systems are necessary, but also revolutionary architectural changes in the design of MPSoCs (Multi-Processor Systems-on-a-Chip), so as to efficiently support invasion, infection, and retreat operations involving concepts for dynamic processor, interconnect, and memory reconfiguration.

Reference: [http://invasic.informatik.uni-erlangen.de/ Transregional Collaborative Research Centre 89 - Invasive Computing]

=== A4: Design-Time Characterisation and Analysis of Invasive Algorithmic Patterns ===

* Phase 2 and 3 (2014-2022): [http://invasic.informatik.uni-erlangen.de/en/tp_a4_PhII.php see description of project A4 on the Invasic website]

=== D3: Invasion for High Performance Computing ===

* Phases 1, 2 and 3 (2010-2022): [http://invasic.informatik.uni-erlangen.de/en/tp_d3_PhII.php see description of project D3 on the Invasic website]
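A schematic sketch of the invade/infect/retreat cycle described above, using a hypothetical Python API on a standard process pool; the actual project builds on X10 and dedicated MPSoC hardware support, so this only mirrors the three phases, not the real programming model.

<syntaxhighlight lang="python">
from concurrent.futures import ProcessPoolExecutor
import os

class Claim:
    """A set of claimed processing elements (here: a plain process pool)."""
    def __init__(self, workers):
        self.pool = ProcessPoolExecutor(max_workers=workers)
        self.workers = workers

def invade(requested):
    """Claim up to `requested` processing elements, depending on availability."""
    granted = min(requested, os.cpu_count() or 1)
    return Claim(granted)

def infect(claim, kernel, work_items):
    """Execute the parallel kernel on all resources of the claim."""
    return list(claim.pool.map(kernel, work_items))

def retreat(claim):
    """Release the claimed resources again."""
    claim.pool.shutdown()

def kernel(x):
    return x * x

if __name__ == "__main__":
    claim = invade(requested=8)                  # phase 1: invade
    results = infect(claim, kernel, range(100))  # phase 2: infect
    retreat(claim)                               # phase 3: retreat
    print(sum(results))
</syntaxhighlight>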
= BMBF: HPC Software for Scalable, Parallel Hardware =

The two following BMBF projects were established within the BMBF call "HPC Software for Scalable, Parallel Hardware" in 2008.

== Highly Scalable Eigenvalue Solvers for Petaflop Applications (ELPA) ==

[http://elpa.rzg.mpg.de/ Website of the project]

{| class="wikitable"
|-
| '''Project type''' || BMBF project within the call "HPC Software für skalierbare Parallelrechner"
|-
| '''Funded by''' || BMBF
|-
| '''Begin''' || 2008
|-
| '''End''' || 2012
|-
| '''Leader''' || Rechenzentrum Garching, Dr. Hermann Lederer
|-
| '''Staff''' || [[Thomas Auckenthaler]], [[Univ.-Prof. Dr. Michael Bader]], [[Univ.-Prof. Dr. Hans-Joachim Bungartz]], [[Univ.-Prof. Dr. Thomas Huckle]]
|-
| '''Contact person''' || [[Thomas Auckenthaler]]
|-
| '''Co-operation partners''' || Rechenzentrum Garching (Dr. H. Lederer),<br> Bergische Universität Wuppertal, Lehrstuhl für Angewandte Informatik (Prof. A. Frommer, Prof. B. Lang),<br> Fritz-Haber-Institut, Berlin, Abt. Theorie (Prof. M. Scheffler, Dr. V. Blum),<br> Max-Planck-Institut für Mathematik in den Naturwissenschaften, Leipzig, Abt. Komplexe Strukturen in Biologie und Kognition (Prof. J. Jost),<br> IBM Deutschland GmbH
|}

'''Brief description'''<br><br>

The ELPA project develops highly scalable solvers for eigenvalue problems. The primary goal is the design and implementation of a highly scalable direct eigensolver for large, dense, symmetric matrices, and its integration into a publicly available library. In addition, the use of iterative solvers for specific eigenproblems will be investigated.
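A toy analogue of the problem class ELPA targets - a dense symmetric eigenproblem - using serial NumPy/LAPACK calls; ELPA itself is a distributed-memory library that scales this kind of computation to thousands of cores.

<syntaxhighlight lang="python">
import numpy as np

# Build a random dense symmetric matrix as a stand-in test problem.
rng = np.random.default_rng(0)
a = rng.standard_normal((500, 500))
a = (a + a.T) / 2.0                       # symmetrise

# Direct eigensolver for symmetric matrices (LAPACK 'syevd'-type routine).
eigenvalues, eigenvectors = np.linalg.eigh(a)

# Residual check: ||A V - V diag(lambda)|| should be near machine precision.
residual = np.linalg.norm(a @ eigenvectors - eigenvectors * eigenvalues)
print("largest eigenvalue:", eigenvalues[-1], " residual:", residual)
</syntaxhighlight>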
 
 
== Innovative HPC-Methoden und Einsatz für hochskalierbare Molekulare Simulation (IMEMO) ==

{| class="wikitable"
|-
| '''Project type''' || BMBF project within the call "HPC Software für skalierbare Parallelrechner"
|-
| '''Funded by''' || BMBF
|-
| '''Begin''' || 2008
|-
| '''End''' || 2012
|-
| '''Leader''' || Prof. Dr.-Ing. Michael Resch (HLRS, Universität Stuttgart)
|-
| '''Staff''' || [[Martin Buchholz]], [[Ekaterina Elts, M.Sc]], [[Wolfgang Eckhardt]], [[Univ.-Prof. Dr. Michael Bader]], [[Univ.-Prof. Dr. Hans-Joachim Bungartz]]
|-
| '''Contact person''' || [[Martin Buchholz]]
|-
| '''Co-operation partners''' || Institut für Techno- und Wirtschaftsmathematik (ITWM) an der Fraunhofer Gesellschaft (Dr. Franz-Josef Pfreundt),<br> Höchstleistungsrechenzentrum (HLRS) der Universität Stuttgart (Prof. Dr.-Ing. Michael Resch),<br> Lehrstuhl für Thermodynamik (LTD) an der Universität Kaiserslautern (Prof. Dr.-Ing. Hans Hasse),<br> Lehrstuhl für Thermodynamik und Energietechnik (ThEt) an der Universität Paderborn (Prof. Dr.-Ing. Jadran Vrabec)
|}

'''Brief description'''<br><br>

Within the IMEMO project, our SCCS group develops efficient algorithms for the parallelisation of large-scale molecular simulations. One of the main questions is dynamic load balancing in settings where strong imbalances occur, such as during condensation processes, where the number of molecules in different parts of the computational domain varies over several orders of magnitude. A further focus is the development of hierarchical parallel algorithms for highly parallel clusters of manycore processors.
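The load balancing problem can be illustrated with a minimal sketch (not IMEMO code; the particle distribution is synthetic): given strongly varying particle counts per cell, subdomain boundaries are chosen via prefix sums so that each rank carries roughly the same load.

<syntaxhighlight lang="python">
import numpy as np

# Synthetic scenario: a dilute gas phase next to a dense droplet region,
# mimicking the imbalance arising during condensation.
rng = np.random.default_rng(1)
particles_per_cell = np.concatenate([
    rng.poisson(2, 800),      # dilute gas phase
    rng.poisson(200, 200),    # dense droplet region
])

def balanced_partition(load, ranks):
    """Cut the cell array so that each rank carries ~1/ranks of the total load."""
    cumulative = np.cumsum(load)
    targets = cumulative[-1] * np.arange(1, ranks) / ranks
    return np.searchsorted(cumulative, targets)

cuts = balanced_partition(particles_per_cell, ranks=4)
print("cell indices where subdomains are cut:", cuts)
</syntaxhighlight>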
 
= BMBF: Program Math =

== Non-Linear Characterization and Analysis of FEM Simulation Results for Motor-Car Components and Crash Tests (SIMDATA-NL) ==

{| class="wikitable"
|-
| '''Project type''' || [http://www.bmbf.de/foerderungen/13918.php BMBF support program: Mathematics for Innovations in the Industrial and Service Sectors]
|-
| '''Funded by''' || BMBF
|-
| '''Begin''' || July 2010
|-
| '''End''' || June 2013
|-
| '''Leader''' || [[Univ.-Prof. Dr. Hans-Joachim Bungartz]]
|-
| '''Staff''' || [[Benjamin Peherstorfer, M.Sc]], [[Dr. rer. nat. Dirk Pflüger]]
|-
| '''Contact person''' || [[Dr. rer. nat. Dirk Pflüger]]
|-
| '''Co-operation partner''' || [http://wissrech.ins.uni-bonn.de/main/ Prof. Dr. Michael Griebel (INS, Bonn)], [http://www-m4.ma.tum.de/index.en.html Prof. Dr. Claudia Czado (Mathematical Statistics, TU München)], [http://www.math.tu-berlin.de/~garcke/ Dr. Jochen Garcke (Institute of Mathematics, TU Berlin)], [http://www.scai.fraunhofer.de/en.html Clemens-August Thole, Prof. Dr. Ulrich Trottenberg (SCAI, St. Augustin)], AUDI AG, PDTec AG, Volkswagen AG
|}

'''Brief description'''<br><br>

The project aims at extracting the few effective dimensions in high-dimensional simulation data in the context of automotive design. Linear methods such as principal component analysis alone are not sufficient for many of these applications, due to significant nonlinear effects. They will therefore be complemented by methods that can resolve nonlinear relationships, especially by means of sparse grid discretizations.
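A small synthetic example of the limitation motivating the project: principal component analysis recovers linear effective dimensions, but an intrinsically one-dimensional, nonlinear structure still requires two linear components.

<syntaxhighlight lang="python">
import numpy as np

# Synthetic data: a 1D nonlinear manifold (a circle) embedded in 2D, plus noise.
rng = np.random.default_rng(2)
t = rng.uniform(0, 2 * np.pi, 500)
data = np.column_stack([np.cos(t), np.sin(t)]) + 0.05 * rng.standard_normal((500, 2))

# PCA via the singular value decomposition of the centered data.
centered = data - data.mean(axis=0)
_, singular_values, _ = np.linalg.svd(centered, full_matrices=False)
explained = singular_values**2 / np.sum(singular_values**2)

# Although the data is intrinsically one-dimensional, PCA needs both
# linear components to describe it - a nonlinear method could do with one.
print("variance explained per linear component:", np.round(explained, 3))
</syntaxhighlight>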
 
 
 
= EU: DEISA DECI 7 - DiParTS =

{| class="wikitable"
|-
| '''Project type''' || HPC/Grid Project
|-
| '''Funded by''' || DEISA
|-
| '''Begin''' || July 2010
|-
| '''End''' || April 2011
|-
| '''Leader''' || [[Univ.-Prof. Dr. Hans-Joachim Bungartz]], [[Dr. rer. nat. Tobias Weinzierl]]
|-
| '''Staff''' || [[Dipl.-Inf. Atanas Atanasov]], [[Dipl.-Inf. Kristof Unterweger]]
|-
| '''Contact person''' || [[Dr. rer. nat. Tobias Weinzierl]]
|-
| '''Co-operation partner''' || Dr.-Ing. Ionel Muntean (TU Cluj-Napoca), King Abdullah University of Science and Technology (KAUST)
|}

'''Brief description'''<br><br>

The DiParTS project (Distributed Particle Transport Simulation in a Grid-like HPC CFD Environment) numerically studies particles dispersed in non-stationary fluids within tube-like geometries on the micro-scale, where the fluid and, as a consequence, the particles are stimulated by an oscillating pressure. The particles' long-time behaviour under the pressure oscillations, i.e. their averaged movement on the long-term time scale, allows us to draw conclusions, for example, on the causes of particle sedimentation and centrifugal particle separation in several applications, as the particles exhibit a drift along the stimulation amplitude. Here, classical fluid-structure interaction phenomena interplay with Brownian motion and particle-wall interaction. In a preceding DEISA project, we studied simplified experimental setups on the short-time scale. Despite some promising insights from a fluid-dynamics point of view, the full simulation of the situation described above proved to be far from solvable with today's computing power. In this project, we nevertheless broaden the horizon of computability by switching from a fully coupled system to a decoupled approach: the fluid simulation without particles, on an extremely fine spatial and temporal resolution, is cut into small time intervals; these chunks of computational work are deployed to supercomputers; and the fluid fields are coarsened spatially before the supercomputer streams the data back to the scientist's local workstation, where the Brownian motion and the particles' effects are added to the flow field after the fluid-dynamics time step has terminated. The extreme computing power spent on this waterfall process - in particular on the fine-scale fluid-dynamics simulation - will yield new insights into the long-time behaviour of the overall setup, while the approach is validated by comparing a fully coupled fluid-interaction setting with the decoupled simulation for several small time steps.

[http://www.deisa.eu/science/deci/projects2010-2011/DiParTS Official DEISA webpage]
 
 
 
= EU: Tempus CANDI =

{| class="wikitable"
|-
| '''Project type''' || EU Tempus Project
|-
| '''Funded by''' || EU
|-
| '''Begin''' || January 2010
|-
| '''End''' || December 2013
|-
| '''Leader''' || University of Vienna
|-
| '''Staff''' || [[Univ.-Prof. Dr. Hans-Joachim Bungartz]], Univ.-Prof. Dr. Ernst W. Mayr, Univ.-Prof. Dr. Helmut Seidl, [[Dr. rer. nat. Tobias Weinzierl]]
|-
| '''Contact person''' || [[Dr. rer. nat. Tobias Weinzierl]]
|-
| '''Co-operation partner''' || [http://www.candi.uz/ see official webpage]
|}

'''Brief description'''<br><br>

The CANDI project will develop both the infrastructure for e-Learning and retraining and the skills necessary to transfer existing courses and curricula to an e-Learning environment. The project is set up to address multiple problems simultaneously:

* Most obviously, CANDI will help to educate large numbers of students. Additional costs for the infrastructure will be modest, since no new buildings are necessary, existing teaching personnel can be employed, and only modest investment in computer infrastructure is needed.
* CANDI will help to narrow the gap in education level between central universities and those in the provinces.
* CANDI will train local university staff in the systematic and effective use of e-Learning, presentation technology, and related didactic skills. Existing e-Learning approaches we saw in Central Asia mostly involve electronic versions of course notes on the internet.
* Importantly, CANDI will use e-Learning not only to teach students, but also to teach university staff, in particular at institutions in provincial cities. In fact, e-Learning will also become the main medium to teach e-Learning skills.
* CANDI will support the retraining of industry staff. Conversely, CANDI will also open opportunities for industry to deliver applied courses and lectures to a university audience.
* CANDI will employ inexpensive open-source solutions for e-Learning.

In addition to these direct effects, CANDI will also have important positive indirect effects on universities and industries in Uzbekistan and Kazakhstan:

* CANDI will have a pilot phase in which existing courses from European partners are transferred into the e-Learning framework. Since these courses reflect the state of the art in their respective areas (mostly Computer Science, Chemistry, Computational Science, and soft skills), they will by their nature improve the quality of the curricula inside and outside of e-Learning.
* The establishment of standardized e-Learning courses facilitates the convergence of different academic systems, and thus the possibility of a credit transfer system.
* CANDI will improve the English and soft-skill knowledge of all participants, thereby improving the ability of Central Asian staff to achieve sustainability through international grants.
* By building up competence in e-Learning, CANDI will also contribute to the knowledge base in software engineering and programming in Uzbekistan and Kazakhstan.
 
 
 
= EU: Tempus Belgrad =
 
 
 
{| class="wikitable"
|-
| '''Project type''' || EU Tempus Project
|-
| '''Funded by''' || EU
|-
| '''Begin''' || 15.01.2009
|-
| '''End''' || 14.01.2012
|-
| '''Leader''' || Faculty of Mechanical Engineering, University of Belgrade
|-
| '''Staff''' || Prof. Dr.-Ing. Martin Gabi (Universität Karlsruhe), Prof. Dr. rer. nat. Ernst Rank (TUM), [[Univ.-Prof. Dr. Hans-Joachim Bungartz]] (TUM), Dr. Mihailo Ristic (Imperial College London), Prof. Dr. Javier Alvarez del Castillo (Universitat Politècnica de Catalunya), The German University in Cairo (GUC), Prof. Dr. Milos Nedeljkovic (University of Belgrade), Prof. Dr. Milan Matijevic (University of Kragujevac), Prof. Dr. Dragan Lazic (University of Belgrade), Prof. Dr. Zarko Cojbasic (University of Nis)
|-
| '''Contact person''' || Prof. Dr. Milos Nedeljkovic
|-
| '''Co-operation partner''' || ASIIN e.V. (Düsseldorf), Andrej Vrbancic (Robotina doo, Slovenia), Prof. Dr. Radivoje Mitrovic (Ministry of Education, Serbia), National Tempus Office Serbia, Dr. Zaljko Despotovic (Institute "Mihajlo Pupin", Serbia), Rectorate of the University of Belgrade, Biserka Ilic (Informatika doo, Serbia), Dusan Babic (IvDam Process Control doo, Serbia)
|}
 
 
 
= KONWIHR: Computational Steering of Complex Flow Simulations =
 
 
 
{| class="wikitable"
|-
| '''Project type''' || Kompetenznetzwerk für Technisch-Wissenschaftliches Hoch- und Höchstleistungsrechnen in Bayern (KONWIHR II)
|-
| '''Funded by''' || BMBF
|-
| '''Begin''' || 2008
|-
| '''End''' || 2011
|-
| '''Leader''' || [[Univ.-Prof. Dr. Hans-Joachim Bungartz]], [[Dr. rer. nat. Miriam Mehl]]
|-
| '''Staff''' ||
|-
| '''Contact person''' || [[Dr. rer. nat. Miriam Mehl]]
|-
| '''Co-operation partner''' || Prof. Dr. Ernst Rank, Prof. Dr. Michael Manhart, Prof. Dr. Bernd Simeon, Prof. Dr. Peter Rentrop
|}
 
 
 
'''Brief description'''<br><br>
 
 
 
Computational Science and Engineering benefits from a continuous increase in computing speed and the availability of very fast networks. Yet some of the opportunities offered by these developments are only used to a fraction for numerical simulation. Moreover, despite new possibilities in computer visualisation, virtual or augmented reality, and collaboration models, most available engineering software still follows the classical strict separation of pre-processing, computing, and post-processing. In previous work of the applicants, some of the major obstructions to interactive computation for complex simulation tasks in the engineering sciences were identified and partially removed. These were found, in particular, in traditional software structures, in the definition of geometric models and boundary conditions, and in the often still very tedious work of generating computational meshes. A generic approach for collaborative computational steering has been developed in which pre- and post-processing are integrated with high-performance computing and which supports the cooperation of workgroups connected via the internet. Suitable numerical methods, such as the Lattice Boltzmann method (LBM) for fluid flow simulation, are at the core of this approach. The proposed project will extend this approach in various directions.
 
 
 
 
 
= ENB: Elite Network of Bavaria =
 
 
 
== Bavarian Graduate School of Computational Engineering (BGCE) ==
 
 
 
[http://www.bgce.de Website of the BGCE]
 
 
 
{| class="wikitable"
|-
| '''Project type''' || Elite Study Program
|-
| '''Funded by''' || Elite Network of Bavaria
|-
| '''Begin''' || April 2005
|-
| '''End''' || April 2015
|-
| '''Leader''' || [[Univ.-Prof. Dr. Hans-Joachim Bungartz]]
|-
| '''Staff''' || [[Dr. rer. nat. Tobias Neckel]], [[Dipl.-Inf. Marion Bendig]]
|-
| '''Contact person''' || [[Dr. rer. nat. Tobias Neckel]]
|-
| '''Co-operation partner''' || International Master's Program Computational Science and Engineering (TUM)<br> International Master's Program Computational Mechanics (TUM)<br> International Master's Program Computational Engineering (U Erlangen)
|}
 
 
 
'''Brief description'''<br><br>
 
  
The Bavarian Graduate School of Computational Engineering (BGCE) is an association of three Master's programs: Computational Engineering (CE) at the University of Erlangen-Nürnberg, Computational Mechanics (COME) and Computational Science and Engineering (CSE), both at TUM. Funded by the Elitenetzwerk Bayern, the Bavarian Graduate School offers an Honours program for gifted and highly motivated students. The Honours program extends the regular Master's programs by several academic offers:

* additional courses in the area of computational engineering, in particular block courses and summer academies,
* courses and seminars on "soft skills" - like communication skills, management, and leadership,
* an additional semester project closely connected to current research.

Students who master the regular program with an above-average grade and successfully finish the Honours program as well earn the academic degree "Master of Science with Honours".
= MISTI MIT-TUM Project: Combining Model Reduction with Sparse Grids into a Multifidelity Framework for Design, Control and Optimization =
 
 
 
[http://web.mit.edu/misti/index.html Webpage of MISTI]
 
  
 
{| class="wikitable"
|-
| '''Project type''' || [http://web.mit.edu/misti/mit-germany/faculty/seed.html MISTI Germany Project]
|-
| '''Funded by''' || MISTI
|-
| '''Begin''' || January 2012
|-
| '''End''' || September 2013
|-
| '''Leader''' || [[Univ.-Prof. Dr. Hans-Joachim Bungartz]]
|-
| '''Staff''' || [[Daniel Butnaru, M.Sc]], [[Benjamin Peherstorfer, M.Sc]]
|-
| '''Contact person''' || [[Univ.-Prof. Dr. Hans-Joachim Bungartz]]
|-
| '''Co-operation partner''' || [http://web.mit.edu/kwillcox/www/index.html Univ.-Prof. Dr. Karen Willcox (MIT)]
|}

'''Brief description'''<br><br>

Many engineering problems require repeated simulations in order to model and optimize a real-life system. Such models are typically quite complex, and a single solution usually involves a huge computational effort. If a large number of such expensive solutions is needed, the models become impractical and alternatives are sought, with the goal of enabling interactive and highly reliable high-accuracy simulations. Surrogate models mimic the behavior of the simulation model as closely as possible while being computationally much cheaper to evaluate. While certain surrogate methods exist and perform well for specific problems, their acceptance is slowed by their complex and intrusive nature: they need to be reconsidered for each problem class and are sensitive to the characteristics of the underlying simulation.

In this project we open a collaboration between MIT and TUM in the area of model reduction, with an initial focus on non-intrusive methods. These treat the simulation as a black box and, based only on a number of snapshots, deliver an approximation which can then be efficiently queried. The joint work will combine MIT's model-reduction techniques with TUM's sparse grid methods, with the goal of delivering a novel non-intrusive model reduction technique.
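A minimal sketch of the snapshot-based reduction idea (the black-box "solver" and its parametrisation are hypothetical; the actual project combines more sophisticated model reduction with sparse grid methods): a reduced basis is extracted from a few snapshots and new solutions are approximated in that basis.

<syntaxhighlight lang="python">
import numpy as np

def black_box(mu, x):
    """Hypothetical expensive solver output for parameter mu on grid x."""
    return np.sin(np.pi * x) / (1.0 + mu * x**2)

# Collect snapshots of the black box for a few parameter samples.
x = np.linspace(0.0, 1.0, 200)
snapshots = np.column_stack([black_box(mu, x) for mu in np.linspace(0.1, 2.0, 20)])

# POD basis: dominant left singular vectors of the snapshot matrix.
u, s, _ = np.linalg.svd(snapshots, full_matrices=False)
basis = u[:, :3]                    # keep three dominant modes

# Projection error of the reduced basis for an unseen parameter value.
new = black_box(1.234, x)
reconstruction = basis @ (basis.T @ new)
print("relative error with 3 modes:",
      np.linalg.norm(new - reconstruction) / np.linalg.norm(new))
</syntaxhighlight>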
 
 
 
 
 
 
 
= EU Horizon 2020 =

== An Exascale Hyperbolic PDE Engine ([http://www.exahype.eu/ ExaHyPE]) ==

{| class="wikitable"
|-
| '''Project type''' || EU Horizon 2020, FET-PROACTIVE call ''Towards Exascale High Performance Computing'' (FETHPC)
|-
| '''Funded by''' || European Union’s Horizon 2020 research and innovation programme
|-
| '''Begin''' || October 2015
|-
| '''End''' || September 2019
|-
| '''Leader''' || [[Univ.-Prof. Dr. Michael Bader]]
|-
| '''Staff''' || [[Dr. Anne Reinarz]], [[Jean-Matthieu Gallard]], [[Leonhard Rannabauer]], [[Philipp Samfass, M.Sc.]]; former staff: [[Dr. rer. nat. Vasco Varduhn]], [[Angelika Schwarz, M.Sc.]]
|-
| '''Contact person''' || [[Univ.-Prof. Dr. Michael Bader]]
|-
| '''Co-operation partner''' || Prof. Michael Dumbser (Univ. Trento), Dr. Tobias Weinzierl (Durham University), Prof. Dr. Luciano Rezzolla (Frankfurt Institute for Advanced Studies), Prof. Dr. Heiner Igel and Dr. Alice Gabriel (LMU München), Robert Iberl (BayFor), Dr. Alexander Moskovsky (RSC Group); Prof. Dr. Arndt Bode (LRZ)
|}

'''Brief description'''<br><br>

The Horizon 2020 project ExaHyPE is an international collaborative project to develop an exascale-ready engine to solve hyperbolic partial differential equations. The engine relies on high-order ADER-DG discretization (Arbitrary high-order DERivative Discontinuous Galerkin) on dynamically adaptive Cartesian meshes (building on the [http://www.peano-framework.org/ Peano framework] for adaptive mesh refinement).

ExaHyPE focuses on grand challenges from computational seismology (earthquake simulation) and computational astrophysics (simulation of binary neutron star systems), but at the same time aims at developing a flexible engine to solve a wide range of hyperbolic PDE systems.

See the [http://www.exahype.eu ExaHyPE website] for further information!

== Centre of Excellence for Exascale Supercomputing in the area of the Solid Earth ([http://www.cheese-coe.eu/ ChEESE]) ==

{| class="wikitable"
|-
| '''Project type''' || EU Horizon 2020, INFRAEDI-02-2018 call ''Centres of Excellence on HPC''
|-
| '''Funded by''' || European Union’s Horizon 2020 research and innovation programme
|-
| '''Begin''' || November 2018
|-
| '''End''' || October 2021
|-
| '''Leader''' || Barcelona Supercomputing Centre
|-
| '''Staff''' || [[Ravil Dorozhinskii, M.Sc.]], [[Lukas Krenz, M.Sc.]], [[Leonhard Rannabauer, M.Sc.]], [[Jean-Matthieu Gallard, M.Sc.]]
|-
| '''Contact person''' || [[Univ.-Prof. Dr. Michael Bader]]
|-
| '''Co-operation partner''' || 14 participating institutes, see the [https://cheese-coe.eu/ ChEESE website] for details.
|}

'''Brief description'''<br><br>

The ChEESE Centre of Excellence will prepare flagship codes and enable services for exascale supercomputing in the area of Solid Earth (SE). ChEESE will harness European institutions in charge of operational monitoring networks, tier-0 supercomputing centres, academia, hardware developers, and third parties from SMEs, industry, and public governance. The scientific ambition is to prepare ten flagship codes to address Exascale Computing Challenge (ECC) problems in computational seismology, magnetohydrodynamics, physical volcanology, tsunamis, and data analysis and predictive techniques for earthquake and volcano monitoring.

SCCS contributes [http://www.seissol.org SeisSol] and [http://www.exahype.org ExaHyPE] as flagship codes in ChEESE.

See the [http://www.cheese-coe.eu ChEESE website] for further information!
  
== [https://enerxico-project.eu/ ENERXICO] - Supercomputing and Energy for Mexico ==

{| class="wikitable"
|-
| '''Project type''' || EU Horizon 2020, call ''FETHPC-01-2018 International Cooperation on HPC''
|-
| '''Funded by''' || European Union’s Horizon 2020 research and innovation programme
|-
| '''Begin''' || June 2019
|-
| '''End''' || June 2021
|-
| '''Leader''' || Barcelona Supercomputing Centre
|-
| '''Staff''' || [[Dr. Anne Reinarz]], [[Sebastian Wolf, M.Sc.]]
|-
| '''Contact person''' || [[Univ.-Prof. Dr. Michael Bader]]
|-
| '''Co-operation partner''' || 16 participating institutes; see the [https://enerxico-project.eu/ ENERXICO website] for details.
|}

'''Brief description'''<br><br>

ENERXICO is a collaborative research and innovation action that fosters collaboration between Europe and Mexico in supercomputing. The project develops performance simulation tools that require exascale HPC and data-intensive algorithms for different energy sources: wind energy production, efficient combustion systems for biomass-derived fuels (biogas), and exploration geophysics for hydrocarbon reservoirs.

SCCS is mainly concerned with large-scale seismic simulations based on [http://www.seissol.org SeisSol] and [http://www.exahype.org ExaHyPE].

See the [https://enerxico-project.eu/ ENERXICO website] for further information!
  
= BMBF: Federal Ministry of Education and Research =

== ELPA-AEO - Eigenwert-Löser für PetaFlop-Anwendungen: Algorithmische Erweiterungen und Optimierungen ==

{| class="wikitable"
|-
| '''Project type''' || Fördermassnahme IKT 2020 - Höchstleistungsrechnen im Förderbereich: HPC
|-
| '''Funded by''' || BMBF
|-
| '''Begin''' || 2016
|-
| '''End''' || 2018
|-
| '''Leader''' || Dr. Hermann Lederer, [[Univ.-Prof. Dr. Hans-Joachim Bungartz]]
|-
| '''Staff''' || [[Univ.-Prof. Dr. Thomas Huckle]], [[Michael Rippl, M.Sc.]]
|-
| '''Contact person''' || [[Univ.-Prof. Dr. Thomas Huckle]]
|-
| '''Co-operation partner''' || Dr. Hermann Lederer (Rechenzentrum MPG Garching), Prof. Dr. Bruno Lang (Universität Wuppertal), Prof. Dr. Karsten Reuter (Chemie, TUM), Dr. Christoph Scheuerer (TUM-Chemie), Fritz-Haber-Institut Berlin
|}

'''Brief description'''<br><br>

The overarching goal is to increase the efficiency of supercomputer simulations in which the solution of the eigenvalue problem for dense and band-structured symmetric matrices becomes a decisive contribution. This is the case in particular for problems from materials research, biomolecular research, and structural dynamics. Building on the results of the ELPA project, this project aims to make even larger problems tractable, to reduce the computational effort associated with such simulations, and, at a prescribed accuracy and with continued high software scalability, to reduce resource usage and energy consumption.
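To make the targeted problem class concrete, here is a minimal serial sketch of a dense symmetric eigenproblem <math>A v = \lambda v</math>, using the header-only Eigen library as a stand-in for illustration (ELPA itself provides distributed-memory solvers for this task):

<syntaxhighlight lang="cpp">
// Dense symmetric eigenproblem A v = lambda v -- the computational kernel
// that ELPA-AEO accelerates at scale. Serial illustration with Eigen.
#include <Eigen/Dense>
#include <iostream>

int main() {
  const int n = 500;
  Eigen::MatrixXd M = Eigen::MatrixXd::Random(n, n);
  Eigen::MatrixXd A = 0.5 * (M + M.transpose());        // make A symmetric
  Eigen::SelfAdjointEigenSolver<Eigen::MatrixXd> solver(A);
  std::cout << "smallest eigenvalue: " << solver.eigenvalues()(0) << "\n";
  return 0;
}
</syntaxhighlight>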
 
  
 
== TaLPas: Task-basierte Lastverteilung und Auto-Tuning in der Partikelsimulation ==

{| class="wikitable"
|-
| '''Project type''' || BMBF Programm: Grundlagenorientierte Forschung für HPC-Software im Hoch- und Höchstleistungsrechnen
|-
| '''Funded by''' || BMBF
|-
| '''Begin''' || January 2017
|-
| '''End''' || June 2020
|-
| '''Leader''' || [https://www5.in.tum.de/wiki/index.php/Univ.-Prof._Dr._Hans-Joachim_Bungartz Univ.-Prof. Dr. Hans-Joachim Bungartz, TUM], [https://wr.informatik.uni-hamburg.de/people/philipp_neumann Philipp Neumann, Universität Hamburg]
|-
| '''Staff''' || [[Univ.-Prof. Dr. Hans-Joachim Bungartz]], [[Nikola Tchipev, M.Sc.]], [[Steffen Seckler, M.Sc. (hons)]]
|-
| '''Contact person''' || [[Nikola Tchipev, M.Sc.]]
|-
| '''Co-operation partner''' || [https://wr.informatik.uni-hamburg.de/people/philipp_neumann Philipp Neumann, Universität Hamburg], [https://www.hlrs.de/de/about-us/organization/people/person/glass/ Colin W. Glass, HLRS/Universität Stuttgart], [https://www.visus.uni-stuttgart.de/institut/personen/wissenschaftliche-mitarbeiter/guido-reina.html Guido Reina, VISUS/Universität Stuttgart], [https://www.parallel.informatik.tu-darmstadt.de/team/felix-wolf/ Felix Wolf, TU Darmstadt], [http://thermo.mv.uni-kl.de/laboratory/staff/martin-horsch/ Martin Horsch, TU Kaiserslautern], [http://thet.uni-paderborn.de/mitarbeiter/vrabec/ Jadran Vrabec, Universität Paderborn]
|}

'''Brief description'''<br><br>

The main goal of TaLPas is to provide a solution for the fast and robust simulation of many, potentially dependent particle systems in a distributed environment. This is required in many applications, including, but not limited to:
* sampling in molecular dynamics: so-called "rare events", e.g. droplet formation, require a multitude of molecular dynamics simulations to investigate the actual conditions of phase transition,
* uncertainty quantification: various simulations are performed using different parametrisations to investigate the sensitivity of the parameters on the actual solution,
* parameter identification: given, e.g., a set of experimental data and a molecular model, an optimal set of model parameters needs to be found to fit the model to the experiment.

For this purpose, TaLPas targets
* the development of innovative auto-tuning based particle simulation software in form of an open-source library to leverage optimal node-level performance. This will guarantee an optimal time-to-solution for small- to mid-sized particle simulations,
* the development of a scalable task scheduler to yield an optimal distribution of potentially dependent simulation tasks on available HPC compute resources,
* the combination of both auto-tuning based particle simulation and scalable task scheduler, augmented by an approach to resilience. This will guarantee robust, i.e. fault-tolerant, sampling evaluations on peta- and future exascale platforms.

For more details, see the [https://wr.informatik.uni-hamburg.de/research/projects/talpas/start project website].
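The scheduling idea behind the second bullet list above can be pictured with a small stand-alone sketch (illustrative only, not TaLPas code; <code>runSimulation</code> is a hypothetical stand-in for one particle simulation): worker threads pull independent simulation tasks from a shared counter, so faster workers automatically process more tasks.

<syntaxhighlight lang="cpp">
#include <algorithm>
#include <atomic>
#include <cmath>
#include <cstdio>
#include <thread>
#include <vector>

// Hypothetical stand-in for one (expensive) particle simulation run.
double runSimulation(double parameter) {
  return std::sin(parameter) * parameter;
}

int main() {
  std::vector<double> parameters;                  // e.g. sampling parameters
  for (int i = 0; i < 100; ++i) parameters.push_back(0.1 * i);
  std::vector<double> results(parameters.size());
  std::atomic<std::size_t> next{0};                // shared task counter

  auto worker = [&]() {
    // Each worker grabs the next unprocessed task; tasks are independent.
    for (std::size_t i; (i = next.fetch_add(1)) < parameters.size();)
      results[i] = runSimulation(parameters[i]);
  };

  const unsigned nThreads = std::max(2u, std::thread::hardware_concurrency());
  std::vector<std::thread> pool;
  for (unsigned t = 0; t < nThreads; ++t) pool.emplace_back(worker);
  for (auto& t : pool) t.join();

  std::printf("first result: %f\n", results[0]);
  return 0;
}
</syntaxhighlight>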
 
  
== Chameleon: Eine Taskbasierte Programmierumgebung zur Entwicklung reaktiver HPC Anwendungen ==

{| class="wikitable"
|-
| '''Project type''' || BMBF Programm: Grundlagenorientierte Forschung für HPC-Software im Hoch- und Höchstleistungsrechnen
|-
| '''Funded by''' || BMBF
|-
| '''Begin''' || April 2017
|-
| '''End''' || March 2020
|-
| '''Leader''' || [http://www.nm.ifi.lmu.de/~fuerling/ Dr. Karl Fürlinger, LMU], [http://www.nm.ifi.lmu.de/~kranzlm/ Prof. Dr. Dieter Kranzlmüller, LMU]
|-
| '''Staff''' || [[Michael Bader|Univ.-Prof. Dr. Michael Bader]], [[Philipp Samfass]], [[Carsten Uphoff]]
|-
| '''Contact person''' || [[Michael Bader|Univ.-Prof. Dr. Michael Bader]]
|-
| '''Co-operation partner''' || [http://www.itc.rwth-aachen.de/go/id/epvp/gguid/0xAAB88A63E195DA4E98AE006286D9839F/allou/1/ Dr. Christian Terboven, RWTH Aachen University]
|}

'''Brief description'''<br><br>

The Chameleon project develops a task-based programming environment for reactive applications. "Reactive" means that programmers can let applications react to changing hardware conditions. Chameleon envisages three components that, together with MPI and OpenMP, facilitate reactive applications:
(1) a task-based environment that allows applications to better tolerate idle times and load imbalances across nodes; it will be implemented by extending the established programming models MPI and OpenMP;
(2) a component for "performance introspection", which allows applications and the runtime environment to obtain information on current, dynamic performance properties (using techniques and tools from performance analysis) in order to improve performance at runtime;
(3) an analysis component that brings together and further processes measured data and runtime information; based on this analysis, it provides applications with methods and services to improve decisions on repartitioning, task migration, etc.

See the [http://www.chameleon-hpc.org/ Chameleon project website] for further information.
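For orientation, the baseline that such an environment improves on can be sketched with plain MPI and OpenMP tasks (a generic illustration, deliberately not the Chameleon API): each rank executes its local work items as tasks, and the per-rank time imbalance measured at the end is exactly the information a reactive runtime could act on, e.g. by migrating tasks.

<syntaxhighlight lang="cpp">
#include <mpi.h>
#include <cstdio>
#include <vector>

// Stand-in for a work item of rank-dependent cost (illustration only).
double process(int cost) {
  double s = 0.0;
  for (long i = 0; i < 1000000L * (cost + 1); ++i) s += 1e-9;
  return s;
}

int main(int argc, char** argv) {
  MPI_Init(&argc, &argv);
  int rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  std::vector<int> items(64, rank % 4);        // deliberately imbalanced load
  double sum = 0.0, t0 = MPI_Wtime();
  #pragma omp parallel
  #pragma omp single
  for (int it : items) {
    #pragma omp task firstprivate(it) shared(sum)
    {
      double r = process(it);
      #pragma omp atomic
      sum += r;
    }
  }                                            // implicit barrier waits for tasks
  double tLocal = MPI_Wtime() - t0, tMax = 0.0;
  MPI_Reduce(&tLocal, &tMax, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
  if (rank == 0) std::printf("slowest rank took %.3f s\n", tMax);
  MPI_Finalize();
  return 0;
}
</syntaxhighlight>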
 
 
  
= BMWi: Federal Ministry for Economic Affairs and Energy =

== ATHLET-preCICE - Erweiterung von ATHLET durch die allgemeine Kopplungsschnittstelle preCICE für die Simulation von Multiphysikproblemen in der Reaktorsicherheit ==

{| class="wikitable"
|-
| '''Project type''' || PT-GRS Reaktorsicherheitsforschung im Förderbereich Transienten und Unfallabläufe
|-
| '''Funded by''' || BMWi
|-
| '''Begin''' || 2019
|-
| '''End''' || 2022
|-
| '''Leader''' || [[Dr. rer. nat. Benjamin Uekermann]], [[Univ.-Prof. Dr. Hans-Joachim Bungartz]]
|-
| '''Staff''' || [[Gerasimos Chourdakis, M.Sc.]]
|-
| '''Contact person''' || [[Dr. rer. nat. Benjamin Uekermann]]
|-
| '''Co-operation partner''' || Dr.-Ing. Fabian Weyermann, Gesellschaft für Anlagen- und Reaktorsicherheit (GRS) gGmbH
|}

'''Brief description'''<br><br>

With the passive safety systems of Generation 3+ reactors, the cooling circuit and the containment can no longer be considered separately. For building condensers, for example, physical effects of both systems are strongly coupled: thermo-hydraulics in the pipes, heat conduction in complicated three-dimensional structures (cooling fins), and a convective gas or steam flow on the outside of the condenser. Simulating the overall system is therefore a multiphysics problem, which requires coupling several simulation codes. A general, code-independent coupling can be realised very efficiently with the open-source coupling library preCICE.

Within this project we will develop a preCICE interface for AC2, to be implemented first for the ATHLET module. Since a large number of simulation codes such as ANSYS Fluent, COMSOL, OpenFOAM, CalculiX, or Code_Aster already provide a preCICE interface, all of these programs would immediately become available for coupled analyses with ATHLET. A further advantage of this interface is that it allows not only two codes to be coupled simultaneously, but also three or more; only this makes the detailed simulation of the building-condenser example possible. Since similar multiphysics problems also arise in the modular reactors that many countries regard as the future of nuclear technology, the intended implementation of a preCICE interface in ATHLET is a necessary step towards keeping ATHLET fit for the future.
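The structure of such a preCICE adapter is essentially a short time loop around the library's API. The following minimal sketch assumes the preCICE v2 C++ API; the participant, mesh, and data names are hypothetical placeholders, not project code:

<syntaxhighlight lang="cpp">
#include <precice/SolverInterface.hpp>

int main() {
  // Participant, mesh and data names are hypothetical placeholders; they
  // must match the entries in precice-config.xml.
  precice::SolverInterface precice("ThermoHydraulicsSolver",
                                   "precice-config.xml", /*rank=*/0, /*size=*/1);
  const int meshID = precice.getMeshID("CouplingMesh");
  double position[3] = {0.0, 0.0, 0.0};
  const int vertexID = precice.setMeshVertex(meshID, position);
  const int tempID = precice.getDataID("Temperature", meshID);

  double temperature = 300.0;
  double dt = precice.initialize();          // max admissible time-step size
  while (precice.isCouplingOngoing()) {
    // ... advance the solver's own physics by dt, updating `temperature` ...
    precice.writeScalarData(tempID, vertexID, temperature);
    dt = precice.advance(dt);                // exchange data, obtain next dt
  }
  precice.finalize();
  return 0;
}
</syntaxhighlight>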
 
  
= HydroBITS: Code Optimisation and Simulation for Bavarian Water Supply and Distribution =

{| class="wikitable"
|-
| '''Project type''' || Research Project
|-
| '''Funded by''' || Bavarian State Ministry of the Environment and Consumer Protection / LfU
|-
| '''Begin''' || January 2018
|-
| '''End''' || December 2021
|-
| '''Leader''' || [[Univ.-Prof. Dr. Hans-Joachim Bungartz]]
|-
| '''Staff''' || [[Dr. rer. nat. Tobias Neckel]], [[Ivana Jovanovic, M.Sc. (hons)]]
|-
| '''Contact person''' || [[Dr. rer. nat. Tobias Neckel]]
|-
| '''Co-operation partner''' || Dr. Jens Weismüller, Dr. Wolfgang Kurtz, LRZ
|}

'''Brief description'''<br><br>

In [https://www.lrz.de/forschung/projekte/forschung-e-infra/HydroBITS/ HydroBITS], the existing IT structures of various institutions involved in water supply and distribution in Bavaria are analysed, and the groundwork is laid for modernising the corresponding IT infrastructure, which has become necessary due to various technological developments of recent years. In cooperation with the LRZ, workflows as well as simulation models and data of the Bavarian Landesamt für Umwelt are analysed, and a demonstrator platform with a prototype of a modern IT structure is being created.
= Helmholtz Gemeinschaft: MUnich School of Data Science (MUDS): Integrated Data Analysis 2.0 =

{| class="wikitable"
|-
| '''Project type''' || Research Project
|-
| '''Funded by''' || Helmholtz Gemeinschaft
|-
| '''Begin''' || September 2019
|-
| '''End''' || August 2023
|-
| '''Leader''' || [[Univ.-Prof. Dr. Hans-Joachim Bungartz]], Prof. Frank Jenko (MPP)
|-
| '''Staff''' || [[Dr. rer. nat. Tobias Neckel]], [[Ravi Kislaya, M.Sc.]]
|-
| '''Contact person''' || [[Dr. rer. nat. Tobias Neckel]]
|-
| '''Co-operation partner''' || Michael Bergmann (MPP)
|}

'''Brief description'''<br><br>

In this project of [https://www.mu-ds.de/ MUDS], the existing approaches for Bayesian inversion in the context of fusion plasma simulations (the so-called Integrated Data Analysis) will be generalised and extended to incorporate (a) stochastic information for the forward propagation of uncertainties and (b) simulation results of plasma microturbulence back into the inversion process. In particular, the code [http://genecode.org/ GENE] will be used.
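The underlying inversion setting can be summarised in one standard formula (textbook Bayes, added here for illustration and not taken from the project description): plasma parameters <math>\theta</math> are inferred from measurements <math>d</math> via

<math>p(\theta \mid d) = \frac{p(d \mid \theta)\, p(\theta)}{p(d)} \propto p(d \mid \theta)\, p(\theta),</math>

where evaluating the likelihood <math>p(d \mid \theta)</math> requires forward simulations (here, e.g., runs of GENE), and the prior <math>p(\theta)</math> encodes the previously available knowledge.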
= [https://www.konwihr.uni-erlangen.de/about-konwihr.shtml KONWIHR]: The Bavarian Competence Network for Technical and Scientific High Performance Computing =

== ProPE-AL: Process-oriented Performance Engineering Service Infrastructure for Scientific Software at German HPC Centers - Algorithms ==

{| class="wikitable"
|-
| '''Project type''' || KONWIHR
|-
| '''Funded by''' || KONWIHR
|-
| '''Begin''' || October 2017
|-
| '''End''' || September 2020
|-
| '''Leader''' || [[Univ.-Prof. Dr. Michael Bader]], [[Univ.-Prof. Dr. Hans-Joachim Bungartz]]
|-
| '''Staff''' || [[Hayden Liu Weng, M.Sc. (hons)]]
|-
| '''Contact person''' || [[Univ.-Prof. Dr. Michael Bader]], [[Univ.-Prof. Dr. Hans-Joachim Bungartz]]
|-
| '''Co-operation partner''' || [https://hpc.fau.de/person/gerhard-wellein/ Univ.-Prof. Dr. Gerhard Wellein, FAU Erlangen-Nürnberg], [http://www.itc.rwth-aachen.de/cms/IT-Center/IT-Center/Team/~epvp/Mitarbeiter-CAMPUS-/?gguid=0xB8B55109186DA749BE27700404DA28D8&allou=1 Univ.-Prof. Dr. Matthias Müller, RWTH Aachen], [https://tu-dresden.de/zih/die-einrichtung/struktur/wolfgang-e-nagel Univ.-Prof. Dr. Wolfgang Nagel, TU Dresden]
|}

'''Brief description'''<br><br>

As part of the DFG call "Performance Engineering for Scientific Software", the project partners G. Wellein (FAU Erlangen-Nürnberg), M. Müller (RWTH Aachen), and W. Nagel (TU Dresden) initiated the project [https://gauss-allianz.de/en/project/title/ProPE "Process-oriented Performance Engineering Service Infrastructure for Scientific Software at German HPC Centers" (acronym ProPE)]. The project aims at implementing performance engineering (PE) as a well-defined, structured process to improve the resource efficiency of programs. This structured PE process should allow for target-oriented optimization and parallelization of application codes, guided by performance patterns and performance models. The associated KONWIHR project ProPE-Algorithms (ProPE-AL) adds a further algorithmic optimization step to this process, taking into account that the best possible sustainable use of HPC resources by application codes is not only a question of the efficiency of the implementation, but also of the efficiency of the (numerical) algorithms the codes are based on.
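One widely used example of such a performance model (the roofline model; added here for illustration, not quoted from the project) bounds the attainable performance <math>P</math> of a loop kernel with arithmetic intensity <math>I</math> (flops per byte) by

<math>P = \min\left(P_\text{peak},\; I \cdot b_s\right),</math>

with <math>P_\text{peak}</math> the peak compute performance and <math>b_s</math> the sustained memory bandwidth. Whether a kernel sits on the bandwidth-limited or compute-limited side of this bound determines which optimizations the PE process should prioritise.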
= Volkswagen Stiftung: ASCETE, ASCETE-II (Advanced Simulation of Coupled Earthquake-Tsunami Events) =

{| class="wikitable"
|-
| '''Project type''' || Call "Extreme Events: Modelling, Analysis and Prediction"
|-
| '''Funded by''' || Volkswagen Stiftung
|-
| '''Begin''' || February 2012
|-
| '''End''' || December 2019
|-
| '''Leader''' || [http://www.klimacampus.de/joernbehrens.html Univ.-Prof. Dr. Jörn Behrens] (KlimaCampus, Univ. Hamburg)
|-
| '''Staff''' || [[Leonhard Rannabauer, M.Sc.]], [[Carsten Uphoff, M.Sc.|Carsten Uphoff]]; former staff: [[Alexander Breuer]], [[Kaveh Rahnema]]
|-
| '''Contact person''' || [[Michael Bader|Univ.-Prof. Dr. Michael Bader]]
|-
| '''Co-operation partner''' || [http://www.klimacampus.de/joernbehrens.html Univ.-Prof. Dr. Jörn Behrens] (KlimaCampus, Univ. Hamburg), [http://www.geophysik.uni-muenchen.de/Members/igel Univ.-Prof. Dr. Heiner Igel], [http://www.geophysik.uni-muenchen.de/Members/kaeser Dr. Martin Käser], [http://www.geophysik.uni-muenchen.de/Members/pelties Dr. Christian Pelties], [http://www.geophysik.uni-muenchen.de/Members/gabriel Dr. Alice-Agnes Gabriel] (all: GeoPhysics, Univ. München), [http://www.seg2.ethz.ch/dalguer/ Dr. Luis Angel Dalguer], [http://www.seismo.ethz.ch/research/groups/comp/people/vylona/index Dr. Ylona van Dinther] (ETH Zürich, Swiss Seismological Service). <br>[http://www.ascete.de/ see official ASCETE webpage]
|}

'''Brief description'''<br><br>

Earthquakes and tsunamis are among the most dangerous natural catastrophes and can cause large numbers of fatalities and severe economic loss in a single, unexpected extreme event, as shown in Sumatra in 2004, Samoa in 2009, Haiti in 2010, and Japan in 2011. Both phenomena are consequences of a complex system of interactions between tectonic stress, fracture mechanics, rock friction, rupture dynamics, fault geometry, ocean bathymetry, and coastline geometry. The ASCETE project forms an interdisciplinary research consortium that, for the first time, couples the most advanced simulation technologies for earthquake rupture dynamics and tsunami propagation in order to understand the fundamental conditions of tsunami generation. To our knowledge, tsunami models that consider the fully dynamic rupture process coupled to hydrodynamic models have not been investigated yet; the project is therefore original and unique in its character, with the potential to gain insight into the underlying physics of earthquakes capable of generating devastating tsunamis.

See the [http://www.ascete.de/ ASCETE website] for further information.
  
= Intel Parallel Computing Center: Extreme Scaling on x86/MIC/KNL (ExScaMIC) =

{| class="wikitable"
|-
| '''Project type''' || Intel Parallel Computing Center
|-
| '''Funded by''' || Intel
|-
| '''Begin''' || July 2014
|-
| '''End''' || October 2018
|-
| '''Leader''' || [[Univ.-Prof. Dr. Michael Bader]], [[Univ.-Prof. Dr. Hans-Joachim Bungartz]], [https://www.lrz.de/persons/bode_arndt/ Univ.-Prof. Dr. Arndt Bode]
|-
| '''Staff''' || [[Nikola Tchipev, M.Sc.|Nikola Tchipev]], [[Steffen Seckler]], [[Carsten Uphoff]], [[Sebastian Rettenberger]]; former staff: [[Alexander Breuer]]
|-
| '''Contact person''' || [[Univ.-Prof. Dr. Michael Bader]]
|-
| '''Co-operation partner''' || [http://www.lrz.de/ Leibniz Supercomputing Centre]
|}

'''Brief description'''<br><br>

The project optimizes four established or upcoming CSE community codes for Intel-based supercomputers. We assume a target platform that offers several hundred PetaFlop/s based on Intel's x86 (including Intel® Xeon Phi™) architecture. To prepare simulation software for such platforms, we tackle two expected major challenges: achieving a high fraction of the available node-level performance on (shared-memory) compute nodes, and scaling this performance up to the range of 10,000 to 100,000 compute nodes.

We examine four applications from different areas of science and engineering: earthquake simulation and seismic wave propagation with the ADER-DG code SeisSol, simulation of cosmological structure formation using GADGET, the molecular dynamics code ls1 mardyn developed for applications in chemical engineering, and the software framework SG++ to tackle high-dimensional problems in data mining or financial mathematics (using sparse grids). While specifically addressing the Xeon Phi™ (co-)processor architectures, the project tackles fundamental challenges that are relevant for most supercomputing architectures, such as parallelism on multiple levels (nodes, cores, hardware threads per core, data parallelism) and compute cores that offer strong SIMD capabilities with increasing vector width.

While the first project phase (2014-2016) addressed the Intel Xeon Phi coprocessor (Knights Corner), the second project phase (2016-2018) specifically focuses on the Xeon Phi as a stand-alone processor (Knights Landing architecture).
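The node-level data parallelism mentioned above can be pictured with a toy kernel (illustrative only, not project code): the OpenMP SIMD construct asks the compiler to map one loop iteration per vector lane, which is exactly the kind of parallelism that wide vector units reward.

<syntaxhighlight lang="cpp">
#include <cstdio>
#include <vector>

int main() {
  const int n = 1 << 20;
  std::vector<float> x(n, 1.0f), y(n, 2.0f);
  const float a = 0.5f;

  // Vectorized SAXPY: compile with -fopenmp (or -fopenmp-simd).
  #pragma omp simd
  for (int i = 0; i < n; ++i)
    y[i] = a * x[i] + y[i];      // one fused multiply-add per element

  std::printf("y[0] = %f\n", y[0]);
  return 0;
}
</syntaxhighlight>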
= Elite Network of Bavaria (ENB) =

== Bavarian Graduate School of Computational Engineering (BGCE) ==

[http://www.bgce.de Website of the BGCE]

{| class="wikitable"
|-
| '''Project type''' || Elite Study Program
|-
| '''Funded by''' || Elite Network of Bavaria, TUM, FAU
|-
| '''Begin''' || April 2005
|-
| '''End''' || April 2025
|-
| '''Leader''' || [[Univ.-Prof. Dr. Hans-Joachim Bungartz]]
|-
| '''Staff''' || [[Dr. rer. nat. Tobias Neckel]], [[Michael Rippl, M.Sc. (hons)]], [[Benjamin Rüth, M.Sc. (hons)]]
|-
| '''Contact person''' || [[Dr. rer. nat. Tobias Neckel]]
|-
| '''Co-operation partner''' || International Master's Program Computational Science and Engineering (TUM)<br>
International Master's Program Computational Mechanics (TUM)<br>
International Master's Program Computational Engineering (U Erlangen)
|}

Students who master the regular program with an above-average grade, and successfully finish the Honours program as well, earn the academic degree "Master of Science with Honours".
  
= International Graduate School of Science and Engineering (IGSSE) =

== An Exascale Library for Numerically Inspired Machine Learning [http://www.igsse.gs.tum.de/index.php?id=262 (ExaNIML)] ==

{| class="wikitable"
|-
| '''Project type''' || International IGSSE project
|-
| '''Funded by''' || International Graduate School of Science and Engineering
|-
| '''Begin''' || June 2018
|-
| '''End''' || December 2020
|-
| '''Leader''' || [[Univ.-Prof. Dr. Hans-Joachim Bungartz]]
|-
| '''Staff''' || [[Dr. rer. nat. Tobias Neckel]], [[Severin Reiz]]
|-
| '''Contact person''' || [[Severin Reiz]]
|-
| '''Co-operation partner''' || The University of Texas at Austin<br>
Institute for Computational Engineering and Sciences
|}

'''Brief description'''<br><br>

There is a significant gap between algorithms and software in data analytics and those in Computational Science and Engineering (CSE) concerning their maturity on high-performance computing (HPC) systems. Given that data analytics tasks account for a rapidly growing share of supercomputer usage, this gap is a serious issue. This project aims to bridge the gap for a number of important tasks arising, e.g., in a machine learning (ML) context: density estimation and high-dimensional approximation (for example, (semi-supervised) classification).

To this end, we aim to (1) design and analyze novel algorithms that combine two powerful numerical methods, sparse grids and kernel methods, and (2) design and implement an HPC library that provides an open-source implementation of these algorithms and supports heterogeneous distributed-memory architectures. The attractiveness of sparse grids is mainly due to their high-quality accuracy guarantees and their foundation on rigorous approximation theory; their shortcoming is that they require (regular) Cartesian grids. Kernel methods do not require Cartesian grids, but their approximation properties can be suboptimal in practice, and they require regularization whose parameters can be expensive to determine.

Our main idea is to use kernel methods for manifold learning and to combine them with sparse grids to define approximations on the manifold. Such high-dimensional approximation problems find applications in model reduction, uncertainty quantification (UQ), and ML.
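As a minimal picture of the density-estimation task named above (an illustrative sketch; the project's actual algorithms combine kernel methods with sparse grids at HPC scale), a one-dimensional Gaussian kernel density estimator looks as follows:

<syntaxhighlight lang="cpp">
#include <cmath>
#include <cstdio>
#include <vector>

// Gaussian kernel density estimate at point x with bandwidth h.
double kde(const std::vector<double>& samples, double x, double h) {
  const double pi = std::acos(-1.0);
  const double norm = 1.0 / (std::sqrt(2.0 * pi) * h * samples.size());
  double sum = 0.0;
  for (double s : samples) {
    const double u = (x - s) / h;
    sum += std::exp(-0.5 * u * u);   // Gaussian kernel centred at sample s
  }
  return norm * sum;
}

int main() {
  const std::vector<double> samples = {0.1, 0.3, 0.35, 0.8, 1.2};
  std::printf("estimated density at 0.5: %f\n", kde(samples, 0.5, 0.2));
  return 0;
}
</syntaxhighlight>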
 
 
 
 
 
 
 
 
[[Category:Research]]
 


= DFG: German Research Foundation =

== Research Software Sustainability ==

=== preDOM – Domestication of the Coupling Library preCICE ===

{| class="wikitable"
|-
| '''Funded by''' || DFG
|-
| '''Begin''' || 2018
|-
| '''End''' || 2021
|-
| '''Leader''' || [[Dr. rer. nat. Benjamin Uekermann]], [[Univ.-Prof. Dr. Hans-Joachim Bungartz]]
|-
| '''Staff''' ||
|-
| '''Contact person''' || [[Dr. rer. nat. Benjamin Uekermann]]
|}

'''Brief description'''<br><br>

The purpose of the project is to domesticate preCICE, i.e. to make preCICE usable without support by the developer team. To achieve this goal, the usability and documentation of preCICE have to be improved significantly. Marketing and sustainability strategies are required to build up awareness of and trust in the software in the community. In addition, best practices on how to make a scientific software prototype usable for a wide academic audience can be derived, and shall be applied to similar software projects.

Reference: preCICE Webpage, preCICE Source Code

=== SeisSol-CoCoReCS – SeisSol as a Community Code for Reproducible Computational Seismology ===

{| class="wikitable"
|-
| '''Funded by''' || DFG
|-
| '''Begin''' || 2018
|-
| '''End''' || 2021
|-
| '''Leader''' || [[Univ.-Prof. Dr. Michael Bader]], Dr. Anton Frank (LRZ), Dr. Alice-Agnes Gabriel (LMU)
|-
| '''Staff''' || [[Ravil Dorozhinskii, M.Sc.]], [[Lukas Krenz, M.Sc.]], [[Carsten Uphoff]]
|-
| '''Contact person''' || [[Univ.-Prof. Dr. Michael Bader]]
|}

'''Brief description'''<br><br>

The project is funded as part of DFG's initiative to support sustainable research software. In the CoCoReCS project, we will address several issues that impede a wider adoption of the earthquake simulation software SeisSol: improving the workflows for CAD and meshing, establishing better training and introductory material, and setting up an infrastructure to reproduce test cases, benchmarks, and user-provided simulation scenarios.

== Priority Program 1648 SPPEXA - Software for Exascale Computing ==

=== Coordination Project ===

{| class="wikitable"
|-
| '''Funded by''' || DFG
|-
| '''Begin''' || 2012
|-
| '''End''' || 2020
|-
| '''Leader''' || [[Univ.-Prof. Dr. Hans-Joachim Bungartz]]
|-
| '''Staff''' || [[Severin Reiz]]
|-
| '''Contact person''' || [[Univ.-Prof. Dr. Hans-Joachim Bungartz]]
|}

'''Brief description'''<br><br>

The Priority Programme (SPP) SPPEXA differs from other SPPs with respect to its genesis, its volume, its funding via DFG's Strategy Fund, the range of disciplines involved, and its clear strategic orientation towards a set of time-critical objectives. Despite its distributed structure, SPPEXA therefore also resembles a Collaborative Research Centre to a large extent. Its successful implementation and evolution require both more and more intense structural measures. The Coordination Project comprises all intended SPPEXA-wide activities, including steering and coordination, internal and international collaboration and networking, and educational activities.

Reference: Priority Program 1648 SPPEXA - Software for Exascale Computing

ExaFSA - Exascale Simulation of Fluid-Structure-Acoustics Interaction

Funded by DFG
Begin 2012
End 2019
Leader Prof. Dr. Miriam Mehl
Staff Dr. rer. nat. Benjamin Uekermann, Benjamin Rüth
Contact person Prof. Dr. Miriam Mehl

Brief description

In scientific computing, an increasing need for ever more detailed insights and optimization leads to improved models often including several physical effects described by different types of equations. The complexity of the corresponding solver algorithms and implementations typically leads to coupled simulations reusing existing software codes for different physical phenomena (multiphysics simulations) or for different parts of the simulation pipeline such as grid handling, matrix assembly, system solvers, and visualization. Accuracy requirements can only be met with a high spatial and temporal resolution making exascale computing a necessary technology to address runtime constraints for realistic scenarios. However, running a multicomponent simulation efficiently on massively parallel architectures is far more challenging than the parallelization of a single simulation code. Open questions range from suitable load balancing strategies over bottleneck-avoiding communication, interactive visualization for online analysis of results, synchronization of several components to parallel numerical coupling schemes. We intend to tackle these challenges for fluid-structure-acoustics interactions, which are extremely costly due to the large range of scales. Specifically, this requires innovative surface and volume coupling numerics between the different solvers as well as sophisticated dynamical load balancing and in-situ coupling and visualization methods.

Reference: ExaFSA Webpage, preCICE Webpage, preCICE Source Code
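
To make the partitioned-coupling idea concrete, the following is a minimal sketch of one solver's coupling loop, assuming the preCICE v2 Python bindings; the participant, mesh, and data names ("FluidSolver", "Fluid-Mesh", "Displacements", "Forces") are placeholders and must match a precice-config.xml (not shown), and the checkpointing needed for implicit coupling is omitted for brevity.

<syntaxhighlight lang="python">
import numpy as np
import precice

# Placeholder participant/config; rank 0 of 1 (serial solver for illustration).
interface = precice.Interface("FluidSolver", "precice-config.xml", 0, 1)

mesh_id = interface.get_mesh_id("Fluid-Mesh")
coords = np.array([[0.0, 0.0], [0.5, 0.0], [1.0, 0.0]])  # coupling-surface vertices
vertex_ids = interface.set_mesh_vertices(mesh_id, coords)
read_id = interface.get_data_id("Displacements", mesh_id)
write_id = interface.get_data_id("Forces", mesh_id)

dt = interface.initialize()
while interface.is_coupling_ongoing():
    displacements = interface.read_block_vector_data(read_id, vertex_ids)
    # ... advance the flow solver by dt using the received displacements ...
    forces = np.zeros_like(coords)  # placeholder for computed surface forces
    interface.write_block_vector_data(write_id, vertex_ids, forces)
    dt = interface.advance(dt)
interface.finalize()
</syntaxhighlight>

The point of this structure is that the existing solver keeps its own time loop; the coupling library only intercepts it at the surface, which is what makes reuse of legacy codes in a multiphysics setting feasible.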

=== EXAHD - An Exa-Scalable Two-Level Sparse Grid Approach for Higher-Dimensional Problems in Plasma Physics and Beyond ===

{| class="wikitable"
|-
| '''Funded by''' || DFG
|-
| '''Begin''' || 2012
|-
| '''End''' || 2020
|-
| '''Leader''' || Univ.-Prof. Dr. Hans-Joachim Bungartz
|-
| '''Staff''' || Michael Obersteiner
|-
| '''Contact person''' || Univ.-Prof. Dr. Hans-Joachim Bungartz
|}

'''Brief description'''<br><br>

Higher-dimensional problems (i.e., beyond four dimensions) appear in medicine, finance, and plasma physics, posing a challenge for tomorrow's HPC. As an example application, we consider turbulence simulations for plasma fusion with one of the leading codes, GENE, which promises to advance science on the way to carbon-free energy production. Higher-dimensional applications involve such a huge number of degrees of freedom that exascale computing becomes necessary, but mere domain decomposition approaches for their parallelization are infeasible since communication explodes with increasing dimensionality. Thus, to ensure high scalability beyond domain decomposition, a second major level of parallelism has to be provided. To this end, we propose to employ the sparse grid combination scheme, a model reduction approach for higher-dimensional problems. It computes the desired solution via a combination of smaller, anisotropic, and independent simulations, and thus provides this extra level of parallelization. In its randomized, asynchronous, and iterative version, it will break the communication bottleneck in exascale computing, achieving full scalability. Our two-level methodology enables novel approaches to scalability (ultra-scalable due to numerically decoupled subtasks), resilience (fault and outlier detection and even compensation without the need for recomputation), and load balancing (high-level compensation for insufficiencies on the application level).

Reference: Priority Program 1648 SPPEXA - Software for Exascale Computing
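
To illustrate the extra level of parallelism, here is a minimal 2D sketch of the classical combination formula, using trapezoidal quadrature on each anisotropic component grid as a stand-in for the independent component simulations (the project's actual setting, time-dependent GENE runs in five dimensions, is of course far more involved):

<syntaxhighlight lang="python">
import numpy as np

def grid_quadrature(f, l1, l2):
    # Trapezoidal rule on an anisotropic grid with 2**l1 x 2**l2 cells.
    x = np.linspace(0.0, 1.0, 2**l1 + 1)
    y = np.linspace(0.0, 1.0, 2**l2 + 1)
    X, Y = np.meshgrid(x, y, indexing="ij")
    return np.trapz(np.trapz(f(X, Y), y, axis=1), x)

def combination_scheme(f, n, lmin=1):
    # Classical 2D combination formula:
    #   f_n ~ sum_{l1+l2=n} f_{l1,l2} - sum_{l1+l2=n-1} f_{l1,l2}
    total = 0.0
    for l1 in range(lmin, n - lmin + 1):
        total += grid_quadrature(f, l1, n - l1)
    for l1 in range(lmin, n - 1 - lmin + 1):
        total -= grid_quadrature(f, l1, n - 1 - l1)
    return total

f = lambda x, y: np.exp(x * y)
print(combination_scheme(f, n=10))
</syntaxhighlight>

Every term in the two sums is computed on a small, independent grid, so each can run as a separate solver instance with no communication between them; this numerical decoupling is exactly what the project exploits for scalability and resilience.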

== SFB-TRR 89: Invasive Computing ==

{| class="wikitable"
|-
| '''Funded by''' || DFG
|-
| '''Begin''' || mid-2010
|-
| '''End''' || mid-2022 (3rd phase)
|-
| '''Leader''' || Univ.-Prof. Dr. Hans-Joachim Bungartz (D3), Univ.-Prof. Dr. Michael Bader (A4)
|-
| '''Staff''' || Santiago Narvaez, M.Sc., Emily Mo-Hellenbrand, M.Sc., Alexander Pöppl, M.Sc., Dr. rer. nat. Tobias Neckel, Dr. rer. nat. Philipp Neumann; former staff: Dr. rer. nat. Martin Schreiber
|-
| '''Contact person''' || Univ.-Prof. Dr. Hans-Joachim Bungartz (D3), Univ.-Prof. Dr. Michael Bader (A4)
|}

'''Brief description'''<br><br>

In the CRC/Transregio "Invasive Computing", we investigate a novel paradigm for designing and programming future parallel computing systems, called invasive computing. The main idea and novelty of invasive computing is to introduce resource-aware programming support: a given program gets the ability to explore and dynamically spread its computations to neighbouring processors, similar to a phase of invasion, and then to execute portions of code with a high degree of parallelism in parallel, based on the available (invasible) region of a given multi-processor architecture. Afterwards, once the program terminates or if the degree of parallelism should decrease again, the program may enter a retreat phase, deallocate resources, and resume execution, for example, sequentially on a single processor. To support this idea of self-adaptive and resource-aware programming, not only new programming concepts, languages, compilers, and operating systems are necessary; revolutionary architectural changes in the design of MPSoCs (Multi-Processor Systems-on-a-Chip) must also be provided, so as to efficiently support invasion, infection, and retreat operations with concepts for dynamic processor, interconnect, and memory reconfiguration.

Reference: Transregional Collaborative Research Centre 89 - Invasive Computing

=== A4: Design-Time Characterisation and Analysis of Invasive Algorithmic Patterns ===

=== D3: Invasion for High Performance Computing ===
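
The invade/infect/retreat cycle described above can be sketched as follows; this is a purely conceptual Python analogue with a hypothetical resource-manager API, whereas the actual project targets MPSoC hardware and dedicated languages and runtime systems (such as invasive extensions of X10) rather than Python.

<syntaxhighlight lang="python">
from multiprocessing import Pool
import os

class Claim:
    """A set of processing elements granted to the application (hypothetical)."""
    def __init__(self, n_cores):
        self.n_cores = n_cores

def invade(requested_cores):
    # Ask the (here: simulated) resource manager for cores; it may grant fewer.
    granted = min(requested_cores, os.cpu_count() or 1)
    return Claim(granted)

def infect(claim, kernel, work_items):
    # Execute the parallel kernel on the invaded resources.
    with Pool(processes=claim.n_cores) as pool:
        return pool.map(kernel, work_items)

def retreat(claim):
    # Release the resources so other applications can invade them.
    claim.n_cores = 0

def kernel(x):
    return x * x

if __name__ == "__main__":
    claim = invade(requested_cores=8)            # invasion phase
    results = infect(claim, kernel, range(100))  # infection phase
    retreat(claim)                               # retreat phase
</syntaxhighlight>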

= EU Horizon 2020 =

== An Exascale Hyperbolic PDE Engine (ExaHyPE) ==

{| class="wikitable"
|-
| '''Project type''' || EU Horizon 2020, FET-PROACTIVE call Towards Exascale High Performance Computing (FETHPC)
|-
| '''Funded by''' || European Union’s Horizon 2020 research and innovation programme
|-
| '''Begin''' || October 2015
|-
| '''End''' || September 2019
|-
| '''Leader''' || Univ.-Prof. Dr. Michael Bader
|-
| '''Staff''' || Dr. Anne Reinarz, Jean-Matthieu Gallard, Leonhard Rannabauer, Philipp Samfass, M.Sc.; former staff: Dr. rer. nat. Vasco Varduhn, Angelika Schwarz, M.Sc.
|-
| '''Contact person''' || Univ.-Prof. Dr. Michael Bader
|-
| '''Co-operation partner''' || Prof. Michael Dumbser (Univ. Trento), Dr. Tobias Weinzierl (Durham University), Prof. Dr. Luciano Rezzolla (Frankfurt Institute for Advanced Studies), Prof. Dr. Heiner Igel and Dr. Alice Gabriel (LMU München), Robert Iberl (BayFor), Dr. Alexander Moskovsky (RSC Group); Prof. Dr. Arndt Bode (LRZ)
|}

'''Brief description'''<br><br>

The Horizon 2020 project ExaHyPE is an international collaborative project to develop an exascale-ready engine to solve hyperbolic partial differential equations. The engine will rely on high-order ADER-DG discretization (Arbitrary high-order DERivative Discontinuous Galerkin) on dynamically adaptive Cartesian meshes (building on the Peano framework for adaptive mesh refinement).

ExaHyPE focuses on grand challenges from computational seismology (earthquake simulation) and computational astrophysics (simulation of binary neutron star systems), but at the same time aims at developing a flexible engine to solve a wide range of hyperbolic PDE systems.

See the ExaHyPE website for further information!
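
The engine idea rests on a separation of concerns: users specify the PDE system (fluxes, eigenvalues), while the engine provides mesh handling and time stepping. The sketch below mimics this split for the 1D linear advection equation, using a simple first-order finite-volume step with a Rusanov flux; it is a conceptual Python analogue only, not ExaHyPE's actual (C++) user API, and all names are illustrative.

<syntaxhighlight lang="python">
import numpy as np

# --- user side: define the hyperbolic PDE q_t + f(q)_x = 0 ---
def flux(q, a=1.0):
    """Flux of the 1D linear advection equation q_t + a q_x = 0."""
    return a * q

def max_eigenvalue(q, a=1.0):
    return abs(a)

# --- engine side: generic mesh traversal and time stepping ---
def engine_step(q, dx, cfl=0.9):
    """One first-order finite-volume step with Rusanov numerical flux."""
    smax = max_eigenvalue(q)
    dt = cfl * dx / smax
    qm, qp = np.roll(q, 1), np.roll(q, -1)  # periodic neighbours
    f_left = 0.5 * (flux(qm) + flux(q)) - 0.5 * smax * (q - qm)
    f_right = 0.5 * (flux(q) + flux(qp)) - 0.5 * smax * (qp - q)
    return q - dt / dx * (f_right - f_left), dt

x = np.linspace(0.0, 1.0, 200, endpoint=False)
q = np.exp(-100 * (x - 0.5) ** 2)  # initial Gaussian pulse
for _ in range(100):
    q, dt = engine_step(q, dx=x[1] - x[0])
</syntaxhighlight>

In the real engine, the same user-supplied PDE terms drive a high-order ADER-DG scheme on dynamically adaptive meshes instead of this toy update.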

== Centre of Excellence for Exascale Supercomputing in the area of the Solid Earth (ChEESE) ==

{| class="wikitable"
|-
| '''Project type''' || EU Horizon 2020, INFRAEDI-02-2018 call Centres of Excellence on HPC
|-
| '''Funded by''' || European Union’s Horizon 2020 research and innovation programme
|-
| '''Begin''' || November 2018
|-
| '''End''' || October 2021
|-
| '''Leader''' || Barcelona Supercomputing Centre
|-
| '''Staff''' || Ravil Dorozhinskii, M.Sc., Lukas Krenz, M.Sc., Leonhard Rannabauer, M.Sc., Jean-Matthieu Gallard, M.Sc.
|-
| '''Contact person''' || Univ.-Prof. Dr. Michael Bader
|-
| '''Co-operation partner''' || 14 participating institutes, see the ChEESE website for details.
|}

'''Brief description'''<br><br>

The ChEESE Center of Excellence will prepare flagship codes and enable services for exascale supercomputing in the area of Solid Earth (SE). ChEESE will harness European institutions in charge of operational monitoring networks, tier-0 supercomputing centers, academia, hardware developers, and third parties from SMEs, industry, and public governance. The scientific ambition is to prepare 10 flagship codes to address Exascale Computing Challenge (ECC) problems in computational seismology, magnetohydrodynamics, physical volcanology, tsunamis, and data analysis and predictive techniques for earthquake and volcano monitoring.

SCCS contributes SeisSol and ExaHyPE as flagship codes in ChEESE. See the ChEESE website for further information!

== ENERXICO - Supercomputing and Energy for Mexico ==

{| class="wikitable"
|-
| '''Project type''' || EU Horizon 2020, call FETHPC-01-2018 International Cooperation on HPC
|-
| '''Funded by''' || European Union’s Horizon 2020 research and innovation programme
|-
| '''Begin''' || June 2019
|-
| '''End''' || June 2021
|-
| '''Leader''' || Barcelona Supercomputing Centre
|-
| '''Staff''' || Dr. Anne Reinarz, Sebastian Wolf, M.Sc.
|-
| '''Contact person''' || Univ.-Prof. Dr. Michael Bader
|-
| '''Co-operation partner''' || 16 participating institutes, see the ENERXICO website for details.
|}

'''Brief description'''<br><br>

ENERXICO is a collaborative research and innovation action that fosters collaboration between Europe and Mexico in supercomputing. ENERXICO will develop performance simulation tools that require exascale HPC and data-intensive algorithms for different energy sources: wind energy production, efficient combustion systems for biomass-derived fuels (biogas), and exploration geophysics for hydrocarbon reservoirs.

SCCS is mainly concerned with large-scale seismic simulations based on SeisSol and ExaHyPE. See the ENERXICO website for further information!

= BMBF: Federal Ministry of Education and Research =

== ELPA-AEO - Eigenwert-Löser für PetaFlop-Anwendungen: Algorithmische Erweiterungen und Optimierungen ==

{| class="wikitable"
|-
| '''Project type''' || Fördermassnahme IKT 2020 - Höchstleistungsrechnen im Förderbereich: HPC
|-
| '''Funded by''' || BMBF
|-
| '''Begin''' || 2016
|-
| '''End''' || 2018
|-
| '''Leader''' || Dr. Hermann Lederer, Univ.-Prof. Dr. Hans-Joachim Bungartz
|-
| '''Staff''' || Univ.-Prof. Dr. Thomas Huckle, Michael Rippl, M.Sc.
|-
| '''Contact person''' || Univ.-Prof. Dr. Thomas Huckle
|-
| '''Co-operation partner''' || Dr. Hermann Lederer (Rechenzentrum MPG Garching), Prof. Dr. Bruno Lang (Universität Wuppertal), Prof. Dr. Karsten Reuter (Chemie, TUM), Dr. Christoph Scheuerer (TUM-Chemie), Fritz-Haber-Institut Berlin
|}

'''Brief description'''<br><br>

The overarching goal is to increase the efficiency of supercomputer simulations in which solving the eigenvalue problem for dense and band-structured symmetric matrices is a decisive contribution. This is the case in particular for problems from materials research, biomolecular research, and structural dynamics. Building on the results of the ELPA project, this project aims to address even larger problems than before, to reduce the computational effort of the simulations, and, at a given accuracy and with continued high software scalability, to reduce resource usage and energy consumption.
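
For illustration, the problem class at the heart of ELPA-AEO can be stated in a few lines. The sketch below uses SciPy's LAPACK-based solvers on a single node; ELPA targets the same dense and banded symmetric eigenproblems, but on distributed, massively parallel systems (this is not ELPA's API, and the matrices are random stand-ins).

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import eigh, eig_banded

rng = np.random.default_rng(0)

# Dense symmetric eigenproblem (what ELPA solves at scale; here via LAPACK).
n = 500
A = rng.standard_normal((n, n))
A = 0.5 * (A + A.T)                 # symmetrize
w, v = eigh(A)                      # all eigenvalues and eigenvectors

# Band-structured symmetric eigenproblem, u superdiagonals, in LAPACK upper
# band storage: bands[u + i - j, j] == A[i, j] for the stored entries.
u = 2
bands = rng.standard_normal((u + 1, n))
w_band = eig_banded(bands, lower=False, eigvals_only=True)
</syntaxhighlight>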


== TaLPas: Task-basierte Lastverteilung und Auto-Tuning in der Partikelsimulation ==

{| class="wikitable"
|-
| '''Project type''' || BMBF Programm: Grundlagenorientierte Forschung für HPC-Software im Hoch- und Höchstleistungsrechnen
|-
| '''Funded by''' || BMBF
|-
| '''Begin''' || January 2017
|-
| '''End''' || June 2020
|-
| '''Leader''' || Univ.-Prof. Dr. Hans-Joachim Bungartz, TUM, Philipp Neumann, Universität Hamburg
|-
| '''Staff''' || Univ.-Prof. Dr. Hans-Joachim Bungartz, Nikola Tchipev, M.Sc., Steffen Seckler, M.Sc. (hons)
|-
| '''Contact person''' || Nikola Tchipev, M.Sc.
|-
| '''Co-operation partner''' || Philipp Neumann, Universität Hamburg, Colin W. Glass, HLRS/Universität Stuttgart, Guido Reina, VISUS/Universität Stuttgart, Felix Wolf, TU Darmstadt, Martin Horsch, TU Kaiserslautern, Jadran Vrabec, Universität Paderborn
|}

'''Brief description'''<br><br>

The main goal of TaLPas is to provide a solution for the fast and robust simulation of many, potentially dependent particle systems in a distributed environment. This is required in many applications, including, but not limited to:

* sampling in molecular dynamics: so-called "rare events", e.g. droplet formation, require a multitude of molecular dynamics simulations to investigate the actual conditions of phase transition,
* uncertainty quantification: various simulations are performed using different parametrisations to investigate the sensitivity of the solution to the parameters,
* parameter identification: given, e.g., a set of experimental data and a molecular model, an optimal set of model parameters needs to be found to fit the model to the experiment.

For this purpose, TaLPas targets:

* the development of innovative auto-tuning based particle simulation software in the form of an open-source library to leverage optimal node-level performance; this will guarantee an optimal time-to-solution for small- to mid-sized particle simulations (see the sketch below),
* the development of a scalable task scheduler to yield an optimal distribution of potentially dependent simulation tasks on the available HPC compute resources,
* the combination of both auto-tuning based particle simulation and scalable task scheduling, augmented by an approach to resilience; this will guarantee robust, i.e. fault-tolerant, sampling evaluations on peta- and future exascale platforms.

For more details, see the project website.
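
The auto-tuning idea referenced above can be illustrated in a few lines: time several functionally equivalent kernels on the actual input and keep the fastest. The candidate kernels below (two ways of computing pairwise particle distances) are purely illustrative; the project's library tunes real molecular-dynamics kernels, e.g. different traversal schemes and data layouts.

<syntaxhighlight lang="python">
import time
import numpy as np

def pairwise_broadcast(pos):
    # All-pairs distances via one large broadcast operation.
    d = pos[:, None, :] - pos[None, :, :]
    return np.sqrt((d * d).sum(-1))

def pairwise_rowwise(pos):
    # Same result, computed row by row (smaller temporaries).
    n = len(pos)
    out = np.zeros((n, n))
    for i in range(n):
        out[i] = np.sqrt(((pos - pos[i]) ** 2).sum(-1))
    return out

def autotune(candidates, sample):
    # Measure each candidate on representative input; return the fastest.
    best, best_t = None, float("inf")
    for kernel in candidates:
        t0 = time.perf_counter()
        kernel(sample)
        t = time.perf_counter() - t0
        if t < best_t:
            best, best_t = kernel, t
    return best

pos = np.random.default_rng(0).random((500, 3))
kernel = autotune([pairwise_broadcast, pairwise_rowwise], pos)
distances = kernel(pos)  # use the tuned kernel for the production run
</syntaxhighlight>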

== Chameleon: Eine Taskbasierte Programmierumgebung zur Entwicklung reaktiver HPC Anwendungen ==

{| class="wikitable"
|-
| '''Project type''' || BMBF Programm: Grundlagenorientierte Forschung für HPC-Software im Hoch- und Höchstleistungsrechnen
|-
| '''Funded by''' || BMBF
|-
| '''Begin''' || April 2017
|-
| '''End''' || March 2020
|-
| '''Leader''' || Dr. Karl Fürlinger (LMU), Prof. Dr. Dieter Kranzlmüller (LMU)
|-
| '''Staff''' || Univ.-Prof. Dr. Michael Bader, Philipp Samfass, Carsten Uphoff
|-
| '''Contact person''' || Univ.-Prof. Dr. Michael Bader
|-
| '''Co-operation partner''' || Dr. Christian Terboven, RWTH Aachen University
|}

'''Brief description'''<br><br>

The project Chameleon develops a task-based programming environment for reactive applications. "Reactive" means that programmers can let applications react to changing hardware conditions. Chameleon envisages three components that, together with MPI and OpenMP, facilitate reactive applications: (1) a task-based environment that allows applications to better tolerate idle times and load imbalances across nodes, implemented by extending the established programming models MPI and OpenMP; (2) a component for "performance introspection", which allows applications and the runtime environment to obtain information on current, dynamic performance properties (using techniques and tools from performance analysis) in order to improve performance at runtime; (3) an analysis component that brings together and further processes measured data and runtime information. Based on its analysis, this component will provide applications with methods and services to improve decisions on repartitioning, task migration, etc.

See the Chameleon project website for further information.
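
As a conceptual analogue of such reactivity, the mpi4py sketch below distributes tasks on demand, so faster or less-loaded ranks automatically receive more work; Chameleon itself realizes this far more generally, via task migration inside MPI+OpenMP applications in C/C++, so this is only an illustration of the load-balancing effect, not the project's API.

<syntaxhighlight lang="python">
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
TASK_TAG, STOP_TAG = 1, 2

if rank == 0:
    # Coordinator: hand out tasks one at a time, on request.
    tasks = list(range(100))
    status = MPI.Status()
    active = size - 1
    while active > 0:
        comm.recv(source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG, status=status)
        worker = status.Get_source()
        if tasks:
            comm.send(tasks.pop(), dest=worker, tag=TASK_TAG)
        else:
            comm.send(None, dest=worker, tag=STOP_TAG)
            active -= 1
else:
    # Worker: request work until told to stop; imbalance is absorbed
    # because quick workers simply ask more often.
    status = MPI.Status()
    while True:
        comm.send(None, dest=0)  # request work
        task = comm.recv(source=0, tag=MPI.ANY_TAG, status=status)
        if status.Get_tag() == STOP_TAG:
            break
        result = task * task     # placeholder for the actual computation
</syntaxhighlight>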

= BMWi: Federal Ministry for Economic Affairs and Energy =

== ATHLET-preCICE - Erweiterung von ATHLET durch die allgemeine Kopplungsschnittstelle preCICE für die Simulation von Multiphysikproblemen in der Reaktorsicherheit ==

{| class="wikitable"
|-
| '''Project type''' || PT-GRS Reaktorsicherheitsforschung im Förderbereich Transienten und Unfallabläufe
|-
| '''Funded by''' || BMWi
|-
| '''Begin''' || 2019
|-
| '''End''' || 2022
|-
| '''Leader''' || Dr. rer. nat. Benjamin Uekermann, Univ.-Prof. Dr. Hans-Joachim Bungartz
|-
| '''Staff''' || Gerasimos Chourdakis, M.Sc.
|-
| '''Contact person''' || Dr. rer. nat. Benjamin Uekermann
|-
| '''Co-operation partner''' || Dr.-Ing. Fabian Weyermann, Gesellschaft für Anlagen- und Reaktorsicherheit (GRS) gGmbH
|}

'''Brief description'''<br><br>

With the use of passive safety systems in Generation 3+ reactors, the cooling circuit and the containment can no longer be considered separately. For building condensers, for example, physical effects of both systems are strongly coupled: thermohydraulics in the pipes, heat conduction in complicated three-dimensional structures (cooling fins), and convective gas or steam flow on the outside of the condenser. Simulating the overall system is therefore a multiphysics problem, which requires coupling several simulation codes. A general, code-independent coupling can be realized very efficiently with the open-source coupling library preCICE. In this project, we want to develop a preCICE interface for AC2, to be implemented first for the ATHLET module. Since a large number of simulation programs such as ANSYS Fluent, COMSOL, OpenFOAM, CalculiX, or Code_Aster already provide preCICE interfaces, all of these programs would immediately become available for coupled analyses with ATHLET. A further advantage of this interface is that not only two, but also three or more codes can be coupled simultaneously; only this makes the detailed simulation of the building-condenser example possible. Since similar multiphysics problems also occur in the modular reactors that many countries regard as the future of nuclear technology, the planned implementation of a preCICE interface in ATHLET is a necessary step for the future viability of ATHLET.


= HydroBITS: Code Optimisation and Simulation for Bavarian Water Supply and Distribution =

{| class="wikitable"
|-
| '''Project type''' || Research Project
|-
| '''Funded by''' || Bavarian State Ministry of the Environment and Consumer Protection / LfU
|-
| '''Begin''' || January 2018
|-
| '''End''' || December 2021
|-
| '''Leader''' || Univ.-Prof. Dr. Hans-Joachim Bungartz
|-
| '''Staff''' || Dr. rer. nat. Tobias Neckel, Ivana Jovanovic, M.Sc. (hons)
|-
| '''Contact person''' || Dr. rer. nat. Tobias Neckel
|-
| '''Co-operation partner''' || Dr. Jens Weismüller, Dr. Wolfgang Kurtz, LRZ
|}

'''Brief description'''<br><br>

In HydroBITS, existing IT structures at different institutions related to water supply and distribution in Bavaria are analysed. The project creates the basis for modernising the corresponding IT infrastructure, which has become necessary due to various technological developments in recent years. In cooperation with the LRZ, workflows as well as simulation models and data of the Bavarian Landesamt für Umwelt are analysed. A demonstrator platform with a prototype for a modern IT structure is going to be created.

= Helmholtz Gemeinschaft: MUnich School of Data Science (MUDS): Integrated Data Analysis 2.0 =

{| class="wikitable"
|-
| '''Project type''' || Research Project
|-
| '''Funded by''' || Helmholtz Gemeinschaft
|-
| '''Begin''' || September 2019
|-
| '''End''' || August 2023
|-
| '''Leader''' || Univ.-Prof. Dr. Hans-Joachim Bungartz, Prof. Frank Jenko (MPP)
|-
| '''Staff''' || Dr. rer. nat. Tobias Neckel, Ravi Kislaya, M.Sc.
|-
| '''Contact person''' || Dr. rer. nat. Tobias Neckel
|-
| '''Co-operation partner''' || Michael Bergmann (MPP)
|}

'''Brief description'''<br><br>

In this MUDS project, the existing approaches for Bayesian inversion in the context of fusion plasma simulations (the so-called Integrated Data Analysis) will be generalized and extended to incorporate a) stochastic information for the forward propagation of uncertainties and b) simulation results of plasma microturbulence back into the inversion process. In particular, the code GENE will be used.
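
The two ingredients, inferring parameters from data and propagating the remaining uncertainty forward, can be illustrated with a toy example; the linear forward model below is a purely illustrative stand-in (the project itself targets fusion-plasma diagnostics with GENE as the forward model).

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
theta_true, sigma = 2.0, 0.5
forward = lambda theta, x: theta * x              # stand-in forward model
x = np.linspace(0.0, 1.0, 20)
data = forward(theta_true, x) + sigma * rng.standard_normal(x.size)

# Bayesian inversion on a parameter grid: posterior ~ prior * likelihood.
theta_grid = np.linspace(0.0, 4.0, 401)
prior = np.exp(-0.5 * ((theta_grid - 1.5) / 1.0) ** 2)   # Gaussian prior
residuals = data[None, :] - forward(theta_grid[:, None], x[None, :])
loglike = -0.5 * (residuals / sigma) ** 2
posterior = prior * np.exp(loglike.sum(axis=1))
posterior /= np.trapz(posterior, theta_grid)             # normalize

# Forward propagation of the remaining uncertainty: push posterior samples
# through the forward model to obtain a predictive band.
samples = rng.choice(theta_grid, size=1000, p=posterior / posterior.sum())
predictions = forward(samples[:, None], x[None, :])
band = np.percentile(predictions, [5, 95], axis=0)
</syntaxhighlight>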

= KONWIHR: The Bavarian Competence Network for Technical and Scientific High Performance Computing =

== ProPE-AL: Process-oriented Performance Engineering Service Infrastructure for Scientific Software at German HPC Centers - Algorithms ==

{| class="wikitable"
|-
| '''Project type''' || KONWIHR
|-
| '''Funded by''' || KONWIHR
|-
| '''Begin''' || October 2017
|-
| '''End''' || September 2020
|-
| '''Leader''' || Univ.-Prof. Dr. Michael Bader, Univ.-Prof. Dr. Hans-Joachim Bungartz
|-
| '''Staff''' || Hayden Liu Weng, M.Sc. (hons)
|-
| '''Contact person''' || Univ.-Prof. Dr. Michael Bader, Univ.-Prof. Dr. Hans-Joachim Bungartz
|-
| '''Co-operation partner''' || Univ.-Prof. Dr. Gerhard Wellein, FAU Erlangen-Nürnberg, Univ.-Prof. Dr. Matthias Müller, RWTH Aachen, Univ.-Prof. Dr. Wolfgang Nagel, TU Dresden
|}

'''Brief description'''<br><br>

As part of the DFG call "Performance Engineering for Scientific Software", the project partners G. Wellein (FAU Erlangen-Nürnberg), M. Müller (RWTH Aachen), and W. Nagel (TU Dresden) initiated the project "Process-oriented Performance Engineering Service Infrastructure for Scientific Software at German HPC Centers" (acronym ProPE). The project aims at implementing performance engineering (PE) as a well-defined, structured process to improve the resource efficiency of programs. This structured PE process should allow for the target-oriented optimization and parallelization of application codes, guided by performance patterns and performance models. The associated KONWIHR project ProPE-Algorithms (ProPE-AL) adds a further algorithmic optimization step to this process. This extension takes into account that the best possible sustainable use of HPC resources by application codes is not only a question of the efficiency of the implementation, but also of the efficiency of the (numerical) algorithms the codes are based on.
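
One example of the kind of performance model such a structured PE process can employ is the roofline model, sketched below; the hardware numbers are made-up placeholders, and which models ProPE prescribes in detail is beyond this summary.

<syntaxhighlight lang="python">
# Minimal roofline-style performance model (illustrative numbers only).
peak_flops = 3.0e12   # peak floating-point rate of a node [Flop/s]
peak_bw = 2.0e11      # peak memory bandwidth [Byte/s]

def roofline(intensity):
    """Attainable performance [Flop/s] for a given arithmetic intensity [Flop/Byte]."""
    return min(peak_flops, peak_bw * intensity)

# A stream-like kernel, e.g. daxpy with 2 Flop per 16 Byte of traffic,
# is bandwidth-bound: the model caps it far below the Flop/s peak.
print(roofline(2.0 / 16.0))   # ~2.5e10 Flop/s
</syntaxhighlight>

Comparing a code's measured performance against such a ceiling tells the analyst whether further implementation tuning can pay off or whether, as ProPE-AL emphasizes, an algorithmic change is needed.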

= Volkswagen Stiftung: ASCETE, ASCETE-II (Advanced Simulation of Coupled Earthquake-Tsunami Events) =

{| class="wikitable"
|-
| '''Project type''' || Call "Extreme Events: Modelling, Analysis and Prediction"
|-
| '''Funded by''' || Volkswagen Stiftung
|-
| '''Begin''' || February 2012
|-
| '''End''' || December 2019
|-
| '''Leader''' || Univ.-Prof. Dr. Jörn Behrens (KlimaCampus, Univ. Hamburg)
|-
| '''Staff''' || Leonhard Rannabauer, M.Sc., Carsten Uphoff; former staff: Alexander Breuer, Kaveh Rahnema
|-
| '''Contact person''' || Univ.-Prof. Dr. Michael Bader
|-
| '''Co-operation partner''' || Univ.-Prof. Dr. Jörn Behrens (KlimaCampus, Univ. Hamburg), Univ.-Prof. Dr. Heiner Igel, Dr. Martin Käser, Dr. Christian Pelties, Dr. Alice-Agnes Gabriel (all: Geophysics, Univ. München), Dr. Luis Angel Dalguer, Dr. Ylona van Dinther (ETH Zürich, Swiss Seismological Service)
|}

see the official ASCETE webpage

'''Brief description'''<br><br>

Earthquakes and tsunamis represent the most dangerous natural catastrophes and can cause large numbers of fatalities and severe economic loss in a single and unexpected extreme event, as shown in Sumatra in 2004, Samoa in 2009, Haiti in 2010, and Japan in 2011. Both phenomena are consequences of a complex system of interactions of tectonic stress, fracture mechanics, rock friction, rupture dynamics, fault geometry, ocean bathymetry, and coastline geometry. The ASCETE project forms an interdisciplinary research consortium that, for the first time, couples the most advanced simulation technologies for earthquake rupture dynamics and tsunami propagation to understand the fundamental conditions of tsunami generation. To our knowledge, tsunami models that consider the fully dynamic rupture process coupled to hydrodynamic models have not been investigated yet. Therefore, the project is original and unique in its character and has the potential to provide insight into the underlying physics of earthquakes capable of generating devastating tsunamis.

See the ASCETE website for further information.

= Intel Parallel Computing Center: Extreme Scaling on x86/MIC/KNL (ExScaMIC) =

{| class="wikitable"
|-
| '''Project type''' || Intel Parallel Computing Center
|-
| '''Funded by''' || Intel
|-
| '''Begin''' || July 2014
|-
| '''End''' || October 2018
|-
| '''Leader''' || Univ.-Prof. Dr. Michael Bader, Univ.-Prof. Dr. Hans-Joachim Bungartz, Univ.-Prof. Dr. Arndt Bode
|-
| '''Staff''' || Nikola Tchipev, Steffen Seckler, Carsten Uphoff, Sebastian Rettenberger; former staff: Alexander Breuer
|-
| '''Contact person''' || Univ.-Prof. Dr. Michael Bader
|-
| '''Co-operation partner''' || Leibniz Supercomputing Centre
|}

'''Brief description'''<br><br>

The project is optimizing four different established or upcoming CSE community codes for Intel-based supercomputers. We assume a target platform that will offer several hundred PetaFlop/s based on Intel's x86 (including Intel® Xeon Phi™) architecture. To prepare simulation software for such platforms, we tackle two expected major challenges: achieving a high fraction of the available node-level performance on (shared-memory) compute nodes and scaling this performance up to the range of 10,000 to 100,000 compute nodes.

We examine four applications from different areas of science and engineering: earthquake simulation and seismic wave propagation with the ADER-DG code SeisSol, simulation of cosmological structure formation using GADGET, the molecular dynamics code ls1 mardyn developed for applications in chemical engineering, and the software framework SG++ to tackle high-dimensional problems in data mining or financial mathematics (using sparse grids). While addressing the Xeon Phi™ (co-)processor architectures in particular, the project tackles fundamental challenges that are relevant for most supercomputing architectures, such as parallelism on multiple levels (nodes, cores, hardware threads per core, data parallelism) or compute cores that offer strong SIMD capabilities with increasing vector width.

While the first project phase (2014-2016) addressed the Intel Xeon Phi coprocessor (Knights Corner), the second project phase (2016-2018) specifically focuses on the Xeon Phi as a stand-alone processor (Knights Landing architecture).

= Elite Network of Bavaria (ENB) =

== Bavarian Graduate School of Computational Engineering (BGCE) ==

Website of the BGCE

{| class="wikitable"
|-
| '''Project type''' || Elite Study Program
|-
| '''Funded by''' || Elite Network of Bavaria, TUM, FAU
|-
| '''Begin''' || April 2005
|-
| '''End''' || April 2025
|-
| '''Leader''' || Univ.-Prof. Dr. Hans-Joachim Bungartz
|-
| '''Staff''' || Dr. rer. nat. Tobias Neckel, Michael Rippl, M.Sc. (hons), Benjamin Rüth, M.Sc. (hons)
|-
| '''Contact person''' || Dr. rer. nat. Tobias Neckel
|-
| '''Co-operation partner''' || International Master's Program Computational Science and Engineering (TUM), International Master's Program Computational Mechanics (TUM), International Master's Program Computational Engineering (U Erlangen)
|}

'''Brief description'''<br><br>

The Bavarian Graduate School of Computational Engineering is an association of three Master's programs: Computational Engineering (CE) at the University of Erlangen-Nürnberg, and Computational Mechanics (COME) and Computational Science and Engineering (CSE), both at TUM. Funded by the Elitenetzwerk Bayern, the Bavarian Graduate School offers an Honours program for gifted and highly motivated students. The Honours program extends the regular Master's programs by several academic offerings:

* additional courses in the area of computational engineering, in particular block courses and summer academies,
* courses and seminars on "soft skills", like communication skills, management, and leadership,
* an additional semester project closely connected to current research.

Students who master the regular program with an above-average grade and successfully finish the Honours program as well earn the academic degree "Master of Science with Honours".

= International Graduate School of Science and Engineering (IGSSE) =

== An Exascale Library for Numerically Inspired Machine Learning (ExaNIML) ==

{| class="wikitable"
|-
| '''Project type''' || International IGSSE project
|-
| '''Funded by''' || International Graduate School of Science and Engineering
|-
| '''Begin''' || June 2018
|-
| '''End''' || December 2020
|-
| '''Leader''' || Univ.-Prof. Dr. Hans-Joachim Bungartz
|-
| '''Staff''' || Dr. rer. nat. Tobias Neckel, Severin Reiz
|-
| '''Contact person''' || Severin Reiz
|-
| '''Co-operation partner''' || The University of Texas at Austin, Institute for Computational Engineering and Sciences
|}

'''Brief description'''<br><br>

There is a significant gap between algorithms and software in Data Analytics and those in Computational Science and Engineering (CSE) concerning their maturity on High-Performance Computing (HPC) systems. Given that Data Analytics tasks show a rapidly growing share of supercomputer usage, this gap is a serious issue. This proposal aims to bridge the gap for a number of important tasks arising, e.g., in a Machine Learning (ML) context: density estimation and high-dimensional approximation (for example, (semi-supervised) classification). To this end, we aim to (1) design and analyze novel algorithms that combine two powerful numerical methods, sparse grids and kernel methods, and (2) design and implement an HPC library that provides an open-source implementation of these algorithms and supports heterogeneous distributed-memory architectures.

The attractiveness of sparse grids is mainly due to their high-quality accuracy guarantees and their foundation on rigorous approximation theory; their shortcoming is that they require (regular) Cartesian grids. Kernel methods do not require Cartesian grids, but, first, their approximation properties can be suboptimal in practice and, second, they require regularization whose parameters can be expensive to determine. Our main idea is to use kernel methods for manifold learning and to combine them with sparse grids to define approximations on the manifold. Such high-dimensional approximation problems find applications in model reduction, uncertainty quantification (UQ), and ML.
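
To make the contrast between the two ingredient method families concrete, the sketch below estimates a density once on a regular Cartesian grid (the setting in which sparse grids mitigate the curse of dimensionality) and once with a grid-free Gaussian kernel estimator (which instead needs a bandwidth choice); it is a 1D illustration of the trade-off only, not the project's combined algorithm.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
samples = rng.normal(0.0, 1.0, size=2000)

# Grid-based density estimate: histogram on a regular Cartesian grid.
# In d dimensions a full grid needs O(N**d) cells, hence sparse grids.
density_grid, edges = np.histogram(samples, bins=64, range=(-4, 4), density=True)

# Kernel density estimate: grid-free, but depends on a bandwidth
# (here chosen automatically by Scott's rule).
kde = gaussian_kde(samples)
xs = 0.5 * (edges[:-1] + edges[1:])   # cell midpoints for comparison
density_kde = kde(xs)
</syntaxhighlight>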