Running Research and Development Projects



Excellence Initiative: IAS

The Institute for Advanced Study (IAS) of Technische Universität München is the centerpiece of TUM’s institutional strategy to promote top-level research in the so-called Excellence Initiative by the German federal and state governments.

HPC - Tackling the Multi-Challenge

Project type IAS focus group
Funded by Excellence Initiative of the German federal and state governments
Begin 2010
End 2015
Leader Univ.-Prof. Dr. Hans-Joachim Bungartz
Staff Dr. rer. nat. habil. Miriam Mehl, Dr. rer. nat. Dirk Pflüger, Christoph Kowitz, M.Sc., Valeriy Khakhutskyy, M.Sc., Dipl.-Math. Benjamin Uekermann, Arash Bakhtiari, M.Sc. (hons)
Contact person Dr. rer. nat. habil. Miriam Mehl
Co-operation partner Prof. George Biros (UT Austin, USA), Markus Hegland (Canberra, Australia)

Brief description

High-performance computing (HPC) is a thriving cross-sectional research field of utmost relevance in science and engineering. Scientific progress increasingly depends on insight gained by computational research. With the increased technological potential, however, the requirements grow as well, leading to several computational challenges that are all related to some “multi-X” notion: multi-disciplinary, multi-physics, multi-scale, multi-dimensional, multi-level, multi-core. This focus group primarily addresses the three topics multi-physics (mp), multi-dimensional (md), and multi-core (mc).
The interplay of these three subtopics is straightforward: Both mp and md are among the usual suspects that need and, thus, drive HPC technology and mc; mp frequently appears in the context of optimisation, parameter identification, or parameter estimation – thriving topics of current md research; and present as well as future mc technology is inspired by algorithmic patterns, as provided by mp and md. Hence, it is not only reasonable but essential to address mp, md, and mc in an integral way, and this IAS focus group offers the unique chance of doing so at a very high international level.



DFG: German Research Foundation

Priority Program 1648 SPPEXA - Software for Exascale Computing

Coordination Project

Funded by DFG
Begin 2012
End 2018
Leader Univ.-Prof. Dr. Hans-Joachim Bungartz
Staff Philipp Neumann
Contact person Univ.-Prof. Dr. Hans-Joachim Bungartz

Brief description

The Priority Programme (SPP) SPPEXA differs from other SPPs with respect to its genesis, its volume, its funding via DFG's Strategy Fund, the range of disciplines involved, and its clear strategic orientation towards a set of time-critical objectives. Therefore, despite its distributed structure, SPPEXA also resembles a Collaborative Research Centre to a large extent. Its successful implementation and evolution will require both more, and more intense, structural measures. The Coordination Project comprises all intended SPPEXA-wide activities, including steering and coordination, internal and international collaboration and networking, and educational activities.

Reference: Priority Program 1648 SPPEXA - Software for Exascale Computing

ExaFSA - Exascale Simulation of Fluid-Structure-Acoustics Interaction

Funded by DFG
Begin 2012
End 2018
Leader Univ.-Prof. Dr. Miriam Mehl
Staff Dipl.-Math. Benjamin Uekermann
Contact person Univ.-Prof. Dr. Miriam Mehl

Brief description

In scientific computing, an increasing need for ever more detailed insights and optimization leads to improved models that often include several physical effects described by different types of equations. The complexity of the corresponding solver algorithms and implementations typically leads to coupled simulations that reuse existing software codes for the different physical phenomena (multiphysics simulations) or for different parts of the simulation pipeline such as grid handling, matrix assembly, system solvers, and visualization. Accuracy requirements can only be met with a high spatial and temporal resolution, which makes exascale computing a necessary technology to address runtime constraints for realistic scenarios. However, running a multicomponent simulation efficiently on massively parallel architectures is far more challenging than parallelizing a single simulation code. Open questions range from suitable load balancing strategies and bottleneck-avoiding communication over interactive visualization for online analysis of results and the synchronization of several components to parallel numerical coupling schemes. We intend to tackle these challenges for fluid-structure-acoustics interactions, which are extremely costly due to the large range of scales involved. Specifically, this requires innovative surface and volume coupling numerics between the different solvers as well as sophisticated dynamic load balancing and in-situ coupling and visualization methods.
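
To illustrate the kind of coupling the project deals with, the following sketch shows a partitioned fluid-structure coupling loop with implicit sub-iterations and simple under-relaxation. It is a minimal illustration only: the solver objects and their methods are hypothetical stand-ins, not the project's actual software, and production couplings use more sophisticated acceleration and parallel communication.

  import numpy as np

  def coupled_time_loop(fluid, structure, n_steps, dt, omega=0.5, tol=1e-6, max_iters=50):
      """Partitioned fluid-structure coupling with implicit sub-iterations.

      'fluid' and 'structure' are hypothetical solver objects exposing
      solve(...), advance(), and initial_displacement() methods.
      """
      displacement = structure.initial_displacement()
      for step in range(n_steps):
          for it in range(max_iters):
              forces = fluid.solve(displacement, dt)          # flow solve on the current interface shape
              new_displacement = structure.solve(forces, dt)  # structural response to the surface forces
              residual = np.linalg.norm(new_displacement - displacement)
              # constant under-relaxation; production couplings use e.g. quasi-Newton acceleration
              displacement = displacement + omega * (new_displacement - displacement)
              if residual < tol:                              # interface quantities converged
                  break
          fluid.advance()       # accept the time step in both solvers
          structure.advance()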

Reference: Priority Program 1648 SPPEXA - Software for Exascale Computing

EXAHD - An Exa-Scalable Two-Level Sparse Grid Approach for Higher-Dimensional Problems in Plasma Physics and Beyond

Funded by DFG
Begin 2012
End 2018
Leader Univ.-Prof. Dr. Hans-Joachim Bungartz
Staff Alfredo Parra, Christoph Kowitz
Contact person Univ.-Prof. Dr. Hans-Joachim Bungartz

Brief description

Higher-dimensional problems (i.e., beyond four dimensions) appear in medicine, finance, and plasma physics, posing a challenge for tomorrow's HPC. As an example application, we consider turbulence simulations for plasma fusion with one of the leading codes, GENE, which promises to advance science on the way to carbon-free energy production. While higher-dimensional applications involve such a huge number of degrees of freedom that exascale computing becomes necessary, mere domain decomposition approaches for their parallelization are infeasible, since communication explodes with increasing dimensionality. Thus, to ensure high scalability beyond domain decomposition, a second major level of parallelism has to be provided. To this end, we propose to employ the sparse grid combination scheme, a model reduction approach for higher-dimensional problems. It computes the desired solution via a combination of smaller, anisotropic, and independent simulations, and thus provides this extra level of parallelization. In its randomized asynchronous and iterative version, it will break the communication bottleneck in exascale computing, achieving full scalability. Our two-level methodology enables novel approaches to scalability (ultra-scalable due to numerically decoupled subtasks), resilience (fault and outlier detection and even compensation without the need for recomputation), and load balancing (high-level compensation for insufficiencies on the application level).
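
As a rough illustration of the underlying idea (not of the project's randomized, iterative variant), the classical sparse grid combination technique in two dimensions, in one common indexing convention, approximates the full-grid solution of level n by combining solutions u_{l1,l2} computed on coarse, anisotropic grids of level (l1, l2):

  u_n^{c} \;=\; \sum_{l_1 + l_2 = n} u_{l_1, l_2} \;-\; \sum_{l_1 + l_2 = n-1} u_{l_1, l_2}

Each partial solution on the right-hand side is much smaller than the full grid and can be computed independently, which is exactly the second, communication-avoiding level of parallelism described above.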

Reference: Priority Program 1648 SPPEXA - Software for Exascale Computing

SFB-TRR 89: Invasive Computing

Funded by DFG
Begin Mid 2010
End Mid 2018 (end of 2nd phase)
Leader Univ.-Prof. Dr. Hans-Joachim Bungartz (D3), Univ.-Prof. Dr. Michael Bader (A4)
Staff Emily Mo-Hellenbrand, M.Sc., Alexander Pöppl, M.Sc., Dr. rer. nat. Tobias Neckel, Dr. rer. nat. Philipp Neumann; former staff: Dr. rer. nat. Martin Schreiber
Contact person Univ.-Prof. Dr. Hans-Joachim Bungartz (D3), Univ.-Prof. Dr. Michael Bader (A4)

Brief description

In the proposed CRC/Transregio, we intend to investigate a completely novel paradigm for designing and programming future parallel computing systems, called invasive computing. The main idea and novelty of invasive computing is to introduce resource-aware programming support in the sense that a given program gains the ability to explore and dynamically spread its computations to neighbouring processors, similar to a phase of invasion, and then to execute portions of code with a high degree of parallelism in parallel, based on the available (invasible) region of a given multi-processor architecture. Afterwards, once the program terminates or if the degree of parallelism drops again, the program may enter a retreat phase, deallocate resources, and resume execution, for example, sequentially on a single processor. To support this idea of self-adaptive and resource-aware programming, not only are new programming concepts, languages, compilers, and operating systems necessary, but also revolutionary architectural changes in the design of MPSoCs (Multi-Processor Systems-on-a-Chip) must be provided so as to efficiently support the invasion, infection, and retreat operations, involving concepts for dynamic processor, interconnect, and memory reconfiguration.
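
The invade/infect/retreat cycle described above can be sketched, purely hypothetically, as follows; the claim abstraction and the method names are illustrative only and are not the actual interfaces (e.g., InvadeX10 or the invasive run-time system) developed in the Transregio.

  def run_resource_aware(kernel, data, runtime):
      # invade: explore and reserve currently available (invasible) resources
      claim = runtime.invade(min_cores=1, max_cores=64)
      try:
          # infect: execute the highly parallel portion of the code on the claimed resources
          results = claim.infect(kernel, data)
      finally:
          # retreat: release the resources; execution may continue, e.g., sequentially
          claim.retreat()
      return results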

Reference: Transregional Collaborative Research Centre 89 - Invasive Computing

A4: Design-Time Characterisation and Analysis of Invasive Algorithmic Patterns

D3: Invasion for High Performance Computing

EU Horizon 2020: An Exascale Hyperbolic PDE Engine (ExaHyPE)

Project type EU Horizon 2020, FET-PROACTIVE call Towards Exascale High Performance Computing (FETHPC)
Funded by European Union’s Horizon 2020 research and innovation programme
Begin October 2015
End September 2019
Leader Univ.-Prof. Dr. Michael Bader
Staff Dr. rer. nat. Vasco Varduhn, Angelika Schwarz, M.Sc.
Contact person Univ.-Prof. Dr. Michael Bader
Co-operation partner Prof. Michael Dumbser (Univ. Trento), Dr. Tobias Weinzierl (Durham University), Prof. Dr. Luciano Rezzolla (Frankfurt Institute for Advanced Studies), Prof. Dr. Heiner Igel and Dr. Alice Gabriel (LMU München), Robert Iberl (BayFor), Dr. Alexander Moskovsky (RSC Group); Prof. Dr. Arndt Bode (LRZ)

Brief description

The Horizon 2020 project ExaHyPE is an international collaborative project to develop an exascale-ready engine to solve hyperbolic partial differential equations. The engine will rely on high-order ADER-DG discretization (Arbitrary high-order DERivative Discontinuous Galerkin) on dynamically adaptive Cartesian meshes (building on the Peano framework for adaptive mesh refinement).

ExaHyPE will focus on grand challenges from computational seismology (earthquake simulation) and computational astrophysics (simulation of binary neutron star systems), but at the same time aims at developing a flexible engine to solve a wide range of hyperbolic PDE systems.
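
Schematically, such systems of hyperbolic PDEs can be written in first-order conservation form (the engine's actual formulation also admits, e.g., additional non-conservative terms):

  \partial_t \mathbf{Q} + \nabla \cdot \mathbf{F}(\mathbf{Q}) = \mathbf{S}(\mathbf{Q}),

where Q denotes the vector of conserved quantities, F the flux tensor, and S a source term; the governing equations of the two grand-challenge applications can be cast in this framework.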

See the ExaHyPE website for further information!

G8-Initiative: Nuclear Fusion Simulations at Exascale (Nu-FuSe)

Project type G8 Research Councils Initiative on Multilateral Research Funding
Funded by G8 group of leading industrial nations
Begin July 2011
End April 2015
Leader Univ.-Prof. Dr. Hans-Joachim Bungartz
Staff Dr. rer. nat. Tobias Neckel
Contact person Dr. rer. nat. Tobias Neckel
Co-operation partner Prof. Frank Jenko (Max-Planck-Institut für Plasmaphysik, IPP)

Brief description

The G8 project Nu-FuSE is an international project aiming to significantly improve computational modelling capabilities to the level required by the new generation of fusion reactors. The focus is on three specific scientific areas: fusion plasma, the materials from which fusion reactors are built, and the physics of the plasma edge. This will require computing at the “exascale” level across a range of simulation codes that work together towards fully integrated fusion tokamak modelling.

Exploiting upcoming exascale systems effectively for fusion modelling creates significant challenges around scaling, resiliency, result validation, and programmability. The project focuses on meeting these challenges by improving the performance and scaling of community modelling codes to enable simulations orders of magnitude larger than those currently undertaken.

BMBF: Federal Ministry of Education and Research

ELPA-AEO - Eigenwert-Löser für PetaFlop-Anwendungen: Algorithmische Erweiterungen und Optimierungen (Eigenvalue Solvers for Petaflop Applications: Algorithmic Extensions and Optimizations)

Project type Funding measure IKT 2020 - High-Performance Computing (funding area: HPC)
Funded by BMBF
Begin 2016
End 2018
Leader Dr. Hermann Lederer, Univ.-Prof. Dr. Hans-Joachim Bungartz
Staff Univ.-Prof. Dr. Thomas Huckle, Michael Rippl, M.Sc.
Contact person Univ.-Prof. Dr. Thomas Huckle
Co-operation partner Dr. Hermann Lederer (Rechenzentrum MPG Garching), Prof. Dr. Bruno Lang (Universität Wuppertal), Prof. Dr. Karsten Reuter (Chemie, TUM), Dr. Christoph Scheuerer (TUM-Chemie), Fritz-Haber-Institut Berlin

Brief description

The overarching goal is to increase the efficiency of supercomputer simulations in which the solution of the eigenvalue problem for dense and band-structured symmetric matrices becomes a decisive contribution. This is particularly the case for problems in materials research, biomolecular research, and structural dynamics. Building on the results of the ELPA project, this project aims to address even larger problems than before, to reduce the computational effort associated with such simulations, and, at a prescribed accuracy and with continued high software scalability, to reduce resource usage and energy consumption.
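
The computational core is the standard symmetric eigenvalue problem A v = λ v for dense or banded A. Purely as an illustration of this problem class (not of the ELPA interface), a dense symmetric eigenproblem can be solved with a standard library as follows; ELPA provides distributed-memory solvers for the same task at much larger scale.

  import numpy as np
  from scipy.linalg import eigh

  n = 1000
  A = np.random.rand(n, n)
  A = 0.5 * (A + A.T)                  # symmetrize the example matrix
  eigenvalues, eigenvectors = eigh(A)  # all eigenpairs of a dense symmetric matrix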

CzeBaCCA: Czech-Bavarian Competence Centre for Supercomputing Applications

Project type BMBF funding programme "Auf- und Ausbau gemeinsamer Forschungsstrukturen in Europa" (establishment and expansion of joint research structures in Europe)
Funded by BMBF
Begin January 2016
End June 2017
Leader Prof. Dr. Arndt Bode, Univ.-Prof. Dr. Michael Bader
Staff Sebastian Rettenberger
Contact person Univ.-Prof. Dr. Michael Bader
Co-operation partner Leibniz Supercomputing Centre; IT4Innovations national supercomputing centre (Ostrava, CZ)

Brief description

By bringing together LRZ and IT4I as two key providers of European supercomputing infrastructure, the project aims to realise concrete short-term measures that foster further projects, infrastructural changes and collaborations that are necessary to achieve the following goals.

  • Foster Czech-German collaboration in computational science, focusing on cutting-edge supercomputing in various fields of high scientific and societal impact (establishing new cooperations in supercomputing, initiating new consortia between German and Czech institutes, etc.).
  • Establish scientific communities of computational scientists that are well trained in using the latest supercomputing architectures (improve and tailor LRZ’s and IT4I’s existing course programs to the new SALOMON supercomputer, train users in applying and adapting simulation software according to their needs, etc.).
  • Improve the efficiency of simulation software on latest supercomputing architectures and establish competence teams for simulation software to maintain the best-possible utilisation of supercomputers as critical research infrastructure.

MultikOSi/MEPKA

Project type Cooperation Project with Munich University of Applied Sciences
Funded by BMBF
Begin September 2013
End September 2016
Leader Roland G. Meier, Prof. Dr. Gerta Köster
Staff Felix Dietrich, Michael Seitz, Isabella von Sivers, Benedikt Zönnchen
Contact person Prof. Dr. Gerta Köster
Co-operation partner Hochschule München, VDS, BLKA SIZ (Munich Police), IMS, TU KL Stadtsoziologie, TUM HR6, VDI TZ GmbH, TU Kaiserslautern, TUM CMS


Brief description

Events like "public viewing", festivals, or concerts are an important part of urban life and must be managed to ensure safety and security. Yet, event managers and security personnel lack scientifically validated and practical instruments.

The MultikOSi project combines knowledge and competencies from areas such as crowd management, mathematics, informatics, sociology, and civil engineering. The goal is to understand the general processes at urban mass events and, with that knowledge, to develop new methods to improve safety. This includes new models of pedestrian dynamics that incorporate sociological and psychological aspects, as well as new combinations of existing models. The need to consider safety, openness, and economics at mass events makes the task a multi-criteria optimization problem. The holistic and interdisciplinary approach will lead to a better-supported planning phase and optimized security concepts for events. After the project, the scientific results can be used to develop software tools for event planning.

The MEPKA project at Munich University of Applied Sciences investigates mathematical properties of state-of-the-art pedestrian locomotion models, both to improve the numerical performance of these models and to develop new models that are more robust and more consistent with empirical evidence. Based on this, first models incorporating psychological aspects, such as self-categorization, are devised and validated.
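
A classical example of such a locomotion model (given here only for orientation; it is not necessarily the model studied in MEPKA) is the social force model, in which each pedestrian i accelerates towards a desired velocity and is repelled by other pedestrians j and walls W:

  m_i \frac{d\mathbf{v}_i}{dt} \;=\; m_i \frac{v_i^0 \mathbf{e}_i^0 - \mathbf{v}_i}{\tau_i} \;+\; \sum_{j \neq i} \mathbf{f}_{ij} \;+\; \sum_{W} \mathbf{f}_{iW}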

SkaSim: Scalable HPC-Software for molecular simulation in the chemical industry

Project type BMBF support program: Application-oriented HPC-Software for supercomputers
Funded by BMBF
Begin July 2013
End June 2016
Leader Prof. Dr.-Ing. M. Resch, HLRS
Staff Univ.-Prof. Dr. Hans-Joachim Bungartz, Nikola Tchipev, M.Sc., Dipl.-Inf. Wolfgang Eckhardt
Contact person Nikola Tchipev, M.Sc.
Co-operation partner Prof. M. Resch (HLRS), Prof. D. Reith (HBRS), Dr. P. Klein (Fraunhofer ITWM), Prof. H. Hasse (TU Kaiserslautern), Dr. T. Soddemann (Fraunhofer SCAI), Prof. J. Vrabec (Uni Paderborn), BASF SE, Cray Computers Deutschland GmbH, DDBST GmbH, Eurotechnica GmbH, Solvay Fluor GmbH

Brief description

Molecular dynamics (MD) and Monte Carlo (MC) simulations form the basis for investigating many relevant application scenarios in science and engineering. At the heart of these simulations lie physically meaningful, quantitative models of molecular interactions, which require careful validation against state-of-the-art ab initio calculations and experimental data. The extreme spatial and temporal resolution (individual molecules, femtoseconds) of such simulations allows for very reliable predictions of material properties, even where experiments are impossible or dangerous. However, this extreme resolution also implies substantial computational demands if scenarios are to be investigated in a timely manner. The same holds true for nanofluidics: realistic insights, not obtainable experimentally, can be captured through simulation. Complex phenomena such as phase transitions (e.g., condensation) can be investigated on the molecular level, allowing new and more fundamental insights. However, as the dynamics of every molecule is evaluated explicitly, the number of simulated molecules needs to be considerable in order to capture the phenomena in question.

Determining experimentally elusive properties of matter is attracting increasing attention from industry, for example in process engineering, where already highly optimized procedures can only be improved through better and more detailed data and understanding. The computational power required to generate data of the necessary quantity and quality is significant, so these demands can only be met through the efficient use of cutting-edge hardware. Moreover, many relevant scenarios are far from trivial to simulate at scale, e.g., coexisting liquid and gas phases in a highly dynamic environment such as condensation or evaporation.

At the same time, the industrial development of new products and processes will undergo a fundamental change in the coming years. Expensive and oftentimes dangerous experiments can be replaced with safe and increasingly efficient and affordable simulations. For this transition to take place, simulations need to be performed with accuracies comparable to high-quality experiments. Besides the computational requirements, this calls for extremely accurate molecular models and, for complex scenarios, reliable new methodologies.

The challenges to simulate such scenarios efficiently are huge and will be addressed in SkaSim.
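
As a minimal illustration of the pairwise interaction models at the core of such MD simulations (parameters and cut-off below are arbitrary example values, not those of the project's force fields), consider the Lennard-Jones 12-6 potential and a naive total-energy evaluation:

  import numpy as np

  def lennard_jones(r, epsilon=1.0, sigma=1.0):
      """Pair potential U(r) = 4*eps*((sigma/r)^12 - (sigma/r)^6); example parameters."""
      sr6 = (sigma / r) ** 6
      return 4.0 * epsilon * (sr6 * sr6 - sr6)

  def total_energy(positions, cutoff=2.5):
      """Naive O(N^2) sum over all pairs within the cut-off radius."""
      energy = 0.0
      n = len(positions)
      for i in range(n):
          for j in range(i + 1, n):
              r = np.linalg.norm(positions[i] - positions[j])
              if r < cutoff:
                  energy += lennard_jones(r)
      return energy

Production codes such as ls1 mardyn avoid the naive double loop via linked cells and neighbour lists and use vectorized force kernels.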

HEPP: International Helmholtz Graduate School for Plasma Physics

Project type Helmholtz Graduate School Scholarship
Funded by Helmholtz Gemeinschaft
Begin November 2011
End October 2017
Leader Univ.-Prof. Dr. Hans-Joachim Bungartz
Staff Dr. rer. nat. Tobias Neckel
Contact person Dr. rer. nat. Tobias Neckel
Co-operation partner Prof. Frank Jenko (Max-Planck-Institut für Plasmaphysik, IPP)

Brief description

The fundamental equations used to understand and predict various phenomena in plasma physics share a very important feature: they are all nonlinear. This implies that analytical techniques - although also very important - are limited in practice, calling for a numerical approach. Fortunately, the capabilities of modern supercomputers have reached a level that makes it possible to tackle some outstanding open issues in theoretical plasma physics, including, e.g., turbulence, nonlinear magnetohydrodynamics, and plasma-wall interaction.

Given the multiscale nature of most problems of interest, advanced algorithms and efficient implementations on massively parallel platforms are usually required in order to tackle them. In this context, a close collaboration of theoretical plasma physicists with applied mathematicians and computer scientists can be of great benefit. Thus, state-of-the-art numerical techniques, hardware-aware implementation strategies, and scalable parallelization approaches are explored in terms of their potential to minimize the overall computational requirements and to maximize the reliability and robustness of the simulations.

Volkswagen Stiftung: ASCETE, ASCETE-II (Advanced Simulation of Coupled Tsunami-Earthquake Events)

Project type Call "Extreme Events: Modelling, Analysis and Prediction"
Funded by Volkswagen Stiftung
Begin February 2012
End January 2018
Leader Univ.-Prof. Dr. Jörn Behrens (KlimaCampus, Univ. Hamburg)
Staff Univ.-Prof. Dr. Michael Bader, Carsten Uphoff; former staff: Alexander Breuer, Kaveh Rahnema
Contact person Univ.-Prof. Dr. Michael Bader
Co-operation partner Univ.-Prof. Dr. Jörn Behrens (KlimaCampus, Univ. Hamburg), Univ.-Prof. Dr. Heiner Igel, Dr. Martin Käser, Dr. Christian Pelties, Dr. Alice-Agnes Gabriel (all: GeoPhysics, Univ. München), Dr. Luis Angel Dalguer, Dr. Ylona van Dinther (ETH Zürich, Swiss Seismological Service).
see official ASCETE webpage

Brief description

Earthquakes and tsunamis represent the most dangerous natural catastrophes and can cause large numbers of fatalities and severe economic loss in a single and unexpected extreme event, as shown in Sumatra in 2004, Samoa in 2009, Haiti in 2010, and Japan in 2011. Both phenomena are consequences of a complex system of interactions of tectonic stress, fracture mechanics, rock friction, rupture dynamics, fault geometry, ocean bathymetry, and coastline geometry. The ASCETE project forms an interdisciplinary research consortium that – for the first time – couples the most advanced simulation technologies for earthquake rupture dynamics and tsunami propagation in order to understand the fundamental conditions of tsunami generation. To our knowledge, tsunami models that consider the fully dynamic rupture process coupled to hydrodynamic models have not been investigated yet. The project is therefore original and unique in its character and has the potential to yield insight into the underlying physics of earthquakes capable of generating devastating tsunamis.

See the ASCETE website for further information.

Intel Parallel Computing Center: Extreme Scaling on x86/MIC (ExScaMIC)

Project type Intel Parallel Computing Center
Funded by Intel
Begin July 2014
End July 2016
Leader Univ.-Prof. Dr. Michael Bader, Univ.-Prof. Dr. Hans-Joachim Bungartz, Univ.-Prof. Dr. Arndt Bode
Staff Nikola Tchipev, Steffen Seckler, Sebastian Rettenberger; former staff: Alexander Breuer
Contact person Univ.-Prof. Dr. Michael Bader
Co-operation partner Leibniz Supercomputing Centre

Brief description

The project optimizes four established or upcoming CSE community codes for Intel-based supercomputers. We assume a target platform that will offer several hundred PetaFlop/s based on Intel's x86 architecture (including Intel® Xeon Phi™ coprocessors). To prepare simulation software for such platforms, we tackle two expected major challenges: achieving a high fraction of the available node-level performance on (shared-memory) compute nodes, and scaling this performance up to the range of 10,000 to 100,000 compute nodes.

We examine four applications from different areas of science and engineering: earthquake simulation and seismic wave propagation with the ADER-DG code SeisSol, simulation of cosmological structure formation using GADGET, the molecular dynamics code ls1 mardyn developed for applications in chemical engineering, and the software framework SG++ for tackling high-dimensional problems in data mining or financial mathematics (using sparse grids). While addressing the Xeon Phi™ coprocessor in particular, the project tackles fundamental challenges that are relevant for most supercomputing architectures – such as parallelism on multiple levels (nodes, cores, hardware threads per core, data parallelism) and compute cores that offer strong SIMD capabilities with increasing vector widths.
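
One recurring node-level ingredient is a SIMD-friendly data layout. The sketch below contrasts an array-of-structures layout with a structure-of-arrays layout in which each coordinate is stored contiguously, so that vectorized kernels can stream through memory with unit stride; it is a generic illustration, not code taken from the four project applications.

  import numpy as np

  n = 1_000_000
  # array-of-structures: particle records interleaved as (x, y, z) triples
  aos = np.random.rand(n, 3)
  # structure-of-arrays: one contiguous array per coordinate (unit-stride access)
  x, y, z = (np.ascontiguousarray(aos[:, k]) for k in range(3))
  # a vectorized kernel then processes whole arrays at once (data parallelism)
  r2 = x * x + y * y + z * z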

Elite Network of Bavaria (ENB):

Bavarian Graduate School of Computational Engineering (BGCE)

Website of the BGCE

Project type Elite Study Program
Funded by Elite Network of Bavaria
Begin April 2005
End April 2015
Leader Univ.-Prof. Dr. Hans-Joachim Bungartz
Staff Dr. rer. nat. Tobias Neckel, Dipl.-Inf. Marion Bendig
Contact person Dr. rer. nat. Tobias Neckel
Co-operation partner International Master's Program Computational Science and Engineering (TUM)

International Master's Program Computational Mechanics (TUM)
International Master's Program Computational Engineering (U Erlangen)

Brief description

The Bavarian Graduate School of Computational Engineering is an association of three Master's programs: Computational Engineering (CE) at the University of Erlangen-Nürnberg, and Computational Mechanics (COME) and Computational Science and Engineering (CSE), both at TUM. Funded by the Elitenetzwerk Bayern, the Bavarian Graduate School offers an Honours program for gifted and highly motivated students. The Honours program extends the regular Master's programs by several academic offerings:

  • additional courses in the area of computational engineering, in particular block courses and summer academies,
  • courses and seminars on "soft skills" such as communication, management, and leadership, and
  • an additional semester project closely connected to current research.

Students who complete the regular program with an above-average grade and also successfully finish the Honours program earn the academic degree "Master of Science with Honours".

Numerical Aspects of the Simulation of Quantum Many-body Systems

Project type QCCC project
Funded by Quantum Computing, Control and Communication (QCCC)
Begin January 2008
End December 2012
Leader Univ.-Prof. Dr. Thomas Huckle
Staff Dipl.-Math. Konrad Waldherr
Contact person Univ.-Prof. Dr. Thomas Huckle
Co-operation partner Dr. Thomas Schulte-Herbrueggen (Chemistry, TUM)

Brief description

In recent years, growing attention has been devoted to many-body quantum systems from the point of view of quantum information. Indeed, after the initial investigation of simple systems such as single- or two-qubit systems, understanding the characteristics of a realistic quantum information device necessarily leads to the study of many-body quantum systems. These studies are also driven by the very fast development of experiments, which in recent years have achieved coherent control of a few qubits (ion traps, charge qubits, etc.), with a roadmap for further scaling and improvement of coherent control and manipulation techniques. In addition, new paradigms for performing quantum information tasks, such as quantum information transfer or quantum cloning, without direct control of the whole quantum system but using our knowledge of it, have increased the need for tools to understand in detail the behaviour of many-body quantum systems as we find them in nature. These new goals of the quantum information community lead to an unavoidable exchange of knowledge with other communities that already have the know-how and insight to address such problems, for example the condensed matter, computational physics, and quantum chaos communities. Applying known techniques and developing new ones from a quantum information perspective have already produced fast and unexpected developments in these fields. The comprehension of many-body quantum systems, ranging from a few qubits to the thermodynamic limit, is thus needed and welcome, not only to develop useful quantum information devices but also to reach a better understanding of the quantum world.

Reference: Computations in Quantum Tensor Networks

KONWIHR (Bavarian Competence Network for Technical and Scientific High Performance Computing):

Optimization of a Multi-Functional Shallow Water Solver for Complex Overland Flows

Project type KONWIHR III project
Funded by Bayer. Staatsministerium für Wissenschaft, Forschung und Kunst
Begin 2016
End 2017
Leader Univ.-Prof. Dr. Hans-Joachim Bungartz, Dr. rer. nat. Philipp Neumann
Staff Roland Wittmann, M.Sc.
Contact person Dr. rer. nat. Philipp Neumann

Brief description

In this project, we strive for an optimized implementation of an existing shallow water equations (SWE) solver for overland flows, including the features that are important for simulating complex overland flow: the focus is on efficient data structures and data access, vectorization and parallelization, and extensive testing of parallelized, patch-based, spatially adaptive simulations. We aim to achieve efficient data access and vectorization through a detailed analysis and re-implementation of the FullSWOF kernels and a potential re-design of the data structures. Since the SWE solver works on regular Cartesian grids only, we further consider embedding the solver in libspacetree/PeanoClaw, a light-weight spacetree implementation that extends arbitrary solvers by spatial/temporal adaptivity and parallelism. Patch-based spatial adaptivity may yield algorithmic speedups and pays off in particular for overland flow scenarios in which parts of the computational domain are only marginally relevant or completely irrelevant for flooding (e.g., certain plateau or hill regions). Moreover, we plan to evaluate the performance of the optimized implementation on “standard” processors as well as on the Intel MIC architecture as provided by the LRZ (cluster SuperMIC).
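
For reference, the two-dimensional shallow water equations that such a solver discretizes read, in conservative form with bathymetry b(x, y) (overland-flow codes such as FullSWOF additionally include friction and rain source terms):

  \partial_t h + \partial_x (hu) + \partial_y (hv) = 0
  \partial_t (hu) + \partial_x \left(hu^2 + \tfrac{1}{2} g h^2\right) + \partial_y (huv) = -gh\, \partial_x b
  \partial_t (hv) + \partial_x (huv) + \partial_y \left(hv^2 + \tfrac{1}{2} g h^2\right) = -gh\, \partial_y b

where h is the water depth, (u, v) the depth-averaged velocity, and g the gravitational acceleration.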