Student Projects

At our chair we are constantly looking for motivated students. Just have a look at our research topics below and don't hesitate to drop by and discuss how you can get involved.


Student Projects? Here?? Of course!!!

This page is intended for all students who would like to do

any kind of student project
(Bachelor/Master/Diploma thesis, semester project, guided research, ...)
in (almost) any kind of program
(Informatics, Mathematics, CSE)

or work as a student research assistant at our chair.

As we consider individual preferences and interest in specific topics more important than formal frameworks, we usually do not announce specific projects; instead, we provide below a list of topics that you could work on.

Hence, if you're interested in one of the topics listed below, just contact us by email!

(The SCCS Colloquium is a further opportunity to inform yourself about current topics for projects at our chair; guests are always welcome!)

For examples of previous student projects, see our page on Publications, in particular Master and Bachelor Thesis and Student Theses/SEP/IDP

Besides the topics for diploma and master theses listed below, individual topics and suggestions are appreciated and welcome as well! Just contact one of our team members so that we can discuss your ideas!

High-Performance Computing (HPC): ‣ Parallel programming ‣ Resource-aware computing ‣ Programming of Supercomputers

HPC concentrates on the development of applications for supercomputers that solve large-scale science and engineering problems which are either too large for standard computers or would take too long on them. With ever-advancing technologies and increasing computational power, parallelization and hardware/resource awareness are the keys to harnessing this power and achieving outstanding performance.

We target various systems, ranging from Multiprocessor Systems-on-a-Chip (MPSoC) (shared memory) to clusters (distributed memory) and supercomputers (hybrid, comprising tens of thousands of processors). Load balancing, minimization of inter-process communication, and software tuning are just some of the issues that play a decisive role in the development of efficient application software for HPC systems.

Students in these projects will typically adapt existing algorithms of practical interest to the requirements of concrete systems, such as MPSoCs or clusters. The applications in focus are mainly from (yet not restricted to) the field of computational science and engineering, such as flows and fluid-structure interaction, molecular dynamics, or traffic simulation.


Prerequisites:

  • Interest in the development of parallel programs for simulations, intended to be run on supercomputers
  • Interest in numerical algorithms of practical value in computational applications
  • Basic MPI or thread programming skills

Some keywords: HPC, load balancing, optimization, MPI, OpenMP, X10, hardware-aware, resource-aware, invasive computing
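To give a flavour of the load-balancing theme, here is a minimal Python sketch (illustrative only, not one of our codes): cells are assumed to be already ordered along a space-filling curve, so each rank receives a contiguous chunk, cut so that every rank gets roughly the same total work.

```python
def partition_by_prefix_sums(weights, num_ranks):
    """Assign each cell (with per-cell work given in `weights`) to a rank.

    Cells are assumed to be ordered along a space-filling curve, so a
    contiguous chunk of cells also forms a compact region in space.
    """
    total = sum(weights)
    assignment = []
    rank, acc = 0, 0.0
    for w in weights:
        # move on to the next rank once the current one has its fair share
        if rank < num_ranks - 1 and acc >= total * (rank + 1) / num_ranks:
            rank += 1
        acc += w
        assignment.append(rank)
    return assignment

# refined regions (weight 4) cost more than regular cells (weight 1)
example = partition_by_prefix_sums([1, 1, 4, 1, 1, 1, 4, 1], 2)
```

In this example both ranks end up with a total work of 7, even though the expensive cells are distributed unevenly.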

Contact persons:

Tsunami Simulation - Wave Propagation on Fully Adaptive Grids

The propagation of oceanic waves, such as tsunamis generated by earthquakes, can be modeled by 2D fluid equations (the so-called shallow water equations). To numerically solve these models, we use a discretization on adaptive triangular grids. Adaptivity, i.e. refinement of the grid in critical regions (especially along the propagating wave front) but also coarsening in less interesting areas, is critical to achieve the desired accuracy in acceptable time.

Such adaptive grids require memory-efficient data structures to store them, but also efficient algorithms and implementations working on these data structures. Our approach is based on the 2D Sierpinski space-filling curve, which allows an inherently local (and therefore cache-efficient) algorithm based on stack and stream data structures.

Further aspects are the efficient implementation of the discretized equations, including parallelization, higher-order accurate discretization, modeling of boundary and initial conditions (via coupling with a code that simulates dynamic rupture processes), visualization, etc.
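To illustrate the kind of numerics involved, here is a hedged Python sketch (not our production code) of a single Lax-Friedrichs time step for the 1D shallow water equations with state q = (h, hu) per cell and periodic boundaries:

```python
import numpy as np

g = 9.81  # gravitational acceleration

def flux(q):
    """Physical flux of the shallow water equations; q has shape (2, n)."""
    h, hu = q[0], q[1]
    return np.array([hu, hu ** 2 / h + 0.5 * g * h ** 2])

def lax_friedrichs_step(q, dx, dt):
    """Advance all cells by one time step (periodic boundaries)."""
    qL = np.roll(q, 1, axis=1)    # left neighbours
    qR = np.roll(q, -1, axis=1)   # right neighbours
    return 0.5 * (qL + qR) - dt / (2.0 * dx) * (flux(qR) - flux(qL))

# still water with a small hump relaxes into outward-running waves
x = np.linspace(0.0, 10.0, 200)
q0 = np.vstack([1.0 + 0.1 * np.exp(-((x - 5.0) ** 2)), np.zeros_like(x)])
q1 = lax_friedrichs_step(q0, dx=x[1] - x[0], dt=0.005)
```

Note that the scheme conserves the total water mass exactly (up to floating-point round-off); higher-order and adaptive schemes, as used in our codes, build on the same finite-volume structure.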


Prerequisites:

  • A certain interest in the simulation of physical phenomena.

Contact: Michael Bader, Leonhard Rannabauer, Philipp Samfass

Further Information: Tsunami Simulation

Videos: Videos

Quantum Computing


Contact: Univ.-Prof. Dr. Christian Mendl

Uncertainty Quantification


From a simulation we expect to obtain useful and reliable information; therefore, it has to be carefully designed. Nevertheless, every simulation involves errors, which are usually categorized into three types: (1) simplification of the physics introduces the so-called model error; (2) we need some sort of discretization to work with the model, which introduces numerical errors; and (3) the simulation requires input data that is usually incomplete, of low accuracy, or simply wrong. Uncertainty quantification deals with quantifying the error introduced by the data, typically for large-scale (and thus expensive) simulations. One can distinguish between Monte Carlo, projection-based (stochastic Galerkin), and interpolation (collocation) methods.

There is a wide range of student projects available, reaching from applying these methods to current problems and codes to adapting and developing new approaches for particular problem classes.
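The Monte Carlo approach mentioned above can be sketched in a few lines; the model below is an assumed toy stand-in for an expensive simulation, with an input parameter only known up to a uniform uncertainty:

```python
import math
import random

def model(k):
    """Stand-in for an expensive simulation with uncertain input k."""
    return math.exp(-k)

def monte_carlo(num_samples, seed=0):
    """Estimate mean and variance of the output for k ~ U(0.5, 1.5)."""
    rng = random.Random(seed)
    outputs = [model(rng.uniform(0.5, 1.5)) for _ in range(num_samples)]
    mean = sum(outputs) / num_samples
    var = sum((y - mean) ** 2 for y in outputs) / (num_samples - 1)
    return mean, var

mean, var = monte_carlo(20000)
# the exact mean is e^(-0.5) - e^(-1.5) ≈ 0.3834
```

Monte Carlo converges slowly (error ~ 1/sqrt(N)), which is precisely why stochastic Galerkin and collocation methods become attractive for expensive simulations.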

Contact: Dr. rer. nat. Tobias Neckel, Ionut-Gabriel Farcas, Florian Künzner, Benjamin Uekermann, Christoph Riesinger

Sparse grid methods to efficiently solve high-dimensional problems

Multi-dimensional applications create vast amounts of data - for spatially discretized simulations this can be observed already in two- or three-dimensional settings, but consider problems in data mining, computational finance, or engineering, where dozens or hundreds of dimensions have to be dealt with.

Sparse grid methods, whose cost scales moderately with the number of dimensions compared to classical discretizations, allow one to tackle much higher-dimensional problems than was previously feasible.

Sparse grids are needed in all kinds of applications and disciplines that deal with multiple dimensions: in engineering and plasma physics, one has to optimize, approximate, and integrate; in AI applications such as classification and regression, underlying dependencies have to be learned and reconstructed, crash tests have to be understood, and astrophysical problems have to be solved; in financial mathematics, option prices have to be determined; simulation results have to be stored and visualized efficiently; ... - the application scenarios are as diverse as our daily life.
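The savings sparse grids offer can be seen in a quick point count (a back-of-the-envelope sketch for a regular sparse grid with hierarchical basis and no boundary points):

```python
from itertools import product
from math import prod

def full_grid_points(level, dim):
    """Interior points of a full grid with mesh width 2^(-level)."""
    return (2 ** level - 1) ** dim

def sparse_grid_points(level, dim):
    """Interior points of the regular sparse grid of the same level."""
    count = 0
    # hierarchical subgrids: level vectors l with l_i >= 1, |l|_1 <= level + dim - 1
    for levels in product(range(1, level + 1), repeat=dim):
        if sum(levels) <= level + dim - 1:
            count += prod(2 ** (l - 1) for l in levels)  # points per subgrid
    return count

# already in 2D the savings are substantial, and they grow with the dimension
ratio_2d = full_grid_points(5, 2) / sparse_grid_points(5, 2)   # 961 vs 129
ratio_3d = full_grid_points(5, 3) / sparse_grid_points(5, 3)   # 29791 vs 351
```

At level 5, the 2D sparse grid needs 129 points instead of 961, and the 3D one 351 instead of 29791; the gap widens rapidly as the dimension increases.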



Tasks for student projects and theses can, on the one hand, deal with the application of sparse grids in new fields; on the other hand, the methodology itself is relatively new and poses plenty of open questions: How to select the right grid points and ansatz functions? How to handle and efficiently realize the challenging (and sometimes brain-twisting) multi-recursive algorithms and data structures on current and future hardware? There are plenty of interesting tasks waiting for you!

Concrete Projects in Sparse Grids and High Dimensional Approximation

Prerequisites: none in particular, as the tasks vary widely. You should not have an aversion to numerics; having attended Algorithms of Scientific Computing is a plus, but not required at all.




Machine Learning and Data Mining: Classification, Clustering and Dimension Reduction

Learning from data plays a key role in data mining and artificial intelligence as well as in engineering and many other fields of science. We consider special techniques based on sparse grids for learning from huge amounts of high-dimensional data. Because the complexity of these techniques grows more slowly with the number of dimensions than that of classical grid-based discretizations, the class of feasible problems is extended considerably. Currently we consider problems from astrophysics, car crash tests, image processing, plasma physics, and many other fields.

Usually, a training data set (observations, measurements) is given. In the case of supervised learning, labels are already associated with the data points in the training set, and the task is to assign reasonable labels to new data points (classification). If there are no labels, we speak of unsupervised learning: without any further information, the data points have to be grouped into clusters (clustering). Another area of application for our sparse grid techniques is dimension reduction, where we map the training data from a high-dimensional manifold to a low-dimensional representation without losing the characteristics of the data.
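The supervised setting boils down to a regularized least-squares problem, min_w ||Phi w - y||^2 + lam ||w||^2, where Phi holds basis functions evaluated at the training points. The hedged sketch below uses a tiny polynomial basis as a stand-in for the sparse grid hat functions used in practice:

```python
import numpy as np

def features(x):
    """Toy basis (1, x, x^2) standing in for sparse grid basis functions."""
    return np.stack([np.ones_like(x), x, x ** 2], axis=1)

def fit(x, y, lam=1e-3):
    """Solve the regularized normal equations for the weights w."""
    Phi = features(x)
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)

def classify(w, x):
    """Predict class labels in {-1, +1} by the sign of the surrogate."""
    return np.sign(features(x) @ w)

# labeled training data: points near the origin belong to class -1
x_train = np.array([-2.0, -1.0, -0.5, 0.5, 1.0, 2.0])
y_train = np.array([1.0, 1.0, -1.0, -1.0, 1.0, 1.0])
w = fit(x_train, y_train)
```

With sparse grids, the feature matrix Phi is built from hierarchical hat functions instead, but the least-squares formulation stays the same.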


At our chair, a broadly applicable sparse grid library has been developed to tackle data-driven problems, so new problems can be treated without much preliminary work. This library is commonly used for student projects.

Usually, student projects deal with the application of data mining techniques based on sparse grids (classification, clustering, and dimension reduction) in various fields of science and engineering (astrophysics, car crash tests, plasma physics, ...). But we also offer projects exploring the sparse grid techniques themselves, such as adaptivity, basis functions, and algorithms.

Contact: Felix Dietrich, Paul Sarbu, Michael Obersteiner

High Dimensional Approximation in Plasma Physics

The ever-increasing need for energy in the coming decades requires new sources of energy to mitigate the growth of the anthropogenic greenhouse effect. One promising approach is the use of energy produced by nuclear fusion. To control fusion reactions, which are basically the mechanism that powers the sun, the materials to be fused have to be heated to extremely high temperatures (~100 million K). At those temperatures the material is in an ionized state called a plasma. Confining this high-temperature plasma is only possible with strong magnetic fields in machines such as tokamaks or stellarators, but the confinement times in currently existing devices are not sufficient to initiate a self-sustained fusion reaction, which would be necessary to use fusion as a source of energy.

The upcoming large-scale fusion experiment ITER is intended to be the first tokamak to actually produce more energy than is invested to create and confine the fusion plasma. But successfully confining such a hot fusion plasma is complex and requires many numerical simulations to understand the transport processes in the plasma; these simulations require tremendous amounts of computational resources.

In cooperation with IPP (the Max Planck Institute for Plasma Physics), we are trying to increase the performance of the plasma turbulence code GENE, which solves the five-dimensional gyrokinetic equations. Due to the moderately high dimensionality of these equations, the sparse grid combination technique can be employed to significantly reduce the computational effort by mitigating the curse of dimensionality.

Besides sparse grids, new linear algebra algorithms are being developed to speed up the gyrokinetic computations.
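The combination technique itself is easy to sketch: the sparse grid solution of level n in d dimensions is a weighted sum of cheap anisotropic full-grid solutions, f_n ≈ sum_{q=0}^{d-1} (-1)^q C(d-1, q) sum_{|l|_1 = n + d - 1 - q} f_l, with level vectors l having entries l_i >= 1. The following illustrative snippet enumerates the grids and their coefficients:

```python
from itertools import product
from math import comb

def combination_coefficients(n, d):
    """Return {level_vector: coefficient} for the level-n combination."""
    coeffs = {}
    for q in range(d):
        c = (-1) ** q * comb(d - 1, q)
        for levels in product(range(1, n + 1), repeat=d):
            if sum(levels) == n + d - 1 - q:   # the q-th diagonal of grids
                coeffs[levels] = c
    return coeffs

coeffs_2d = combination_coefficients(3, 2)
# in 2D: the diagonal |l|_1 = 4 enters with +1, the one below (|l|_1 = 3) with -1
```

A useful sanity check is that the coefficients always sum to 1, so constants are reproduced exactly; each component grid is far coarser than the full grid and can be computed independently (and in parallel).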

If you are interested in High Dimensional Approximation in the field of Plasma Physics, please contact Michael Obersteiner. No particular knowledge of plasma physics is required, and we can offer you a project that meets your skills and interests: simply ask us!


Interactive Computational Steering using Model Reduction Techniques


Projects in this area include:

  • M to N Parallel Data Transfers: Connecting a Parallel Simulation to a Parallel Visualization
  • Exploration of Parametrized Simulations at Interactive Rates
  • Tiled Display High-Resolution Visualizations

Contact: Dr. Dirk Pflüger, Gerrit Buse

HPC and Topological Data Analysis

Topological data analysis (TDA) is an emerging topic at the intersection of mathematics and computer science. The basic idea is to obtain topological information from point cloud data; relevant information includes the connectivity of, or holes in, the underlying input data. The algorithms developed for TDA in recent years have high potential to run on (highly) parallel systems and to be sped up by standard and non-standard techniques of high-performance computing.

Contact: Dr. rer. nat. Tobias Neckel, Prof. Ulrich Bauer

Preconditioning of Linear Systems of Equations

For a linear system Ax = b with an n x n matrix A, one seeks a preconditioner M such that the equivalent system MAx = Mb can be solved better iteratively; in particular, the new matrix MA should have a smaller condition number. The focus is on preconditioners that arise, for example, from a norm minimization such as

min || AM - I ||

for a prescribed sparsity pattern of A and M. This requires solving many small least-squares problems in a cache-efficient way. Furthermore, preconditioners can be considered that, similar to ILU or MILU (incomplete LU factorizations), arise from an incomplete solution of triangular systems.

Tasks here comprise the definition and mathematical properties of M, an efficient (parallel) implementation, and applications in regularization and smoothing, e.g. in image processing or in the solution of PDEs (e.g. nuclear reactor applications).
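The norm minimization min || AM - I || decouples into one small least-squares problem per column of M, which is what makes this approach attractive for parallelization. A hedged dense-matrix sketch (using the sparsity pattern of A itself as the prescribed pattern):

```python
import numpy as np

def spai(A):
    """Sparse approximate inverse: each column m_k of M solves
    min || A m_k - e_k ||_2 restricted to the pattern of A's column k."""
    n = A.shape[0]
    M = np.zeros_like(A)
    for k in range(n):
        pattern = np.nonzero(A[:, k])[0]      # allowed nonzeros of column k
        e_k = np.zeros(n)
        e_k[k] = 1.0
        m_k, *_ = np.linalg.lstsq(A[:, pattern], e_k, rcond=None)
        M[pattern, k] = m_k
    return M

# 1D Laplacian as a test matrix
n = 8
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M = spai(A)
```

Since the columns are independent, they can be computed fully in parallel; a production code would of course store A and M in sparse formats and solve the small problems cache-efficiently, as described above.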


Prerequisites: Numerical Programming or Numerics, Linear Algebra, possibly MPI

Contact: Univ.-Prof. Dr. Thomas Huckle, Jürgen Bräckle

HPC Eigenvalue Computation

Abstract: The solution of dense symmetric eigenvalue problems is a crucial step in many simulations in science and engineering; often, solving a series of eigenvalue problems is the most expensive step in these simulations. Therefore, powerful and highly scalable parallel algorithms are needed for this task. Together with our collaborators from other universities and several Max Planck Institutes, we are working on ELPA, a highly scalable library for solving the eigenvalue problem for dense symmetric matrices. Our task is to develop, implement, and parallelize the algorithms for ELPA.
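As a serial toy reference for what ELPA does at scale, the snippet below solves the dense symmetric eigenproblem A v = lambda v with NumPy; np.linalg.eigh stands in for the tridiagonalization / solve / back-transform pipeline that ELPA parallelizes across thousands of cores:

```python
import numpy as np

rng = np.random.default_rng(42)
B = rng.standard_normal((6, 6))
A = 0.5 * (B + B.T)                  # symmetrize the random matrix

# eigh exploits symmetry and returns eigenvalues in ascending order
eigenvalues, eigenvectors = np.linalg.eigh(A)

# check the decomposition: A V = V diag(lambda), with orthonormal V
residual = np.linalg.norm(A @ eigenvectors - eigenvectors * eigenvalues)
```

The residual and orthonormality checks shown here are exactly the correctness criteria one also verifies for the distributed-memory implementation.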

Keywords: Parallel Numerics, Eigenvalue problem, HPC, Performance Optimization, MPI

Possible Projects:

Prerequisites: Numerical Programming or Numerics, Linear Algebra, Parallel Numerics, MPI

Contact: Univ.-Prof. Dr. Thomas Huckle, Michael Rippl


Molecular Dynamics Simulations

Molecular dynamics simulations deal with the simulation of materials (or mixtures of materials) on the molecular level. Even for a small simulation, the number of particles can reach millions; therefore, the computational effort is immense. Additional challenges arise if the molecules are not evenly distributed over the simulation domain. Efficient algorithms and parallelisation strategies are thus necessary to deal with this computational challenge. Student work in this field focuses mainly on data structures, parallelisation, and vectorization, but can also be in any other field related to molecular dynamics.


Examples for challenges in this field are:

  • Load-balanced parallelization using space-filling curves and KD-trees
  • Adaptive data structures for finding neighbour particles
  • Improved parallelization schemes
  • Efficient vectorization schemes
  • Hybrid MPI+OpenMP parallelization
  • Auto-Tuning
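The neighbour-search item above is classically solved with linked cells; here is an illustrative 2D Python sketch (not one of our codes): particles are sorted into cells with edge length equal to the cutoff radius, so each particle only has to be compared with particles in its own and the neighbouring cells, turning the O(N^2) all-pairs search into an O(N) one for homogeneous systems.

```python
import itertools
from collections import defaultdict

def neighbour_pairs(positions, cutoff):
    """Return all pairs (i, j), i < j, closer than `cutoff` (2D points)."""
    cells = defaultdict(list)
    for i, (x, y) in enumerate(positions):
        cells[(int(x // cutoff), int(y // cutoff))].append(i)
    pairs = set()
    for (cx, cy), members in cells.items():
        for dx, dy in itertools.product((-1, 0, 1), repeat=2):
            neighbours = cells.get((cx + dx, cy + dy), ())
            for i in members:
                for j in neighbours:
                    if i < j:
                        d2 = sum((a - b) ** 2
                                 for a, b in zip(positions[i], positions[j]))
                        if d2 <= cutoff ** 2:
                            pairs.add((i, j))
    return pairs

particles = [(0.1, 0.1), (0.2, 0.2), (3.0, 3.0)]
```

The distant particle is never even compared against the close pair, since its cell is not adjacent to theirs; 3D, periodic boundaries, and vectorization are exactly where the student projects listed above come in.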

Further challenges arise when so-called "long-range" interactions are present in the simulation. In such cases, all interactions in the system need to be taken into account, leading to O(N^2) behavior. A different class of methods, such as the Barnes-Hut method or the Fast Multipole Method, needs to be applied to solve the problem in O(N log N) or O(N) time.



Prerequisites:

  • C/C++
  • Linux


Computational Fluid Dynamics (CFD)


The simulation of various flow phenomena and the development of software for this purpose are important problems. Currently, we mainly collaborate with the Gesellschaft für Anlagen- und Reaktorsicherheit (GRS) on thermohydraulic problems.


Multi-Physics and Coupled Simulations


For further information, please go to Student Projects with preCICE.



Traffic Simulation

Traffic plays an increasingly important role today. Mobility has become an integral part of our daily lives, and large quantities of goods have to be transported around the globe every day. This goes hand in hand with a growing traffic volume, whose consequences everyone experiences in the form of congestion, pollution, and noise.

Traffic simulations are a suitable and inexpensive means to identify bottlenecks and problems and to develop solution strategies. Simulations can be carried out on the microscopic level, where individual road users and their individual behaviour are modeled, or on the macroscopic level, where only flows and average quantities are studied.

Topics include:

  • Microscopic simulations using cellular automata
  • Macroscopic simulations
  • Parallelization strategies for different simulation models
  • Graph partitioning and hierarchization for traffic networks
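A minimal sketch of the microscopic, cellular-automaton approach is the classic Nagel-Schreckenberg model on a circular road (illustrative only): each car accelerates by one, brakes to the gap to its predecessor, dawdles with some probability, and then moves; all cars are updated in parallel.

```python
import random

def step(positions, velocities, road_length, v_max=5, p_dawdle=0.3, rng=None):
    """One Nagel-Schreckenberg update; positions must be sorted ascending
    (for this short demo we avoid wrap-around reordering)."""
    rng = rng or random.Random(0)
    n = len(positions)
    new_pos, new_vel = [], []
    for i in range(n):
        # gap to the car ahead (circular road)
        gap = (positions[(i + 1) % n] - positions[i] - 1) % road_length
        v = min(velocities[i] + 1, v_max, gap)   # accelerate, then brake
        if v > 0 and rng.random() < p_dawdle:    # random dawdling
            v -= 1
        new_vel.append(v)
        new_pos.append((positions[i] + v) % road_length)
    return new_pos, new_vel

# a single car on an empty road accelerates up to v_max
pos, vel = [0], [0]
for _ in range(10):
    pos, vel = step(pos, vel, road_length=100, p_dawdle=0.0)

# two cars: the gap rule prevents collisions
pos2, vel2 = [0, 2], [0, 0]
for _ in range(3):
    pos2, vel2 = step(pos2, vel2, road_length=10, p_dawdle=0.0)
```

Despite its simplicity, this model reproduces spontaneous traffic jams once dawdling is switched on; the parallelization topics above concern exactly such update rules on large road networks.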

Note: at the moment we have no running project in this area (for staffing reasons). However, we continue to offer courses (seminars, PSEs, ...) - see Teaching for details.

Contact: Univ.-Prof. Dr. Hans-Joachim Bungartz

Reduced Basis Methods

We consider partial differential equations that depend on various parameters (e.g. material properties). Usually, the solution for an arbitrary but admissible parameter configuration is computed in a very general (e.g. finite element) space. Often, however, the "effective solutions" are not distributed uniformly over this whole space, but lie on a low-dimensional, smooth manifold. Reduced basis methods therefore compute a solution in a problem-dependent, low-dimensional subspace, which can lead to a substantial gain in efficiency.

In addition, the procedure is split into an offline and an online phase. First, the problem-dependent subspace is determined in the offline phase (i.e. the reduced basis is constructed); afterwards, in the online phase, reduced basis solutions are computed for arbitrary (but admissible) parameters.
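As an illustration of the offline/online idea, here is a hedged Python sketch using proper orthogonal decomposition (POD); the "solutions" are an assumed toy family sin(mu * x), not an actual PDE solver:

```python
import numpy as np

def pod_basis(snapshots, tol=1e-10):
    """Offline phase: snapshots is an (n_dofs, n_snapshots) matrix;
    returns an orthonormal (n_dofs, r) basis of the dominant modes."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    r = int(np.sum(s > tol * s[0]))    # drop negligible singular values
    return U[:, :r]

# collect solution snapshots for a few parameter values mu
x = np.linspace(0.0, np.pi, 50)
snapshots = np.column_stack([np.sin(mu * x) for mu in (1.0, 1.1, 1.2, 1.3)])
V = pod_basis(snapshots)

# online phase (idea): approximate a new parameter's solution in span(V)
u_new = np.sin(1.15 * x)
u_rb = V @ (V.T @ u_new)
```

Because the solution manifold is smooth in mu, a handful of snapshots already approximates unseen parameter values well; in a real reduced basis method, the online phase solves a small projected system instead of projecting a known solution.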


Many questions about reduced basis methods are still open, in particular their extension to new problem classes. We study reduced basis methods in connection with sparse grids: where can sparse grids be employed here, and under which circumstances, or for which problems, do they pay off?

Prerequisites are basic knowledge of numerics and programming experience. Knowledge of sparse grids and reduced basis methods can be acquired during the project.

Contact: Kilian Röhner

Computational Seismology


Earthquake simulation of the L'Aquila event (in collaboration with S. Wenk, LMU).

Highly resolved dynamic rupture processes and coupled seismic wave simulations are a "computational grand challenge" in seismic modeling. The required resolution in space and time demands an immense amount of computing power, up to millions of cores. Large-scale, extensively optimized software is required for an acceptable time to solution and for energy efficiency. In close collaboration with multiple groups, we develop strategies and solutions for high-performance infrastructures.

Students are very welcome and will work on state-of-the-art research in the fields involved: high-performance computing, numerics, and seismology.

Contact: Lukas Krenz, Sebastian Wolf, Leonhard Rannabauer