SC²S Colloquium - August 20, 2013

Date: August 20, 2013
Room: 02.07.023
Time: 3:00 pm, s.t.


Wolfgang Nicka: Clustering Algorithms for the Detection of Clusters in Gases

Simulation software nowadays helps to gain insight into many processes under scientific investigation; molecular simulations in particular have gained a strong foothold in nano-scale physics. To obtain data on clustering processes in several homogeneous one-component gases, and thus to improve the understanding of these processes, an input generator for the molecular simulation program MarDyn was implemented. It was used to generate various reproducible scenarios with the goal of observing nucleation behavior at certain temperatures. The compounds argon, ethane, and R152a were chosen; the choice follows the existing literature to allow for comparison. Simulations with these compounds were run to validate the expected behavior of the scenarios. Multiple clustering algorithms from the literature were evaluated against each other, and a graph clustering algorithm was implemented sequentially within MarDyn. Extensive simulations were conducted to validate the results against related data from the literature. Finally, a strategy to parallelize the implemented algorithm for more efficient use on multi-core computers was proposed.
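
The abstract does not spell out which graph clustering algorithm was implemented; a common formulation in the nucleation literature builds a neighbour graph from a distance cutoff (the Stillinger criterion) and takes clusters as its connected components. The following is a minimal C++ sketch of that idea, not MarDyn's actual implementation; all names, and the O(N²) pair loop, are illustrative assumptions.

```cpp
#include <cstddef>
#include <numeric>
#include <vector>

// Sketch: clusters are the connected components of the graph in which
// two molecules are linked when their distance is below a cutoff r_c
// (Stillinger criterion). Union-find with path compression keeps the
// merge step cheap. NOTE: illustrative only, not MarDyn code.
struct UnionFind {
    std::vector<std::size_t> parent;
    explicit UnionFind(std::size_t n) : parent(n) {
        std::iota(parent.begin(), parent.end(), 0);
    }
    std::size_t find(std::size_t i) {
        while (parent[i] != i) {
            parent[i] = parent[parent[i]];  // path compression
            i = parent[i];
        }
        return i;
    }
    void unite(std::size_t a, std::size_t b) { parent[find(a)] = find(b); }
};

struct Vec3 { double x, y, z; };

// Assign every molecule a cluster id. The O(N^2) pair loop is for
// clarity; a production code would reuse the simulation's linked-cell
// neighbour structure instead.
std::vector<std::size_t> detectClusters(const std::vector<Vec3>& pos,
                                        double cutoff) {
    UnionFind uf(pos.size());
    const double rc2 = cutoff * cutoff;
    for (std::size_t i = 0; i < pos.size(); ++i) {
        for (std::size_t j = i + 1; j < pos.size(); ++j) {
            const double dx = pos[i].x - pos[j].x;
            const double dy = pos[i].y - pos[j].y;
            const double dz = pos[i].z - pos[j].z;
            if (dx * dx + dy * dy + dz * dz < rc2) uf.unite(i, j);
        }
    }
    std::vector<std::size_t> id(pos.size());
    for (std::size_t i = 0; i < pos.size(); ++i) id[i] = uf.find(i);
    return id;
}
```

A pair loop of this kind is also a natural starting point for the parallelization strategy the talk proposes, since the distance tests are independent and only the union operations need synchronization.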


Jacob Jepsen: Boosting the explicit Calculation of the Laplacian on Sparse Grids using GPUs

In this work, we present a highly scalable approach for numerically solving the Black-Scholes equation in order to price European basket options. Our approach is based on a spatially adaptive sparse grid discretization with finite elements. The resulting linear system is solved by a conjugate gradient method that uses a parallel operator for applying the system matrix implicitly. We exploit the high-performance computing capabilities of GPU hardware, paired with a distributed-memory parallelization using MPI, and achieve very good scalability compared to the standard UpDown approach. Since we exploit all levels of the operator's parallelism, we obtain much better scalability than the UpDown scheme, which saturates at 30-40 Intel Sandy Bridge cores. Although our scalability is superior, we need a significant number of GPU nodes to achieve a lower total running time of the Black-Scholes solver. Our results show that, for typical problem sizes, our approach requires 4-16 NVIDIA K20X Kepler GPUs to be faster than the UpDown approach running on 16 Intel Sandy Bridge cores (one box), but scales up to 64-128 GPUs, whereas the UpDown scheme can only be scaled up to 2-3 boxes.
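
The key ingredient here is a conjugate gradient solver that never assembles the system matrix, only applies it through an operator. As a rough illustration of what "applying the system matrix implicitly" means, here is a minimal serial matrix-free CG sketch in which the parallel GPU/MPI Laplacian operator is reduced to a plain callback; all names are assumptions for illustration, not code from the thesis.

```cpp
#include <cmath>
#include <cstddef>
#include <functional>
#include <vector>

// Matrix-free conjugate gradient: the matrix A is never stored;
// applyA(x) returns A*x. In the work described above this operator is
// the parallel (GPU + MPI) sparse-grid Laplacian; here it is a callback.
using Vec = std::vector<double>;

static double dot(const Vec& a, const Vec& b) {
    double s = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
    return s;
}

Vec cgSolve(const std::function<Vec(const Vec&)>& applyA,
            const Vec& b, double tol, int maxIter) {
    Vec x(b.size(), 0.0);
    Vec r = b;      // residual r = b - A*x; x starts at zero
    Vec p = r;      // search direction
    double rr = dot(r, r);
    for (int k = 0; k < maxIter && std::sqrt(rr) > tol; ++k) {
        const Vec Ap = applyA(p);           // the only place A is used
        const double alpha = rr / dot(p, Ap);
        for (std::size_t i = 0; i < x.size(); ++i) {
            x[i] += alpha * p[i];
            r[i] -= alpha * Ap[i];
        }
        const double rrNew = dot(r, r);
        const double beta = rrNew / rr;
        for (std::size_t i = 0; i < p.size(); ++i)
            p[i] = r[i] + beta * p[i];
        rr = rrNew;
    }
    return x;
}
```

Because the entire solve reduces to repeated applications of the operator plus a few vector updates, every level of parallelism available inside the operator (here: across GPUs and MPI ranks) translates directly into solver scalability, which is the point of contrast with the UpDown scheme.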