SC²S Colloquium - March 4, 2015
Date: March 4, 2015
Time: 4:00 pm, s.t.
Invited by: Dipl.-Inf. Oliver Meister, Valeriy Khakhutskyy
Jaclyn Rodrigues Monteiro: Comparison of Cartesian and Dynamically Adaptive Grids for the Parallel Simulation of Shallow Water Waves
The two codes SWE and sam(oa)² both solve the shallow water equations with a finite volume method, but SWE uses a Cartesian grid while sam(oa)² uses a dynamically adaptive grid. To compare the two, we implemented a benchmark based on a Riemann problem; after sam(oa)²'s grid refinement had been improved, it achieved smaller errors than SWE for the same number of cells. Furthermore, we compare a tsunami simulation to actual measurement data.
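To give a concrete picture of the kind of benchmark involved, the following sketch solves a 1D shallow water dam-break (Riemann) problem with a first-order finite volume scheme and Lax-Friedrichs fluxes. This is an illustrative toy under our own assumptions (domain, dam position, CFL factor), not the scheme used in SWE or sam(oa)²:

```python
import numpy as np

def dam_break_lf(n_cells=200, t_end=0.05, g=9.81):
    """Finite volume solve of a 1D shallow water dam-break problem
    on [0, 1] using Lax-Friedrichs fluxes. Returns the water height
    per cell at t_end. Illustrative only; parameters are our own."""
    dx = 1.0 / n_cells
    # Conserved variables: h (height), hu (momentum); dam at x = 0.5
    h = np.where(np.arange(n_cells) * dx < 0.5, 2.0, 1.0)
    hu = np.zeros(n_cells)
    t = 0.0
    while t < t_end:
        u = hu / h
        c = np.sqrt(g * h)                      # gravity wave speed
        dt = 0.4 * dx / np.max(np.abs(u) + c)   # CFL-limited time step
        dt = min(dt, t_end - t)
        # Physical fluxes F(q) = (hu, h u^2 + g h^2 / 2)
        f1, f2 = hu, hu * u + 0.5 * g * h * h
        # Lax-Friedrichs numerical flux at each interior interface
        a = dx / dt
        F1 = 0.5 * (f1[:-1] + f1[1:]) - 0.5 * a * (h[1:] - h[:-1])
        F2 = 0.5 * (f2[:-1] + f2[1:]) - 0.5 * a * (hu[1:] - hu[:-1])
        # Update interior cells; the outermost cells keep their values
        h[1:-1] -= dt / dx * (F1[1:] - F1[:-1])
        hu[1:-1] -= dt / dx * (F2[1:] - F2[:-1])
        t += dt
    return h
```

Refining this uniform grid is exactly where a dynamically adaptive code can spend cells near the moving wave fronts instead of everywhere.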
Carsten Uphoff: Parallel fitting of additive models
Additive models are well-established tools in statistics. While they have been around for several decades, little attention has been devoted to their computational feasibility for large datasets. In 2014, Khakhutskyy and Hegland introduced a novel algorithm based on BiCGSTAB, an iterative solver for linear systems of equations, and showed that it is well suited for a parallel implementation on a compute cluster. In this work, we investigate preconditioning their algorithm and improving the performance of several linear scatterplot smoothers, which are the building blocks of additive models: least squares regression, smoothing splines, and Nadaraya-Watson kernel smoothers.
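As an illustration of one of these building blocks, a Nadaraya-Watson kernel smoother can be sketched in a few lines of NumPy. This is the generic textbook formulation with a Gaussian kernel, not the implementation from the thesis, and the parameter names are our own:

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_query, bandwidth=0.1):
    """Nadaraya-Watson kernel regression with a Gaussian kernel:
    f(x) = sum_i K((x - x_i)/h) y_i / sum_i K((x - x_i)/h).
    Illustrative sketch of one scatterplot smoother."""
    d = (x_query[:, None] - x_train[None, :]) / bandwidth
    w = np.exp(-0.5 * d ** 2)             # Gaussian kernel weights
    return (w @ y_train) / w.sum(axis=1)  # weighted local average
```

Note that each evaluation touches every training point, which is why kernel smoothers count among the expensive smoothers where preconditioning pays off.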
We also introduce a flexible communication interface that allows additive models to be fitted in parallel on a variety of compute clusters. In addition, we present a communicator that uses only TCP/IP sockets and can easily be extended in future research, for instance to build fault-tolerant systems.
Our experiments indicate that additive models combined with smoothing splines show decent performance and good predictive power. For example, we compare them to previous work on predicting the unified Parkinson's disease rating scale and show that they improve the quality of the predictions. Furthermore, preconditioning can improve the performance of the algorithm when expensive methods such as kernel smoothers are used.
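The effect of preconditioning an iterative solver can be sketched with SciPy's BiCGSTAB on a hypothetical ill-conditioned system. The matrix and the Jacobi preconditioner below are our own stand-ins for illustration, not the systems or the preconditioner studied in the thesis:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, bicgstab

n = 500
# Hypothetical ill-conditioned, diagonally dominant tridiagonal system,
# standing in for the linear system of an expensive smoother.
main = np.linspace(3.0, 1e4, n)
off = np.ones(n - 1)
A = diags([main, off, off], [0, -1, 1], format="csr")
b = np.ones(n)

# Jacobi preconditioner: a cheap approximation of A^{-1} via its diagonal.
M = LinearOperator((n, n), matvec=lambda v: v / main)

# Count iterations with and without the preconditioner.
iters = {"plain": 0, "precond": 0}
x1, info1 = bicgstab(A, b,
    callback=lambda xk: iters.__setitem__("plain", iters["plain"] + 1))
x2, info2 = bicgstab(A, b, M=M,
    callback=lambda xk: iters.__setitem__("precond", iters["precond"] + 1))
```

With the diagonal scaled out, the preconditioned iteration typically needs far fewer matrix-vector products, which matters most when each product is as expensive as applying a kernel smoother.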