SC²S Colloquium - May 25, 2016
|Date:||May 25, 2016|
|Time:||3:00 pm, s.t.|
Mauro Carminati: Influence of design parameters on turbulent pressure fluctuations in a variable geometry turbine
In modern diesel and gas engines, the development of highly efficient turbochargers has become one of the key technologies for meeting today's demands regarding power and emissions. In this context, the development of radial turbines with inlet guide vanes plays an important role.
On the one hand, the guide vanes provide homogeneous and well-defined inflow conditions to the turbine wheel, leading to an increase in efficiency. On the other hand, they pose challenges on the structural mechanics side: the disturbances they generate at the trailing edge cause pressure fluctuations acting on the turbine wheel. These excitations can lead to resonance if the excitation frequency matches an eigenfrequency of the structure, in which case the turbine wheel can be seriously damaged after a short operating time.
In this presentation, the sparse grid method is applied to a set of turbine design parameters in order to interpolate the pressure field on the wheel. This field is needed for a (future) structural analysis: the system must be able to withstand the stresses, even in possible resonance situations. The pressure interpolation results are presented and compared to classical steady RANS and URANS simulations to demonstrate the quality of the approximation. The work aims to show the potential of the sparse grid method for different applications in the turbomachinery field. Finally, an outlook on possible future goals and developments is given.
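The abstract does not specify the sparse grid variant used; as a minimal illustration of the underlying idea, the following sketch builds the 1D hierarchical hat-function basis on which sparse grids are based. A function is sampled at hierarchical points, each sample is converted into a hierarchical surplus (the part not already captured by coarser levels), and the interpolant is the surplus-weighted sum of hat functions. The function names and the test function `f` are illustrative, not from the talk.

```python
import numpy as np

def hat(x, level, index):
    """Hat basis function phi_{l,i} centered at x = index * 2^(-level) on [0, 1]."""
    h = 2.0 ** (-level)
    return np.maximum(0.0, 1.0 - np.abs(x / h - index))

def hierarchize(f, max_level):
    """Compute hierarchical surpluses of f on [0, 1] up to max_level."""
    surpluses = {}
    for level in range(1, max_level + 1):
        for index in range(1, 2 ** level, 2):  # odd indices are the new points per level
            x = index * 2.0 ** (-level)
            # surplus = function value minus the coarser-level interpolant at x
            coarse = sum(v * hat(x, l, i) for (l, i), v in surpluses.items())
            surpluses[(level, index)] = f(x) - coarse
    return surpluses

def evaluate(surpluses, x):
    """Evaluate the hierarchical interpolant at x."""
    return sum(v * hat(x, l, i) for (l, i), v in surpluses.items())

# usage: interpolate f(x) = x * (1 - x) up to level 3
f = lambda x: x * (1.0 - x)
surpluses = hierarchize(f, 3)
```

In higher dimensions, sparse grids keep only the tensor-product basis functions whose levels sum below a threshold, which is what makes interpolation over several design parameters, as in the talk, tractable.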
Maximilian Mumme: Implementation and Evaluation of Deep Residual Learning for Image Recognition
Image recognition has become an important field of research in the past decade. In this thesis we investigate the recently released paper "Deep Residual Learning for Image Recognition" by He et al. Nowadays the image recognition task is usually tackled using deep convolutional neural networks. However, the performance of these networks suffers from the degradation problem. This term refers to the counterintuitive effect that for models deeper than a certain threshold, both training and test error start to increase again. He et al. cope with this problem by introducing deep residual learning. We implement this novel concept using Google's machine learning library TensorFlow and reproduce the training results from the paper with our implementation.
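The core idea of residual learning is that a block learns a residual function F(x) and adds the input back via an identity shortcut, so the block computes F(x) + x. A minimal NumPy sketch of this structure is below; it replaces the paper's convolutions and batch normalization with dense layers for brevity, and all names and shapes are illustrative, not taken from the thesis implementation.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, W1, W2):
    """Residual block: relu(F(x) + x), where F is two dense layers.
    The identity shortcut (+ x) is what distinguishes it from a plain block."""
    f = relu(x @ W1) @ W2   # residual function F(x)
    return relu(f + x)      # identity shortcut around F

# tiny usage example with random weights
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
W1 = rng.standard_normal((8, 8)) * 0.1
W2 = rng.standard_normal((8, 8)) * 0.1
y = residual_block(x, W1, W2)
```

The shortcut makes the identity mapping easy to represent: with zero weights the block passes a non-negative input through unchanged, so stacking more blocks should not make training error worse, which is how the paper addresses the degradation problem.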