SC²S Colloquium - March 2, 2016

Date: March 2, 2016
Room: 02.07.023
Time: 3:00 pm, s.t.

Felix Scheffler: Design, Implementation and Evaluation of an Automatic Segmentation Algorithm in Live Cell Microscopy Imaging

In many computer vision applications, segmentation serves as a preprocessing step for subsequent stages of the image processing pipeline, and automatic segmentation has therefore been a major research focus for decades. Practical applications are as diverse as the set of techniques that have emerged over the years. Recently, increasing effort has been put into the research and development of automatic segmentation algorithms for live cell microscopy imaging, where segmentation is primarily used as a preprocessing step for subsequent classification, motion tracking or other cell studies. A major driver for pipeline automation is the sheer amount of imaging data, which makes manual segmentation of cell images inefficient and expensive. Major challenges for improving existing algorithms and developing new ones, especially for non-invasive imaging techniques such as phase-contrast microscopy, are poor contrast, low signal-to-noise ratios, partial cell transparency, complex intensity inhomogeneities, diffraction-related artifacts (e.g. halos), overlapping cells and broken boundaries. As a result, common techniques such as thresholding or a simple watershed transform often fail. It has been shown that hybrid approaches, i.e. combining the strengths of multiple techniques, are likely to give superior performance. In this work, the problem of automatic segmentation in live cell imaging is subdivided into a cell detection step based on unsupervised clustering of SIFT keypoints and a subsequent segmentation step based on minimising an energy functional using coupled level sets. SIFT has repeatedly proven to be one of the best blob detectors available. Similarly, level sets have been shown to be especially useful in "difficult" settings due to their topological adaptivity and their ability to combine image-based and contour-based terms. In addition, a major methodological issue is the lack of comparability and reproducibility between different algorithms, primarily due to the absence of a common database of ground-truth-segmented reference images as well as inconsistencies in evaluation techniques and metrics. Thus, a common interactive database and segmentation tool, LabelMe, is introduced and a common evaluation strategy is proposed. Both are then used to evaluate the algorithm designed as part of this work against a set of state-of-the-art algorithms.
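To illustrate the detection stage described above, the following is a minimal sketch of unsupervised SIFT keypoint clustering for cell detection. The library choices (OpenCV, scikit-learn), the DBSCAN clustering and its parameters are illustrative assumptions, not the implementation of the thesis.

import cv2
import numpy as np
from sklearn.cluster import DBSCAN

def detect_cell_candidates(image_path, eps=30.0, min_samples=5):
    # Load the microscopy image as 8-bit grayscale.
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)

    # SIFT responds to blob-like structures, which is why it serves here
    # as the detector for cell-like regions.
    sift = cv2.SIFT_create()
    keypoints = sift.detect(img, None)
    if not keypoints:
        return []

    # Cluster the keypoint coordinates; each dense group of keypoints is
    # interpreted as one cell candidate, isolated points are treated as noise.
    pts = np.array([kp.pt for kp in keypoints])
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(pts)

    # Return one centroid per cluster; in a two-stage pipeline such centroids
    # would seed the subsequent (coupled level set) segmentation step.
    return [pts[labels == c].mean(axis=0) for c in set(labels) if c != -1]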

Felix Müller: Validation of a macroscopic traffic model with a microscopic simulation

This master's thesis deals with a macroscopic model that considers traffic flow from several start points to several end points and minimizes the sum of the travel times of all cars. This model is validated against a microscopic model in order to show that its results are realistic. The macroscopic model has to be validated because, as a model, it is a simplification of reality. The microscopic model, in contrast, is more realistic: cars have accelerating and decelerating behavior, influence each other, and actually drive across the intersections. The microscopic model uses the fastest routes of the macroscopic model to route the cars from their start points to their end points; the new positions of the cars are then determined by a differential equation. To validate the macroscopic model, we compare the evacuation times and use the microscopic assumptions that do not hold in the macroscopic model to draw conclusions about the macroscopic model and to recommend improvements.
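As a rough illustration of the microscopic update step, the sketch below advances car positions along one pre-computed route by integrating a simple follow-the-leader differential equation with an explicit Euler step. The concrete velocity law and all parameter values are illustrative assumptions, not the model used in the thesis.

import numpy as np

V_MAX = 30.0   # assumed free-flow speed (m/s)
D_MIN = 7.0    # assumed minimal gap to the car in front (m)

def velocity(gap):
    # Speed grows with the gap to the leading car and is capped at the
    # free-flow speed; this reproduces accelerating/decelerating behavior
    # that the macroscopic model does not resolve.
    return max(0.0, min(V_MAX, (gap - D_MIN) / 2.0))

def step(positions, dt):
    # positions: sorted positions of the cars on one route (leader last).
    new_positions = positions.copy()
    for i in range(len(positions)):
        # The leading car drives freely; all others react to the gap ahead.
        gap = np.inf if i == len(positions) - 1 else positions[i + 1] - positions[i]
        # Explicit Euler step for dx_i/dt = v(gap_i).
        new_positions[i] += velocity(gap) * dt
    return new_positions

Iterating such a step until all cars have left the network would yield the evacuation time that is compared with the result of the macroscopic model.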