SC²S Colloquium - April 20, 2016

{| class="wikitable"
|-
| '''Date:''' || April 20, 2016
|-
| '''Room:''' || 02.07.023
|-
| '''Time:''' || 3:00 pm, s.t.
|-
|}
== Karthikeya Sampa Subbarao: Large-scale elastic machine learning using sparse grid combination technique ==
Large-scale machine learning (ML) often requires flexible specification of ML algorithms for dynamic scaling, depending on the availability of resources. In this work, we introduce a grid-based regression technique that allows dynamic resource utilization and provides the capability to compute the best possible result at a given point in time. We are developing an approach to be implemented in a parallel framework with dynamic resource allocation. We also introduce a technique based on gradient boosting for the efficient selection of grids for refinement, and we employ iMPI to achieve true elasticity.

== Kevin Strauss: Convergence of the asynchronous and partially asynchronous ADMM methods for sparse grid model splitting ==

Due to the explosion in size and complexity of modern data sets in various application fields, it is increasingly important to be able to solve problems with a very large number of features. As a consequence, the use of parallel and distributed systems in the area of data mining has become desirable, if not necessary. The alternating direction method of multipliers (ADMM) is a promising algorithm in this context and has already been applied successfully to the sparse grid model. This work presents an adapted version of ADMM for sparse grids.

Totally and partially asynchronous algorithms have been shown to exhibit potentially advantageous properties compared to their synchronous counterparts. This work therefore provides concrete convergence criteria for the synchronous and totally asynchronous versions of ADMM. However, the experiments performed here show that asynchrony is detrimental to the convergence behaviour. Thus, a way to influence the convergence behaviour positively is investigated. To illustrate the effects of different splitting strategies for the basis functions of the sparse grid, the accompanying convergence behaviour is observed.
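The talk's code is not reproduced here. As a rough illustration of the kind of splitting ADMM performs, the following is a minimal synchronous consensus-ADMM sketch for a plain least-squares model split across blocks of data rows — a standard textbook stand-in, not the sparse grid model or splitting strategy from the talk; all names and parameters are illustrative:

```python
import numpy as np

def consensus_admm(A_blocks, b_blocks, rho=1.0, iters=500):
    """Synchronous consensus ADMM for min sum_i 0.5*||A_i x - b_i||^2.

    Each worker i holds (A_i, b_i) and a local copy x_i; agreement is
    enforced through the consensus variable z. Illustrative sketch only.
    """
    n = A_blocks[0].shape[1]
    N = len(A_blocks)
    z = np.zeros(n)
    xs = [np.zeros(n) for _ in range(N)]
    us = [np.zeros(n) for _ in range(N)]  # scaled dual variables
    # Pre-factor each local system (A_i^T A_i + rho*I) once.
    chols = [np.linalg.cholesky(A.T @ A + rho * np.eye(n)) for A in A_blocks]
    Atbs = [A.T @ b for A, b in zip(A_blocks, b_blocks)]
    for _ in range(iters):
        # x-update: each worker solves its local regularized system.
        for i in range(N):
            rhs = Atbs[i] + rho * (z - us[i])
            y = np.linalg.solve(chols[i], rhs)       # forward substitution
            xs[i] = np.linalg.solve(chols[i].T, y)   # backward substitution
        # z-update: with no regularizer on z this is a plain average.
        z = np.mean([x + u for x, u in zip(xs, us)], axis=0)
        # Dual update: accumulate each worker's consensus violation.
        for i in range(N):
            us[i] += xs[i] - z
    return z
```

In a totally asynchronous variant of the kind the talk studies, workers would perform their x-updates against a possibly stale copy of z instead of waiting for every block at each iteration, which is exactly where the convergence questions discussed above arise.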
 
  
  
 
[[Category:ShowComingUp]]
 
 
[[Category:news]]
 

Latest revision as of 09:51, 19 April 2016
