AdaptLocalMOR2013


Revision as of 16:00, 27 July 2013


Workshop on Adaptive and Local Model Order Reduction with Machine Learning for Parametrized Systems

TUM Institute for Advanced Study
Focus Group HPC
19 September 2013
10:00 AM - 4:00 PM


Organizers: Hans-Joachim Bungartz (TUM), Karen Willcox (MIT), and Benjamin Peherstorfer (TUM)

Description

Most of today's simulations in computational science and engineering are run many times for different parameter configurations, e.g., in optimization, uncertainty quantification, and statistical inverse problems. To cope with the resulting increase in computational cost, model order reduction methods approximate the large-scale simulation with a low-cost surrogate by solving the problem not in a general, high-dimensional solution space but in a problem-dependent, low-dimensional subspace. Classical approaches construct a single subspace and use it for all parameter configurations and time steps. In contrast, adaptive and local model reduction methods construct multiple low-dimensional subspaces, each tailored to a particular region of characteristic system behavior. Machine learning techniques are a versatile way to detect these characteristic behaviors from data and to derive reduced-order models from the information obtained.

This workshop brings together scientists to discuss recent advances in adaptive and local model order reduction. It also wraps up an MIT-TUM research collaboration, funded by the MIT-Germany Seed Fund (MISTI), in which adaptive and local methods have been investigated. The workshop is organized together with the TUM Institute for Advanced Study and its focus group on High-Performance Computing (HPC).

Participation

If you would like to attend, please register in advance here: https://docs.google.com/forms/d/1UZyTAUDD2txheWXTJeh-RqeAdHdZmB7JygwjDDET1SQ/viewform

Program

Confirmed speakers are:

  • Felix Albrecht (U Münster)
  • Lihong Feng (MPI Magdeburg)
  • Bernard Haasdonk (U Stuttgart)
  • Markus Hegland (ANU)
  • Qifeng Liao (MIT)
  • Benjamin Stamm (UPMC)
  • Bernard Wieland (U Ulm)

A schedule will follow shortly.

Abstracts

To be announced.

Acknowledgement

The workshop is supported by the MIT-Germany Seed Fund and by the TUM-IAS focus group on high-performance computing.