Accepted Minisymposia

Proposals for Minisymposia (including your name, affiliation, MS title and a short minisymposium description) should be sent via e-mail to the Conference Secretariat at info@uncecomp.org.
The MS code, which is required for the submission of an Abstract to the relevant MS, is provided by the MS Organizers.
Minisymposium 1
"Uncertainty Quantification in Vibration based Monitoring and Structural Dynamics Simulations"
Eleni Chatzi (ETH Zürich, Switzerland)
Manolis Chatzis (The University of Oxford, United Kingdom)
Vasilis Dertimanis (ETH Zürich, Switzerland)
Geert Lombaert (KU Leuven, Belgium)
Costas Papadimitriou (University of Thessaly, Greece)
chatzi@ibk.baug.ethz.ch
manolis.chatzis@eng.ox.ac.uk
v.derti@ibk.baug.ethz.ch
geert.lombaert@kuleuven.be
costasp@uth.gr

Due to factors related to manufacturing or construction processes, ageing, loading, environmental and boundary conditions, measurement errors, modeling assumptions and inefficiencies, and numerous others, almost every engineering system is characterized by uncertainty. The propagation of uncertainty through the system gives rise to corresponding complexities during simulation of its structural response, as well as during its characterization based on experimental data. Consequently, only a limited degree of confidence can be attributed to the behavior, reliability and safety of structural systems, in particular throughout their life cycle. It is therefore imperative to develop models and processes able to encompass the aforementioned uncertainties.

This mini-symposium deals with uncertainty quantification and propagation methods applicable to the simulation and identification of complex engineering systems. It covers theoretical and computational issues, applications in structural dynamics, earthquake engineering, mechanical and aerospace engineering, as well as other related engineering disciplines. Topics relevant to the session include: dynamics of structural systems; structural health monitoring methods for damage and reliability prognosis; theoretical and experimental system identification for systems with uncertainty; uncertainty quantification in model selection and parameter estimation; stochastic simulation techniques for state estimation and model class selection; structural prognosis techniques; and updating response and reliability predictions using data. Papers dealing with experimental investigation and verification of theories are especially welcome.
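
To give a flavour of the identification problems in scope, here is a minimal Python sketch (all values hypothetical) that recovers a stiffness parameter, together with its scatter, from noisy natural-frequency measurements of a single-degree-of-freedom oscillator:

    # Minimal sketch: stiffness identification from noisy natural frequencies
    # of a single-degree-of-freedom oscillator (all values hypothetical).
    import numpy as np

    rng = np.random.default_rng(0)
    m = 1000.0                  # known mass [kg] (assumed)
    k_true = 4.0e6              # "unknown" stiffness [N/m] used to synthesise data
    f_true = np.sqrt(k_true / m) / (2 * np.pi)

    # Ten noisy frequency measurements (1% noise is an assumption of this sketch)
    f_meas = f_true * (1 + 0.01 * rng.standard_normal(10))

    # Invert f = sqrt(k/m)/(2*pi) for each measurement and report the scatter
    k_samples = m * (2 * np.pi * f_meas) ** 2
    print("k estimate: %.3e +/- %.1e N/m" % (k_samples.mean(), k_samples.std(ddof=1)))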

Minisymposium 2
"Learning from small data: Data-driven methods and machine learning for uncertainty quantification in engineering applications"
Dimitrios G. Giovanis (Johns Hopkins University, United States)
Audrey Olivier (University of Southern California, United States)
Michael Shields (Johns Hopkins University, United States)
Lori Graham-Brady (Johns Hopkins University, United States)
dgiovan1@jhu.edu
audreyol@usc.edu
michael.shields@jhu.edu
lori@jhu.edu

Over the past several years, data-driven modeling has emerged as a means to address the challenges of quantitative predictive modeling under uncertainty. In this paradigm, observational and experimental data are used to calibrate and validate computational models, which may include hundreds of uncertain parameters. However, the performance of data-driven approaches depends crucially on the amount of available data and, despite the numerous public databases, the volume of “useful” experimental data for complex physical and engineering systems is limited. To this end, machine learning algorithms may be utilized in conjunction with uncertainty quantification (UQ) methods to extract useful information from small and incomplete data sets and enhance the performance of predictive modeling in engineering applications. This MS aims to bring together leading experts in the fields of uncertainty quantification and machine learning to highlight recent advances at their intersection that will enhance our ability to develop predictive models for real-world applications when data sets are small.

Areas of interest include, but are not limited to:

  • Machine learning for forward and inverse UQ for large-scale, high-dimensional problems
  • Quantification of uncertainties in training data-driven models from small data (Bayesian neural networks, Gaussian processes, etc.; a minimal sketch follows this list)
  • Active learning and optimal experimental design using machine learning methods
  • Imprecise probabilities for UQ and reliability analysis
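
The sketch referenced above, using scikit-learn purely for convenience: a Gaussian process is fitted to six toy observations, and its predictive standard deviation quantifies the epistemic uncertainty that remains with so little data (all data and kernel choices are hypothetical):

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # Six "expensive" observations of a toy response
    X = np.linspace(0.0, 1.0, 6)[:, None]
    y = np.sin(2 * np.pi * X).ravel()

    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gp.fit(X, y)

    # Predictive mean and standard deviation on unseen inputs
    Xq = np.linspace(0.0, 1.0, 200)[:, None]
    mean, std = gp.predict(Xq, return_std=True)
    print("max predictive std:", std.max())
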
Minisymposium 3
"Bayesian Computation Methods for Inference in Science and Engineering"
Oindrila Kanjilal (TU Munich, Germany)
Iason Papaioannou (TU Munich, Germany)
Daniel Straub (TU Munich, Germany)
Geert Lombaert (KU Leuven, Belgium)
Costas Papadimitriou (University of Thessaly, Greece)
oindrila.kanjilal@tum.de
iason.papaioannou@tum.de
straub@tum.de
geert.lombaert@kuleuven.be
costasp@uth.gr

Key words: Bayesian inference, computational models, reliability updating, sampling-based inference, machine learning

Computational models are widely used in science and engineering to predict the response of complex physical and mechanical systems. The parameters of these models often cannot be determined uniquely, as they are affected by uncertainties. Bayesian inference provides a powerful tool for inferring the uncertain model parameters from data and other available information. Methods for Bayesian inference most commonly rely on sampling algorithms to explore the outcome space of the uncertain parameters. For problems in which evaluation of the computational model is costly, such exploration can require enormous computational effort. Hence, recent research focuses on the development of efficient Bayesian inference methods based on novel mathematical formulations or advanced sampling techniques. In this mini-symposium, we invite contributions that address methodological developments or novel applications of Bayesian inference for computational models of engineering systems. In this respect, topics of interest include, but are not limited to, the following: advanced sampling methods for Bayesian analysis, inference in the presence of spatial/temporal dependence of uncertainty, structural identification, recursive Bayesian inference, virtual sensing, hierarchical Bayesian models, optimal experimental design, Bayesian reliability updating, Bayesian inference with surrogate models, and accelerated Bayesian inference using machine learning and high-performance computing.
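
As a minimal concrete instance of sampling-based inference, the following random-walk Metropolis sketch infers one parameter of a toy model (Python; the model, noise level and tuning are all hypothetical). It also makes the cost issue visible: every step requires a fresh model evaluation.

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy calibration problem: infer theta from noisy observations of a model g
    g = lambda t: np.exp(t)                 # stand-in for an expensive model
    sigma = 0.1                             # known noise level (assumed)
    y_obs = g(0.5) + sigma * rng.standard_normal(20)

    def log_post(t):                        # standard-normal prior + Gaussian likelihood
        return -0.5 * t ** 2 - 0.5 * np.sum((y_obs - g(t)) ** 2) / sigma ** 2

    # Random-walk Metropolis: one model evaluation per proposed sample
    chain, t = [], 0.0
    lp = log_post(t)
    for _ in range(5000):
        t_prop = t + 0.2 * rng.standard_normal()
        lp_prop = log_post(t_prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            t, lp = t_prop, lp_prop
        chain.append(t)
    print("posterior mean of theta:", np.mean(chain[1000:]))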

Minisymposium 4
"Reliability Analysis and Rare Event Simulation"
Max Ehre (TU Munich, Germany)
Iason Papaioannou (TU Munich, Germany)
Elisabeth Ullmann (TU Munich, Germany)
Michael Shields (Johns Hopkins University, United States)
max.ehre@tum.de
iason.papaioannou@tum.de
elisabeth.ullmann@tum.de
michael.shields@jhu.edu

Key words: reliability analysis, rare events, reliability-based optimization, reliability updating, reliability sensitivity analysis.

Model-based quantification of the probability of failure is essential for the development, design and assessment of engineering systems. Challenges in computing the probability of failure are associated with non-linear system behaviour, large numbers of uncertain parameters, and failure/rare events inducing multiple, disconnected failure domains. We invite talks discussing efficient computational methods for simulating rare events and quantifying failure probabilities based on sampling, surrogate modelling and machine learning, as well as other approximation approaches. Relevant applications of these techniques include the assessment of static and dynamic engineering systems, reliability-based design optimization and reliability-oriented sensitivity analysis of such systems, as well as Bayesian updating of failure probabilities.
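
For reference, the baseline that all such methods aim to beat is crude Monte Carlo, whose cost grows rapidly as the event becomes rarer; a minimal Python sketch with a toy linear limit state (reliability index 3, so the exact failure probability is Phi(-3), about 1.35e-3):

    import numpy as np

    rng = np.random.default_rng(2)
    N = 10 ** 6

    # Failure when g(X) <= 0, with X ~ N(0, I) in two dimensions
    X = rng.standard_normal((N, 2))
    g = 3.0 - (X[:, 0] + X[:, 1]) / np.sqrt(2.0)   # linear limit state, beta = 3
    pf = np.mean(g <= 0)

    # The coefficient of variation shows why rare events are costly for plain MC
    cov = np.sqrt((1 - pf) / (pf * N))
    print("pf ~ %.2e (exact ~1.35e-3), CoV ~ %.2f" % (pf, cov))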

Minisymposium 5
"Safe Residual Life assessment under limited and heterogeneous information"
Alice Cicirello (TU Delft, Netherlands)
Edoardo Patelli (University of Strathclyde, United Kingdom)
a.cicirello@tudelft.nl
edoardo.patelli@strath.ac.uk

The health status of structures of critical importance for our society is often tracked using sensor data, remote sensing, and inspections, in order to make inference on the structure's Safe Residual Life. To make such inference, the data must be informative, reliable, and accurate. However, there are several challenges. Very few critical structures are continuously monitored with appropriate instrumentation. Measurements are often of low quality and collected at insufficient temporal and spatial resolution. Moreover, the information that is readily available is heterogeneous, since it includes physics-based models of different fidelity describing phenomena occurring at different time scales, inspection reports, and local testing. Statistical model updating under limited and heterogeneous information is one of the key bottlenecks undermining the Safe Residual Life assessment of critical structures.

This minisymposium welcomes contributions showcasing novel advanced techniques and industrial applications (including civil, mechanical, aerospace, manufacturing, nuclear and other related engineering disciplines) addressing these challenges.

Minisymposium 6
"Surrogate modelling and data-driven approaches for uncertainty quantification"
Jean-Marc Bourinet (SIGMA-Clermont, France)
Michael Shields (Johns Hopkins University, United States)
Bruno Sudret (ETH Zurich, Switzerland)
Alexandros Taflanidis (University of Notre Dame, United States)
bourinet@sigma-clermont.fr
michael.shields@jhu.edu
sudret@ethz.ch
a.taflanidis@nd.edu

Computer simulation has become central to all fields of engineering and applied science over the last decades. High-fidelity simulators such as finite element models are commonly used all along the design process of complex systems. However, using these models directly to optimize designs, to assess the impact of uncertainties on reliability and robustness, to calibrate models against experimental data or to carry out global sensitivity analysis is usually not feasible due to computational costs, even with high-performance computing resources. For this reason, surrogate modelling has gained major interest in recent years. Surrogate models can be defined as fast-to-evaluate analytical functions that approximate the input/output mapping of the original simulator. They are trained using (limited) data simulated with the complex high-fidelity model.
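
A deliberately minimal illustration of the idea, with a cheap stand-in for the high-fidelity simulator (everything here is hypothetical): train a polynomial surrogate on a handful of simulator runs, then check how well it reproduces the input/output mapping.

    import numpy as np

    # Stand-in for an expensive high-fidelity simulator
    expensive_model = lambda x: np.sin(3 * x) + 0.5 * x

    # Train a fast-to-evaluate polynomial surrogate on 8 simulator runs
    x_train = np.linspace(-1.0, 1.0, 8)
    y_train = expensive_model(x_train)
    surrogate = np.poly1d(np.polyfit(x_train, y_train, deg=5))

    # Validate the surrogate on unseen inputs
    x_test = np.linspace(-1.0, 1.0, 400)
    err = np.max(np.abs(surrogate(x_test) - expensive_model(x_test)))
    print("max surrogate error on [-1, 1]:", err)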

The aim of this mini-symposium is to highlight new research trends in the field of surrogate modelling. Contributions on established techniques such as (sparse) polynomial chaos expansions, Kriging, support vector regression, sparse grid interpolation, etc. are welcome, as are data-driven approaches stemming from machine learning, such as (deep) neural networks, random forests, etc. Topics of interest include, but are not limited to, active learning, ensemble modelling, and dimensionality reduction. Significant applications in new or emerging fields are also welcome.

Minisymposium 7
"Uncertainty Quantification, Data Assimilation, Machine Learning, and their integrations for effective predictive models ?"
Didier Lucor (Paris-Saclay university, France)
Rossella Arcucci, (Imperial College London (ICL), United Kingdom)
Bojana Rosic (University of Twente, Netherlands)
didier.lucor@lisn.upsaclay.fr
r.arcucci@imperial.ac.uk
b.rosic@utwente.nl

Uncertainty quantification (UQ) methods that aim to take into account model and parametric uncertainty lead to very challenging, high-dimensional forward predictive tasks. In this setting, inverse problems and data assimilation, which aim to calibrate large-scale complex computational models to reproduce indirect and noisy experimental measurements, also become high-dimensional and subject to uncertainties. To address the challenge of quantitative predictive modelling under uncertainty, many machine learning-based approaches have recently been proposed. Nevertheless, purely data-driven approaches are often out of reach, as it remains challenging to simulate many real-world engineering systems in full detail. As uncertainty quantification techniques, data assimilation methods and deep learning algorithms share many aspects (including some mathematical foundations) of the exploration, integration and interaction of data with models, we believe that researchers at the forefront of these various data science topics should interact as much as possible.
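
Since data assimilation is one of the pillars here, the following minimal Python sketch shows a single stochastic ensemble Kalman filter analysis step with a linear observation operator; all dimensions and values are hypothetical:

    import numpy as np

    rng = np.random.default_rng(3)

    n, m, N = 4, 2, 100              # state dim, observation dim, ensemble size
    H = rng.standard_normal((m, n))  # linear observation operator (hypothetical)
    R = 0.1 * np.eye(m)              # observation-error covariance
    X = rng.standard_normal((n, N))  # forecast ensemble, one column per member
    y = rng.standard_normal(m)       # the measurement

    A = X - X.mean(axis=1, keepdims=True)        # ensemble anomalies
    P = A @ A.T / (N - 1)                        # sample forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R) # Kalman gain

    # Perturbed-observations update, applied member by member
    Y = y[:, None] + np.sqrt(0.1) * rng.standard_normal((m, N))
    Xa = X + K @ (Y - H @ X)
    print("analysis mean:", Xa.mean(axis=1))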

More specifically, this minisymposium aims to highlight recent theoretical and algorithmic advancements in any form of machine learning deployed in conjunction with uncertainty quantification and data assimilation techniques. The challenges include, but are not limited to: surrogate models and high-dimensional emulators, UQ for complex, coupled or multiscale systems, inverse and learning problems, Bayesian inference, optimization under uncertainty, physics-informed machine learning, prediction in the presence of model discrepancy, dynamical systems, reduced-order models and dimension reduction, multi-fidelity models, …

Minisymposium 9
"UQ and Data Assimilation with Sparse, Low-Rank Tensor, and Machine Learning Methods"
Hermann G. Matthies (TU Braunschweig, Germany)
Alexander Litvinenko (RWTH Aachen University, Germany)
Martin Eigel (WIAS Berlin, Germany)
h.matthies@tu-braunschweig.de
Litvinenko@uq.rwth-aachen.de
martin.eigel@wias-berlin.de

Key words: Uncertainty Quantification, Data Assimilation and Bayesian Inverse Analysis, Surrogate Modelling, Low-Rank Tensor Representations, Machine Learning, Model Reduction

This minisymposium deals with the impact of new advances in reduced-order models, low-rank tensor techniques and machine learning on uncertainty quantification, data assimilation, and Bayesian inversion. The discretisation of parametric or stochastic PDEs and ODEs leads to high-dimensional problems, motivating the development of efficient approximations of high-dimensional functions and Reduced Order Models (ROMs).
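
At the heart of both ROMs and low-rank tensor methods lies the truncated SVD, which gives the best rank-r approximation by the Eckart-Young theorem; a minimal Python sketch on a matrix of synthetic snapshots (in practice the columns would come from a PDE solver):

    import numpy as np

    rng = np.random.default_rng(4)

    # Snapshot matrix: columns are solution fields for different parameter values
    t = np.linspace(0.0, 1.0, 200)
    S = np.column_stack([np.sin((1 + mu) * np.pi * t)
                         for mu in rng.uniform(0, 1, 50)])

    # Truncated SVD: the optimal rank-r approximation of S
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    r = 5
    S_r = U[:, :r] * s[:r] @ Vt[:r, :]
    print("rank-%d relative error: %.2e"
          % (r, np.linalg.norm(S - S_r) / np.linalg.norm(S)))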

Contributions on the analysis of their accuracy and numerical performance, as well as examples of their use in applications from the engineering sciences (nonlinear mechanics, fluid mechanics, fluid-structure interaction, structural dynamics, etc.) and in data science problems, are strongly encouraged and welcome.

Minisymposium 10
"Uncertainty Quantification for Scientific Machine Learning"
Paris Perdikaris (University of Pennsylvania, United States)
M. Giselle Fernández-Godino (Lawrence Livermore National Laboratory, United States)
pgp@seas.upenn.edu
fernandez48@llnl.gov

Key words: Deep Learning, Machine Learning, Neural Operators, Computational Sciences, Uncertainty Quantification

Reduced-order models have been used for half a century to make predictions, recommendations, and decisions. In the past decade, advances in computational power have allowed deep learning techniques to extract information from data, greatly boosting tasks such as image classification, object detection, speech recognition, dimensionality reduction, and many other applications [1]. In recent years, operator learning techniques have emerged as a powerful tool for learning mappings between infinite-dimensional function spaces [2]. While the solution of a physics-based simulation can be understood, many machine learning models have been labeled black boxes that are difficult to interpret. Exciting advances such as physics-informed neural networks (PINNs) attempt to bridge the gap between the physics we know and the complicated black box [3]. Deep learning models can be much cheaper to evaluate than high-fidelity physical simulations, so this combination of continuum mechanics and deep learning can be a useful aid for solving complex problems such as design optimization, inverse problems, and uncertainty quantification. Techniques such as deep operator networks, the family of neural operator methods, and operator-valued kernel methods have shown promise in building fast surrogate models in very high-dimensional parameter spaces.

The objective of this mini-symposium is to discuss state-of-the-art machine learning advances in computational science and engineering applications in the context of uncertainty quantification, both from a methodological and an applications point of view. Quantifying errors and uncertainties in NN-based inference is more complicated than in traditional methods: in addition to the aleatoric uncertainty associated with noisy data, there is uncertainty due to limited data, as well as uncertainty due to NN hyperparameters, over-parametrization, optimization and sampling errors, and model misspecification. We will invite speakers who cover different aspects of UQ methods for scientific machine learning, including both Bayesian and ensemble methods.
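
Of the two families just mentioned, ensembles are the simplest to sketch: train the same network several times from different random initialisations and read the spread of the predictions as a (heuristic) measure of uncertainty. A minimal Python sketch on toy data, using scikit-learn purely for convenience:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(5)
    X = rng.uniform(-1, 1, (40, 1))
    y = np.sin(3 * X).ravel() + 0.05 * rng.standard_normal(40)

    # Deep ensemble: same architecture, different random initialisations
    nets = [MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000,
                         random_state=seed).fit(X, y) for seed in range(5)]

    Xq = np.linspace(-1.5, 1.5, 7)[:, None]   # includes an extrapolation region
    preds = np.stack([net.predict(Xq) for net in nets])
    print("ensemble mean:", preds.mean(axis=0).round(2))
    print("ensemble std :", preds.std(axis=0).round(2))  # grows away from the data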

REFERENCES

[1] LeCun, Yann, Yoshua Bengio, and Geoffrey Hinton. "Deep learning." Nature 521, no. 7553 (2015): 436-444.

[2] Lu, Lu, Pengzhan Jin, Guofei Pang, Zhongqiang Zhang, and George Em Karniadakis. "Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators." Nature Machine Intelligence 3 (2021): 218-229.

[3] Raissi, Maziar, Paris Perdikaris, and George Em Karniadakis. "Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations." Journal of Computational Physics 378 (2019): 686-707.

Minisymposium 11
"Uncertainty Quantification under limited data"
Alba Sofi (University Mediterranea of Reggio Calabria, Italy)
David Moens (KU Leuven, Belgium)
Edoardo Patelli (University of Strathclyde, United Kingdom)
Matthias Faes (TU Dortmund, Germany)
Michael Hanss (University of Stuttgart, Germany)
alba.sofi@unirc.it
david.moens@kuleuven.be
edoardo.patelli@strath.ac.uk
matthias.faes@tu-dortmund.de
michael.hanss@itm.uni-stuttgart.de

The ability to make decisions under limited data is becoming increasingly important in the context of modelling for engineering applications. Several approaches are currently emerging to perform Uncertainty Quantification in complex models, ranging from purely interval and fuzzy approaches, through polymorphic concepts, to advanced probabilistic schemes. The application of AI-inspired techniques is also becoming ever more popular in this context.

This mini-symposium focuses on novel developments of these techniques for the representation of uncertainty and on their application in advanced engineering modelling activities.

Researchers focusing on Uncertainty Quantification in numerical modelling for engineering applications with limited data are invited to submit an abstract to this mini-symposium. Relevant topics range from uncertainty propagation methodologies, inverse identification and quantification techniques to optimisation under uncertainty, as well as recent implementations and developments of Artificial Intelligence and Machine Learning for dealing with uncertainty.
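
As one concrete illustration of a purely interval approach, the vertex method propagates input intervals through a response function by evaluating every combination of interval endpoints; the minimal Python sketch below is valid only under the assumption that the response is monotonic in each input, and all cantilever numbers are hypothetical:

    from itertools import product

    # Tip deflection of a cantilever beam: P * L^3 / (3 * E * I)
    def g(E, L, P):
        I = 1.0e-6               # fixed second moment of area [m^4] (assumed)
        return P * L ** 3 / (3 * E * I)

    # Interval-valued inputs: Young's modulus, length, tip load
    bounds = {"E": (190e9, 210e9), "L": (1.9, 2.1), "P": (900.0, 1100.0)}

    # Vertex method: evaluate all 2^3 corners of the input box
    corners = [g(*v) for v in product(*bounds.values())]
    print("deflection interval: [%.4f, %.4f] m" % (min(corners), max(corners)))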

Minisymposium 12
"Critical components performance under deep uncertainty"
Alice Cicirello (TU Delft, Netherlands)
a.cicirello@tudelft.nl

Critical components of a system are those that, if they fail or become damaged, could catastrophically affect the performance of the entire system or compromise the safety of the end user. The information available at the design stage for predicting the performance of critical components is usually heterogeneous (measurements, observations, reports, experts' opinions) and limited because of: (i) limited previous knowledge and/or data from which to learn about future performance; (ii) limited knowledge of the variability resulting from the manufacturing process; and (iii) uncertainty in operating and environmental conditions. Moreover, once a component is manufactured, often very few tests can be carried out, and those are usually limited to small samples because of cost, time constraints, and the availability of appropriate testing strategies. These "deep uncertainties" must therefore be dealt with in order to assess the performance of critical components.

This minisymposium welcomes contributions showcasing novel advanced techniques and industrial applications (including civil, mechanical, aerospace, manufacturing, nuclear and other related engineering disciplines) focusing on the performance assessment of critical components under deep uncertainties. Of particular interest are contributions focusing on the characterization of uncertainties and variabilities, on the quantification of their effects on the manufacturing process, and on the performance of the manufactured critical component.

Minisymposium 13
"Software for Uncertainty Quantification"
Stefano Marelli (Chair of Risk, Safety & Uncertainty Quantification, ETH Zürich, Switzerland)
Edoardo Patelli (Civil and Environmental Engineering, University of Strathclyde, Glasgow, United Kingdom)
Dirk Pflüger (Simulation Software Engineering, University of Stuttgart, Germany)
marelli@ibk.baug.ethz.ch
edoardo.patelli@strath.ac.uk
Dirk.Pflueger@ipvs.uni-stuttgart.de

Uncertainty quantification (UQ) is a staple of engineering design, predictive modelling and, more generally, scientific and technological applications. Including uncertainty quantification in modelling workflows can be a technical challenge on many fronts, including non-traditional representations of uncertainty, surrogate models and machine learning, sensitivity analysis, model calibration, robust optimization, etc. Because of this large range of applications, the deployment and further diffusion of modern UQ techniques rely on the availability of proper software and libraries that researchers and practitioners can incorporate into their own workflows.

This mini-symposium aims to bring together leading and innovative players in the international UQ software scene, to foster discussion and the exchange of ideas between developers and prospective users, and to identify the needs of the UQ community and the best strategies to support it. Contributions are welcome on the following topics: non-intrusive UQ techniques, surrogate modelling, HPC in UQ, general-purpose UQ software, UQ in digital twins and AI, big-data and dimensionality-reduction techniques, and case studies and applications to real-scale industrial problems.

Minisymposium 14
"Grey-box modelling for uncertainty quantification"
David Moens (KU Leuven, Belgium)
Augustin Persoons (KU Leuven, Belgium)
Matthias Faes (TU Dortmund, Germany)
Enrico Zio (Politecnico di Milano, Italy)
George Stefanou (Aristotle university of Thessaloniki, Greece)
Matteo Broggi (Leibniz University Hannover, Germany)
david.moens@kuleuven.be
augustin.persoons@kuleuven.be
matthias.faes@tu-dortmund.de
enrico.zio@polimi.it
gstefanou@civil.auth.gr
broggi@irz.uni-hannover.de

The focus of this mini-symposium is on the development of grey-box modelling approaches for system reliability analysis and risk assessment. Grey-box approaches aim to combine the benefits of physics-based (white-box) and data-driven (black-box) models. Black-box models are fast and able to capture very complex behaviour from data, but their use is limited in the case of large uncertainties; physics-based models can offer great accuracy, but their computational burden often makes them unusable for non-deterministic approaches. Grey-box models integrate both in a variety of ways depending on data availability, black-box architecture, type of uncertainty, industrial application, and the problem to be solved (e.g. design optimization, process monitoring, sensitivity analysis, reliability analysis, risk assessment).
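
One common grey-box pattern, sketched below purely as an illustration with toy functions: keep the physics-based model and learn only its discrepancy from data with a black-box Gaussian process (scikit-learn is used for convenience; all functions and numbers are hypothetical):

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(6)

    # "True" system and a simplified physics-based (white-box) model of it
    truth = lambda x: x ** 2 + 0.3 * np.sin(5 * x)
    physics = lambda x: x ** 2            # misses the oscillatory behaviour

    # Grey-box: fit a black-box GP to the physics-data residual only
    X = rng.uniform(0, 1, (15, 1))
    y = truth(X).ravel() + 0.01 * rng.standard_normal(15)
    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel())
    gp.fit(X, y - physics(X).ravel())

    # Prediction = physics model + learned discrepancy
    Xq = np.linspace(0, 1, 5)[:, None]
    pred = physics(Xq).ravel() + gp.predict(Xq)
    print("grey-box max error:", np.abs(pred - truth(Xq).ravel()).max())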

The mini-symposium aims at gathering scientific and technical contributions on new developments and applications related to the combination of computationally intensive numerical simulation codes with surrogate models and/or data-driven black-box models.

Minisymposium 15
"Multiscale and Enhanced Methods for Randomly Structured Composite Materials"
George Stefanou (Aristotle University of Thessaloniki, Greece)
Victor Eremeyev (Gdańsk University of Technology, Poland & University of Cagliari, Italy)
Emanuele Reccia (University of Cagliari, Italy)
Patrizia Trovalusci (Sapienza University of Rome, Italy)
Marco Pingaro (Sapienza University of Rome, Italy)
gstefanou@civil.auth.gr
victor.eremeev@unica.it
emanuele.reccia@unica.it
patrizia.trovalusci@uniroma1.it
marco.pingaro@uniroma1.it

In the last decades, the development of multiscale methods in a stochastic setting for uncertainty quantification and reliability analysis of composite materials and structures, as well as the integration of stochastic methods into a multiscale framework, have become an emerging research frontier.

This Mini-Symposium aims at presenting recent advances in the field of multiscale analysis and enhanced methods to study random heterogeneous media and metamaterials. In this respect, topics of interest include, but are not limited to:

  • Random field modeling of heterogeneous media (a minimal sketch follows this list)
  • Efficient simulation of random microstructure/morphology
  • Design / Optimization of composite structures considering uncertainty
  • Scale-dependent homogenization of random composites
  • Homogenization of materials with random microstructure as generalized continua
  • Finite element solution of multiscale stochastic partial differential equations
  • Stochastic finite element (SFE) analysis of composite materials and structures
  • Efficient algorithms to accelerate the SFE solution of multiscale problems
  • Virtual Elements Method for composites
  • Methods for improving the efficiency of Monte Carlo simulation
  • Large-scale applications
  • Peridynamics for the solution of multiphysics problems
  • Mechanical modeling of linear and non-linear metamaterials for structural and acoustic applications
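
To make the first topic above concrete, a minimal Python sketch of random field simulation via a discrete Karhunen-Loeve expansion, with an assumed exponential covariance (length scale, variance and grid are hypothetical):

    import numpy as np

    rng = np.random.default_rng(7)

    # 1D Gaussian random field with exponential covariance on [0, 1]
    x = np.linspace(0.0, 1.0, 200)
    ell, sigma = 0.2, 1.0                 # correlation length, standard deviation
    C = sigma ** 2 * np.exp(-np.abs(x[:, None] - x[None, :]) / ell)

    # Discrete Karhunen-Loeve: eigendecomposition of the covariance matrix
    lam, phi = np.linalg.eigh(C)          # eigenvalues in ascending order
    lam, phi = lam[::-1], phi[:, ::-1]    # reorder to descending
    M = 20                                # retained KL modes
    xi = rng.standard_normal(M)           # independent standard normal coefficients
    sample = phi[:, :M] @ (np.sqrt(lam[:M]) * xi)
    print("variance captured by %d modes: %.1f%%" % (M, 100 * lam[:M].sum() / lam.sum()))
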
Minisymposium 16
"Stochastic Finite Element Methods: Improvements and New Approaches."
Roger Ghanem (University of Southern California, United States)
Hadi Meidani (University of Illinois Urbana-Champaign, United States)
Ruda Zhang (University of Houston, United States)
ghanem@usc.edu
meidani@illinois.edu
rudaz@uh.edu

Uncertainties in mathematical models of science and engineering problems can often be formulated as random parameters in partial differential equations, specifying physical properties, geometry, loading, initial conditions, or boundary conditions. However, uncertainty quantification for the solution of stochastic partial differential equations (SPDEs), i.e., estimating statistics of the random output, is computationally much more challenging than solving the problem with everything assumed known. Stochastic finite element methods (SFEM) are methods for the fast, approximate solution of SPDEs, where the spatial domain is discretized via a finite element method. Classic methods for SFEM include polynomial chaos (PC), generalized PC, stochastic collocation (SC), their variants for piecewise approximation on the parameter space, and Monte Carlo sampling (MCS). Despite their success, challenges remain in improving their scalability and efficiency in the presence of complex physics, high-dimensional parameters, and parameter-sensitive solution maps.
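
For orientation, the core idea shared by the PC-based methods above fits in one line: the random solution is expanded over polynomials orthogonal with respect to the distribution of the random parameters. In the notation below (standard, though the symbols are our own choice), with \xi the vector of random parameters and \Psi_i the orthogonal polynomial basis,

    u(x,\xi) \;\approx\; \sum_{i=0}^{P} u_i(x)\,\Psi_i(\xi),
    \qquad
    u_i(x) \;=\; \frac{\mathbb{E}\left[\,u(x,\cdot)\,\Psi_i\,\right]}{\mathbb{E}\left[\Psi_i^{2}\right]},

so that, by orthogonality, the mean of the solution is u_0(x) and its variance is \sum_{i=1}^{P} u_i(x)^2\,\mathbb{E}[\Psi_i^2].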

This mini-symposium calls for recent research in stochastic finite elements. Contributions that improve the scalability and efficiency of existing approaches (see the previous paragraph) are welcome. The development of new methods is also highly encouraged, for example using data-driven approaches and machine learning or deep learning methods. Applications of existing methods to complex problems are also of interest.

Minisymposium 17
"Deterministic and data-driven methods in damage and fracture modeling"
Panos Pantidis (New York University Abu Dhabi, United Arab Emirates)
Mostafa Mobasher (New York University Abu Dhabi, United Arab Emirates)
Haim Waisman (Columbia University, United States)
pp2624@nyu.edu
mostafa.mobasher@nyu.edu
waisman@civil.columbia.edu

Key words: continuum damage, fracture, phase field, plasticity, uncertainty

Several novel computational modeling approaches are currently available for capturing damage growth and fracture propagation in materials and structures. However, many of the available methods encounter major challenges, such as elevated computational cost, the need for extensive calibration and validation, uncertainty quantification in the model predictions, and more. This has led to the emergence of research directions focused on enhanced predictive modeling of damage and fracture phenomena, including several classes of deterministic, data-driven, and stochastic methods, as well as combinations thereof. This minisymposium aims to provide a platform for discussion of recent advancements in these methods and to offer an outlook on current needs and future trends. Topics of interest include, but are not limited to, the following:

  • Numerical simulation techniques, such as continuum damage, phase-field, XFEM, cohesive zone methods, and peridynamics
  • Brittle, cohesive and ductile fracture of materials and structures, including material characterization and model validation
  • Machine learning and data-driven approaches to improve the accuracy of the prediction and the efficiency of the computational solution
  • Multi-scale modeling techniques to accelerate simulations across different length scales
  • Multi-physics considerations including fluid transport, thermo-plasticity, chemo-mechanical couplings, etc.
  • Continuous-to-discontinuous modeling formulations
  • Improvements in nonlinear numerical solution algorithms
  • Stochastic analysis and uncertainty quantification in the context of damage modeling

Minisymposium 18
"Advanced Monte-Carlo methods for uncertainty quantification and optimization of civil infrastructure"
Matt DeJong (University of California, Berkeley, United States)
Ziqi Wang (University of California, Berkeley, United States)
Jinyan Zhao (University of California, Berkeley, United States)
Sanjay Govindjee (University of California, Berkeley, United States)
dejong@berkeley.edu
ziqiwang@berkeley.edu
jinyan_zhao@berkeley.edu
s_g@berkeley.edu

The construction and operation of civil infrastructure systems involve numerous sources of uncertainty, including imperfections in manufacturing or construction processes, unknown existing conditions, uncertain loading and boundary conditions, and simplified or insufficient models, among others. The propagation of uncertainty through such large systems can be challenging due to heavy computational costs and the high-dimensional uncertainty of the associated models. Although surrogate modeling methods can reduce computational costs, and extensive research has enabled the creation of surrogate models from limited data, surrogate models may not be reliable for high-dimensional problems due to the “curse of dimensionality” and unavoidable model errors. In contrast, Monte Carlo methods are generally unbiased, with a convergence rate independent of the input dimension, and can be an appropriate choice for uncertainty quantification and optimization of large civil infrastructure systems. In recent decades, numerous methods have been proposed to advance Monte Carlo (MC), quasi-Monte Carlo (QMC), randomized QMC (RQMC) and others. Variants of QMC and RQMC have also been designed for the simulation of Markov chains, for function approximation, optimization, etc. Such advanced MC methods are extensively applied in computational finance, statistics, physics, and computer graphics, but there are relatively few applications to civil infrastructure systems.
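
As a hedged illustration of both points, the dimension-independent convergence of MC and the gains RQMC can bring, the following Python sketch compares plain MC with a scrambled Sobol sequence on a toy integrand whose exact mean is 1 (dimension and sample size are arbitrary choices):

    import numpy as np
    from scipy.stats import qmc

    rng = np.random.default_rng(8)
    d, N = 6, 2 ** 12                        # dimension, sample size (a power of 2)
    f = lambda U: np.prod(2.0 * U, axis=1)   # E[f] = 1 exactly on the unit cube

    mc = f(rng.uniform(size=(N, d))).mean()
    sobol = qmc.Sobol(d=d, scramble=True, seed=8).random(N)
    rqmc = f(sobol).mean()
    print("plain MC error       :", abs(mc - 1.0))
    print("scrambled Sobol error:", abs(rqmc - 1.0))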

This mini-symposium aims to bring together researchers working on MC methods in civil engineering (and related areas) to discuss state-of-the-art MC advances from both a methodological and an application point of view.

Areas of interest include, but are not limited to:

  • Monte Carlo, quasi-Monte Carlo, Markov chain Monte Carlo
  • Multi-level Monte Carlo
  • Sequential Monte Carlo and particle methods
  • Rare event simulation
  • Randomized quasi-Monte Carlo
  • Variance reduction methods
  • Applications of the above in the uncertainty quantification and optimization of civil infrastructure systems

Minisymposium 20
"VVUQ for Complex Multiphysics Problems"
Filipe S. Pereira (Fluid Dynamics and Solid Mechanics / XCP8 - Verification and Analysis, Los Alamos National Laboratory, Los Alamos, United States)
Jim Ferguson (XCP8 - Verification and Analysis, Los Alamos National Laboratory, Los Alamos, United States)
Aaron Koskelo (XCP8 - Verification and Analysis, Los Alamos National Laboratory, Los Alamos, United States)
Brandon Wilson (XCP8 - Verification and Analysis, Los Alamos National Laboratory, Los Alamos, United States)
fspereira@lanl.gov
jmferguson@lanl.gov
koskelo@lanl.gov
bwilson@lanl.gov

The numerical simulation of complex multiphysics problems is crucial to science and engineering. It enables cost reduction, the measurement of quantities inaccessible to experimental methods, and the setting of well-characterized initial conditions. However, to trust the resulting data it is imperative to perform predictive computations, i.e., simulations with a quantified and adequate level of uncertainty, so that the results can be applied in engineering projects and decisions. Such a goal requires verification, validation, and uncertainty quantification (VVUQ). This mini-symposium focuses on using and developing novel VVUQ techniques to assess the quality of simulations of complex multiphysics problems. The cases of interest include multi-material mixing, variable-density turbulent flow, material strength modeling, equations of state (EOS), and hydro algorithms. We strongly invite experimental contributions presenting new experiments and promoting discussion of i) challenges in numerical and experimental measurements and ii) validation experiments.
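
As one small, self-contained example from the verification side of VVUQ, the observed order of accuracy and a Richardson-extrapolated solution estimate can be computed from results on three systematically refined grids (the numbers below are hypothetical):

    import math

    # Solutions on fine, medium and coarse grids with constant refinement ratio r
    f1, f2, f3 = 0.97150, 0.97010, 0.96450
    r = 2.0

    p = math.log((f3 - f2) / (f2 - f1)) / math.log(r)   # observed order of accuracy
    f_exact = f1 + (f1 - f2) / (r ** p - 1)             # Richardson extrapolation
    print("observed order p = %.2f, extrapolated value = %.5f" % (p, f_exact))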

Minisymposium 21
"Inverse analysis for large-scale, computationally demanding problems in high-(stochastic) dimensions"
Wolfgang A. Wall (Technical University of Munich, Germany)
Phaedon-Stelios Koutsourelakis (Technical University of Munich, Germany)
wolfgang.a.wall@tum.de
p.s.koutsourelakis@tum.de

Accurate inverse analysis for large-scale numerical problems of actual relevance in engineering and the applied sciences is, unfortunately, still mostly prohibitive. Reasons include the limited number of high-fidelity solver runs that are feasible due to the high computational cost, and the inflexibility of trusted legacy codes, which are usually not differentiable or do not provide Jacobians with respect to the parameters of interest. This precludes the application of more efficient gradient-based optimization procedures, which are also at the heart of modern machine-learning approaches and probabilistic inference algorithms, especially for high-dimensional parameter spaces. Such high-dimensional spaces arise in most relevant application scenarios due to the multitude of unknown model parameters or spatiotemporal fields that need to be calibrated or optimized. Standard data-driven, surrogate-based approaches deteriorate in these cases due to the curse of dimensionality.
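
To fix ideas, a minimal gradient-free calibration sketch in Python: legacy_solver (a hypothetical name) is treated as a non-differentiable black box, so no Jacobian is ever requested; all models and numbers are illustrative only:

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(9)

    # Stand-in for a legacy code: callable, but no gradients available
    def legacy_solver(theta):
        return theta[1] * np.exp(-theta[0] * np.linspace(0, 1, 50))

    y_obs = legacy_solver([2.0, 1.5]) + 0.01 * rng.standard_normal(50)

    # Gradient-free calibration: only forward solver runs are needed
    misfit = lambda theta: np.sum((legacy_solver(theta) - y_obs) ** 2)
    res = minimize(misfit, x0=[1.0, 1.0], method="Nelder-Mead")
    print("calibrated parameters:", res.x)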

In this minisymposium, we want to discuss strategies to mitigate the challenges above. Specifically, we want to focus on methods that allow inverse analysis for a wider range of real-world engineering problems. We particularly encourage approaches that allow the integration of legacy codes that can be used in a gray-box fashion and do not necessitate a large, intrusive restructuring of the code. The focus should additionally be put on the reliability and accuracy of the proposed method. We invite all interested researchers and welcome contributions from different viewpoints, such as Bayesian inference, deterministic optimization, multi-fidelity methods, or machine learning-based approaches.

Minisymposium 22
"Data Driven methods and engineering software for next generation HPC applications"
Vissarion Papadopoulos (NTUA, Greece)
Kyriakos Giannakoglou (NTUA, Greece)
George Goumas (NTUA, Greece)
vpapado@central.ntua.gr
kgianna@central.ntua.gr
goumas@cslab.ece.ntua.gr

This MS is devoted to the presentation of advances in computational models and engineering software for next-generation, extreme-scale Computational Mechanics applications. As Machine Learning and AI techniques have shown considerable promise in boosting and automating computationally demanding engineering tasks such as uncertainty quantification, optimization and Bayesian inference of complex engineering systems, DDCOMP-EXA emphasizes the use of ML and AI techniques in the area of model and software development for pre-exascale and exascale high-performance computing. DDCOMP-EXA addresses both academic and commercial software innovation, covering applied science and engineering problems with emphasis on data-driven-enhanced mathematical models and algorithms that improve energy efficiency, robustness and scalability for extreme-scale computational mechanics analysis and optimization problems.

Topics of interest include, but are not limited to:

  • Sparse matrix computations - sparse kernels, storage formats, solution algorithms and preconditioners with enhanced scalability properties for extreme-scale computational mechanics problems (a minimal sketch follows this list)
  • Resilience for HPC systems - fault detection and prediction, monitoring and control, end-to-end data integrity, enabling infrastructure, and resilient computational algorithms
  • Data-driven and ML-enhanced mathematical models and applications
    * application-specific enhancements (e.g. solids, bioengineering, CFD, coupled problems, etc.)
    * applications for improving energy efficiency, robustness and scalability
    * usage in extreme-scale computational mechanics problems (e.g. multiscale analysis, UQ, Bayesian analysis, optimization, etc.)
  • ML-assisted Open Architectures for Exascale Supercomputers
  • ML and AI techniques in software development
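
The sketch referenced in the first topic above, in Python with SciPy: a sparse stiffness-like matrix stored in CSR format is solved with conjugate gradients, whose per-iteration cost touches only the stored nonzeros (the matrix and problem size are toy choices):

    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import cg

    # 1D Poisson stiffness matrix in CSR format, a classic sparse-kernel workload
    n = 2000
    A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
    b = np.ones(n)

    # Conjugate gradient solve (default tolerances)
    x, info = cg(A, b)
    print("converged:", info == 0, " residual:", np.linalg.norm(A @ x - b))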

Involved EU-funded Projects

This is a joint workshop covering the cross-cutting topic of using ML techniques in HPC application development, organized by the following EuroHPC projects:

  • DComEX - Data Driven Computational Mechanics at Exascale
  • Regale - Open Architecture for Exascale Supercomputers
  • exaFOAM - Exploitation of Exascale Systems for Open-Source Computational Fluid Dynamics by Mainstream Industry