no code implementations • 22 Dec 2023 • Daniel Koutas, Elizabeth Bismut, Daniel Straub
We propose a novel Deep Reinforcement Learning (DRL) architecture for sequential decision processes under uncertainty, as encountered in inspection and maintenance (I&M) planning.
1 code implementation • 16 Jul 2023 • Giacomo Arcieri, Cyprien Hoelzl, Oliver Schwery, Daniel Straub, Konstantinos G. Papakonstantinou, Eleni Chatzi
The POMDP with uncertain parameters is then solved with deep RL techniques, incorporating the parameter distributions into the training process through domain randomization, in order to obtain policies that are robust to model uncertainty.
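A minimal sketch of the domain-randomization training loop described above; the parameter names, distributions, and ranges here are illustrative placeholders, not the paper's calibrated model:

```python
import random

def sample_pomdp_params(rng):
    # Illustrative uncertain POMDP parameters (hypothetical names and
    # ranges), standing in for distributions inferred from data.
    return {
        "degradation_rate": rng.uniform(0.05, 0.15),
        "obs_noise_std": rng.uniform(0.1, 0.5),
    }

def train_with_domain_randomization(n_episodes, seed=0):
    """Sample fresh model parameters for every episode, so the learned
    policy is optimized in expectation over the parameter distribution
    and is therefore robust to model uncertainty."""
    rng = random.Random(seed)
    sampled = []
    for _ in range(n_episodes):
        params = sample_pomdp_params(rng)
        # ... run one deep-RL episode in an environment built from
        # `params` and update the policy (omitted) ...
        sampled.append(params)
    return sampled

history = train_with_domain_randomization(3)
```

The key design point is that the environment is rebuilt from newly drawn parameters each episode, rather than trained against a single nominal model.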
1 code implementation • 15 Dec 2022 • Giacomo Arcieri, Cyprien Hoelzl, Oliver Schwery, Daniel Straub, Konstantinos G. Papakonstantinou, Eleni Chatzi
We present a framework to estimate POMDP transition and observation model parameters directly from available data, via Markov Chain Monte Carlo (MCMC) sampling of a Hidden Markov Model (HMM) conditioned on actions.
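The likelihood such an MCMC sampler evaluates for each candidate parameter set can be computed with an action-conditioned forward recursion; a minimal sketch with an illustrative two-state, two-action model (the numbers are placeholders, not the paper's):

```python
import numpy as np

def hmm_loglik(obs, actions, T, E, pi):
    """Log-likelihood of an observation sequence under an
    action-conditioned HMM, via the scaled forward recursion.
    T[a] is the transition matrix under action a, E[s, o] the
    probability of observing o in state s, pi the initial state
    distribution."""
    alpha = pi * E[:, obs[0]]
    ll = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for a, o in zip(actions, obs[1:]):
        alpha = (alpha @ T[a]) * E[:, o]
        c = alpha.sum()
        ll += np.log(c)
        alpha = alpha / c
    return ll

# Illustrative 2-state ("intact"/"damaged"), 2-action model.
T = {0: np.array([[0.9, 0.1], [0.0, 1.0]]),   # action 0: do nothing
     1: np.array([[1.0, 0.0], [0.8, 0.2]])}   # action 1: repair
E = np.array([[0.8, 0.2], [0.3, 0.7]])
pi = np.array([1.0, 0.0])
ll = hmm_loglik([0, 0, 1], [0, 0], T, E, pi)
```

Inside an MCMC loop, this log-likelihood (plus a prior) would score each proposed set of transition and observation parameters.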
no code implementations • 12 Mar 2021 • Antonios Kamariotis, Eleni Chatzi, Daniel Straub
We quantify this value by adapting the Bayesian decision analysis framework.
Applications • Systems and Control
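One standard quantity in a Bayesian decision analysis of this kind is the expected value of perfect information (EVPI); a minimal sketch with illustrative costs (the states, actions, and numbers are hypothetical, not the paper's):

```python
def evpi(prior, cost):
    """Expected value of perfect information: the drop in expected
    cost when the decision maker learns the true state before acting.
    cost[a][s] is the cost of taking action a in state s."""
    # expected cost of the best single action under the prior
    prior_optimal = min(sum(p * c for p, c in zip(prior, row))
                        for row in cost)
    # expected cost when the optimal action is chosen per revealed state
    perfect_info = sum(p * min(row[s] for row in cost)
                       for s, p in enumerate(prior))
    return prior_optimal - perfect_info

# Illustrative numbers: states (intact, damaged), actions (do nothing, repair).
voi = evpi(prior=[0.8, 0.2], cost=[[0.0, 100.0], [20.0, 20.0]])
```

Here acting on the prior costs 20 either way, while perfect state knowledge reduces the expected cost to 4, so the information is worth 16.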
1 code implementation • 9 Jun 2020 • Felipe Uribe, Iason Papaioannou, Youssef M. Marzouk, Daniel Straub
Although some existing parametric distribution families are designed to perform efficiently in high dimensions, their applicability within the cross-entropy method is limited to problems of dimension O(10^2).
Computation
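For context, a minimal one-dimensional sketch of the cross-entropy method for rare-event estimation, using a single Gaussian as the parametric biasing family (the limit-state function and threshold are toy choices; the paper's point is precisely that such parametric families struggle in high dimensions, which this 1-D example does not exhibit):

```python
import numpy as np

def ce_rare_event(g, gamma, n=2000, rho=0.1, seed=0, max_iter=50):
    """Cross-entropy estimate of p = P(g(X) >= gamma) for X ~ N(0, 1).
    The Gaussian biasing density is adapted toward the rare-event
    region via elite samples, then an importance-sampling estimate is
    formed with the likelihood ratio."""
    rng = np.random.default_rng(seed)
    mu, sigma = 0.0, 1.0
    for _ in range(max_iter):
        x = rng.normal(mu, sigma, n)
        vals = g(x)
        level = np.quantile(vals, 1.0 - rho)   # elite threshold
        if level >= gamma:
            break
        elite = x[vals >= level]
        mu, sigma = elite.mean(), elite.std()  # CE update of the family
    x = rng.normal(mu, sigma, n)
    # likelihood ratio phi(x) / q(x) between target and biasing densities
    w = sigma * np.exp(-0.5 * x**2 + 0.5 * ((x - mu) / sigma) ** 2)
    return float(np.mean((g(x) >= gamma) * w))

# Toy problem: P(X >= 2.5) for standard normal X (exact value ~6.2e-3).
p_hat = ce_rare_event(lambda x: x, 2.5)
```

In the 1-D Gaussian case the CE update is just the elite sample mean and standard deviation; in higher dimensions the update fits the full parametric family, which is where the dimension limitation arises.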
no code implementations • 23 Dec 2019 • Panagiotis Tsilifis, Iason Papaioannou, Daniel Straub, Fabio Nobile
The challenge for non-intrusive Polynomial Chaos methods lies in achieving computational efficiency and accuracy with a limited number of model simulations.
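A minimal sketch of one common non-intrusive strategy, least-squares regression of Hermite-chaos coefficients on random model evaluations, in one standard-normal variable (the toy model and sample sizes are illustrative, not the paper's method):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

def pce_regression(model, degree=4, n_samples=200, seed=0):
    """Non-intrusive PCE: evaluate the model at random standard-normal
    inputs and recover the probabilists'-Hermite chaos coefficients by
    least squares, treating the model as a black box."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_samples)
    Psi = hermevander(x, degree)      # basis matrix [He_0(x), ..., He_deg(x)]
    y = model(x)
    coeffs, *_ = np.linalg.lstsq(Psi, y, rcond=None)
    return coeffs

# Toy model: y = x**2 = He_0(x) + He_2(x), so the exact coefficients are known.
c = pce_regression(lambda x: x**2)
```

Because the toy model lies exactly in the span of the basis, the regression recovers the coefficients (1, 0, 1, 0, 0) up to round-off; for a genuinely expensive model, the budget of model evaluations is exactly the efficiency/accuracy trade-off the abstract refers to.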