no code implementations • 17 May 2024 • Nisha L. Raichur, Lucas Heublein, Tobias Feigl, Alexander Rügamer, Christopher Mutschler, Felix Ott
The primary objective of methods in continual learning is to learn tasks in a sequential manner over time from a stream of data, while mitigating the detrimental phenomenon of catastrophic forgetting.
1 code implementation • 24 Apr 2024 • Maniraman Periyasamy, Axel Plinge, Christopher Mutschler, Daniel D. Scherer, Wolfgang Mauerer
The computational complexity, in terms of the number of circuit evaluations required for gradient estimation by the parameter-shift rule, scales linearly with the number of parameters in VQCs.
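The linear cost of the parameter-shift rule can be sketched with a toy stand-in for a VQC expectation value (a product of cosines, which has the same sinusoidal parameter dependence the rule assumes); the circuit, function shape, and shift value here are illustrative assumptions, not the paper's setup:

```python
import math

def expectation(params):
    # Toy stand-in for a VQC expectation value: a product of cosines,
    # so each parameter enters through a rotation-like cos term.
    prod = 1.0
    for p in params:
        prod *= math.cos(p)
    return prod

def parameter_shift_gradient(f, params, shift=math.pi / 2):
    """Parameter-shift rule: df/dp_i = [f(p_i + s) - f(p_i - s)] / (2 sin s),
    exact for sinusoidal parameter dependence. Each parameter needs two
    evaluations of f, hence the linear scaling in the parameter count."""
    grad = []
    for i in range(len(params)):
        plus = list(params); plus[i] += shift
        minus = list(params); minus[i] -= shift
        grad.append((f(plus) - f(minus)) / (2.0 * math.sin(shift)))
    return grad

params = [0.3, 1.1]
g = parameter_shift_gradient(expectation, params)  # 2 * len(params) evaluations
```

Note that, unlike finite differences, the shifted evaluations here recover the analytic gradient exactly for this function class.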
1 code implementation • 16 Apr 2024 • Nico Meyer, Jakob Murauer, Alexander Popov, Christian Ufrecht, Axel Plinge, Christopher Mutschler, Daniel D. Scherer
This objective can be achieved using policy iteration, which requires solving a typically large linear system of equations.
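The linear system in question arises in the policy-evaluation step: under a fixed policy, the Bellman expectation equation v = r + γPv rearranges to (I − γP)v = r. A minimal sketch on a toy two-state chain (the transition matrix, rewards, and discount are illustrative numbers, not from the paper):

```python
import numpy as np

# Toy 2-state Markov reward process under a fixed policy (illustrative numbers).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])   # transition matrix P^pi
r = np.array([1.0, 0.0])     # expected rewards r^pi
gamma = 0.95                 # discount factor

# Policy evaluation: solve (I - gamma * P) v = r for the value function v.
v = np.linalg.solve(np.eye(2) - gamma * P, r)

# Sanity check against the Bellman expectation equation v = r + gamma * P v.
assert np.allclose(v, r + gamma * P @ v)
```

For realistic state spaces this system becomes large, which is what motivates quantum approaches to solving it.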
1 code implementation • 15 Apr 2024 • Nico Meyer, Martin Röhn, Jakob Murauer, Axel Plinge, Christopher Mutschler, Daniel D. Scherer
Linear systems of equations can be found in various mathematical domains, as well as in the field of machine learning.
1 code implementation • 9 Apr 2024 • Nico Meyer, Christian Ufrecht, Maniraman Periyasamy, Axel Plinge, Christopher Mutschler, Daniel D. Scherer, Andreas Maier
Quantum computer simulation software is an integral tool for the research efforts in the quantum computing community.
no code implementations • 9 Feb 2024 • Felix Ott, Lucas Heublein, Nisha Lakshmana Raichur, Tobias Feigl, Jonathan Hansen, Alexander Rügamer, Christopher Mutschler
We recorded a dataset on a motorway with eight interference classes, on which our FSL method with quadruplet loss outperforms other FSL techniques in jammer classification accuracy, reaching 97.66%.
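A quadruplet loss extends the triplet loss with a second term involving a pair of distinct negatives. A minimal sketch on embedding vectors; the squared-Euclidean distance and the margin values are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def quadruplet_loss(anchor, positive, neg1, neg2, margin1=1.0, margin2=0.5):
    """Quadruplet loss on embedding vectors: the usual triplet term plus a
    second term pushing the anchor-positive distance below the distance
    between two different negatives. Margins here are illustrative."""
    d = lambda a, b: float(np.sum((a - b) ** 2))  # squared Euclidean distance
    term1 = max(0.0, d(anchor, positive) - d(anchor, neg1) + margin1)
    term2 = max(0.0, d(anchor, positive) - d(neg1, neg2) + margin2)
    return term1 + term2
```

Well-separated classes drive both hinge terms to zero, while negatives near the anchor (or near each other) keep producing gradient signal.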
no code implementations • 14 Nov 2023 • Maximilian Stahlke, George Yammine, Tobias Feigl, Bjoern M. Eskofier, Christopher Mutschler
However, current channel-charting approaches lag behind fingerprinting in positioning accuracy, and they still require reference samples for localization as well as regular data recording and labeling to keep the models up to date.
no code implementations • 29 Sep 2023 • Alexander Mattick, Christopher Mutschler
A big challenge in branch and bound lies in identifying the optimal node within the search tree from which to proceed.
1 code implementation • 25 May 2023 • Dinesh Parthasarathy, Georgios Kontes, Axel Plinge, Christopher Mutschler
We propose Constrained MCTS (C-MCTS), which estimates cost using a safety critic that is trained with Temporal Difference learning in an offline phase prior to agent deployment.
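The offline phase can be sketched as training a tabular cost critic with TD(0) on logged transitions; the tabular form, hyperparameters, and transition format below are illustrative assumptions, since C-MCTS uses a learned (neural) safety critic queried during tree search:

```python
def train_cost_critic(transitions, n_states, alpha=0.1, gamma=0.99, epochs=500):
    """Tabular TD(0) sketch of an offline-trained safety critic: learns the
    expected discounted cost C(s) from logged transitions of the form
    (s, cost, s_next, done). Such a critic can then be queried during
    planning to estimate how costly (unsafe) a state is."""
    C = [0.0] * n_states
    for _ in range(epochs):
        for s, cost, s_next, done in transitions:
            target = cost + (0.0 if done else gamma * C[s_next])
            C[s] += alpha * (target - C[s])  # TD(0) update toward the target
    return C
```

On a two-state chain where state 1 incurs a terminal cost of 1, the critic recovers C(1) ≈ 1 and C(0) ≈ γ, i.e., cost discounted one step back.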
no code implementations • 23 May 2023 • Mark Deutel, Georgios Kontes, Christopher Mutschler, Jürgen Teich
Deploying Deep Neural Networks (DNNs) on tiny devices is a common trend to process the increasing amount of sensor data being generated.
no code implementations • 27 Apr 2023 • Marco Wiedmann, Marc Hölle, Maniraman Periyasamy, Nico Meyer, Christian Ufrecht, Daniel D. Scherer, Axel Plinge, Christopher Mutschler
We introduce a novel approach that uses the approximated gradient from SPSA in combination with state-of-the-art gradient-based classical optimizers.
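The key property of SPSA is that it perturbs all parameters simultaneously with a random ±1 (Rademacher) vector, so a full gradient estimate costs only two function evaluations regardless of dimensionality. A minimal sketch (the perturbation size and downstream use are illustrative assumptions):

```python
import random

def spsa_gradient(f, params, c=0.1, rng=None):
    """SPSA gradient approximation: perturb every parameter at once with a
    random +-1 (Rademacher) vector, so only two evaluations of f are needed
    no matter how many parameters there are. The (noisy, unbiased-in-
    expectation) estimate can then be fed to a classical gradient-based
    optimizer such as an Adam-style update."""
    rng = rng or random.Random(0)
    delta = [rng.choice((-1.0, 1.0)) for _ in params]
    plus = [p + c * d for p, d in zip(params, delta)]
    minus = [p - c * d for p, d in zip(params, delta)]
    diff = (f(plus) - f(minus)) / (2.0 * c)
    return [diff / d for d in delta]
```

A single estimate is noisy, but averaging many estimates converges to the true gradient, which is why pairing SPSA with a momentum-based classical optimizer is attractive.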
no code implementations • 27 Apr 2023 • Maniraman Periyasamy, Marc Hölle, Marco Wiedmann, Daniel D. Scherer, Axel Plinge, Christopher Mutschler
Deep reinforcement learning (DRL) often requires large amounts of data and many environment interactions, making the training process time-consuming.
1 code implementation • 26 Apr 2023 • Nico Meyer, Daniel D. Scherer, Axel Plinge, Christopher Mutschler, Michael J. Hartmann
Reinforcement learning is a rapidly growing field of AI with substantial potential.
no code implementations • 14 Apr 2023 • Felix Ott, Lucas Heublein, David Rügamer, Bernd Bischl, Christopher Mutschler
In this work, we propose recurrent fusion networks that optimally align absolute and relative pose predictions to improve absolute pose estimation.
no code implementations • 16 Jan 2023 • Felix Ott, David Rügamer, Lucas Heublein, Bernd Bischl, Christopher Mutschler
The goal of domain adaptation (DA) is to mitigate this domain shift problem by searching for an optimal feature transformation to learn a domain-invariant representation.
1 code implementation • 18 Nov 2022 • Thomas Altstidl, An Nguyen, Leo Schwinn, Franz Köferl, Christopher Mutschler, Björn Eskofier, Dario Zanca
We also demonstrate that our family of models is able to generalize well towards larger scales and improve scale equivariance.
no code implementations • 7 Nov 2022 • Nico Meyer, Christian Ufrecht, Maniraman Periyasamy, Daniel D. Scherer, Axel Plinge, Christopher Mutschler
Quantum reinforcement learning is an emerging field at the intersection of quantum computing and machine learning.
1 code implementation • 7 Oct 2022 • Maximilian Stahlke, George Yammine, Tobias Feigl, Bjoern M. Eskofier, Christopher Mutschler
While CC has shown promising results in modelling the geometry of the radio environment, a deeper insight into CC for localization using multi-anchor large-bandwidth measurements is still lacking.
no code implementations • 14 Sep 2022 • George Yammine, Georgios Kontes, Norbert Franke, Axel Plinge, Christopher Mutschler
Our algorithm is based on a recommender system that associates groups (i.e., UEs) and preferences (i.e., beams from a codebook) based on a training data set.
no code implementations • 1 Aug 2022 • Felix Ott, Nisha Lakshmana Raichur, David Rügamer, Tobias Feigl, Heiko Neumann, Bernd Bischl, Christopher Mutschler
We show accuracy improvements for the APR-RPR task and for the RPR-RPR task for aerial vehicles and hand-held devices.
no code implementations • 26 Jul 2022 • Christoffer Loeffler, Kion Fallah, Stefano Fenu, Dario Zanca, Bjoern Eskofier, Christopher John Rozell, Christopher Mutschler
We adapt an entropy-based active learning method with recent work from triplet mining to collect easy-to-answer yet informative annotations from human participants, and use them to train a deep convolutional network that generalizes to unseen samples.
1 code implementation • 23 Jul 2022 • Sebastian Rietsch, Shih-Yuan Huang, Georgios Kontes, Axel Plinge, Christopher Mutschler
Reinforcement learning (RL) has been shown to reach superhuman performance across a wide range of tasks.
no code implementations • 16 Jul 2022 • Mohammad Alawieh, Ernst Eberlein, Stephan Jäckel, Norbert Franke, Birendra Ghimire, Tobias Feigl, George Yammine, Christopher Mutschler
Models that capture the physical effects observed in a realistic deployment scenario are essential for assessing the potential benefits of enhancements in positioning methods.
no code implementations • 16 Jul 2022 • Mohammad Alawieh, George Yammine, Ernst Eberlein, Birendra Ghimire, Norbert Franke, Stephan Jäckel, Tobias Feigl, Christopher Mutschler
Based on our measurement and simulation results, we propose a model for incorporating signal reflection by obstacles in the vicinity of the transmitter or receiver, so that the model's output corresponds to the measurements made in such scenarios.
no code implementations • 17 Jun 2022 • Andreas Klaß, Sven M. Lorenz, Martin W. Lauer-Schmaltz, David Rügamer, Bernd Bischl, Christopher Mutschler, Felix Ott
For many applications, analyzing the uncertainty of a machine learning model is indispensable.
no code implementations • 20 May 2022 • Mark Deutel, Philipp Woller, Christopher Mutschler, Jürgen Teich
Large Deep Neural Networks (DNNs) are the backbone of today's artificial intelligence due to their ability to make accurate predictions when trained on huge datasets.
no code implementations • 6 May 2022 • Maniraman Periyasamy, Nico Meyer, Christian Ufrecht, Daniel D. Scherer, Axel Plinge, Christopher Mutschler
Encoding high-dimensional data into a quantum circuit for a NISQ device without any loss of information is non-trivial and poses many challenges.
1 code implementation • 7 Apr 2022 • Felix Ott, David Rügamer, Lucas Heublein, Bernd Bischl, Christopher Mutschler
To mitigate this domain shift problem, domain adaptation (DA) techniques search for an optimal transformation that converts the (current) input data from a source domain to a target domain to learn a domain-invariant representation that reduces domain discrepancy.
no code implementations • 24 Mar 2022 • Sebastian Kram, Christopher Kraus, Tobias Feigl, Maximilian Stahlke, Jörg Robert, Christopher Mutschler
We propose a novel localization framework that adapts well to sparse datasets that only contain CMs of specific areas within the environment with strong multipath propagation.
no code implementations • 16 Mar 2022 • Lukas M. Schmidt, Sebastian Rietsch, Axel Plinge, Bjoern M. Eskofier, Christopher Mutschler
This paper proposes SafeDQN, which makes the behavior of autonomous vehicles safe and interpretable while remaining efficient.
no code implementations • 15 Mar 2022 • Lukas M. Schmidt, Johanna Brosig, Axel Plinge, Bjoern M. Eskofier, Christopher Mutschler
Multi-Agent Reinforcement Learning (MARL) is a research field that aims to find optimal solutions for multiple agents that interact with each other.
1 code implementation • 14 Mar 2022 • Christoffer Loeffler, Wei-Cheng Lai, Bjoern Eskofier, Dario Zanca, Lukas Schmidt, Christopher Mutschler
Explanatory visual interpretation approaches for image and natural language processing allow domain experts to validate and understand almost any deep learning model.
no code implementations • 16 Feb 2022 • Felix Ott, David Rügamer, Lucas Heublein, Bernd Bischl, Christopher Mutschler
We perform extensive evaluations on synthetic image and time-series data, and on data for offline handwriting recognition (HWR) and on online HWR from sensor-enhanced pens for classifying written words.
no code implementations • 14 Feb 2022 • Felix Ott, David Rügamer, Lucas Heublein, Tim Hamann, Jens Barth, Bernd Bischl, Christopher Mutschler
While many offline HWR datasets exist, little data is available for the development of OnHWR methods on paper, as this requires hardware-integrated pens.
1 code implementation • 10 Feb 2022 • Maja Franz, Lucas Wolf, Maniraman Periyasamy, Christian Ufrecht, Daniel D. Scherer, Axel Plinge, Christopher Mutschler, Wolfgang Mauerer
In this work, we examine a class of hybrid quantum-classical RL algorithms that we collectively refer to as variational quantum deep Q-networks (VQ-DQN).
no code implementations • 29 Sep 2021 • Christoffer Löffler, Wei-Cheng Lai, Lukas M Schmidt, Dario Zanca, Bjoern Eskofier, Christopher Mutschler
(Explanatory) visual interpretation approaches for image and natural language processing allow domain experts to validate and understand almost any deep learning model.
1 code implementation • 9 Jul 2020 • Christoffer Löffler, Christopher Mutschler
Active learning (AL) prioritizes the labeling of the most informative data samples.
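A common way to rank informativeness is uncertainty sampling: pick the unlabeled samples whose predicted class distribution has the highest entropy. A minimal sketch (the `predict` interface and top-k selection are illustrative assumptions, not the paper's specific acquisition function):

```python
import math

def entropy(probs):
    # Shannon entropy of a predicted class distribution (in nats).
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_most_informative(unlabeled, predict, k=1):
    """Uncertainty-sampling sketch: rank unlabeled samples by the predictive
    entropy of the model's class probabilities and pick the top-k for
    labeling. `predict` maps a sample to a probability vector."""
    ranked = sorted(unlabeled, key=lambda x: entropy(predict(x)), reverse=True)
    return ranked[:k]
```

A near-uniform prediction (high entropy) marks a sample the model is unsure about, which is exactly where a new label is most valuable.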
no code implementations • 19 Dec 2019 • Leonid Butyrev, Thorsten Edelhäußer, Christopher Mutschler
This paper presents a novel motion and trajectory planning algorithm for nonholonomic mobile robots that uses recent advances in deep reinforcement learning.
no code implementations • 17 Dec 2019 • Felix Ott, Tobias Feigl, Christoffer Löffler, Christopher Mutschler
Visual Odometry (VO) accumulates a positional drift in long-term robot navigation tasks.
no code implementations • 25 Sep 2019 • Christopher Mutschler, Sebastian Pokutta
This generates pairs of state encodings, i.e., a new representation from the environment and a (biased) old representation from the forward model, that allow us to bootstrap a neural network model for state translation.
no code implementations • 25 Sep 2019 • Leonid Butyrev, Georgios Kontes, Christoffer Löffler, Christopher Mutschler
Learning neural networks with gradient descent over a long sequence of tasks is problematic as their fine-tuning to new tasks overwrites the network weights that are important for previous tasks.