no code implementations • 4 Jan 2024 • Ezgi Korkmaz
Reinforcement learning research has achieved significant success and attention through the use of deep neural networks to solve problems in high-dimensional state or action spaces.
no code implementations • 9 Jun 2023 • Ezgi Korkmaz, Jonah Brown-Cohen
Learning in MDPs with highly complex state representations is currently possible due to multiple advancements in reinforcement learning algorithm design.
no code implementations • 17 Jan 2023 • Ezgi Korkmaz
Learning from raw high-dimensional data via interaction with a given environment has been achieved effectively through the use of deep neural networks.
no code implementations • 16 Dec 2021 • Ezgi Korkmaz
We argue that these high sensitivity directions support the hypothesis that non-robust features are shared across training environments of reinforcement learning agents.
no code implementations • 29 Sep 2021 • Ezgi Korkmaz
We demonstrate that the perceptual similarity distance of minimal natural perturbations to the unperturbed observations is orders of magnitude smaller than that of adversarial perturbations (i.e., minimal natural perturbations are perceptually more similar to the unperturbed states than the adversarial perturbations), while causing larger degradation in policy performance.
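The comparison above can be sketched with a toy example. A plain L2 distance stands in for the paper's perceptual similarity metric (which is not specified here), and both perturbations are synthetic: a small localized change as the "natural" perturbation and dense sign noise as the "adversarial" one.

```python
import numpy as np

# Hedged sketch: compare how far two kinds of perturbations move an
# observation under a simple L2 distance (a stand-in for a learned
# perceptual metric). All observations here are synthetic.

rng = np.random.default_rng(1)
obs = rng.uniform(0.0, 1.0, size=(84, 84))      # unperturbed observation

natural = obs.copy()
natural[40:44, 40:44] += 0.3                    # small localized "natural" change

# dense sign noise, the typical shape of an L-infinity adversarial perturbation
adversarial = obs + 0.05 * np.sign(rng.normal(size=obs.shape))

def distance(a, b):
    return float(np.linalg.norm(a - b))

d_nat = distance(obs, natural)
d_adv = distance(obs, adversarial)
print(d_nat, d_adv)   # the dense perturbation is much farther in this metric
```

The point of the sketch is only the ordering: a perturbation confined to a few pixels sits much closer to the original observation than a dense perturbation of the same per-pixel scale, even though, per the abstract, such localized natural changes can degrade the policy more.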
no code implementations • 29 Sep 2021 • Ezgi Korkmaz
The use of deep neural networks as function approximators for the state-action value function opened a new research area for self-learning systems, and made it possible to learn optimal policies from high-dimensional state representations.
no code implementations • 29 Sep 2021 • Ezgi Korkmaz, Jonah Brown-Cohen
The non-robustness of neural network policies to adversarial examples poses a challenge for deep reinforcement learning.
no code implementations • NeurIPS Workshop ICBINB 2021 • Ezgi Korkmaz
Deep neural networks have made it possible for reinforcement learning algorithms to learn from raw high-dimensional inputs.
no code implementations • 30 Aug 2021 • Ezgi Korkmaz
For the second approach, we propose a novel method to measure the feature sensitivities of deep neural policies, and we compare these sensitivities between state-of-the-art adversarially trained and vanilla trained deep neural policies.
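One simple notion of per-feature sensitivity is the magnitude of the gradient of the policy's chosen-action score with respect to each input feature; a minimal sketch of that idea, not the paper's actual method, follows. Central finite differences stand in for autodiff, and the quadratic "policy score" is purely illustrative.

```python
import numpy as np

# Hedged sketch of a per-feature sensitivity measure: how strongly a scalar
# policy score reacts to a small change in each input feature.

def policy_score(s):
    # illustrative scalar score (e.g., the greedy action's Q-value)
    return float(np.sin(s[0]) + 0.5 * s[1] ** 2 + 2.0 * s[2])

def feature_sensitivity(score_fn, s, h=1e-5):
    """Central-difference gradient magnitude per input feature."""
    s = np.asarray(s, dtype=float)
    sens = np.empty_like(s)
    for i in range(s.size):
        e = np.zeros_like(s)
        e[i] = h
        sens[i] = abs(score_fn(s + e) - score_fn(s - e)) / (2 * h)
    return sens

s0 = np.array([0.0, 1.0, -2.0])
sens = feature_sensitivity(policy_score, s0)
print(sens)   # features with larger gradient magnitude are more sensitive
```

Comparing such sensitivity vectors between an adversarially trained and a vanilla trained policy, as the abstract describes, would amount to computing this measure for both policies on the same states and inspecting where their sensitivities differ.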
no code implementations • ICML Workshop AML 2021 • Ezgi Korkmaz
Reinforcement learning policies based on deep neural networks are vulnerable to imperceptible adversarial perturbations to their inputs, in much the same way as neural network image classifiers.
no code implementations • ICML Workshop AML 2021 • Ezgi Korkmaz
We conduct several experiments in the Arcade Learning Environment (ALE), and with our proposed feature mapping algorithms we show that while the state-of-the-art adversarial training method eliminates a certain set of non-robust features, it creates a new set of non-robust features intrinsic to the adversarial training itself.
no code implementations • 1 Jan 2021 • Ezgi Korkmaz
In this paper we propose a more realistic threat model in which the adversary computes the perturbation only once based on a single state.
no code implementations • 1 Jan 2021 • Ezgi Korkmaz
Deep reinforcement learning algorithms have recently achieved significant success in learning high-performing policies from purely visual observations.
no code implementations • 1 Jan 2021 • Ezgi Korkmaz, Henrik Sandberg, Gyorgy Dan
Adversarial attacks against deep neural networks have been widely studied.