📜 I gave an AAAI 2025 Tutorial!
📜 Paper accepted to AAAI 2026!
📜 Two papers accepted to ICLR 2026!
📜 Paper accepted as a ✨Spotlight✨ to NeurIPS 2025!
AAAI Conference on Artificial Intelligence [Acceptance Rate: 19.6%], AAAI 2023.
Learning from raw high-dimensional data via interaction with a given environment has been effectively achieved with deep neural networks. Yet the observed degradation in policy performance caused by imperceptible, worst-case, policy-dependent translations along high-sensitivity directions (i.e. adversarial perturbations) raises concerns about the robustness of deep reinforcement learning policies. In our paper, we show that these high-sensitivity directions do not lie only along particular worst-case directions, but rather are abundant in the deep neural policy landscape and can be found via more natural means in a black-box setting. Furthermore, we show that vanilla training techniques intriguingly result in more robust policies than those learned via state-of-the-art adversarial training techniques. We believe our work reveals intriguing properties of the deep reinforcement learning policy manifold, and our results can help build robust and generalizable deep reinforcement learning policies.
✨Robust reinforcement learning is vulnerable to black-box adversarial attacks.
✨Standard reinforcement learning can generalize much better than robust reinforcement learning.
✨Standard reinforcement learning is robust to imperceptible natural perturbations.
✨Robust reinforcement learning is extremely sensitive to imperceptible natural perturbations.
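To make the evaluation setting concrete, below is a minimal illustrative sketch, not the paper's actual implementation, of how a trained policy's episode return can be compared under an imperceptible worst-case (FGSM-style) observation perturbation versus an imperceptible natural perturbation (clipped Gaussian noise) at the same budget. The names `policy`, `env`, and `epsilon` are assumed placeholders, and the environment is assumed to follow the Gymnasium API.

```python
# Illustrative sketch only; not the implementation from the paper.
# Assumes: `policy` is a PyTorch module mapping an observation to action logits,
# `env` follows the Gymnasium API, and `epsilon` is the l-infinity budget.
import numpy as np
import torch


def fgsm_perturb(policy, obs, epsilon):
    """Worst-case shift along the high-sensitivity (gradient sign) direction."""
    obs_t = torch.tensor(obs, dtype=torch.float32, requires_grad=True)
    # Lower the logit of the currently preferred action (a simplified proxy
    # for a worst-case, policy-dependent perturbation).
    loss = -policy(obs_t).max()
    loss.backward()
    return (obs_t + epsilon * obs_t.grad.sign()).detach().numpy()


def natural_perturb(obs, epsilon, rng):
    """'Natural' perturbation: Gaussian noise clipped to the same l-inf budget."""
    noise = rng.normal(scale=epsilon, size=np.shape(obs))
    return obs + np.clip(noise, -epsilon, epsilon)


def episode_return(env, policy, perturb, seed=0):
    """Roll out one episode, perturbing every observation the policy sees."""
    rng = np.random.default_rng(seed)
    obs, _ = env.reset(seed=seed)
    total, done = 0.0, False
    while not done:
        obs_p = perturb(obs, rng)
        with torch.no_grad():
            action = policy(torch.as_tensor(obs_p, dtype=torch.float32)).argmax().item()
        obs, reward, terminated, truncated, _ = env.step(action)
        total += float(reward)
        done = terminated or truncated
    return total


# Compare returns under the two perturbation types for a given policy
# (e.g., a vanilla-trained policy vs. an adversarially trained one):
# adv = lambda obs, rng: fgsm_perturb(policy, obs, epsilon=0.01)
# nat = lambda obs, rng: natural_perturb(obs, epsilon=0.01, rng=rng)
# print(episode_return(env, policy, adv), episode_return(env, policy, nat))
```

Running this comparison for both a vanilla-trained and an adversarially trained policy at the same perturbation budget is the kind of evaluation the findings above refer to.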
@inproceedings{Korkmazaaai23,
  author    = {Korkmaz, Ezgi},
  title     = {Adversarial Robust Deep Reinforcement Learning Requires Redefining Robustness},
  booktitle = {Thirty-Seventh AAAI Conference on Artificial Intelligence},
  pages     = {8369--8377},
  year      = {2023},
  url       = {https://doi.org/10.1609/aaai.v37i7.26009}
}