I am a machine learning researcher and a PhD candidate in artificial intelligence and machine learning. I wrote my MSc thesis at the University of California, Berkeley. Most recently, I have been at DeepMind. My research focuses on robustness and generalization in machine learning.
[1] Ezgi Korkmaz. Understanding and Diagnosing Deep Reinforcement Learning. International Conference on Machine Learning [Acceptance Rate: 27.54%], ICML 2024. [Paper] [BibTeX] [Cite]
[2] Ezgi Korkmaz. Adversarial Robust Deep Reinforcement Learning Requires Redefining Robustness. AAAI Conference on Artificial Intelligence [Acceptance Rate: 19.6%], AAAI 2023. [Paper] [BibTeX] [Cite] [In UCL Blog]
[3a] Ezgi Korkmaz et al. Detecting Adversarial Directions in Deep Reinforcement Learning to Make Robust Decisions. International Conference on Machine Learning [Acceptance Rate: 27.94%], ICML 2023. [Paper] [BibTeX] [Cite]
[4] Ezgi Korkmaz. Deep Reinforcement Learning Policies Learn Shared Adversarial Features Across MDPs. AAAI Conference on Artificial Intelligence [Acceptance Rate: 14.58%], AAAI 2022. [Paper] [Abstract] [BibTeX] [Cite] [In MILA Blog] [In French] [Twitter]
[5] Ezgi Korkmaz. Investigating Vulnerabilities of Deep Neural Policies. Conference on Uncertainty in Artificial Intelligence (UAI), Proceedings of Machine Learning Research (PMLR) [Acceptance Rate: 26.38%], PMLR 2021. [Paper] [Abstract] [News] [BibTeX] [Cite]
[6] Ezgi Korkmaz. Revealing the Bias in Large Language Models via Reward Structured Questions. Conference on Neural Information Processing Systems (NeurIPS) Workshop on Interactive Learning for Natural Language Processing, NeurIPS Foundation Models for Decision Making Workshop, NeurIPS Robustness in Sequence Modeling Workshop & NeurIPS Machine Learning Safety Workshop, 2022. [Paper] [Cite]
[7] Ezgi Korkmaz. A Survey Analyzing Generalization in Deep Reinforcement Learning. Conference on Neural Information Processing Systems (NeurIPS) Robot Learning Workshop, NeurIPS 2023. [ArXiv] [Paper] [Cite]
[8] Ezgi Korkmaz. Adversarial Robust Deep Reinforcement Learning is Neither Robust Nor Safe. Conference on Neural Information Processing Systems (NeurIPS) Workshop on Statistical Foundations of LLMs and Foundation Models, NeurIPS 2024.
[9] Ezgi Korkmaz. Spectral Robustness Analysis of Deep Imitation Learning. Conference on Neural Information Processing Systems (NeurIPS) Machine Learning Safety Workshop, 2022.
[10] Ezgi Korkmaz. Inaccuracy of State-Action Value Function For Non-Optimal Actions in Adversarially Trained Deep Neural Policies. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops [Oral Presentation], 2021 & International Conference on Learning Representations (ICLR) Robust and Reliable Machine Learning in the Real World Workshop, 2021. [Paper] [BibTeX] [Slides] [Cite]
[11] Ezgi Korkmaz. Robustness of Inverse Reinforcement Learning. International Conference on Machine Learning (ICML) Artificial Intelligence for Agent Based Modelling Workshop, 2022 & Conference on Neural Information Processing Systems (NeurIPS) Machine Learning Safety Workshop, 2022. [Cite]
[12] Ezgi Korkmaz. Adversarial Attacks Against Deep Imitation and Inverse Reinforcement Learning. International Conference on Machine Learning (ICML) Complex Feedback in Online Learning Workshop, 2022. [Cite]
[13] Ezgi Korkmaz. A Brief Summary on COVID-19 Pandemic and Machine Learning Approaches. International Joint Conference on Artificial Intelligence (IJCAI) Workshop on Artificial Intelligence for Social Good, 2021 & Conference on Neural Information Processing Systems (NeurIPS) Machine Learning in Public Health Workshop [Oral Presentation]. [Paper NeurIPS] [Abstract] [Cite]
[14] Ezgi Korkmaz. Non-Robust Feature Mapping in Deep Reinforcement Learning. International Conference on Machine Learning (ICML) A Blessing in Disguise: The Prospects and Perils of Adversarial Machine Learning Workshop, 2021 & Conference on Neural Information Processing Systems (NeurIPS) Metacognition in the Age of AI: Challenges and Opportunities [Spotlight Presentation], 2021. [Cite]
[15] Ezgi Korkmaz. Adversarial Training Blocks Generalization in Neural Policies. International Conference on Learning Representations (ICLR) Robust and Reliable Machine Learning in the Real World Workshop, 2021 & Conference on Neural Information Processing Systems (NeurIPS) Workshop on Distribution Shifts: Connecting Methods and Applications, 2021 & NeurIPS Safe and Robust Control of Uncertain Systems Workshop, 2021 & NeurIPS I Can’t Believe It’s Not Better Workshop, 2021. [Paper ICLR] [Paper NeurIPS] [Cite]
[16] Ezgi Korkmaz. Nesterov Momentum Adversarial Perturbations in the Deep Reinforcement Learning Domain. International Conference on Machine Learning (ICML) Inductive Biases, Invariances and Generalization in Reinforcement Learning Workshop, 2020. [Paper] [Cite]
[17] Ezgi Korkmaz. Adversarially Trained Neural Policies in Fourier Domain. International Conference on Learning Representations (ICLR) Robust and Reliable Machine Learning in the Real World Workshop, 2021 & International Conference on Machine Learning (ICML) A Blessing in Disguise: The Prospects and Perils of Adversarial Machine Learning Workshop, 2021. [Cite]
a. Excluding the study [3].
Lectures
Hoeffding’s Inequality [Slides]
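For reference, the standard statement of the inequality this lecture covers (textbook form, not taken from the slides themselves):

```latex
% Hoeffding's inequality (standard one-sided statement).
% Let X_1, \dots, X_n be independent random variables with
% X_i \in [a_i, b_i] almost surely, and let
% \bar{X} = \tfrac{1}{n} \sum_{i=1}^{n} X_i. Then for every t > 0:
\[
  \Pr\!\left( \bar{X} - \mathbb{E}\left[\bar{X}\right] \ge t \right)
  \;\le\;
  \exp\!\left( - \frac{2 n^2 t^2}{\sum_{i=1}^{n} (b_i - a_i)^2} \right).
\]
% The two-sided bound on |\bar{X} - E[\bar{X}]| carries an extra factor of 2,
% and for X_i \in [0, 1] the right-hand side reduces to \exp(-2 n t^2).
```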