
Our research covers attacks on machine learning models, including LLMs and generative AI across all media types, as well as multimodal models. We aim to develop domain-adaptive detection mechanisms and countermeasures, focusing on preventing the misuse of AI-generated media and enhancing model robustness.
During my Ph.D., I focused on the robustness of neural networks and the security of speech-based systems. I received my Ph.D. in 2021 from Ruhr University Bochum (RUB), Germany, where I was advised by Prof. Dr.-Ing. Dorothea Kolossa in the Cognitive Signal Processing group. I received two scholarships, from UbiCrypt (DFG Research Training Group) and CASA (DFG Cluster of Excellence).
I obtained my Master's degree in Electrical Engineering and Information Technology at RUB in 2015, after graduating in Biomedical Engineering from the University of Applied Sciences in Mannheim.
Challenges and Threats in Generative AI: Misuse and Exploits
Keynote at AISec colocated with ACM CCS 2024
Challenges and Threats in Generative AI: Misuse and Exploits
HIDA Lecture Series on AI and LLMs 2024
Brave New World: Challenges and Threats in Multimodal AI Agent Integrations
Keynote at AdvML-Frontiers@ICML 2023
Adversarially Robust Speech Recognition
ISCA SIGML Seminar Series 2021
Alexa, who else is listening?
rC3 2020