In my research group, Dormant Neurons, our vision is to build secure, safe, and fair AI that people can trust.
Our work includes research on attacks and defenses for AI, covering LLMs, multimodal systems, speech recognition systems, and generative models, as well as methods to prevent the misuse of generative AI. This includes robust feature analysis and studying the human factors involved in recognizing generated media. In addition, we investigate code-generating models, focusing on understanding the strengths and limitations of automated systems. For more information on our research, please also see the Dormant Neurons group website.
During my Ph.D., I was advised by Prof. Dr.-Ing. Dorothea Kolossa in the Cognitive Signal Processing group at Ruhr University Bochum (RUB), Germany. I also received two scholarships, from UbiCrypt (DFG Research Training Group) and CASA (DFG Cluster of Excellence).