My teaching focuses on secure and trustworthy machine learning, adversarial AI, and the risks and chances of generative models on graduate and undergraduate levels.
I aim to equip students with both strong technical foundations and a critical understanding of real-world security risks in modern AI systems, as well as research practice. I regularly teach lectures, seminars, and at international summer schools. My courses combine theoretical lectures with hands-on exercises and projects which are continously curated to reflect current research trends and practical applications.
Trustworthy Machine Learning
Master Lecture
since Summer 2025
Machine learning systems are increasingly deployed in high-stakes and security-sensitive
environments, ranging from generative AI assistants and code generation systems to
autonomous decision-making pipelines. As these systems become more capable,
ensuring their robustness, security, fairness, and accountability becomes essential.
This lecture introduces the foundations of Trustworthy Machine Learning.
We examine how modern ML systems can fail, how they can be attacked, and how
principled defenses can be designed. The course emphasizes a security-oriented
perspective on machine learning, integrating robustness, privacy, and interpretability.
Core Topics
- Robustness: Adversarial examples across modalities (image, audio, text) and adversarial training.
- Privacy: Membership inference, model inversion, and machine unlearning.
- Integrity & Security: Data poisoning, model stealing, and threat modeling.
- Interpretability & Accountability: Explainable AI (XAI) and transparency mechanisms.
- Fairness: Biases in training data and their societal and legal implications.
- Trustworthy Generative AI: LLM security, prompt injection, data leakage, and deepfake detection.
Trustworthy Agentic Systems
Master Seminar
Winter 2025
As artificial intelligence systems evolve from pattern recognition tools to autonomous decision-making systems,
they increasingly function as agents rather than passive models. These agentic systems are capable of planning,
acting, and adapting to dynamic environments. While such capabilities significantly enhance their utility,
they also introduce fundamental questions regarding reliability, safety, and alignment with human objectives.
This seminar examines the conceptual and technical foundations required to build trustworthy agentic AI systems.
Rather than emphasizing scale or efficiency, the course focuses on principled system design, evaluation,
and oversight mechanisms.
Core Questions
- How can we rigorously evaluate whether an AI system warrants trust?
- Which design principles and architectural choices mitigate the risk of unintended or harmful behaviour?
- How can human–AI collaboration be structured to preserve oversight, accountability, and control?
Trustworthy Generative Machine Learning
Bachelor Seminar
Summer 2024
Generative machine learning systems such as large language models, image generators,
and speech synthesis tools are increasingly integrated into everyday applications.
While these systems demonstrate impressive capabilities, they also raise important
questions about reliability, security, fairness, and societal impact.
This bachelor-level seminar introduces students to the foundations of
trustworthy generative machine learning. We examine how generative
models work, where they can fail, and how they may be misused. The focus lies on
understanding risks and developing principled approaches to make these systems
more robust, transparent, and aligned with human values.
Topics Covered
- Introduction to generative models (LLMs, diffusion models, multimodal systems)
- Hallucinations and reliability of generated content
- Prompt injection and misuse of language models
- Bias and fairness in generative AI systems
- Deepfakes and synthetic media detection
- Privacy risks and data leakage
- Human–AI interaction and oversight
Machine Learning Security Reproducibility
Master Seminar
Winter 2023
Machine Learning and Security
Master Seminar
Winter 2022
Summer 2023