I am an Assistant Professor specializing in The Human Factor in New Technologies within the Department of Communication Science at the University of Amsterdam. I am affiliated with the interdisciplinary research priority area Humane AI and represent the Program Group Political Communication and Journalism on the ethical board of ASCoR (Amsterdam School of Communication Research).
After earning my PhD from the University of Münster in Germany, I worked as a postdoctoral researcher in the Department of Social Sciences at the University of Düsseldorf. My academic journey has been driven by a passion for exploring the intersection of artificial intelligence and societal well-being.
My research and teaching examine how AI can both strengthen and challenge democracy and social cohesion. Specifically, my work is organized around four key pillars:
Through my interdisciplinary approach, I aim to contribute to building AI systems that are ethically sound, socially beneficial, and aligned with democratic values.
This project focuses on the emerging trend of synthetic relationships between humans and AI systems. We investigate the potential risks of these relationships, such as emotional dependence and the erosion of genuine human connection. We further propose policy measures to mitigate these risks, such as guardrails that protect users' well-being and promote the responsible development of AI agents.
This project examines the effects of a GenAI literacy intervention. We investigate whether providing information about AI-generated disinformation increases (1) people’s ability to discern true from false online news and (2) overall skepticism toward online news.
Cognitive Biases in Human Oversight of AI
This project examines the common policy approach of using human oversight to mitigate the risks of algorithmic bias in AI systems. We caution against assuming that human intervention is a simple solution to bias, as human judgement is also prone to systematic errors based on (1) limitations in human cognitive abilities, (2) the influence of personal preferences and biases, and (3) the potential for over- or under-reliance on AI.
This project investigates the potential of AI to combat corruption, weighing both the opportunities and challenges of its implementation. We examine the effectiveness of AI-based anti-corruption tools (AI-ACT) implemented top-down (by governments) or bottom-up (by citizens and non-governmental organizations).
Based on the EU’s Digital Services Act, this project investigates user contestation of personalised recommender systems on very large online platforms (VLOPs). We explore user preferences for non-personalised content curation, focusing on the choice to opt out of default personalised systems.