The RPA AI & Politics is delighted to announce seed grant awards for 10 research projects across the FMG. Please join us in congratulating the awardees! We look forward to the exciting research ahead.

Awarded projects

AI-Anxiety and Misinformation Perceptions: Citizen Perspectives on Generative AI in European Democracies

Understanding citizens’ perceptions of AI in the digital era is of primary importance, not only to ensure that AI development and its impact on societies are in line with citizens’ needs, but also to remedy potentially emerging tensions around AI as another dimension of societal polarization. Combining survey and experimental methodology, this project will gauge citizens’ attitudes towards genAI in relation to its perceived role in politics, including genAI’s use for the creation of political mis- and disinformation. The project will also test how these attitudes relate to citizens’ political behaviour (such as news media use, misinformation vulnerability, political attitudes and participation). Based on this understanding, the project ultimately aims to test interventions to remedy potentially nefarious perceptions of AI and their repercussions. The results of the project seek to contribute to a smooth integration of AI in today’s digital societies, while safeguarding democratic processes.

Algorithmic Profiling in the Schengen Visa Regime: A Politics of Bias? 

As part of my PhD research on the Schengen visa regime, I examine the role of algorithmic profiling in Dutch visa operations. A central focus is an algorithm known as Information-supported decision-making (Dutch: Informatie Ondersteund Beslissen), which is used in processing Schengen visa applications. This algorithm categorizes visa applicants into three distinct tracks during the decision-making process: fast, regular, and intensive. These tracks reflect the varying levels of risk assessment required for each applicant and provide decision-makers with additional insight to shape their decisions. The intensive track involves a more thorough review, whereas the fast track entails minimal scrutiny. By combining algorithmic input with human judgment, this profiling system plays an active role in shaping visa decisions within the Schengen framework. Through this research, I aim to explore how European border governance is transforming in the digital age.

Are the Kids (AI)right? Algorithmic Content Curation of Political Material on Youth’s Instagram 

This project examines how Instagram’s algorithm curates political search results for Dutch youth, assessing whether safety measures implemented in the recently introduced 'teen accounts' inadvertently limit political socialization. Social media, especially Instagram, is a key information source for young people (Newman, 2024), yet little research has explored its role in political information searching. While prior studies highlight algorithmic bias in search engines (van Hoof et al., 2024), the impact of Instagram’s algorithms remains understudied, despite heightened use by youth and its potential consequences for political socialization (Jakubowski, 2021) and knowledge inequalities (Kümpel, 2020).

Using an innovative data collection approach, this study tests how Instagram’s ‘teen account’ and ‘limit political content’ settings affect political exposure among adolescents (16–17, soon-to-be voters) and young voters (18–24). Combining best practices from the digital trace data donation approach (Ohme et al., 2024) and the algorithmic auditing tradition (Ulloa et al., 2024), participants will complete a survey in which they are asked to share screenshots of their Explore Page and political search results under different account settings. Content analysis will assess differences in political exposure within individuals (algorithmic moderation by account type) and across individuals (algorithmic curation due to pre-existing political interest and other political attitudes).

Defending Democracy: Using AI to Enhance Political Literacy 

New generative AI technologies are rapidly transforming traditional modes of news production and consumption, sparking concerns about misinformation and declining trust in information ecosystems. While these risks are real and deserve attention, much of the current discourse overlooks an equally important question: Can GenAI also foster democracy? Our project investigates whether GenAI tools can contribute to political literacy by helping citizens to understand and retain complex political information more efficiently.  

By comparing AI-assisted and traditional information search methods in the context of a complex political topic, we explore whether generative AI can meaningfully improve citizens’ political understanding. Our approach aims to determine not only if but also how GenAI can contribute to acquiring political knowledge. The project will contribute to the growing literature exploring how generative AI can be used to strengthen democracies. At the same time, it addresses an urgent societal need: helping citizens navigate an increasingly complex information environment.  

Digital Diversity: The Impact of AI and Human Influencers on Bias, Social Equality, and Political Engagement 

This project explores how AI-driven virtual influencers shape public attitudes toward marginalized groups and influence political engagement. Using a four-week longitudinal Experience Sampling Method (ESM) study, participants will interact daily with either a diverse human or diverse virtual influencer. By comparing responses to influencers labeled as virtual versus human—and including a non-diverse control group—the study examines how exposure affects implicit and explicit biases, political beliefs, and social attitudes. The project investigates key psychological mechanisms such as parasocial relationships and perceived authenticity to understand how and why attitude change occurs. Integrating communication science, psychology, political science, and AI studies, this research addresses the growing role of virtual influencers in shaping democratic discourse and societal cohesion. The findings will inform academic debates, policymaking, and platform governance, highlighting how AI-driven personas can either challenge or reinforce political polarization and social inequality. 

‘I spy with my little eye’: Synchronising civil sousveillance practices in the Mediterranean border zone

This project investigates how the Regensburg-based NGO Space-Eye, active since 2018, combines artificial intelligence (AI) and satellite imagery to support civil search-and-rescue (SAR) operations in the Mediterranean. Emerging in response to the EU’s intensified border controls and the criminalization of migration from 2017 onwards, Space-Eye exemplifies how civil SAR actors engage in sousveillance—repurposing surveillance technologies to monitor state activities and assist migrants in distress. Through ethnographic research, the project examines how Space-Eye and its partners develop alternative infrastructures for SAR, adapting AI tools for humanitarian purposes. This research offers valuable insights into how AI and satellite technologies are reimagined to support humanitarian efforts, providing a counter-narrative to the increasingly militarized EU border security system. It critically explores the intersection of surveillance technologies, EU border politics, and ethical innovation, highlighting how such technologies can be reclaimed to challenge militarized approaches to migration governance. 

Public debates, everyday injustice, and AI in the majority world 

Recent years have witnessed an upsurge in the use of AI by governments for policing, surveillance, welfare delivery, and border control in many countries in the global South, without adequate legal and regulatory frameworks to prevent potential harms from these technologies. Yet there is a dearth of scholarship that takes a comparative, interdisciplinary approach to the meaning of, and everyday lived experiences with, AI in postcolonial, global South contexts. This project brings together scholars working on everyday negotiations with AI-enabled algorithmic governance in global South contexts marked by either the absence of legal and regulatory frameworks or gaps between the law and the lived realities of experiences with AI. We aim to generate conceptual frameworks and methodological tools for studying AI and the everyday from a comparative perspective that considers generalizable similarities while remaining mindful of the unique histories and socio-political dynamics that shape the implementation and reception of AI technologies in global South contexts. We also aim to create a network of scholars exploring the everyday life of AI in the global South.

Regulating AI-generated Deepfakes: A Comparative Study of Legal Frameworks and Public Perceptions in Japan and The Netherlands

Deepfakes employ artificial intelligence to make it appear that an event took place that never happened. As such, they constitute a dangerous form of visual disinformation and raise ethical concerns. Governments worldwide are trying to regulate deepfakes, but legal approaches vary across cultures and political systems. Yet we know little about how deepfakes are dealt with in different contexts, even though comparative research is crucial for informing and improving policy making. To fill this gap, this project examines legal regulations of deepfakes and public perceptions thereof in Japan and the Netherlands. These countries were selected for their contrasting legal frameworks and attitudes toward artificial intelligence. Our project is innovative not only for its comparative but also for its interdisciplinary nature. In collaboration with a Japan-based colleague, we will conduct legal research and survey research among citizens in both countries to assess perceptions of deepfake regulations and citizens’ competences.

Shield or sieve? The effects of fact-checks and community notes on mitigating AI-generated (visual) propaganda about the war in Ukraine 

Generative AI enables bad-faith actors to create new forms of visual propaganda that may not necessarily look realistic, but can still stir up emotions and promote particular narratives. In this way, such visual propaganda may elicit political attitudes and behaviours that align with the goals of those who spread it.

Focusing on the context of propaganda targeting Ukrainian president Volodymyr Zelenskyy, our project aims to test how such visual propaganda can effectively be corrected. To this end, we carry out a 2 (authentic vs. non-authentic AI-generated image) by 3 (fact check vs. community note vs. no correction) survey experiment and test effects on the perceived credibility of and agreement with false claims, as well as evaluations of the targeted politician and willingness to take action.

The results will help us understand if, when, and for whom common corrections can counteract the potential negative effects of (AI-generated) visual propaganda.

Tracking the use of AI in Election Campaigns  

This project, led by Fabio Votta (University of Amsterdam) and Simon Kruschinski (University of Mainz), aims to empirically investigate the use of Generative AI (GAI) in political campaigns. Amidst concerns about GAI's potential for disinformation and manipulation, this research will measure its scope, reach, and strategic deployment by political actors on social media in Canada, Australia, and the Netherlands.

Using a semi-automated approach combining machine learning with human coding, the study will analyze organic and paid content from platforms such as Facebook and Instagram. It seeks to identify whether GAI is used for benign content enhancement or for more nefarious purposes such as negative campaigning. Expected outcomes include comparative analyses, a public dashboard, and policy reports, providing crucial evidence for policymakers and researchers on AI's impact on democratic processes and information quality.