
Dr. C.D.R.O. (Chris) Starke

Faculty of Social and Behavioural Sciences
CW: Political Communication & Journalism

Visiting address
  • Nieuwe Achtergracht 166
Postal address
  • Postbus 15791
    1001 NG Amsterdam
Contact details
  • Profile

    I am an Assistant Professor specializing in The Human Factor in New Technologies within the Department of Communication Science at the University of Amsterdam. I am affiliated with the interdisciplinary research priority area Humane AI and represent the Program Group Political Communication and Journalism on the ethical board of ASCoR (Amsterdam School of Communication Research).

    After earning my PhD from the University of Münster in Germany, I worked as a postdoctoral researcher in the Department of Social Sciences at the University of Düsseldorf. My academic journey has been driven by a passion for exploring the intersection of artificial intelligence and societal well-being.

    My research and teaching examine how AI can strengthen and challenge democracy and social cohesion. Specifically, my work is organized around four key pillars:

    • (Perceived) Fairness of AI: Exploring public perceptions of AI (un)fairness and the societal implications.
    • Opportunities and Risks of Synthetic Relationships between Humans and AI: Investigating the dynamics and consequences of human-AI relationships.
    • Effective and Responsible Human Oversight of AI Systems: Scrutinizing and developing AI governance frameworks.
    • AI as an (Anti)Corruption Tool: Analyzing AI’s potential to combat or exacerbate corruption.

    Through my interdisciplinary approach, I aim to contribute to building AI systems that are ethically sound, socially beneficial, and aligned with democratic values.

  • Research

    Research methods

    • (Behavioral) Experimental Research
    • Survey
    • (Automated) Content Analysis
    • Experience Sampling


    Current research projects

    Understanding the Opportunities and Risks of Synthetic Relationships

    This project focuses on the emerging trend of synthetic relationships between humans and AI systems. We investigate the potential risks of these relationships, such as emotional dependence and the erosion of genuine human connection. We further propose policy measures to mitigate these risks, such as advocating for guardrails that protect users' well-being and promote the responsible development of AI agents.

Starke, C., Ventura, A., Bersch, C., Cha, M., de Vreese, C., Doebler, P., Dong, M., Krämer, N., Leib, M., Peter, J., Schäfer, L., Soraperra, I., Szczuka, J., Tuchtfeld, E., Wald, R., & Köbis, N. (2024). Risks and protective measures for synthetic relationships. Nature Human Behaviour, 8(10), 1834–1836. https://doi.org/10.1038/s41562-024-02005-4

    The Impact of GenAI on Perceptions of Disinformation

    This project examines the effects of a GenAI literacy intervention. We investigate whether providing information about AI-generated disinformation increases (1) people’s ability to discern true from false online news and (2) overall skepticism toward online news.

    Cognitive Biases in Human Oversight of AI

    This project examines the common policy approach of using human oversight to mitigate the risks of algorithmic bias in AI systems. We caution against assuming that human intervention is a simple solution to bias, as human judgement is also prone to systematic errors based on (1) limitations in human cognitive abilities, (2) the influence of personal preferences and biases, and (3) the potential for over- or under-reliance on AI.

    Understanding Political Corruption in Digital Societies

    The project investigates the potential of AI to combat corruption, examining both the opportunities and challenges associated with its implementation. We examine the effectiveness of AI-based anti-corruption tools (AI-ACT) implemented top-down (by governments) or bottom-up (by citizens and non-governmental organizations).

Forjan, J., Köbis, N., & Starke, C. (2024). Artificial Intelligence as a Weapon to Fight Corruption: Civil Society Actors on the Benefits and Risks of Existing Bottom-up Approaches. In A. Mattoni (Ed.), Digital Media and Grassroots Anti-Corruption: Contexts, Platforms and Practices of Anti-Corruption Technologies Worldwide (pp. 229–249). Edward Elgar Publishing. https://doi.org/10.4337/9781802202106.00020

Starke, C., Kieslich, K., Reichert, M., & Köbis, N. (2023). Algorithms against Corruption: A Conjoint Study on Designing Automated Twitter Posts to Encourage Collective Action. Preprint published on OSF.

Köbis, N., Starke, C., & Rahwan, I. (2022). The Promise and Perils of Using Artificial Intelligence to Fight Corruption. Nature Machine Intelligence, 4, 418–424. https://doi.org/10.1038/s42256-022-00489-1

    Algorithmic Contestation on Social Media Platforms

    Based on the EU’s Digital Services Act, this project investigates user contestation of personalised recommender systems on very large online platforms (VLOPs). We explore user preferences for non-personalised content curation, focusing on the choice to opt out of default personalised systems.

    Starke, C., Metikoš, L., de Vreese, C. H., & Helberger, N. (2024). Contesting personalized recommender systems: a cross-country analysis of user preferences. Information, Communication & Society, 1-20. https://doi.org/10.1080/1369118X.2024.2363926


    Research grants

    05/2024 – 05/2029   Rescuing Democracy from Political Corruption in Digital Societies (RESPOND), Horizon Europe project funded by the European Commission

    07/2023 – 07/2024   Understanding the Human in the Loop: Behavioral Insights to Develop Responsible Algorithms (HumAIne), Collaboration with behavioral economists, funded by the UvA IP theme “Responsible Digital Transformations”

03/2021 – 02/2024   Responsible Academic Performance Prediction: Factual and Perceived Fairness of Algorithmic Decision-Making (FAIR/HE), Collaboration with computer scientists, funded by the German Federal Ministry for Education & Research

    08/2020 – 10/2022   Discourse Data 4 Policy: AI-based Understanding of Online Discourses for Evidence-based Policy-Making (DD4P), Collaboration with computer scientists, funded by the Heinrich-Heine University of Düsseldorf

    01/2020 – 01/2023   Corruption & Anti-Corruption in Empirical Research: Critical Reflections on Concepts, Data & Methods, Funded by the Constructive Advanced Thinking Initiative

  • Publications

    2024

• Starke, C., Metikoš, L., de Vreese, C. H., & Helberger, N. (2024). Contesting personalized recommender systems: a cross-country analysis of user preferences. Information, Communication & Society, 1-20. https://doi.org/10.1080/1369118X.2024.2363926

    2021

    • Lünich, M., Starke, C., Marcinkowski, F., & Dosenovic, P. (2021). Double Crisis: Sport Mega Events and the Future of Public Service Broadcasting. Communication & Sport, 9, 287-307. https://doi.org/10.1177/2167479519859208
    • Starke, C. (2021). European Solidarity Under Scrutiny: Empirical Evidence for the Effects of Media Identity Framing. (Palgrave Studies in European Political Sociology). Palgrave Macmillan. https://doi.org/10.1007/978-3-030-67179-2 [details]

    2020

    • Marcinkowski, F., Kieslich, K., Starke, C., & Lünich, M. (2020). Implications of AI (un-)fairness in higher education admissions: The effects of perceived AI (un-)fairness on exit, voice and organizational reputation. In FAT* '20: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 122–130). ACM. https://doi.org/10.1145/3351095.3372867
    • Starke, C., & Lünich, M. (2020). Artificial Intelligence for Political Decision-Making in the European Union: Effects on Citizens’ Perceptions of Input, Throughput, and Output Legitimacy. Data & Policy, 2, Article e16. https://doi.org/10.1017/dap.2020.19
    • Starke, C., Marcinkowski, F., & Wintterlin, F. (2020). Social Networking Sites, Personalization, and Trust in Government: Empirical Evidence for a Mediation Model. Social Media + Society, 6. https://doi.org/10.1177/2056305120913885
    • Wallaschek, S., Starke, C., & Brüning, C. (2020). Solidarity in the Public Sphere: A Discourse Network Analysis of German Newspapers (2008-2017). Politics and Governance, 8, 257–271. https://doi.org/10.17645/pag.v8i2.2609

    2018

    • Marcinkowski, F., & Starke, C. (2018). Trust in Government: What's News Media Got to Do with it? Studies in Communication Sciences, 18(1), 87-102. https://doi.org/10.24434/j.scoms.2018.01.006
    • Marcinkowski, F., Starke, C., & Lünich, M. (2018). Spontaneous Trait Inferences from Candidates’ Faces: The Impact of the Face Effect on Election Outcomes in Germany. Acta Politica, 53, 231–247. https://doi.org/10.1057/s41269-017-0048-y

    2017

    • Flemming, F., Lünich, M., Marcinkowski, F., & Starke, C. (2017). Coping with Dilemma: How German Sport Media Users Respond to Sport Mega Events in Autocratic Countries. International Review for the Sociology of Sport, 52, 1008-1024. https://doi.org/10.1177/1012690216638545
    • Starke, C., & Flemming, F. (2017). Who is Responsible for Doping in Sports? The Attribution of Responsibility in the German Print Media. Communication & Sport, 5, 245-262. https://doi.org/10.1177/2167479515603712

    2016

• Starke, C., & Hofmann, L. (2016). Is the Euro-crisis a catalyst for European identity? The complex relationship between conflicts, the public sphere and collective identity. European Policy Review, 1, 15-26.
    • Starke, C., Naab, T., & Scherer, H. (2016). Free to Expose Corruption: The Impact of Media Freedom, Internet Access and Governmental Online Service Delivery on Corruption. International Journal of Communication, 10, 4702–4722.

    2025

• Starke, C. (in press). Political Consumerism. In A. Nai, M. Groemping, & D. Wirz (Eds.), Elgar Encyclopedia of Political Communication. Edward Elgar Publishing.

    2024

    • Forjan, J., Köbis, N., & Starke, C. (2024). Artificial intelligence as a weapon to fight corruption: Civil society actors on the benefits and risks of existing bottom-up approaches. In A. Mattoni (Ed.), Digital Media and Grassroots Anti-Corruption: Contexts, Platforms and Practices of Anti-Corruption Technologies Worldwide (pp. 229–249). Edward Elgar Publishing. https://doi.org/10.4337/9781802202106.00020 [details]
    • Starke, C., Ventura, A., Bersch, C., Cha, M., de Vreese, C., Doebler, P., Dong, M., Krämer, N., Leib, M., Peter, J., Schäfer, L., Soraperra, I., Szczuka, J., Tuchtfeld, E., Wald, R., & Köbis, N. (2024). Risks and protective measures for synthetic relationships. Nature Human Behaviour, 8(10), 1834–1836. https://doi.org/10.1038/s41562-024-02005-4 [details]

    2019

• Marcinkowski, F., & Starke, C. (2019). Wann ist Künstliche Intelligenz (un)fair? Ein sozialwissenschaftliches Konzept von KI-Fairness [When is artificial intelligence (un)fair? A social-science concept of AI fairness]. In J. Hofmann, N. Kersting, C. Ritzi, & W. J. Schünemann (Eds.), Politik in der digitalen Gesellschaft: Zentrale Problemfelder und Forschungsperspektiven (pp. 269-288). Transcript.

    2018

    • Köbis, N. C., Iragorri-Carter, D., & Starke, C. (2018). A social psychological view on the social norms of corruption. In I. Kubbe, & A. Engelbert (Eds.), Corruption and norms: Why informal rules matter (pp. 31-52). Palgrave Macmillan. https://doi.org/10.1007/978-3-319-66254-1_3 [details]

This list of publications is extracted from the UvA-Current Research Information System.
  • Ancillary activities
    No ancillary activities