
Polarizing issue publics: The interplay of human choice and algorithmic personalisation

Many argue that political polarisation is rising and that societal cohesion is dwindling. In contrast to the US and the UK, polarisation in the Netherlands is not centred on the left-right axis but concentrated in polarising and radicalising issue publics. In this project, we study the role of the digital media environment and AI in the causes and effects of polarisation within these issue publics, using two case studies.


Platforms such as Google Search, Facebook, and other social media allegedly afford the creation of online echo chambers, in which sympathisers of an ideology or viewpoint surround themselves with attitude-confirming information that further strengthens their existing beliefs. This self-selection is feared to be amplified by the personalisation algorithms used to curate information: such algorithms pick up on users' signals and filter out information that challenges their viewpoints. We want to disentangle bias resulting from self-selection, i.e. from actively curating one's social media feed or using specific keywords when searching for information online, from bias resulting from algorithmic selection, i.e. from AI-driven systems that pick up on these tendencies and offer more content in line with one's earlier online behaviour. Additionally, we examine to what extent the influence of individual characteristics, such as issue literacy and general literacy, prior knowledge, and existing attitudes, is amplified when citizens engage with algorithmically curated information.
Using two case studies, we aim to investigate to what extent AI-curated information contributes to increased polarisation of issue publics. To do so, we need to answer three sub-questions:

  1. Does AI-afforded personalisation in information searches lead to bias?
  2. To what extent is a biased information menu related to biases in knowledge and perception of public opinion?
  3. What are the consequences of biased information for learning and behavioural change?

To answer our questions, we will conduct three studies:

  • a study of bias in online search, taking into account that users at opposite poles use different keywords (agent-based testing; a minimal illustrative sketch follows this list);
  • a study of the relationship between information use, attitudes, and perceptions of public opinion (digital trace data);
  • a study of how biased information searches and exposure translate into behavioural change (experiment).
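
To make the agent-based testing idea concrete, the minimal Python sketch below shows one way the overlap between result lists retrieved by two search agents could be quantified. It is only an illustration under stated assumptions: the agents, URLs, and the persistence parameter p are invented placeholders, and the overlap measures (Jaccard similarity and a truncated rank-biased overlap) are generic metrics, not necessarily the ones used in this project.

    # A minimal sketch of quantifying divergence between search result lists
    # in an agent-based audit. All names, URLs, and parameter values are
    # hypothetical placeholders, not data from this project.

    def jaccard(list_a, list_b):
        """Share of URLs that two result lists have in common (set overlap)."""
        set_a, set_b = set(list_a), set(list_b)
        union = set_a | set_b
        return len(set_a & set_b) / len(union) if union else 1.0

    def rank_biased_overlap(list_a, list_b, p=0.9):
        """Top-weighted agreement between two rankings (truncated RBO)."""
        depth = min(len(list_a), len(list_b))
        if depth == 0:
            return 1.0
        agreement = 0.0
        for d in range(1, depth + 1):
            overlap = len(set(list_a[:d]) & set(list_b[:d])) / d
            agreement += (p ** (d - 1)) * overlap
        return (1 - p) * agreement

    # Placeholder result lists for two hypothetical agents whose queries
    # reflect opposite framings of the same issue.
    results_agent_a = ["url_1", "url_2", "url_3", "url_4", "url_5"]
    results_agent_b = ["url_2", "url_6", "url_1", "url_7", "url_8"]

    print(f"Jaccard overlap: {jaccard(results_agent_a, results_agent_b):.2f}")
    print(f"Rank-biased overlap: {rank_biased_overlap(results_agent_a, results_agent_b):.2f}")

In such a setup, low overlap between agents issuing differently framed queries would point to query-driven (self-selection) divergence, whereas holding the queries constant while varying the agents' prior browsing behaviour would instead isolate divergence due to algorithmic selection.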


M. (Marieke) van Hoof

PhD Student

Prof. dr. C.H. (Claes) de Vreese

Promotor

Dr. J.E. (Judith) Möller

Supervisor

Dr. D.C. (Damian) Trilling

Supervisor

Dr. C.S. (Corine) Meppelink

Supervisor