Companies collect and algorithmically process large amounts of individuals' digital traces with the aim of increasing the persuasiveness of online content (algorithmic persuasion). Such persuasion is impactful because it can steer individuals toward behaviours that are not necessarily in their own interest. This project aims to investigate the unintended and potentially harmful impact of algorithmic persuasion on individuals and society.
The current understanding of the unintended effects of algorithmic persuasion focuses strongly on economic and privacy harms. Normative literature, however, suggests that such persuasion can also cause harm by narrowing the space of available choices and by exploiting cognitive biases. How consumers experience such harms, and how they cope with them, remains understudied in empirical communication research. The aim of the proposed PhD project is therefore to investigate the persuasive power of algorithms in terms of its potential unintended negative effects (harms). Unravelling how consumers experience harms and cope with them will eventually allow the design of effective intervention strategies that increase individual knowledge of these harms and thus foster coping with algorithmic persuasion.
The overall aim of the project is to investigate the unintended and potentially harmful effects of algorithmic persuasion on individuals and society.