29 October 2025
 
            When over 1,000 Dutch citizens were asked how likely they were to ask ChatGPT or another AI tool for advice on political candidates, parties, or election issues, the results were worrying.
If we extrapolate this to the general population, roughly 1.8 million citizens nationwide could be asking an AI chatbot for advice. And since people tend to underreport behavior they consider socially undesirable, the real number could be even higher.
The figures come from a survey fielded by the AlgoSoc Consortium (a collaboration of the University of Amsterdam, Utrecht University, TU Delft, Tilburg University, and Erasmus University Rotterdam), which asked 1,220 Dutch citizens in the lead-up to the elections how likely they were to seek such advice.
The data, drawn from Centerdata's LISS Panel, revealed pronounced differences across age groups: willingness to use AI for political advice drops sharply with age. Among respondents aged 18-34, 17.2% said they were likely to use AI for such advice, 18.6% said they might, and 64.2% said they were not likely to. In the 35-54 group, only 8.4% were likely and 15.9% might, while a large majority (75.7%) said they were not likely. Among those aged 55 and older, willingness was lower still: just 5.9% were likely and 9.9% might, against 84.2% who said they were not likely.
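For readers who want to check the arithmetic, here is a minimal Python sketch of that back-of-the-envelope extrapolation. The overall “likely” share and the size of the Dutch electorate below are illustrative assumptions; the survey itself reports only the age-group shares above and the resulting ~1.8 million estimate:

```python
# Survey shares by age group, as reported above (percent of respondents).
shares = {
    "18-34": {"likely": 17.2, "might": 18.6, "not likely": 64.2},
    "35-54": {"likely": 8.4,  "might": 15.9, "not likely": 75.7},
    "55+":   {"likely": 5.9,  "might": 9.9,  "not likely": 84.2},
}

# Back-of-the-envelope national estimate. Both values below are ASSUMED
# for illustration: the article gives only the ~1.8 million result, not
# the overall share or electorate size used to derive it.
overall_likely_share = 0.133   # assumed overall "likely" share (13.3%)
electorate = 13_500_000        # assumed number of eligible Dutch voters

estimate = overall_likely_share * electorate
print(f"Estimated voters likely to ask an AI chatbot: {estimate:,.0f}")
# -> about 1.8 million
```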
 
            The Dutch Data Protection Authority (AP) recently tested several chatbots against trusted tools like Kieskompas and StemWijzer. The results were clear: these chatbots are not ready to guide voters.
According to the AP’s October 2025 report, the chatbots “surprisingly often” give the same two parties, PVV or GroenLinks-PvdA, as the top match, regardless of what the user asks. In more than 56% of cases, one of these two came out on top; for one chatbot, that figure was over 80%.
Meanwhile, parties like D66, VVD, SP, or PvdD barely appeared, and others (BBB, CDA, SGP, DENK) were practically invisible, even when the voter’s positions matched theirs perfectly.
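As an aside, the AP’s headline number is a simple concentration statistic: across many test profiles, how often does the top recommendation fall on one of the two dominant parties? Below is a toy sketch of that calculation in Python; the recommendation list is made up for illustration and is not the AP’s actual test data:

```python
from collections import Counter

# Hypothetical top-match outputs from one chatbot across simulated voter
# profiles; stand-ins for the AP's real test data.
top_matches = [
    "PVV", "GroenLinks-PvdA", "PVV", "D66", "GroenLinks-PvdA",
    "PVV", "GroenLinks-PvdA", "VVD", "PVV", "GroenLinks-PvdA",
]

dominant = {"PVV", "GroenLinks-PvdA"}
share = sum(m in dominant for m in top_matches) / len(top_matches)
print(f"Top match is PVV or GroenLinks-PvdA in {share:.0%} of cases")

# The full distribution shows which parties barely surface at all.
print(Counter(top_matches))
```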
In other words, these chatbots are biased and, as voting aids, get it badly wrong. “Chatbots may seem like clever tools,” said AP vice-chair Monique Verdier, “but as a voting aid, they consistently fail. Voters may unknowingly be steered toward parties that don’t align with their views. This threatens the integrity of free and fair elections.”
Unlike official voting aids, chatbots are not designed for politics. Their answers are generated from messy, unverifiable data scraped from across the internet, including social media and outdated news.
That means they can:
- present outdated or inaccurate information as if it were current;
- reproduce the biases of whatever data they were trained on;
- steer voters toward parties that don’t match their actual positions.
So while a chatbot might sound confident, it’s not necessarily correct.
This is worrying because voters often look for voting advice late in the campaign, when attention is high but time to reflect is short. Research shows that many people turn to tools like StemWijzer or Kieskompas just days before an election (Van de Pol et al., 2014). It’s reasonable to expect the same pattern for AI chatbots, except these tools aren’t designed, tested, or transparent enough to handle that responsibility.
So, if you’re on your way to the polls today… maybe don’t stop to ask ChatGPT, “Who should I vote for?”
It’s not just experts raising the alarm: the public is concerned too. In our survey, nearly seven in ten Dutch citizens (69.2%) said they are worried about others receiving incorrect political information from AI tools.
This shows that most people understand the risks: chatbots might sound smart, but they’re not neutral, and their influence can quietly distort how voters see the political landscape.