10 March 2025
Sometimes a conversation with a chatbot or virtual assistant feels very natural, like talking to a human being. Other times, you immediately notice that it is a computer. Jochen Peter and his colleagues have been trying to find out what is behind this. They discovered that people like a chatbot better than a website. However, users remain critical: if a chatbot seems human but clearly works on the basis of AI, interacting with it may actually feel uncomfortable.
We talk to chatbots and robots for a variety of reasons. In customer service, chatbots are used to answer questions quickly. People find this convenient, but get annoyed if the chatbot does not understand their problem properly. At home, more and more people are using voice assistants such as Siri or Google Assistant. Parents and children alike use them for games, music or reading out stories. Chatbots are also being used in health care. Some people enjoy sharing feelings with a chatbot, because it does not judge. Others doubt whether a machine is able to provide real support. This raises various questions, such as: how social is technology allowed to be? And what role do we want machines to play in our lives?
You can read all about the research conducted by Jochen Peter and his colleagues in the book Communication Research into the Digital Society: Fundamental insights from ASCoR’s research. This book was published to mark ASCoR’s 25th anniversary. In the book, scholars look back on – and ahead to – different areas of communication research at ASCoR. You can download the online version of the book free of charge.
Chatbots can sometimes be more persuasive than a news article on a website. This is because people see a chatbot as more credible and more human. To achieve this effect, however, chatbots have to come across as personal. Customer service chatbots that appear human put people more at ease. This may lead to greater trust and even a stronger inclination to make purchases.
Artificial intelligence is becoming ever smarter. AI systems are already making decisions in health care, education and finance. For example, they determine who qualifies for a loan or what medical treatment someone needs. This raises questions. For a variety of reasons, such as the notion that AI has no emotions, people sometimes think AI is fairer than humans. On the other hand, AI can make mistakes or be biased, for example if it is trained on information that contains biases. Privacy is another important issue. How much data are AI systems allowed to collect? And who checks how they use it? Our ASCoR scholars still have a lot of research to do, but remaining critical is a task for us all.