
This is a lecture in the RPA Communication Lecture Series, which is being given by Dr. Huma Shah, Senior Lecturer and AI research scientist in the School of Computing, Electronics and Mathematics at Coventry University.

Detail Summary
Date: 22 November 2018
Time: 15:30 - 17:30
Location: Roeterseilandcampus - building B/C/D (entrance B/C)
Organised by: Dr. Huma Shah (Coventry University, UK)

The demand for conversational Artificial Intelligence (CAI) to address customer service needs is increasing [1]. Recent advances in speech recognition technology have accelerated the embedding of personal digital assistants across a range of use cases in the digital economy, including artificial humans in e-retail and virtual nurses in healthcare. Early CAI was experienced through Joseph Weizenbaum’s natural language understanding system Eliza [2]. Eliza’s question-answer format, modelled on a psychotherapist, allowed humans to interact with the dialogue agent [3]. A similar format emerged in Colby et al.’s 1971 PARRY system, a text-based simulation of schizophrenia [4] that helped to train psychiatrists [5].
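Eliza-style systems of this kind work by matching keyword patterns in the user's utterance and echoing a transformed fragment back as a question. The following is a minimal illustrative sketch of that technique; the specific rules and reflections are invented for the example and are far simpler than Weizenbaum's DOCTOR script.

```python
import re

# Illustrative rules in the spirit of Eliza's question-answer format:
# match a keyword pattern, then echo part of the input back as a question.
RULES = [
    (re.compile(r"\bI need (.+)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (.+)", re.IGNORECASE), "Tell me more about your {0}."),
]

# First-/second-person swaps so the echoed fragment reads naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}

def reflect(fragment: str) -> str:
    """Swap pronouns in the captured fragment, word by word."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(utterance: str) -> str:
    """Return the first matching rule's question, or a neutral prompt."""
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            return template.format(reflect(m.group(1).rstrip(".!?")))
    return "Please go on."  # default when no keyword matches

print(respond("I need a holiday"))  # -> Why do you need a holiday?
print(respond("I am sad"))          # -> How long have you been sad?
```

The charm of the approach, then as now, is that a handful of surface patterns can sustain a surprisingly convincing conversation without any language understanding.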

E-commerce introduced the question-answer paradigm through text-based avatars that augmented online keyword search. One well-known example is Anna (Fig. 1) on the website of the Swedish furniture company Ikea. A 24/7-accessible virtual agent, Anna produced a return on investment of 200%, increased sales from Ikea’s digital catalogue by 10%, and reduced Ikea’s call centre costs by 20% [6]. The use of digital assistants has grown since 2011, following the launch of Apple’s Siri personal assistant on the iPhone 4S smartphone.

Does the fun of using conversational AI strengthen our trust in the convenience that personal digital assistants, such as Amazon’s Alexa, afford? Could conversational artificial intelligence be the next scandal in the digital economy, or could these technologies help to keep our identities safe, our personal data private and our trust maintained? These questions matter as we develop more business and social AI agents, and robots with human language capability, to interact with us in shopping malls, airports, hospitals and our homes as information assistants, carers or companions, and as more businesses transform their processes for 5G to remain competitive in the fourth industrial revolution.

More information:


[1] Peart, A. (2018). Conversational AI is Becoming More Mainstream as Demand Increases. Datanami, 21 August 2018. Accessed from:

[2] Weizenbaum, J. (1966). ELIZA – A Computer Program for the Study of Natural Language Communication between Man and Machine. Communications of the ACM, 9(1), pp. 36-45

[3] Shah, H., Warwick, K., Vallverdú, J. and Wu, D. (2016). Can Machines Talk? Comparison of Eliza with Modern Dialogue Systems. Computers in Human Behavior, 58, pp. 278-295. doi:10.1016/j.chb.2016.01.004

[4] Colby, K.M., Weber, S. and Hilf, F.D. (1971). Artificial Paranoia. Artificial Intelligence, 2, pp. 1-25

[5] Heiser, J.F., Colby, K.M., Faught, W.S. and Parkison, R.C. (1979). Can Psychiatrists Distinguish a Computer Simulation of Paranoia from the Real Thing? The Limitation of Turing-like Tests as Measures of the Adequacy of Simulations. Journal of Psychiatric Research, 15(3), pp. 149-162

[6] Shah, H. (2005). Text-based dialogical e-query systems: gimmick or convenience? Invited talk at the inaugural colloquium on conversational systems, Digital World Research Centre, University of Surrey, 25 November


Room: This lecture will take place in REC C10.20

Nieuwe Achtergracht 166
1018 WV Amsterdam

Please register via: