
Improving chatbot design and intent recognition: An approach through the methods of intercultural pragmatics

  • Doris Dippold
  • Freda Mold
  • Priyanki Ghosh

Published/Copyright: August 4, 2025

Abstract

Designing chatbots that provide a good user experience, guide users to their goals, and are inclusive and accessible is vital to ensure that key public and commercial services are available to a wide range of users who hold varying social norms and exhibit different patterns of social interaction. However, chatbot design can be undermined by a lack of consideration of user needs and by LLMs that are not fine-tuned to account for these differences. Using examples from the testing and development of a medical appointment booking chatbot, this paper showcases how the established methods, approaches and insights of intercultural pragmatics can be used to optimize the dialogue design of chatbots and improve the intent recognition of the language models which drive them. Specifically, this work draws on the CCSARP framework for the classification of requests and the GAAFFE framework, which were applied to both naturally occurring and simulated data. The paper also discusses the boundaries and limitations of these approaches.


Corresponding author: Doris Dippold, University of Surrey, Guildford, UK, E-mail:

About the authors

Doris Dippold

Doris Dippold is Associate Professor in Intercultural Communication at the University of Surrey. She is a fellow of the Surrey Institute for People-Centred Artificial Intelligence and program leader for the MA Intercultural Business Communication and Marketing. Her research interests include human-machine interaction, chatbots, intercultural pragmatics, intercultural communication, professional communication and English as a lingua franca.

Freda Mold

Freda Mold is Senior Lecturer in Integrated Care at the University of Surrey and Program Director of the PhD program in Health Sciences. Her research interests focus on digital inclusion related to online access to primary & community health care services, virtual consultations, chronic disease management and the co-design of health technologies.

Priyanki Ghosh

Priyanki Ghosh has a PhD in Linguistics and is a Postdoctoral Researcher at the University of Surrey. Her research interests focus on how linguistics can be used to address real-world challenges and create positive social impact, with specific interests in inclusion in higher education.

Acknowledgement

This project was supported by a University of Surrey ESRC IAA grant.

Appendix 1

Sample chatbot booking invitation – appointment request pair

Chatbot: Hi [name] – I’m Asa, your AI receptionist from the Clerkenwell Medical Practice. I can help you book your cervical cancer screening appointment with one of the practice nurses. I can also answer any questions you have about cervical screening.
Our records show that you are now due a cervical screening. Let’s get you booked in. If you have a preferred date in mind, please let me know the date. Or would you like me to check for the next available appointment?
Response: Please can you find me an appointment on February 28th? Ideally first thing in the morning

Appendix 2

A. Survey elicitation of booking request

B. Survey elicitation of cancellation request

Appendix 3

Survey elicitation of patient perceptions on appointment invitation

Appendix 4

A. Initial chatbot booking invitation – autumn 2024

Hi [name], I’m Asa, your AI receptionist from the [name of medical practice]. I’m not human, but you can chat to me like you would with a real person.

It’s time for your cervical screening appointment, let’s get you booked in. Do you have a date in mind, or would you like me to check for the next available appointment?

B. Initial chatbot booking invitation – from February 2025

Hi [name], I’m Asa, your AI receptionist from the [name of medical practice]. I can help you book your cervical cancer screening appointment with one of the practice nurses. I can also answer any questions you have about cervical screening.

Our records show that you are now due a cervical screening. Let’s get you booked in. If you have a preferred date in mind, please let me know the date. Or would you like me to check for the next available appointment?

References

Ada Lovelace Institute. 2023. Access denied? Inequalities in data-driven health systems and digital health services. https://www.adalovelaceinstitute.org/wp-content/uploads/2023/09/Ada-Lovelace-Institute-policy-briefing-health-inequalities.pdf (accessed 26 October 2023).

Archer, Dawn, Jonathan Culpeper & Matthew Davies. 2008. Pragmatic annotation. In Anke Lüdeling & Merja Kytö (eds.), Corpus linguistics: An international handbook, 613–642. Berlin: Mouton de Gruyter.

Barikeri, Soumya, Anne Lauscher, Ivan Vulić & Goran Glavaš. 2021. RedditBias: A real-world resource for bias evaluation and debiasing of conversational language models. In Proceedings of the 59th annual meeting of the association for computational linguistics and the 11th international joint conference on natural language processing (Volume 1: Long Papers), 1941–1955. Association for Computational Linguistics (Online). https://doi.org/10.18653/v1/2021.acl-long.151.

Bauer, Greta, Siobhan Churchill, Mayuri Mahendran, Chantel Walwyn, Daniel Lizotte & Alma Villa-Rueda. 2021. Intersectionality in quantitative research: A systematic review of its emergence and applications of theory and methods. SSM – Population Health 14. https://doi.org/10.1016/j.ssmph.2021.100798.

Blum-Kulka, Shoshana & Elite Olshtain. 1984. Requests and apologies: A cross-cultural study of speech act realization patterns (CCSARP). Applied Linguistics 5(3). 196–213. https://doi.org/10.1093/applin/5.3.196.

Brandt, Adam & Spencer Hazel. 2024. Towards interculturally adaptive conversational AI. Applied Linguistics Review 16(2). 775–786. https://doi.org/10.1515/applirev-2024-0187.

Brennan, Karagh & Jonathan Reay. 2024. Emotional barriers pose the greatest threat to cervical cancer screening for young adult women in the United Kingdom. Preventive Medicine 189. https://doi.org/10.1016/j.ypmed.2024.108160.

Bunt, Harry. 2013. Computational pragmatics. In Yan Huang (ed.), The Oxford handbook of pragmatics, 567–585. Oxford: Oxford University Press.

Chang, Z., Feihong Lu, Zhu Zigin, Li Qian, Ji Cheng, Chen Zhou, Liu Yang, Xu Ruifent, Song Yanqui, Wang Shangguang & Liianxin. 2025. Bridging the gap between LLMs and human intentions: Progresses and challenges in instruction understanding, intention reasoning, and reliable generation. https://arxiv.org/abs/2502.09101 (accessed 26 October 2023).

Chaves, Ana, Jesse Egbert, Toby Hocking, Eck Doerry & Marco Aurelio Gerosa. 2022. Chatbots language design: The influence of language variation on user experience with tourist assistant chatbots. ACM Transactions on Computer-Human Interaction 29(2). 1–38. https://doi.org/10.1145/3487193.

Concannon, Shauna, Ian Roberts & Marcus Tomalin. 2023. An interactional account of empathy in human-machine communication. Human-Machine Communication 6. 87–116. https://doi.org/10.30658/hmc.6.6.

Dippold, Doris. 2023. “Can I have the scan on Tuesday?” User repair in interaction with a task-oriented chatbot and the question of communication skills for AI. Journal of Pragmatics 204. 21–32. https://doi.org/10.1016/j.pragma.2022.12.004.

Dippold, Doris. 2024. Making the case for audience design in conversational AI: Users’ pragmatic strategies and rapport expectations in interaction with a task-oriented chatbot. Applied Linguistics. 1–18. https://doi.org/10.1093/applin/amae033.

Elo, Satu & Helvi Kyngäs. 2008. The qualitative content analysis process. Journal of Advanced Nursing 62(1). 107–115. https://doi.org/10.1111/j.1365-2648.2007.04569.x.

Fatima, Johra Kayeser, Md Irfanuzzaman Khan, Somayeh Bahmannia, Sarvjeet Kaur Chatrath, Naomi F. Dale & Raechel Johns. 2024. Rapport with a chatbot? The underlying role of anthropomorphism in socio-cognitive perceptions of rapport and e-word of mouth. Journal of Retailing and Consumer Services 77. https://doi.org/10.1016/j.jretconser.2023.103666.

Ferrara, Emilio. 2023. Fairness and bias in artificial intelligence: A brief survey of sources, impacts, and mitigation strategies. https://arxiv.org/abs/2304.07683 (accessed 26 October 2023).

Gal, Susan & Kathryn Woolard. 1995. Constructing languages and publics: Authority and representation. Pragmatics 5(2). 129–138. https://doi.org/10.1075/prag.5.2.01gal.

Guzman, Andrea & Seth Lewis. 2020. Artificial intelligence and communication: A human–machine communication research agenda. New Media & Society 22(1). 70–86. https://doi.org/10.1177/1461444819858691.

He, Zihao, Leili Tavabi, Kristina Lerman & Mohammad Soleymani. 2021. Speaker turn modeling for dialogue act classification. https://arxiv.org/pdf/2109.05056 (accessed 26 October 2023).

Höhn, Sviatlana, Bettina Migge, Doris Dippold, Britta Schneider & Sjouke Mauw. 2023. Language ideology bias in conversational technology. In International workshop on chatbot research and design, 133–148. Cham: Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-54975-5_8.

Hoyos, Adriana. 2023. Unpacking ChatGPT: The pros and cons of AI’s hottest language model. https://www.ie.edu/insights/articles/unpacking-chatgpt-the-pros-and-cons-of-ais-hottest-language-model/ (accessed 26 October 2023).

Johnson, Deborah & Mario Verdicchio. 2017. Reframing AI discourse. Minds and Machines 27. 575–590. https://doi.org/10.1007/s11023-017-9417-6.

Kamikubo, Rie, Lining Wang, Crystal Marte, Amnah Mahmood & Hernisa Kacorri. 2022. Data representativeness in accessibility datasets: A meta-analysis. In Proceedings of the 24th international ACM SIGACCESS conference on computers and accessibility, 1–15. Athens, Greece: Association for Computing Machinery. https://doi.org/10.1145/3517428.3544826.

Kirner-Ludwig, Monika. 2022. Research methods in intercultural pragmatics. In Istvan Kecskes (ed.), The Cambridge handbook of intercultural pragmatics, 361–394. Cambridge: Cambridge University Press. https://doi.org/10.1017/9781108884303.015.

Milmo, Dan & Alex Hern. 2024. ‘We definitely messed up’: Why did Google AI tool make offensive historical images? The Guardian, 8 March. https://www.theguardian.com/technology/2024/mar/08/we-definitely-messed-up-why-did-google-ai-tool-make-offensive-historical-images (accessed 26 October 2023).

NHS. 2016. A no cost way to increase the uptake of cervical screening: Results from a randomised controlled trial. https://www.enhertsccg.nhs.uk/sites/default/files/primarycare/CPP/2.5%20k.%20Cervical%20screening%20information.pdf (accessed 26 October 2023).

Ogiermann, Eva. 2018. Discourse completion tasks. In Andreas Jucker, Klaus Schneider & Wolfram Bublitz (eds.), Methods in pragmatics, 229–255. Berlin/Boston: De Gruyter Mouton. https://doi.org/10.1515/9783110424928-009.

Oppenlaender, Jonas, Rhema Linder & Johanna Silvennoinen. 2023. Prompting AI art: An investigation into the creative skill of prompt engineering. arXiv:2303.13534. 1–42. https://doi.org/10.48550/arXiv.2303.13534.

Parthasarathy, V. B., A. Zafar, A. Khan & A. Shahid. 2024. The ultimate guide to fine-tuning LLMs from basics to breakthroughs: An exhaustive review of technologies, research, best practices, applied research challenges and opportunities. arXiv:2408.13296. 1–113. https://doi.org/10.48550/arXiv.2408.13296.

Pickering, Martin & Simon Garrod. 2006. Alignment as the basis for successful communication. Research on Language and Computation 4. 203–228. https://doi.org/10.1007/s11168-006-9004-0.

Spencer-Oatey, Helen & Peter Franklin. 2024. Intercultural interaction: A multidisciplinary approach to intercultural communication. London: Palgrave Macmillan.

Spencer-Oatey, Helen & Domna Lazidou. 2024. Making working relationships work: The TRIPS Toolkit for handling relationship challenges and promoting rapport. Melbourne: Castledown Publishers.

Timpe-Laughlin, Veronika & Judit Dombi. 2020. Exploring L2 learners’ request behavior in a multi-turn conversation with a fully automated agent. Intercultural Pragmatics 17(2). 221–257. https://doi.org/10.1515/ip-2020-0010.

Verdonik, Darinka. 2023. Annotating dialogue acts in speech data: Problematic issues and basic dialogue act categories. International Journal of Corpus Linguistics 28(2). 144–171. https://doi.org/10.1075/ijcl.20165.ver.

Weil, Elizabeth. 2023. You are not a parrot and a chatbot is not a human. And a linguist named Emily M. Bender is very worried what will happen when we forget this. New York Magazine 27. https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html (accessed 26 October 2023).

Weng, Jinta, Jiarui Zhang, Yue Hu, Daidong Fa, Xiaofeng Xu & Heyan Huang. 2023. Helping language models learn more: Multi-dimensional task prompt for few-shot tuning. In IEEE international conference on systems, man, and cybernetics (SMC), 746–752. https://doi.org/10.1109/SMC53992.2023.10394280.

Zamfirescu-Pereira, J. D., Richmond Wong, Bjoern Hartmann & Qian Yang. 2023. Why Johnny can’t prompt: How non-AI experts try (and fail) to design LLM prompts. In Proceedings of the 2023 CHI conference on human factors in computing systems, 1–21. https://doi.org/10.1145/3544548.3581388.

Published Online: 2025-08-04
Published in Print: 2025-04-28

© 2025 Walter de Gruyter GmbH, Berlin/Boston
