
Towards interculturally adaptive conversational AI

  • Adam Brandt

    and Spencer Hazel

    Spencer Hazel is Reader in Applied Linguistics & Communication at Newcastle University (UK). His research deals first and foremost with co-present interaction, seeking to describe the multifarious resources people draw on in their interactions with each other. His research focuses mainly on linguistically dynamic settings such as international workplaces, language classrooms, and interactions involving people living with dementia. More recently, he has worked on conversational AI systems.

Published/Copyright: July 2, 2024

Abstract

Among the many ways that AI technologies are becoming embedded in our social worlds is the proliferation of Conversational User Interfaces, such as voice assistants (e.g. Apple Siri and Amazon Alexa), chatbots and voice-based conversational agents. Such conversational AI technologies are designed to draw upon the designers’ understanding of interactional practices employed in human–human conversation, and therefore have implications for intercultural communication (ICC). In this paper, we highlight some of the current shortcomings of conversational AI, and how these relate to ICC. We also draw on findings from Conversation Analysis to discuss how pragmatic norms vary across linguacultural groups (see Risager 2019 for a discussion of the term ‘linguaculture’), noting that this poses further challenges for designers of conversational AI systems. We argue that the solution is to work towards what we call interculturally adaptive conversational AI. Finally, we propose a framework for how this can be conceptualised and researched, and argue that researchers with expertise in language and ICC are uniquely placed to contribute to this endeavour.

1 Introduction: the emergence of conversational AI

The past decade has witnessed an exponential growth in conversation-framed technologies being adopted into our everyday lives and activities. Such conversational AI systems (or conversational user interfaces, CUIs) include the text-based chatbots that populate the webpages of services that we use, the voice-based intelligent personal assistants (IPAs) which operate from our smartphones and smart speaker devices, and increasingly the voice-based conversational agents which provide us with customer services in healthcare, finance, retail, and other contexts. For a variety of personal, social and professional purposes, people increasingly find themselves using technologies as if they were interacting with a conversational partner, without any fellow humans being involved.

This trajectory is only predicted to continue over the coming years, with more of our conversations becoming automated. It has been estimated that around 8.4 billion digital voice assistants will be in use globally by 2024 (Statista 2022). Similar growth is expected in customer service, as companies and other institutional bodies recognise these interactive tools as a means of increasing efficiency in their organisations. For example, it is estimated that $112bn was spent globally on ‘conversational commerce’ in 2023 (Juniper Research 2023). The potential efficiencies for organisations are twofold: in financial terms, a single technological user interface can perform work that would traditionally require paying large teams of employees; in productivity terms, the technology frees members of the workforce from routine transactional tasks, so they can attend to tasks that require greater human involvement.

While there are clear benefits for organisations, engaging with a conversational AI poses numerous challenges to users, the novelty of speaking or writing to a machine rather than a person being one of them. Such challenges are amplified, however, when the ‘conversation’ is in a language the speaker is less accustomed to using, or where the speaker’s language production, be it written or spoken, is of a variety on which the system has not been trained (see also Dai et al. 2024/this issue). The latter, for example, can lead to problems with the automatic speech recognition (ASR) tool not being able to perform with the levels of accuracy required to keep a ‘conversation’ going.

Moreover, linguistic production is only one aspect of talk and chat-based correspondence. Any number of other interaction-implicative norms are also in play at any given moment. As has been shown over decades of Conversation Analytic research, people do things with how they format their turns-at-talk. They produce singular regularised patterns for marking out one type of social event from another, for example. Or they format their talk to produce the relevant social identity and relationship to their interlocutor. Importantly, these are patterns not universally distributed across all human societies, but patterns that members are socialised into through their engagements with and exposure to the cultural groups to which they belong or alongside which they live. While many of our basic social actions (asking a question, making an offer, rejecting a request, etc.) show similarities across linguacultural groups, there are also some key differences in the practices used to perform these (e.g. Betz et al. 2021; Enfield et al. 2010; Reiter 2006; Rossi et al. 2023; Sidnell 2009; Zimmerman 1999). Further, such practices for producing social actions are adapted in the production of institutionality (Drew and Heritage 1992), and accordingly differ across institutional contexts (such that making a request to a customer service provider will differ from making a request to a bank, or in a medical consultation).

This presents designers of AI-powered CUIs with a challenge: whether for text- or voice-based interfaces, conversation designers at the helm of product development need to understand what these patterns are in order to design them into their system (Brandt et al. 2023; Hazel and Brandt 2023). Furthermore, even within a single context or setting, a one-size-fits-all approach to design can be inadequate, as it can neglect how users from different cultural groups produce idiosyncratic patterns for doing the same activity, and these interaction-organisational practices might not align with the choices made by the design team. Indeed, such a unitary design approach may unwittingly incorporate a bias towards certain groups within an organisation’s user base or clientele (see e.g., Jenks 2025/this issue; Jones 2024/this issue), with less widely represented groups being marginalised by a system that privileges those around whom it has been designed.

In this paper, we argue that the solution is to work towards what we call interculturally adaptive conversational AI. We also propose that researchers with expertise in language and intercultural communication (ICC) are uniquely placed to contribute to this endeavour. Before presenting our argument, we further highlight some of the current shortcomings of conversational AI.

2 Current challenges of language and culture in conversational AI

It is widely accepted that many aspects of AI technologies replicate the prejudices which exist in society, for example racial and gender biases, whether this is in relation to machine learning (Mehrabi et al. 2021), large language models (LLMs, such as ChatGPT; Bender et al. 2021), or generative AI more broadly (Fischer 2023).

While the present discussion focusses on these issues in relation to interculturality, much of the research to date has examined the current limitations of AI technology, such as ASR, for certain groups of English speakers, including speakers of African American Vernacular English, Indian speakers of English, and L1 Chinese speakers of English (Ngueajio and Washington 2022). Beyond a negative user experience in the moment, such biases in ASR can compound social discrimination against these groups, for example in work and healthcare settings (Martin and Wright 2023). The most obvious way to address issues of language recognition is through the diversification of the speech datasets and corpora used to train the systems (Koenecke et al. 2020). The same can be said about system design and research more broadly: over the period 2017–2022, around 90 % of CUI design research was based upon participants from Europe and North America (Seymour et al. 2023; see also Jones 2024/this issue). We would echo the argument that research needs to be replicated across countries, especially for developing understanding of social order and social interaction among different groups (Seymour et al. 2023).

If conversational AI systems are not able to adapt to their users, then it is the user who is required to adapt. For example, where intelligent personal assistants (IPAs) currently have limited language options, users are required to engage with their devices in a second language (L2). This can create challenges for the user, for example where devices are perceived to be insensitive to the additional time it may take an L2 speaker to formulate a command (Wu et al. 2020, 2022). Studies of L1 English speakers and L1 Chinese speakers using English with Google Home smart speakers suggest that L1 speakers found the device more usable than the L2 speakers did (Pyae and Scifleet 2018; Pyae et al. 2020). Most relevant to this discussion, the researchers deemed usability to be an issue of cultural distinctions in English expression, rather than language proficiency per se (Pyae and Scifleet 2019). Regardless of the reason, it is possible that these devices can currently heighten an L2 user’s sense of being an outsider or a cultural novice; research suggests that, when things go wrong, L1 users focussed on the limits of the device, while L2 users focussed on their own perceived linguistic limitations (Wu et al. 2020).

Based on such research-informed observations about L2 users, there have been suggestions for ways that conversational AI systems could accommodate L2 users. These include the development of algorithms which can detect when a user is speaking in an L2, and adapt accordingly (Wu et al. 2022), or chatbots which can respond to user ‘code-mixing’ and imitate it (Choi et al. 2023). Others still have designed and trialled chatbots which can ‘nudge’ users to mix between (in this case) English and Hindi, and adapt to the user’s response (Bawa et al. 2020). This, they argue, would lead to a more natural experience for those users.
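As a rough illustration of the kind of accommodation described above, the following sketch detects romanised Hindi–English code-mixing in a user turn and mirrors it in the reply. The cue wordlist, heuristic and response templates are invented for demonstration; they are not drawn from Bawa et al. (2020) or any deployed system.

```python
# Illustrative sketch of code-mixing accommodation (hypothetical, not a
# description of any published system). A crude cue-word heuristic decides
# whether a user turn mixes romanised Hindi with English; if so, the
# chatbot mirrors that mixed style in its reply.

HINDI_CUES = {"haan", "nahin", "acha", "theek", "kya", "hai", "kaise"}

def is_code_mixed(utterance: str) -> bool:
    """Crude heuristic: does the turn combine Hindi cue words with English?"""
    tokens = {t.strip(".,!?").lower() for t in utterance.split()}
    has_hindi = bool(tokens & HINDI_CUES)
    has_other = bool(tokens - HINDI_CUES)
    return has_hindi and has_other

def reply(utterance: str) -> str:
    """Mirror the user's mixed style where detected; otherwise stay in English."""
    if is_code_mixed(utterance):
        return "Theek hai, I can help with that."
    return "Okay, I can help with that."
```

A real system would of course need language identification far more robust than a cue list, but the design choice it illustrates, adapting the system's output style to the style displayed in the user's turn, is the one at issue here.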

This aligns with broader calls for enhanced recipient design (formatting an utterance in a way type-fitted to the interlocutor, based on their presumed knowledge, competence, etc.) to be embedded into conversational AI agents (An et al. 2021). At present, however, this kind of user-centred adaptivity remains an aspiration in relation to language varieties and L2 speaker status, even for simple engagements with IPAs, which are typically made up of straightforward five-part sequences (Due and Lüchow forthcoming). When exchanges with conversational AI are more complex, for example when performing customer service requests or engaging in a clinical consultation, and especially when doing so without sufficient familiarity with the relevant linguacultural normative practices for that particular context, the complexities can be expected to increase manifold.

Such challenges are exacerbated further if user recipient design is to factor in linguacultural group membership, thereby encompassing language proficiency/ies as well as context-appropriate cultural practices. A solution may present itself in the seismic shift that the conversational AI sector is currently undergoing, with organisations looking to integrate LLM technology into the system architecture. With this approach to the design of conversational user interfaces, the LLM is tasked with taking on the role traditionally earmarked for the conversation designer, namely formulating appropriate speech output. We will discuss how this may lead to improvements in the user experience of those from different linguacultural backgrounds, by providing opportunities for generating a bespoke interface.

3 Potential of intercultural conversational AI: shifting practices and linguacultural variation

Mlynář and Arminen (2023) show how the social practices through which we engage in everyday interactions are not static, but change over time, for example when prompted by technological change. Using the telephone call as an example, they discuss how developments in landline and mobile telephony have changed how members routinely carry out a phone call. Delving into CA studies of phone calls over the past 50 years, they demonstrate how once taken-for-granted practices for engaging in a phone call have become obsolete, replaced by subsequent generations of practices more finely attuned to later iterations of the technology mediating the conversations, an “anchoring of social life in its historical time” (p. 1). This diachronic lens helps us understand that social practices are in constant flux, and that trying to fix a social practice within the design of an interactive tool may fail to acknowledge its slippery, ephemeral nature. However, variation across time is only one consideration for the design of conversational AI.

Adopting a synchronic lens, we see how interactional practices emerge in distinct ways across different communities, including for interactions mediated through the very same technologies. Many of the formatting practices described in the Conversation Analytic literature on phone call openings or closings will be specific to those who happened to be captured in the data. If this concerns calls between a clinic and a patient in a contemporary UK setting, then we are likely to find different elements in play, or elements formatted differently, than in an equivalent call among, say, particular social groups in Japan (e.g., Nishizaka 2012; Takami 1991) or The Netherlands (e.g., Houtkoop-Steenstra 1991). Language is likely the most obvious difference, but other features may also vary to a greater or lesser extent (Reiter and Luke 2010). How to open a call; what the opening utterance displays about the nature of the call; what level of formality is encoded in the choice of turn-design; what address terms are used; what patterns of vocal production (pitch, intonation contouring) are adopted and what these index about the interlocutors and the business-at-hand: all of these may evidence patterns of talk-in-interaction particular to the people of this or that linguacultural group. Since members display their group membership credentials by producing normative patterns for interacting with other members, it is incumbent on conversational AI systems to be able to recognise and reproduce these in engagements with users.

A challenge for conventional CUI designers is that they must project and anticipate the possible trajectories that a conversation-framed script may follow. As we have argued elsewhere (Hazel and Brandt 2023), designers can, on the one hand, draw upon their imaginations, adopting a naturalistic dramaturgical approach to producing dialogue prompts and responses. On the other hand, they can adopt the empirical approach of the social scientist, basing the design on observations of patterns found in recordings of equivalent human–human interaction, or on findings from the research literature on social interaction. Each of these methods has its advantages and disadvantages. However, the approaches share one limitation, namely that the designers are constrained by what they themselves know, or by the limitations of the data collections or social interaction research to which they have access. If they rely on their own linguacultural norms for imagining and scripting the system outputs, these will be the norms embedded in the user interface. If they rely on analysis of recordings of equivalent social interaction from one community or segment of society, then it is to those groups that the system will offer a smoother experience than to others. Ultimately, at the user end, the choices made by conversation designers will impact the experience of those required to interact with the system, privileging certain clients over others.

Incorporating LLM-based generative AI tools into the conversation design offers one possible solution to this. LLMs have their own limitations, including one similar to those described above: they can predict patterns on the basis of the data on which they have been trained, and this again may introduce a bias into the outputs they generate, privileging certain groups and interactional practices over others. Further biases may result from the kinds of text the LLM is directed to train on. If these are limited to descriptions of interaction, the LLM may be constrained by the imaginations of those who produced these texts, prompting the system to ‘hallucinate’ what actually happens in the social interactions being described.

However, the range of data an LLM is trained on can be broadened, for example by accessing texts generated by linguacultural groups in all corners of the world, or by using transcripts of interaction rather than descriptions. In doing so, the LLM should in time be able to produce a variety of interactional patterns, accommodating those from outside the conversation designer’s own sociocultural groups in a more equitable manner than is currently possible. The aspiration should be a conversational system that adapts to users’ linguacultural styles: LLM-powered conversational AI may one day be triggered by detecting a specific language, a particular language variety, or even certain practices, for example how a telephone is answered or how a request is formatted.
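To make this notion of ‘triggering’ concrete, the following minimal sketch selects a linguaculturally adapted system prompt for an LLM based on the opening practice displayed in the user’s first turn. The cues, style labels and prompt texts are invented for illustration only; a real system would need empirically grounded models of such practices rather than a hand-written lookup.

```python
# Hypothetical sketch: the interactional practice displayed in a caller's
# opening turn (rather than only the language detected) selects which
# linguaculturally adapted system prompt is passed to the LLM. All cues
# and prompts below are invented for demonstration.

OPENING_CUES = {
    "moshi moshi": "ja-style",  # a common Japanese telephone opening
    "met": "nl-style",          # Dutch self-identification ("met <name>")
    "hello": "en-style",
}

SYSTEM_PROMPTS = {
    "ja-style": "Open with a formal self-identification before any request.",
    "nl-style": "Identify the organisation immediately, as the caller expects.",
    "en-style": "Greet briefly, then offer assistance.",
}

def select_prompt(first_turn: str) -> str:
    """Pick a system prompt from the practice displayed in the opening turn."""
    # Normalise: lowercase, drop punctuation, so cue matching is word-based.
    cleaned = "".join(c for c in first_turn.lower() if c.isalpha() or c.isspace())
    padded = f" {cleaned} "
    for cue, style in OPENING_CUES.items():
        if f" {cue} " in padded:
            return SYSTEM_PROMPTS[style]
    return SYSTEM_PROMPTS["en-style"]  # fall back to a default register
```

The lookup table stands in for what would, in practice, be a classifier trained on transcripts of naturally occurring openings; the point is only that the system’s output register is conditioned on the user’s displayed practice, not preset by the designer.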

At the time of writing, this may seem like a moonshot. But the speed of technological change and innovation gives hope that such possibilities may not be too far away. Indeed, until very recently, the very notion of AI systems which could generate new and original content, such as the written output produced by ChatGPT, seemed implausible. We should approach the idea of interculturally adaptive conversational AI as not only an ideal, but as a realistic and necessary target. Its successful development and implementation will require collaborative enterprise and research from many academic and practitioner communities with relevant expertise.

4 A possible framework for interculturally adaptive conversational AI to inform future research

A useful heuristic through which to consider the complexities of any rollout of conversational AI across a population is provided by Spolsky’s (2004) framework for conceptualising language policy. This is the process of regulating language use within a social group, governing norms and rules for how people speak and interact with one another within particular social settings. Considering speech communities, Spolsky proposes that language policy works at three intersecting levels: “its language practices – the habitual pattern of selecting among the varieties that make up its linguistic repertoire; its language beliefs or ideology – the beliefs about language and language use; and any specific efforts to modify or influence that practice by any kind of language intervention, planning or management” (2004: 5, emphasis added).

Applying this tripartite framework to the development and rollout of conversational AI systems helps us understand the different regulatory dynamics in play:

interactional practices are the seen-but-unnoticed regularised patterns in evidence within the cultural groups of those involved in the interaction. Although socially constructed and culturally specific, they are treated as recognisable social objects that members would expect to encounter in their engagements with their everyday world. Examples are how to format an invitation or a complaint, or what levels of deference to embed when addressing one person or another.

interaction beliefs or ideology encompass those of the organisation’s management as well as those of the user, with each harbouring ideas about what the system should be able to do, and what can reasonably be expected of the user.

intervention, planning or management would be most closely aligned with the work of the conversation design and engineering team, which constructs the product in such a way that the user must interact with it within the constraints of the design.

For conversational AI use, a fourth dynamic may be added to Spolsky’s three. It can be glossed as capability, namely the ability of the system to meet the interactional requirements of the transaction (functionality), and of the user to adapt to the limitations of the system (competence). Where the system has not yet developed the technological capability to align with the interactional practices of the community in which it is embedded, to handle what people assume it should be able to handle, or to meet the aims of those developing it for some purpose or other, this impacts on all of the above, and will do so until the system’s functionality has developed to such an extent that this constraint disappears. Likewise, where human participants have not developed the abilities to adapt their interactional conduct to work with such machines, there too may be limits to how successful the interactions will be.

The complexity of intersecting forces in play in any conversation-framed engagement between AI and human is further compounded when the various parties to the event (e.g., management, designer, engineer, user) have divergent expectations and practices for carrying out equivalent interactions in human–human interaction. People are socialised in their respective cultural groups to believe certain things about what constitutes appropriate conduct; they produce patterns that diverge from those of other groups; they have different levels of competence in the different languages they might use. Organisations turning to AI to conduct their interactions with customers and clients can therefore not simply assume that the same conversation design will have the same outcome for members of different linguacultural groups; these systems should be equitable and wholly inclusive. It could be argued that this holds AI to higher standards than we hold humans, who are seldom, if ever, wholly equitable and inclusive. However, individuals and organisations increasingly aspire to this standard, and AI tools should be put to use in a way that transcends, rather than perpetuates, human limitations and shortcomings.

The challenge faced by the conversational AI sector, to develop products that can operate across diverse linguacultural communities, requires the types of knowledge, expertise and methodological tools that are central to research on language and ICC. All four elements of the framework proposed above would be well served by empirical research. Potential areas for development include ethnographic research on how CUIs are used across different cultural groups, and by individuals whose linguacultural norms diverge from those designed into the system. They also include Discourse and Conversation Analytic work showing the nuances embedded in user practices, including those that evidence differences across a population. Qualitative research approaches may further our understanding of the perceptions that different groups of users, and designers, hold towards conversation-framed products and their design features. In sum, it will require researchers with expertise in language, culture and pragmatics to collaborate with software engineers, conversation designers, and other stakeholders to make conversational AI fit for purpose in culturally diverse populations.

At the very least, this presents researchers working within ICC with opportunities to collaborate with industry. However, it presents us also with a call-to-arms. If the world is entering an era in which conversational AI is becoming increasingly embedded in how people organise their everyday lives, then ICC researchers must become increasingly embedded within the research and development of these technologies.


Corresponding author: Adam Brandt, Newcastle University, Newcastle upon Tyne, UK, E-mail:

About the authors

Adam Brandt

Adam Brandt is Senior Lecturer in Applied Linguistics at Newcastle University (UK). He researches language use and social interaction across a range of applied settings. This work includes multilingual and intercultural settings, such as international workplaces and educational contexts. He also researches how people adapt their communication practices when using technologies such as conversational AI systems and video-conferencing platforms.

Spencer Hazel

Spencer Hazel is Reader in Applied Linguistics & Communication at Newcastle University (UK). His research deals first and foremost with co-present interaction, seeking to describe the multifarious resources people draw on in their interactions with each other. His research focuses mainly on linguistically dynamic settings such as international workplaces, language classrooms, and interactions involving people living with dementia. More recently, he has worked on conversational AI systems.

References

An, Sungeun, Robert Moore, Eric Young Liu & Guang-Jie Ren. 2021. Recipient design for conversational agents: Tailoring agent’s utterance to user’s knowledge. In CUI 2021 – 3rd conference on conversational user interfaces. ACM.10.1145/3469595.3469625Search in Google Scholar

Bawa, Anshul, Pranav Khadpe, Pratik Joshi, Kalika Bali & Monojit Choudhury. 2020. Do multilingual users prefer chat-bots that code-mix? Let’s nudge and find out! Proceedings of the ACM on human-computer interaction 4(CSCW1), 1–23. New York, NY: ACM.10.1145/3392846Search in Google Scholar

Bender, Emily M., Timnit Gebru, Angelina Mcmillan-Major & Shmargaret Shmitchell. 2021. On the dangers of stochastic parrots. Proceedings of the 2021 ACM Conference on fairness, accountability, and transparency. New York, NY: ACM.10.1145/3442188.3445922Search in Google Scholar

Betz, Emma, Arnulf Deppermann, Lorenza Mondada & Marja-Leena Sorjonen (eds.). 2021. OKAY across languages: Toward a comparative approach to its use in talk-in-interaction. Amsterdam: John Benjamins.10.1075/slsi.34Search in Google Scholar

Brandt, Adam, Hazel Spencer, Rory Mckinnon, Kleopatra Sideridou, Joe Tindale & Nikoletta Ventoura. 2023. From writing dialogue to designing conversation: Considering the potential of conversation analysis for voice user interfaces. Proceedings of the 5th international conference on conversational user interfaces. New York, NY: ACM.10.1145/3571884.3603758Search in Google Scholar

Choi, Yunjae J., Minha Lee & Sangsu Lee. 2023. Toward a multilingual conversational agent: Challenges and expectations of code-mixing multilingual users. Proceedings of the 2023 CHI conference on human factors in computing systems. New York, NY: ACM.10.1145/3544548.3581445Search in Google Scholar

Dai, David Wei, Shungo Suzuki & Guanling Chen. 2024. Generative AI for professional communication training in intercultural contexts: Where are we now and where are we heading? Applied Linguistics Review.10.1515/applirev-2024-0184Search in Google Scholar

Drew, Paul & John Heritage (eds.). 1992. Talk at work: Interaction in institutional settings. Cambridge: Cambridge University Press.Search in Google Scholar

Due, Brian L & Louise Lüchow. forthcoming. Vui-Speak: There is nothing conversational about ‘conversational user interfaces. In Florian Muhle & Indra Bock (eds.), Social robots in institutional interaction. Bielefeld University Press.Search in Google Scholar

Enfield, Nick J., Stivers Tanya & Steven C. Levinson. 2010. Question-response sequences in conversation across ten languages: An introduction. Journal of Pragmatics 42(10). 2615–2619. https://doi.org/10.1016/j.pragma.2010.04.001.Search in Google Scholar

Fischer, Joel E. 2023. Generative Ai considered harmful. In Proceedings of the 5th international conference on conversational user interfaces. ACM.10.1145/3571884.3603756Search in Google Scholar

Hazel, Spencer & Adam Brandt. 2023. Enhancing the natural conversation experience through conversation analysis – a design method. In Hci international 2023 – late breaking papers, 83–100. Nature Switzerland: Springer.10.1007/978-3-031-48038-6_6Search in Google Scholar

Houtkoop-Steenstra, Hanneke. 1991. Opening sequences in Dutch telephone conversations. In Talk and social structure, 232–250. Berkeley: University of California Press.Search in Google Scholar

Jenks, Christopher J. 2025. Communicating the cultural other: Trust and bias in generative AI and large language models. Applied Linguistics Review 16(2). 787–795. 10.1515/applirev-2024-0196Search in Google Scholar

Jones, Rodney H. 2024. Culture machines. Applied Linguistics Review.10.1515/applirev-2024-0188Search in Google Scholar

Juniper Research. 2023. Conversational commerce market: 2023–2028. Available at: https://www.juniperresearch.com/research/telco-connectivity/communication-services/conversational-commerce-research-report/.Search in Google Scholar

Koenecke, Allison, Andrew Nam, Emily Lake, Joe Nudell, Minnie Quartey, Zion Mengesha, Connor Toups, John R. Rickford, Dan Jurafsky & Sharad Goel. 2020. Racial disparities in automated speech recognition. Proceedings of the National Academy of Sciences 117(14). 7684–7689. https://doi.org/10.1073/pnas.1915768117.Search in Google Scholar

Martin, Joshua L. & Kelly Elizabeth Wright. 2023. Bias in automatic speech recognition: The case of African American language. Applied Linguistics 44(4). 613–630. https://doi.org/10.1093/applin/amac066.Search in Google Scholar

Mehrabi, Ninareh, Fred Morstatter, Nripsuta Saxena, Kristina Lerman & Aram Galstyan. 2022. A survey on bias and fairness in machine learning. ACM Computing Surveys 54(6). 1–35. https://doi.org/10.1145/3457607.Search in Google Scholar

Mlynář, Jakub & Arminen Ilkka. 2023. Respecifying social change: The obsolescence of practices and the transience of technology. Frontiers in Sociology 8. https://doi.org/10.3389/fsoc.2023.1222734.Search in Google Scholar

Ngueajio, Mikel K. & Gloria Washington. 2022. Hey asr system! Why aren’t you more inclusive? In Lecture notes in computer science, 421–440. Nature Switzerland: Springer.10.1007/978-3-031-21707-4_30Search in Google Scholar

Nishizaka, Aug. 2012. Doing “being friends” in Japanese telephone conversations. Interaction and everyday life: Phenomenological and ethnomethodological essays in honor of George Psathas, 297. Mayland, USA: Lexington Books.Search in Google Scholar

Pyae, Aung & Paul Scifleet. 2018. Investigating differences between native English and non-native English speakers in interacting with a voice user interface. Proceedings of the 30th Australian conference on computer-human interaction. New York, NY: ACM. https://doi.org/10.1145/3292147.3292236.

Pyae, Aung & Paul Scifleet. 2019. Investigating the role of user’s English language proficiency in using a voice user interface. Extended abstracts of the 2019 CHI conference on human factors in computing systems. New York, NY: ACM. https://doi.org/10.1145/3290607.3313038.

Pyae, Aung, Swe Zin Hlaing, Nyein Thwet Thwet Aung, Nang Mo Mo Kham, Myo Myo Khant & Min Khant Kyaw. 2020. Understanding non-native English speakers’ perceptions of voice user interfaces with and without a visual display: A usability study. 2020 international conference on advanced information technologies (ICAIT). New York, NY: IEEE. https://doi.org/10.1109/ICAIT51105.2020.9261771.

Reiter, Rosina Márquez. 2006. Interactional closeness in service calls to a Montevidean carer service company. Research on Language and Social Interaction 39(1). 7–39. https://doi.org/10.1207/s15327973rlsi3901_2.

Reiter, Rosina Márquez & Kang-kwong Luke. 2010. Telephone conversation openings across languages, cultures and settings. In Anna Trosborg (ed.), Pragmatics across languages and cultures, 103–138. Berlin: De Gruyter Mouton. https://doi.org/10.1515/9783110214444.1.103.

Risager, Karen. 2019. Linguaculture. In Carol A. Chapelle (ed.), Encyclopedia of applied linguistics. Wiley-Blackwell. https://doi.org/10.1002/9781405198431.wbeal0709.pub2.

Rossi, Giovanni, Mark Dingemanse, Simeon Floyd, Julija Baranova, Joe Blythe, Kobin H. Kendrick, Jörg Zinken & Nick J. Enfield. 2023. Shared cross-cultural principles underlie human prosocial behavior at the smallest scale. Scientific Reports 13(1). https://doi.org/10.1038/s41598-023-30580-5.

Seymour, William, Zhan Xiao, Mark Cote & Jose Such. 2023. Who are CUIs really for? Representation and accessibility in the conversational user interface literature. Proceedings of the 5th international conference on conversational user interfaces. New York, NY: ACM. https://doi.org/10.1145/3571884.3603760.

Sidnell, Jack. 2009. Conversation analysis: Comparative perspectives. Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9780511635670.

Spolsky, Bernard. 2004. Language policy. Cambridge: Cambridge University Press.

Statista. 2022. Number of digital voice assistants in use worldwide 2019–2024. Available at: https://www.statista.com/statistics/973815/worldwide-digital-voice-assistant-in-use/.

Takami, Tomoko. 2002. A study on closing sections of Japanese telephone conversations. Working Papers in Educational Linguistics 18(1). 67–85.

Wu, Yunhan, Daniel Rough, Anna Bleakley, Justin Edwards, Orla Cooney, Philip R. Doyle, Leigh Clark & Benjamin R. Cowan. 2020. See what I’m saying? Comparing intelligent personal assistant use for native and non-native language speakers. 22nd international conference on human-computer interaction with mobile devices and services. New York, NY: ACM. https://doi.org/10.1145/3379503.3403563.

Wu, Yunhan, Martin Porcheron, Philip Doyle, Justin Edwards, Daniel Rough, Orla Cooney, Anna Bleakley, Leigh Clark & Benjamin Cowan. 2022. Comparing command construction in native and non-native speaker IPA interaction through conversation analysis. Proceedings of the 4th conference on conversational user interfaces. New York, NY: ACM. https://doi.org/10.1145/3543829.3543839.

Zimmerman, Don H. 1999. Horizontal and vertical comparative research in language and social interaction. Research on Language and Social Interaction 32(1–2). 195–203. https://doi.org/10.1080/08351813.1999.9683623.

Received: 2024-06-06
Accepted: 2024-06-13
Published Online: 2024-07-02
Published in Print: 2025-03-26

© 2024 the author(s), published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.
