
When AI meets intercultural communication: new frontiers, new agendas

  • David Wei Dai and Zhu Hua
Published/Copyright: July 22, 2024

Artificial Intelligence (AI) is in the spotlight again. Earlier this year there was widespread controversy over how Gemini, Google’s Gen AI model, generated images of German Second World War soldiers as people of colour. Google swiftly suspended Gemini’s image generation function, and its chief executive, Sundar Pichai, apologized for Gemini’s “completely unacceptable” conduct (The Guardian 2024).

This is not the first time that AI, or more specifically, Generative AI (Gen AI), has attracted negative attention from the public. Ever since ChatGPT was first released in November 2022, debates, expectations, suspicions, and perturbations surrounding this technology have never ceased. At a head-spinning speed, we are seeing colleagues using AI to compose work emails, songs created in our favourite singers’ voices that they never actually sang, and friends and family generating images and videos of themselves on social media in places where they have never been.

It should be noted that AI itself is not a new technology; it traces back to the 1950s, when computer scientists first started examining to what extent a machine can display intelligent behaviour. Siri, Alexa and Google Assistant are just some recent examples of AI. The difference between earlier AI and the latest surge of Gen AI is that the predominant focus of earlier AI was to analyse data and automate the search process. Gen AI, on the other hand, has the capacity to generate new content based on deep learning of the patterns in its training datasets. Although Gen AI has been developing rapidly since the 1990s, with the launch of the free-to-use ChatGPT, the year 2022 marked a watershed moment: ordinary members of society – whether they are familiar with AI or not, like AI or not – now have to grapple with the very palpable and ever-expanding presence of AI in everyday life.

What does all of this have to do with Intercultural Communication? Well, a great deal. Intercultural Communication is about interactions between people from diverse backgrounds, including but not confined to ethnicity, language, religion, age, gender, sexual orientation or profession. It is about how stereotypes towards the Other emerge and how we can overcome them. And it is also about how we can tackle prejudice, discrimination, inequality, and social injustice between people of different cultures and create space for meaningful dialogues, new perspectives and crossing differences.

The issues with which Intercultural Communication research concerns itself are now playing out in the AI space. When a non-German person asks Gemini to generate images of German soldiers in 1943 because they are genuinely curious about that period of German history, Gemini displays its own understanding of how Germans should have looked during the Second World War. When a non-Chinese person who has never been to China asks Gencraft (a free online Gen AI image generator) to produce images of Chinese women, it creates images of Han Chinese women wearing Tang Dynasty dresses (618–907 CE) with red lantern-shaped earrings. When instructing Midjourney v4 (another Gen AI image generator) to generate images using the prompt words “patriotic”, “dog” and “superhero”, we get amalgamations of male dogs wearing skin-tight suits patterned with the United States flag. These are the kinds of stereotypes Intercultural Communication seeks to address.

Whatever Gen AI produces is based on its training datasets, so the question becomes what Gen AI’s output says about the representation of culture in those datasets. Why does Gen AI link “femininity” and “Chinese” with Tang Dynasty costumes and red lanterns? And why does Gen AI interpret “patriotism” and “superhero” narrowly as “masculinity” and “the US”? Every time we ask Gen AI to generate new content, whether it is text, images, music, audio or videos, we are producing and reproducing essentialized artefacts of culture and shaping content consumers’ understanding of culture. When we spot biases and stereotypes in Gen AI’s production, we can try to prompt Gen AI to embody certain principles of diversity and inclusiveness, such as a more balanced representation of ethnicity in China, instead of privileging the Han Chinese. However, this could also backfire, as we have seen with Gemini’s representation of WWII German soldiers.

The scenarios discussed above are some examples of pressing issues we face as language and Intercultural Communication scholars. There is an urgent need to examine the impact of AI on social and intercultural relationships as well as on knowledge and practice of Intercultural Communication. Equally significant is its potential impact on our profession as language and Intercultural Communication specialists. If AI can handle translation and one can talk with robots and machines to get things done, do we still need to train interpreters and develop skills to relate to others (otherwise known as Intercultural Communication skills)? While these are valid concerns, we remain confident in the enduring relevance of our profession. Extensive research has demonstrated that communication is much more than words: it encompasses a range of dynamic semiotic possibilities – touch, gesture, movement, the senses, and objects – which arise in situ alongside words (e.g., Zhu et al. 2017). Communication is also about the ability to build relationships, mediate inferences, orient to sociocultural-pragmatic cues and construct identities on a moment-by-moment basis (Dai 2024). Gen AI, as it stands now, cannot fully replicate the way human speakers draw on a wide range of semiotic resources to interact (termed Interactional Competence). However, we do have new frontiers and agendas for our work in language and Intercultural Communication research, education and training.

It is within this context that we have curated this discussion forum where researchers of Intercultural Communication and learning analytics share their differing but complementary perspectives on the topic.

Probing into the relationship between Gen AI and culture, Jones (2025) starts the forum with the observation that no matter how we try to diversify the training datasets Gen AI is based on, Gen AI’s predilection will always be for stereotypes, simply because stereotypes are omnipresent in our existing discourse. Furthermore, as Gen AI generates more content replete with biases and stereotypes, such content feeds back into the datasets Gen AI draws on and perpetuates discourses of essentialized cultural artefacts.

Complementing Jones’ perspective, Dai et al. (2025) offer insights into how we can leverage Gen AI creatively to mitigate some challenges in professional communication training, in particular, the need to provide training scenarios that are not only tailored to professional contexts such as healthcare, but also simulate speakers of diverse profiles in terms of gender, age, accent, and health conditions. Through a case study on the Tutorial English AI project, the authors illustrate lessons learned in developing teaching and assessment materials for professional communication education. They identify the interpersonal dimension of communication and the importance of linguistic and cultural representation in training datasets as crucial areas for future Gen AI development in professional communication training.

Acknowledging the sometimes ineluctable cultural biases and stereotypes in AI Large Language Models (AI-LLMs), Brandt and Hazel (2025) raise a counterargument: if we take a step back and rethink how human script writers designed conversations for Conversational User Interfaces pre-LLM, can we be certain that human conversation designers were always prejudice- and bias-free? Following this line of argument, the authors opine that there is hope that AI-LLMs might become more interculturally competent conversationalists, the achievement of which requires careful curation of the datasets they are trained on and interdisciplinary collaboration between Intercultural Communication researchers and AI specialists.

Situated within a posthumanist and new materialist framework, Jenks (2025) discusses two topical issues in AI-LLMs: trust and bias. Jenks argues that cultural biases are omnipresent in any form of meaning-making process, with or without AI-LLMs. It is therefore incumbent on Intercultural Communication researchers to engage with AI-LLMs and explore how we can promote trust and reduce bias in their use. Jenks remains hopeful that Gen AI may yet achieve a net positive for humanity.

Last but not least, O’Regan and Ferri (2025) discuss the ethical implications of AI development. This piece starts by differentiating two types of ethics: moral judgement of right and wrong versus a form of regularized practice in which ethical choices are relativized. The authors argue that AI fails on the first type of ethics, as it relies on the closed system of biased data pools. On the second type of ethics – as a regularized practice – it reproduces the relativism of this kind of perspective by being unable to judge between its own epistemic outputs. The authors also caution against AI’s potential to obscure corporate and capitalist influences, ultimately evading responsibilities, and raise the critical issue of unequal access to AI and the risks of exacerbating global inequalities.

In the spirit of open conversation, in the final piece, we invite the contributors to reflect on the pressing issues and possibilities of taking on these challenges. Collaboration and building on what we know well are some examples of possible ways forward. Meanwhile, we need to be mindful that Gen AI is an emerging technology where challenges and opportunities co-exist. Andrew Rogoyski from the Institute for People-Centred AI at the University of Surrey made this excellent point: “We are expecting them [Gen AI] to be creative, generative models but we are also expecting them to be factual, accurate and to reflect our desired social norms – which humans don’t necessarily know themselves, or they’re at least different around the world” (The Guardian 2024).

Are we expecting too much of AI? Are we asking AI to resolve issues or undertake tasks that human beings themselves cannot always do well? We invite you to consider these questions as you go through the forum. However, if there is one thing we shall remain optimistic about in the age of AI, it is the potential of our discipline to navigate and shape AI’s impact for more effective intercultural communication.


Corresponding author: David Wei Dai, UCL Institute of Education, University College London, London, UK, E-mail:

References

Brandt, Adam & Spencer Hazel. 2025. Towards interculturally adaptive conversational AI. Applied Linguistics Review 16(2). 775–786. https://doi.org/10.1515/applirev-2024-0187.

Dai, David Wei. 2024. Interactional Competence for professional communication in intercultural contexts: Epistemology, analytic framework and pedagogy. Language, Culture and Curriculum 1–21. https://doi.org/10.1080/07908318.2024.2349781.

Dai, David Wei, Shungo Suzuki & Guanling Chen. 2025. Generative AI for professional communication training in intercultural contexts: Where are we now and where are we heading? Applied Linguistics Review 16(2). 763–774. https://doi.org/10.1515/applirev-2024-0184.

Jenks, Christopher J. 2025. Communicating the cultural Other: Trust and bias in generative AI and large language models. Applied Linguistics Review 16(2). 787–795. https://doi.org/10.1515/applirev-2024-0196.

Jones, Rodney. 2025. Culture machines. Applied Linguistics Review 16(2). 753–762. https://doi.org/10.1515/applirev-2024-0188.

O’Regan, John P. & Giuliana Ferri. 2025. Artificial intelligence and depth ontology: Implications for intercultural ethics. Applied Linguistics Review 16(2). 797–807. https://doi.org/10.1515/applirev-2024-0189.

The Guardian. 2024. ‘We definitely messed up’: Why did Google AI tool make offensive historical images? Available at: https://www.theguardian.com/technology/2024/mar/08/we-definitely-messed-up-why-did-google-ai-tool-make-offensive-historical-images.

Zhu, Hua, Emi Otsuji & Alastair Pennycook (eds.). 2017. Multilingual, multisensory and multimodal repertoires in corner shops, streets and markets: A special issue of Social Semiotics 27(4). https://doi.org/10.1080/10350330.2017.1334383.

Published Online: 2024-07-22
Published in Print: 2025-03-26

© 2024 the author(s), published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.
