Abstract
Despite increasing concerns over the use of AI in surveillance, privacy, public health, climate change, global migration and warfare, the implications of its use in the field of intercultural communication are still not clearly defined. This paper critically examines the contemporary emergence of AI through the lens of a critical realist depth ontology to argue that AI, with its unending interplay of signs and symbols, is the ultimate simulacrum. As such, AI vacates the normative terrain of judgemental rationality in favour of the relativist terrain of endless simulacra and the fetish appearances of postmodernism. To illustrate this, it is argued that the inability of AI to make judgements based on judgemental rationality (or Ethics1) occludes the possibility of intervening in the world to ameliorate real injustice. Therefore, if intercultural ethics remains within the realm of judgemental relativism (or Ethics2), it abdicates the possibility of having an impact in the material world.
1 Introduction
As its name declares, Artificial Intelligence – henceforth AI – is a type of intelligence that is artificial. Rather than issuing from the minds of human beings, AI – especially in its semiotic mode – is the product of complex, machine-generated, mathematically coded algorithms whose lines of reasoning are humanly inaccessible, including to the developers responsible for designing the machines and the software that have made such algorithmic reasoning possible. In any AI outcome, especially at the most advanced technological levels, no one really knows the precise algorithmic path that has led to a specific AI semiotic output, whether as a student paper, an AI-generated image, or an official report (Stahl et al. 2023). What we do know is that AI can produce outputs that appear to be humanly generated and therefore ‘authentic’, principally by trawling through data that is already available and producing from that data hybridized outputs that appear plausibly coherent and real. Plausible, because they appear to follow a logic that is based on human reason, and real, because they appear to be humanly, as opposed to algorithmically, generated. Inevitably, this has led to concerns around deception, fakeness, dishonesty and fraud – for example, in student-written papers in universities and in the political manipulation of truth in the public sphere. More broadly, the development of AI also has implications for how the nations of the world wish to approach issues such as surveillance, privacy, public health, climate change, global migration and warfare. The signs are not encouraging. Despite this, reactions to AI range from the highly celebratory – it will revolutionize our lives for the better – to the deeply dystopian – it presents an existential threat to humanity itself. AI thus has ethical implications, both ontologically for us as human beings and epistemologically for what we think of as reality.
2 Intercultural ethics and judgemental relativism
When people speak of intercultural ethics, or indeed ethics in general, it is often with reference to an implied moral code, so that what is considered right and good may be differentiated from what is considered wrong and bad. This is certainly one way of thinking about ethics – as a normative moral compass for dealing with ethical questions. Let us refer to this as Ethics1. The other way is to think of ethics as a type of practice itself; that is, as a regularized way of being in the world that gives shape and purpose to the social contexts that we find ourselves in. It is this kind of ethics that we often find in approaches that are informed by poststructuralism. Let us refer to this as Ethics2. An early example of Ethics2 – although not itself poststructuralist – is Weber’s The Protestant Ethic and the Spirit of Capitalism (2001/1905). Such was Weber’s conviction concerning labour under capitalism as a type of regularized practice that he referred to it as an ethic. There was a moral dimension to Weber’s account as well – i.e. Ethics1 – since according to the puritan religious precepts which governed this understanding, the act of labour itself was conceived as morally good, and not to labour – or to be economically inactive – was conceived as morally bad. But putting the moral dimension to one side, the notion of an ethic of practice that was concerned with regularized or standardized routines was central to Weber’s view. This notion of ethics – Ethics2 – also has resonances with the poststructuralist order of discourse of Foucault, where, as a result of the circulating operations of power, human beings are made subjects within ‘discourses [that] systematically form the objects of which they speak’ (1989/1969: 49). This purview has nothing to do with having a moral compass – Ethics1 – and everything to do with the practices which make for human identity formation – Ethics2 – without there being any grounded or normative basis for making judgements about right and wrong. Here, ethical choices, if they can be called that, are merely discursive components of a regularized epistemic practice in which such choices are always ineluctably relativized, such that one outcome is neither better nor worse than another. It is this conception of ethics that has found its way into much ethnographically inspired thinking on intercultural communication and the role of language within it, such that a primary focus has been on the thick description of local communicative events and the diverse linguistic practices that are associated with them. As part of this, much less attention has been paid to structures and underlying generative mechanisms, with an emphasis placed instead on local language production, small culture formation and – in some instances – micro-acts of resistance (Canagarajah and Dovchin 2019; Holliday and MacDonald 2020; Li 2018). This is fine as far as it goes, but if one’s interest is material social justice, societal amelioration and planetary human flourishing, then it is plainly insufficient.
It is at this juncture of epistemological difference – i.e. between Ethics1 and Ethics2 – that we locate our discussion of intercultural ethics and AI in order to demonstrate how AI offers nothing new by being firmly dedicated – like the order of discourse of Foucault – to the pursuit of empirical realism (see later) and the occlusion of the ontologically material world. This is due to AI’s incapacity to select ethically between its epistemic outputs – one is as valid as any other. With this limitation, AI inevitably vacates the normative terrain of judgemental rationality in favour of the relativist terrain of endless simulacra and fetish appearances (Baudrillard 1994; Marx 1976/1867). To explain this, we first turn to a theoretical framing that follows the critical realism of Bhaskar (2008/1975, 2016) and apply this to a critique of positivism/post-positivism and poststructuralism in social science. We then turn to a consideration of intercultural ethics and AI in order to show how, like positivism and poststructuralism, AI engages in ontological reductionism. In consequence of this, it follows poststructuralism in abdicating judgemental rationality in favour of the relativism of the simulacrum.
3 Open and closed systems: depth ontology and empirical realism
In critical realism the world is understood as an open system, whereas in natural science and in the positivist domains of social science the world is understood as a closed system. According to Bhaskar, science, and by extension positivism/post-positivism in social science, has depended upon a view of the world as a closed system so as to be able to undertake ‘controlled’ – i.e. objectivist – experimental/empirical analysis which allows for the discovery of a constant conjunction of events – viz. Hume’s law of causality. This is because ‘It is only under conditions that are experimentally produced and controlled that a closure, and hence a constant conjunction of events, is possible’ (Bhaskar 2008/1975: 65). It is the discovery by human beings of conjunctions and ‘objectively’ measurable consistencies that makes science and positivism what they are; that is, empirical activities conducted by human beings which rely on the artificial creation of ‘closed’ conditions which do not obtain in a world where open systems predominate, and which are implicitly presupposed by the fact of that activity.
Inasmuch as empirical – i.e. positivist and post-positivist – social science has the ambition to ape the supposed objectivity of science, it too treats the (social) world as a closed system in which ‘the practical application of our knowledge in open-systems [cannot] be sustained’ (Bhaskar, ibid: 14; parenthesis supplied). With its preoccupation with closed systems, positivist research implicitly elides from the world and from its consideration everything that is not to be empirically accounted for in the human observation of the world, whether statistical or experimental, so reducing questions about what is (ontology) to questions about what we know (epistemology). The metaphysical dogma of reducing ontology to epistemology is referred to by Bhaskar as the epistemic fallacy: ‘that statements about being can always be transposed into statements about our knowledge of being’ (ibid: 16). Ontology is thereby ‘flattened out’, such that it is made to refer to a much-narrowed range of reality – one that is lacking in ontological depth.
With the closure of the world in accordance with Hume’s law of causality, Bhaskar maintains that three levels of reality – the real (mechanisms), the actual (events), and the empirical (experiences) – have been collapsed into one: ‘The collapse of the real to the actual is what I call actualism; it presupposes the collapse of open to closed systems and, when coupled with the additional collapse of the actual to the empirical, results in empirical realism’ (Bhaskar 2016: 24). Actualism, or empirical realism, is a shallow ontology which may be contrasted with a stratified critical realist depth ontology that incorporates all three levels of reality – the real, the actual and the empirical. Poststructuralism – viz. Foucault’s order of discourse – while evidently eschewing positivism and positivist/post-positivist objectivism, nonetheless finds itself in the same epistemic space as science and positivism/post-positivism, by means of the widespread poststructuralist reduction of reality to discourse (Best and Kellner 1991; Bhaskar 2011; Harland 1987; Weedon 1987) and the eschewal of a consideration of underlying structural mechanisms and explanatory theories due to their supposed association with ‘grand-narrative’ structural determinism and epistemic totality. The difference is that while science and positivism/post-positivism seek objectivism – i.e. judgemental reason – albeit within a closed system, poststructuralism is wholly relativist, both judgementally and epistemologically.
4 Capitalist simulacra and intercultural ethics
The relativist drift of poststructuralism aligns with the postmodernist critique of modernity and the end of grand or ‘master’ narratives such as the Enlightenment and its faith in the universal values of reason and progress. Lyotard (1984: 4) describes the postmodern condition as a state of relativism and technological advancement characterized by ‘the miniaturization and commercialization’ of information processing machines. This pervasive presence of information systems and the relativism following the end of grand narratives is encapsulated in the concept of the simulacrum (Baudrillard 1994), signifying the vanishing of reality into an ongoing interplay between the original and its duplicates, reaching a stage where we interact with mere copies of copies. According to Baudrillard, our experience is reduced to a simulated version of reality, leading to a hyperreal state in which the real and the imaginary, the true and the false, become indistinguishable. In this context, simulation encompasses the entire structure of representation, creating an uninterrupted circuit that divorces itself from reality and becomes pure simulacrum. For Baudrillard (1995), the televised nature of the Gulf War exemplifies this transformation of reality into the simulacrum. With technological advancements enabling simulated exercises and ‘live’ feeds for the public, Baudrillard argues that individuals are reduced to hostages on the world media stage – the terrain of Ethics2 – virtually exiled in the simulacrum while catastrophes unfold around them. Thus, while immersed in the simulacrum, we are stripped of agency and of our ability to make ethical choices – Ethics1 – that have a real impact in the material world.
Examining the contemporary emergence of AI through the lens of critical realism, we argue that AI, with its unending interplay of signs and symbols, is the ultimate simulacrum, and as such it is only able to curate what is already in existence. By being algorithmically confined to what is known, and lacking a stratified depth ontology, as outlined earlier, AI commits the epistemic fallacy par excellence. Hence, just as positivism and post-positivism rely on closed systems to determine the nature of the real world, and poststructuralism is confined to the discursive realm, so AI relies on the closed system of already existing data pools to produce simulacra in the form of semiotic outputs which it then presents as ‘real’, but which are clearly nothing of the sort. AI outputs as simulacra are thus the ultimate illusion of an empirical realism that is based on a closed system. In this landscape, the inability of AI to discern ethically among its epistemic outputs – i.e. it cannot draw upon Ethics1 – raises several Ethics1 considerations that have a significant impact on intercultural communication scholarship. These considerations encompass, among others, gender and racial biases embedded in AI (Jenks 2025, this issue), the ethical implications of military applications of AI, the ethical dilemmas surrounding face-recognition technologies, the potential exploitation of AI workers in the Global South under the pretext of innovation, and unequal access to AI, all of which are critical to Ethics1. The phenomenon of AI is still in its infancy, and its full consequences in the real world cannot yet be foreseen.
5 Ethics1: AI-generated inequality and capitalist accumulation
A recent instance of gender discrimination facilitated by the use of AI involves the hiring practices at Amazon. The company employed an AI-automated hiring tool that was inherently biased towards the recruitment of male employees, a bias derived from the word choices present in resumes. In healthcare settings, it has been found that algorithms used in US hospitals were discriminating against patients based on race (Obermeyer et al. 2019), while in the US court system black offenders were more likely than white offenders to be categorized as at risk of recidivism. AI is used for the ‘intelligent’ bombing of civilians, and its face recognition capacity has been proven to discriminate on the basis of skin colour while increasing the general surveillance of populations around the world. Further discrimination is visible in the treatment of IT workers in the Global South who are sub-contracted by AI companies in the Global North to moderate online content in order to provide safety for their customers. These workers are being exposed to harmful content in unregulated and underpaid ‘digital sweatshops’ (Tan and Cabato 2023) without any systems in place to safeguard their mental and material wellbeing. Finally, access to generative AI tools such as ChatGPT, now widely used for educational purposes, is unequally distributed not only between the Global North and the Global South, but even within the wealthiest areas of the developed world, limiting their use to those who can afford adequate internet connectivity and computers, and who speak the right LLM (Large Language Model) English – as Brandt and Hazel (2024) in this issue suggest. These considerations underpin recent calls to address the Ethics1 implications of AI: its in-built biases (Jenks 2025, this issue; Johnson 2022; Mehrabi et al. 2021), the exploitation of digital workers in the Global South (Anwar and Graham 2020) and its use in surveillance and the military (Ams 2021; Saheb 2023). These are ethical dilemmas which a focus on the actual and empirical domains in positivism/post-positivism and poststructuralism is unable to address. By focusing on individualist agency and innovative bricolage, these positions suppress attention to structure. The consequence is that they find themselves occupying the same epistemological space as the capitalist individualists for whom self-agency alongside a regularized service to processes of accumulation is the whole game (Kubota 2016; Urciuoli 2008).
By playing into the individualist hands of capitalist accumulation, Ethics2 perspectives abdicate judgemental reason and the capacity to make moral choices (Ethics1). This is explicit in AI, as the paper by Rodney Jones (2024) in this issue demonstrates, where the AI entity’s response to an Ethics1 challenge is to deny its capacity to make any such judgement and to deflect responsibility for such decision making to the moral reasoning of the human interlocutor. But this is as nothing in comparison with the singular problem of AI: that it is built, owned and released into the world by corporations that are wedded to the advance of private capital accumulation (Rob Faure Walker, personal communication). In this sense, AI is the same as every piece of corporate bureaucracy that has come before it – it will be impossible to have a full view of the operations, causes and harms that it produces, especially when things go horribly wrong. Not only that, but just like AI’s obscurantist algorithmic trail, responsibility for any gross harms that AI produces will be so dispersed as to be untraceable, such that no one can be held accountable for the ills that occur. For this reason, AI is the ultimate ghost in the machine.
Even in the potential ameliorative use of AI in areas such as global healthcare (Dai et al. 2024, this issue) and the management of climate change, the link between AI and capitalism remains. In short, only those persons and societies with the requisite financial and technological resources will have access to whatever advances and ameliorations are offered by AI, further exacerbating inequality between poorer and more developed nations. AI, by determining the world in this way, not only (re)engages us in continued ontological reductionism by manipulating anew what is already in existence, but also creates the fetish illusion of a level playing field in which all human beings and the nations to which they belong have an equal opportunity to have their ills reversed. In the reality of the capitalist world-system in which we live, this is patently false. AI, presented as a solution to the world’s problems, seems more likely only to exacerbate those problems by increasing competition between nations, business corporations and various elite groups over access to its supposed benefits, while also enhancing its dangers. This may lead to some being cured of cancer, or having their air quality improved, but it will not cure the ills of the world, nor reduce the existential dangers that profit-based AI represents, because these are a product of the real underlying mechanisms which are responsible for producing the epistemic domains in which AI exists and which are the source of our ongoing global-capitalist dysfunction.
6 The empirical-realist turn
In this connection, we believe that the conflating of reality with discourse – as occurs in poststructuralist empirical realism – is a ‘wrong turn’ because it cuts the ground from under Ethics1 as the exercise of judgemental rationality and leaves us only with Ethics2 as practices to be described – often in minute detail. The urge to resist injustice and inequality often remains, but it is greatly etiolated by the compulsion to focus on the local at the expense of everything else. The consequence for intercultural ethics, as the pursuit of Ethics2, is a politics of recognition, and not one of redistribution, as Nancy Fraser (1995) has pointed out. In place of redistribution, poststructuralists – and by extension interculturalists who locate themselves in this space – have instead ‘thrown themselves upon a preoccupation with individuated micro-resistances and the politics of recognition without dealing with “the underlying generative framework” (Fraser 1995: 82) or “the generative complexes at work” (Bhaskar 2008/1975: 48) which are responsible for the (re)production of particular kinds of social activity’ (O’Regan 2021: 200). War, genocide, famine, human displacement and drowning at sea are real things that cannot be explicated or addressed solely in terms of the discursive mediation of reality. In Bhaskar’s words, they each constitute ‘material states of being’ (Bhaskar 2016: 105). It follows from this that social reality ‘though concept-dependent, is not exhausted by conceptuality’ (ibid). Equally, we wish to affirm, lest there be any doubt, that an intercultural ethics that is materially applied demands that we have a view on these matters – such is the deontology of our shared intercultural being.
We claim that intercultural communication studies – hence, intercultural ethics – should not be limited to the thick or local description of interactions between individuals from different cultural and linguistic backgrounds, whether so as to improve communication and bridge the essentialist cultural divide, or to take account of the non-essentialized diversity of empirical human practice. Instead, intercultural ethics should confront the injustices and inequalities that are glossed over in both essentialist and non-essentialist perspectives in intercultural communication studies, and which also subsist in the underlying generative mechanisms and causes which non-essentialist poststructuralist positions miss. It can be argued from the latter position that focusing on small cultural formations and the ways in which culture is co-constructed between interactants disrupts the epistemic hegemony of grand narratives concerning national identity, language and culture. However, if this perspective retreats into relativism and the inability to choose between better or worse outcomes, it falls short of critically addressing the question of what is (ontology) and remains at the level of what we know (epistemology). Similarly, calls to decolonize intercultural communication and its Eurocentric bias are devoid of any real meaning if they are not based on a critique, motivated by Ethics1, of an imperialist, racist and patriarchal capitalism. All these dimensions operate at a level of reality that has real consequences in our lives, and as such, failing to make our views on these matters explicit, grounded in a critique of what is, leaves the field in an ethical vacuum. Thus, even if we recognize the intersubjective nature of interaction and its contingent and dynamic character, these interactions are still taking place on a structurally unjust and unequal playing field in the real world. Therefore, our work as critical interculturalists is to examine the underlying mechanisms that generate these injustices and to take a moral stance in relation to them – Ethics1. In this vein, as argued by Phipps (2014) and others (Ferri 2022; Moon and Holling 2015; Nakayama 2020), intercultural dialogue has to be ‘re-politicized’ in order to recognize the material conditions of precarity, conflict and displacement in which much intercultural communication takes place, outside of the neutral and idealized epistemic models of intercultural competence, intercultural understanding and intercultural awareness.
7 Intercultural ethics and AI: an existential dilemma
To conclude, it is our view that the emergence of AI poses an existential dilemma for critical intercultural scholarship: either of retreating, by means of AI, into depoliticized and uncritical micro-descriptions of intercultural interactions – as in positivist and poststructuralist empirical realism (Ethics2) – or of addressing intercultural injustice as it unfolds in the real world (Ethics1). Incorporating diverse voices and advocating for non-normative identities and language practices in AI could reduce its in-built gender and racial bias to some extent, but these effects are undermined by AI’s ontological reductionism. To take such reductionism a step further and to delegate to AI judgements over issues of diversity, equality and inclusion – as has occurred in corporate EDI strategies – not only risks the production of algorithmically determined injustice and suffering but also ignores the underlying generative complexes which are responsible for the injustice and suffering that already exist. An intercultural ethics embedded in AI thus only feeds the endless reproduction of the simulacrum.
References
Ams, Shama. 2021. Blurred lines: The convergence of military and civilian uses of AI & data use and its impact on liberal democracy. International Politics 60(1). 879–896. https://doi.org/10.1057/s41311-021-00351-y.
Anwar, Mohammad Amir & Mark Graham. 2020. Digital labour at economic margins: African workers and the global information economy. Review of African Political Economy 47(163). 95–105. https://doi.org/10.1080/03056244.2020.1728243.
Baudrillard, Jean. 1994. Simulacra and simulation. Ann Arbor, MI: University of Michigan Press.
Baudrillard, Jean. 1995. The Gulf War did not take place. Sydney: Power Publications.
Best, Steven & Douglas Kellner. 1991. Postmodern theory: Critical interrogations. London: Macmillan. https://doi.org/10.1007/978-1-349-21718-2.
Bhaskar, Roy. 2008/1975. A realist theory of science. London: Verso.
Bhaskar, Roy. 2011. From science to emancipation: Alienation and the actuality of enlightenment. London: Routledge.
Bhaskar, Roy. 2016. Enlightened common sense: The philosophy of critical realism. London: Routledge. https://doi.org/10.4324/9781315542942.
Brandt, Adam & Spencer Hazel. 2024. Towards interculturally adaptive conversational AI. Applied Linguistics Review. https://doi.org/10.1515/applirev-2024-0187.
Canagarajah, Suresh & Sender Dovchin. 2019. The everyday politics of translingualism as a resistant practice. International Journal of Multilingualism 16(2). 127–144. https://doi.org/10.1080/14790718.2019.1575833.
Dai, David W., Shungo Suzuki & Guanliang Chen. 2024. Generative AI for professional communication training in intercultural contexts: Where are we now and where are we heading? Applied Linguistics Review. https://doi.org/10.1515/applirev-2024-0184.
Ferri, Giuliana. 2022. The master’s tools will never dismantle the master’s house: Decolonising intercultural communication. Language and Intercultural Communication 22(3). 381–390. https://doi.org/10.1080/14708477.2022.2046019.
Foucault, Michel. 1989/1969. The archaeology of knowledge (A. M. Sheridan Smith, Trans.). London: Tavistock Publications.
Fraser, Nancy. 1995. From redistribution to recognition? Dilemmas of justice in a ‘Post-Socialist’ age. New Left Review 1(212). 68–93.
Harland, Richard. 1987. Superstructuralism: The philosophy of structuralism and post-structuralism. London: Methuen.
Holliday, Adrian & Malcolm N. MacDonald. 2020. Researching the intercultural: Intersubjectivity and the problem with postpositivism. Applied Linguistics 41(5). 621–639. https://doi.org/10.1093/applin/amz038.
Jenks, Christopher J. 2025. Communicating the cultural other: Trust and bias in generative AI and large language models. Applied Linguistics Review 16(2). 787–795. https://doi.org/10.1515/applirev-2024-0196.
Johnson, Simisola. 2022. Racing into the fourth industrial revolution: Exploring the ethical dimensions of medical AI and rights-based regulatory framework. AI Ethics 2. 227–232. https://doi.org/10.1007/s43681-022-00153-9.
Jones, Rodney H. 2024. Culture machines. Applied Linguistics Review. https://doi.org/10.1515/applirev-2024-0188.
Kubota, Ryuko. 2016. The multi/plural turn, postcolonial theory, and neoliberal multiculturalism: Complicities and implications for applied linguistics. Applied Linguistics 37(4). 474–494. https://doi.org/10.1093/applin/amu045.
Li, Wei. 2018. Linguistic (super)diversity, post-multilingualism and translanguaging moments. In Angela Creese & Adrian Blackledge (eds.), The Routledge handbook of language and superdiversity: An interdisciplinary perspective, 16–29. London: Routledge. https://doi.org/10.4324/9781315696010-3.
Lyotard, Jean-François. 1984. The postmodern condition: A report on knowledge. Manchester: Manchester University Press.
Marx, Karl. 1976/1867. Capital: A critique of political economy, vol. 1. London: Penguin.
Mehrabi, Ninareh, Fred Morstatter, Nripsuta Saxena, Kristina Lerman & Aram Galstyan. 2021. A survey on bias and fairness in machine learning. ACM Computing Surveys 54(6). 1–35. https://doi.org/10.1145/3457607.
Moon, Dreama & Michelle A. Holling. 2015. A politic of disruption: Race(ing) intercultural communication. Journal of International and Intercultural Communication 8(1). 1–6. https://doi.org/10.1080/17513057.2015.991073.
Nakayama, Thomas K. 2020. Critical intercultural communication and the digital environment. In Guido Rings & Sebastian Rasinger (eds.), The Cambridge handbook of intercultural communication, 85–95. Cambridge: Cambridge University Press. https://doi.org/10.1017/9781108555067.008.
Obermeyer, Ziad, Brian Powers, Christine Vogeli & Sendhil Mullainathan. 2019. Dissecting racial bias in an algorithm used to manage the health of populations. Science 366. 447–453. https://doi.org/10.1126/science.aax2342.
O’Regan, John P. 2021. Global English and political economy. London: Routledge. https://doi.org/10.4324/9781315749334.
Phipps, Alison. 2014. ‘They are bombing now’: ‘Intercultural dialogue’ in times of conflict. Language and Intercultural Communication 14(1). 108–124. https://doi.org/10.1080/14708477.2013.866127.
Saheb, Tahereh. 2023. Ethically contentious aspects of artificial intelligence surveillance: A social science perspective. AI Ethics 3. 369–379. https://doi.org/10.1007/s43681-022-00196-y.
Stahl, Bernd Carsten, Doris Schroeder & Rowena Rodrigues. 2023. Ethics of artificial intelligence: Case studies and options for addressing ethical challenges, 1st edn. Cham: Springer Nature. https://doi.org/10.1007/978-3-031-17040-9.
Tan, Rebecca & Regine Cabato. 2023. Behind the AI boom, an army of overseas workers in ‘digital sweatshops’. Washington Post. https://link.gale.com/apps/doc/A762692208/AONE?u=anon~8939f117&sid=googleScholar&xid=99353146 (accessed 13 January 2024).
Urciuoli, Bonnie. 2008. Skills and selves in the new workplace. American Ethnologist 35(2). 211–228. https://doi.org/10.1111/j.1548-1425.2008.00031.x.
Weber, Max. 2001/1905. The protestant ethic and the spirit of capitalism. London: Routledge.
Weedon, Chris. 1987. Feminist practice and poststructuralist theory. London: Wiley.
© 2024 the author(s), published by De Gruyter, Berlin/Boston
This work is licensed under the Creative Commons Attribution 4.0 International License.