Article Open Access

Developing AI literacies and negotiating professional identities: a study of pre-service English teachers in a ChatGPT-facilitated pedagogy

  • Yilin You
  • Yue Zhang
Published/Copyright: June 16, 2025

Abstract

As generative AI models such as ChatGPT reshape language education, understanding how teachers engage with AI-mediated practices and negotiate professional identities becomes essential. This longitudinal multiple-case qualitative study investigates how non-native pre-service English teachers (PSTs) at a university in the Hong Kong SAR, China, developed AI literacies and negotiated their professional identities during a 13-week M.Ed. course integrating ChatGPT-facilitated pedagogy. Notably, the study draws on Darvin and Norton’s investment model, reconceptualizing AI literacies as situated social practices that mediate the negotiation of identity, capital, and ideology. Data were collected through end-of-course interviews, learner artifacts, and teaching materials. Findings reveal that PSTs’ development of AI literacies is a multidimensional process involving (1) recognizing AI’s (linguistic) strengths, (2) refining prompt engineering competencies, and (3) cultivating critical awareness of AI’s limitations and strategic use. While participants leveraged AI-mediated linguistic, cultural, and technical capital to enhance perceived professionality and assert more legitimate language teacher identities, their tendency to adopt AI-generated “native-like” outputs without sufficient critical evaluation may risk reinforcing existing linguistic hegemony. The study offers practical implications for integrating AI literacies development into teacher education, balancing technological affordances with emancipatory pedagogy.

1 Introduction

In language education, the integration of artificial intelligence (AI) technologies, particularly large language models like ChatGPT, raises critical questions about teachers’ preparedness and identity development in this transforming educational landscape (Guan et al. 2025a; Rajak et al. 2024). Pre-service teachers (PSTs), especially multilingual PSTs who are labelled as “non-native” English teachers (NNETs), face unique challenges in this AI-enhanced environment. As emerging professionals, they must navigate AI integration while constructing their professional identities at a crucial stage of their career development (Ayanwale et al. 2024). For NNETs, this navigation becomes particularly complex as they have historically struggled with perceived inadequacies in establishing professional legitimacy compared with native English teachers (NETs; Aneja 2016; Holliday 2006). The intersection of their pre-service status and “non-native” identity creates a unique and under-researched context for examining how emerging professionals adapt to the AI-mediated educational landscape and negotiate their identity and legitimacy in the process (Guan et al. 2025a).

These challenges have led scholars to examine the concept of AI literacy in language education contexts, an area that remains under-explored, particularly regarding teachers rather than students (Ayanwale et al. 2024; Ma et al. 2024; Sperling et al. 2024). While early approaches to AI literacy focused primarily on technical competencies and skills (e.g., Kandlhofer et al. 2016; Long and Magerko 2020; Ng et al. 2021a, 2021b), recent scholarship has begun to reconceptualize AI literacies as social practices embedded in specific contexts and power relations (Guan et al. 2025a; Liu et al. 2024). This shift recognizes that teachers do not simply learn to operate AI tools; rather, they engage in situated practices that are inherently social and intertwined with identity, power and ideology. Grounded in (critical) digital literacies research (Darvin 2024; Darvin and Hafner 2022; Jiang and Gu 2023), this perspective particularly illuminates how NNETs navigate the affordances and constraints of AI while developing their language teacher identity (LTI) in contexts where their legitimacy has often been questioned.

This study examines how “non-native” pre-service English teachers develop AI literacies and negotiate their identities through engagement in ChatGPT-facilitated pedagogy. Darvin and Norton’s (2023) model of investment was adopted to reframe PSTs’ participation in AI-mediated practices as social practices informed by identity, capital and ideology. In so doing, the study reveals how these interactions influence their AI literacies practices and shape their evolving professional identities.

2 Literature review

2.1 AI in language education: emerging landscape for PSTs

The emergence of AI technologies, specifically generative models like ChatGPT, has fundamentally transformed language teaching and learning practices (Liang et al. 2023). These technologies offer unprecedented capabilities in language processing and feedback provision, introducing new possibilities for personalized learning experiences and immediate linguistic support (Huang et al. 2023). For example, Ji et al.’s (2022) systematic review suggested that conversational AIs could assist language teachers by playing the role of “a conversational partner, feedback provider, resource provider, and needs analyst” (p. 57).

However, such integration also raises critical questions about teachers’ readiness and preparedness to adapt to an increasingly AI-mediated educational landscape (Ayanwale et al. 2022; Du et al. 2024; Guan et al. 2025a; Lan 2024). Research across diverse disciplines has indicated that teachers often struggle with when and how to integrate AI tools effectively (ElSayary 2024), experience anxiety and bewilderment about using new technology (Li and Huang 2020), and harbour apprehension about AI as a threat to students’ critical thinking and writing abilities (Hong 2023). Furthermore, studies have also underscored the risks associated with poorly guided AI use, particularly its potential to perpetuate existing stereotypes and hegemonies, diminish learner autonomy, and facilitate academic misconduct (De Roock 2024; Tang and Su 2024). These findings emphasize the need for critical and reflective engagement with this inherently imperfect tool (Dai and Hua 2025).

PSTs represent a crucial demographic: they occupy a unique position as future educators while facing distinct challenges in adapting to the AI-enhanced educational context. At this early stage of their professional development, PSTs’ developing attitudes toward and competencies with AI will profoundly influence their future teaching practices, and thus potentially shape how large numbers of students come to understand and employ AI technologies (Sperling et al. 2024). However, the challenges of AI integration are also acute for PSTs, as they must navigate this technological transformation during a crucial period when their professional identities and pedagogical practices are yet to be established (Guan et al. 2025a; Lan 2024). These challenges become even more pronounced as AI technologies continue to evolve rapidly, requiring PSTs to constantly adapt and update their understanding and approaches accordingly.

2.2 Conceptualizing AI literacies as social practices

Given the evolving AI-mediated landscape and the challenges it presents for language teachers, AI literacy has been increasingly recognized as a crucial competency for teachers (Ayanwale et al. 2024; Du et al. 2024; Ma et al. 2024; Sperling et al. 2024). Early conceptualizations primarily focused on technical or operational competencies, defining AI literacy as the ability to critically evaluate, communicate and collaborate effectively with AI tools purposefully across various contexts (Kandlhofer et al. 2016; Long and Magerko 2020; Ng et al. 2021a, 2021b). More specific to language teachers, AI literacy has been conceptualized to emphasize teachers’ ability to understand AI usage for pedagogical purposes, including its functionalities, limitations, and ethical implications, without the need to grasp the complexity of the underlying technology and technical issues (Liang et al. 2023; Ma et al. 2024). This skill-oriented approach offers a comprehensive framework that is conducive to operationalization, outlining specific skills and knowledge relevant to guiding and assessing teachers’ competency in using AI tools effectively.

Nevertheless, the complexities of engaging with AI practices necessitate moving beyond viewing AI literacy as merely a set of technical skills. Instead, recent scholarship argues that AI literacies should be viewed as social practices of negotiating the affordances and constraints of AI platforms while demonstrating a critical awareness of how AI platforms can be biased in providing information (Liu et al. 2024). Grounded in the concept of (critical) digital literacies (Darvin 2024; Darvin and Hafner 2022), the conceptualization of AI literacies recognizes that users’ engagement with AI technologies is diversified, context-bound and intertwined with one’s identities, resources, and social purposes. Thus, “literacies” in plural form can effectively indicate that users negotiate the affordances and constraints of AI tools in different ways to achieve various purposes, resulting in diversified, contextualized AI-related practices.

Moreover, Darvin (2024) argues that digital tools are never neutral, as platform designs and algorithms often index the interests of specific institutions and individuals. Therefore, AI literacies are circumscribed by power relations and inequalities and more specifically, critical AI literacies involve practices of interrogating how power operates in the reproduction of ideologies, inequalities and modes of exclusion in these AI platforms (Ghimire 2025; Leander and Burriss 2020). Overall, this shift from AI literacy to AI literacies recognizes that engaging with AI is inherently social, involving negotiations of power, identity, and social purposes. As Liu et al. (2024) commented, AI literacies cast a light on “the particular competencies, attitudes, and dispositions needed to use AI tools in agentive ways” (p. 20).

2.3 Negotiation of LTI in AI-enhanced language teaching

The social practice perspective on AI literacies becomes particularly relevant when considering how teachers construct their professional identities in AI-enhanced environments. LTI has long been recognized as a central issue in teachers’ professional development, affecting their beliefs, emotions and professional practices (Kanno and Stuart 2011; Norton 2016). It is the continuous process of teachers negotiating their roles, values and behaviours through engaging in varying discourses and practices, influenced by diverse cognitive, sociocultural, and emotional factors (Beauchamp and Thomas 2009; Pennington and Richards 2016; Yuan and Lee 2015). LTI is widely regarded as a focal point for research and discussions on language teacher education (Kanno and Stuart 2011), with the field committed to exploring key events that shape its development. In the context of AI integration, PSTs are not only adapting to this emerging technology but also redefining their interactions with these powerful tools, thereby reimagining their LTIs in potentially transformative ways.

For multilingual teachers, this identity negotiation becomes even more complex. Studies have extensively documented how NNETs often struggle with professional legitimacy due to entrenched native-speakerism – a dominant ideology in English language teaching (ELT) that privileges and legitimizes NETs over NNETs (Aneja 2016; Holliday 2006; Park 2012). They often experience the “impostor syndrome” (Bernat 2008), a feeling of lack of confidence in one’s English proficiency, fearing that their English will be judged negatively by students and others. Consequently, they tend to experience barriers in developing a confident professional self-image, which could affect their pedagogical choices and classroom authority (Creese et al. 2014). These identity negotiations become particularly relevant in light of AI integration, as generative AI’s capacity to produce native-like language output effortlessly might introduce new dimensions to multilingual teachers’ identity and legitimacy.

In a recent study, Tsou et al. (2024) investigated the potential role of ChatGPT in empowering English-medium instruction (EMI) teachers of STEM subjects in Chinese Taiwan to develop a competent EMI teacher identity through a professional development programme. The study found that ChatGPT can serve as an empowering EMI teacher companion by easing their insecurity regarding a perceived lack of English proficiency and encouraging them to engage with students more attentively. Similarly, Ghiasvand and Seyri’s (2025) study found that AI significantly contributed to Iranian English as a foreign language (EFL) teachers’ identity (re)construction, for example by enhancing teachers’ professional expertise and knowledge base and spurring teachers to be self-reflective practitioners. Meanwhile, Guan et al. (2024) conducted a mixed-methods study of English learners in the informal digital learning context. The findings revealed that AI-mediated informal digital learning practices can improve university English learners’ oral proficiency, but that an AI conversational partner alone is not adequate to sustain continuous extramural learning practices. Guan et al.’s (2025a) qualitative inquiry into AI-integrated training for pre-service teachers also indicates a lack of understanding and training for PSTs to effectively and adequately integrate AI into their learning-to-teach practices as a process of teacher identity negotiation.

The intersection of pre-service and “non-native” English teacher identities provides a valuable lens for examining LTI in AI-enhanced environments. This dual positioning at the margins of professional authority makes their engagement with AI technologies particularly significant for understanding how power relations and professional identities are negotiated in emerging technological contexts. However, despite this relevance, there remain significant gaps in the literature concerning how these identities evolve amid the integration of AI.

2.4 Research gaps and theoretical framework

While existing research has examined pre-service language teachers’ perceptions of AI tools (ElSayary 2024; Guan et al. 2025a), there are significant gaps in understanding the complex process through which PSTs develop AI literacies under guided, AI-facilitated pedagogy. Meanwhile, although PSTs’ and NNETs’ identity has been extensively studied (Park 2012; Yuan and Lee 2015), the ways in which AI technologies introduce new dimensions to these identity negotiations remain underexplored. These gaps highlight the urgent need for research that examines how teachers engage with AI-mediated practices to develop AI literacies and construct professional identities. Understanding this process is crucial for informing teacher education practices in the age of AI.

To this end, Darvin and Norton’s (2023) model of investment serves as a valuable theoretical framework, reframing AI-mediated pedagogical activities as situated social practices and thereby effectively acknowledging the diverse social contexts faced by these “non-native” PSTs. Investment refers to “the choice to participate in a social practice (such as AI use) and involves understanding the material and symbolic context in which this choice is made” (Darvin and Norton 2023, p. 33). In the context of AI integration, investment acknowledges that learners need to negotiate their resources and assert their identities in order to invest in AI practices (Darvin and Norton 2023).

The model emphasizes three interconnected components: identity (the multiple positions that individuals take up or are assigned), capital (various forms of resources that can be converted into symbolic power), and ideology (dominant values and power structures that influence what practices are considered legitimate) (Figure 1).

Figure 1: Model of L2 investment (Darvin and Norton 2023, p. 42).

Identity is “how a person understands his or her relationship to the world, how that relationship is structured across time and space, and how the person understands possibilities for the future” (Norton 2016, p. 45). Capital, drawing on Bourdieu’s (1986) conceptualization, denotes how power exists in various forms and is distributed to represent the structure of the social world. When different forms of capital are perceived and recognized as legitimate in specific fields, they become what Bourdieu calls symbolic capital, which constitutes symbolic power – an “invisible” power embedded in the social fabric, operating through norms, values, and language and supporting dominant power structures and social orders (Kramsch 2021). In the investment model, ideology extends beyond language ideology to encompass broader ways of thinking that shape the dominant beliefs and practices of specific social groups or entities. Ideologies are important in understanding investment as they assign value to capital and shape how identities are “created, sustained and contested” (Gee 2000, p. 114).

In Darvin and Norton’s (2023) investment model, identity, capital, and ideology function as interconnected components that collectively shape how learners engage with language practices. Identity positions influence what forms of capital one seeks to acquire, while the capital one possesses affects how they can position themselves in various contexts. Simultaneously, prevailing ideologies determine which identities are recognized as legitimate and which forms of capital are valued in specific fields. For “non-native” PSTs navigating AI-enhanced environments, this theoretical lens is particularly relevant as it illuminates how their investment in AI-mediated practices may be influenced by their evolving self-perception (identities), their access to linguistic and technical resources (capital), and the dominant societal beliefs about language teaching expertise and AI use (ideology). The model thus provides a comprehensive framework for examining how these teachers might negotiate their professional development amid technological innovation.

Moreover, the notion of investment can be adopted to explore how modes of exclusion operate in professional contexts where language learners and teachers position themselves and are positioned not only in terms of different social (Zhang and Darvin 2025) and professional (Guan et al. 2025b; Zhang 2024b; Zhang and Huang 2024) identities, but also in terms of how learners can be empowered through critical pedagogies that enable them to claim legitimacy as speakers of the language (Gonzales et al. 2025; Zhang and Wilkinson 2024). The model of L2 investment is particularly relevant for examining how “non-native” PSTs develop AI literacies because it recognizes the multiple identity positions teachers navigate (Norton and Toohey 2011), the shifting contexts of teacher identity negotiation (Zhang and Huang 2024), and how PSTs invest in and divest from specific learning-to-teach practices (Zhang 2024b).

In the Chinese context, “non-native” learners of English can be empowered to counter insidious (neo)colonial ideologies and invest in their identities as legitimate users of World Englishes (Zhang and Wilkinson 2024) through a critical pedagogy that changes their attitudes towards World Englishes varieties (Gonzales et al. 2025). In the use of AI tools, learners may resort to such tools as a source of authority (Loftus and Madden 2020), potentially shifting the power relationship between language learners and “native” speakers. Also, as “non-native” learners of English, PSTs may develop different AI “literacies” that are “intrinsically diverse, historically and culturally variable, practices with (AI) texts” (Collins and Blot 2003, p. 4). Compounding the problematic conceptualization of PSTs as “non-native” – a positioning that challenges their ownership of English and legitimate identity as users of World Englishes (Zhang in press) – digital tools such as online tutoring platforms can facilitate or even encourage their users to enact native-speakerism (Curran 2023a). For instance, teachers positioned as diverging from expectations of nativeness can be rated lower (Curran 2023b). Despite the biases and stereotypes embedded in AI output (Zhang and Gonzales 2025), little attention has been directed to how PSTs actually invest in their professional teacher identities and learning-to-teach practices involving AI. While AI can enhance learner motivation (Guan et al. 2025b), highly motivated language learners may not necessarily invest in the corresponding language and literacy practices (Darvin and Norton 2023). Aligned with the theoretical framework, we propose one research question to address this issue: How do “non-native” pre-service English teachers develop AI literacies and negotiate their LTIs through investment in AI-mediated practices?

3 Methodology

3.1 Design

This study employs a longitudinal multiple-case approach, aiming for an in-depth understanding of a small group of participants in a situated context (Duff 2014). By selecting eight cases, the study enables a thematic analysis that extends beyond individual instances, capturing the evolution of learner behaviors and perspectives over time. The authors, as both researchers and practitioners, adopt a proactive stance, collaboratively driving change within their distinct sociocultural contexts.

3.2 Participants and context

The present study was conducted over a 13-week English PST education course, offered from January to April 2024 at a public university in Hong Kong SAR, China. The second/corresponding author, serving as the course lecturer and coordinator, was responsible for designing the course, delivering lectures, and integrating ChatGPT into both in-class and extracurricular learning activities for 61 M.Ed. students. After designing the study and relevant protocols, at the end of the term, the second/corresponding author introduced the project to the students, eleven of whom volunteered to participate. From this group, eight participants (see Table 1) were selected based on (a) their diverse linguistic and regional backgrounds, and (b) their willingness to provide all learner artifacts and the relevance of those artifacts to key themes of the study.

Table 1:

Participant profiles.

Name Gender L1 L2 L3 Region of birth Program
May Female Mandarin Jiangxi dialect English Jiangxi M.Ed. (EAP)
Emma Female Mandarin English / Sichuan M.Ed. (EAP)
Elizabeth Female Cantonese Mandarin English Guangdong M.Ed. (EAP)
Julie Female Mandarin Sichuan dialect English Sichuan M.Ed. (EAP)
Victoria Female Mandarin English / Hebei M.Ed. (EAP)
Luna Female Chaoshan dialect Mandarin English Guangdong M.Ed. (EAP)
Christine Female Mandarin Cantonese English Guangdong M.Ed. (EAP)
Tom Male Cantonese Mandarin English Guangdong M.Ed. (EAP)

The development of digital literacies in the AI era is crucial for Hong Kong, which aims to become a leading global “Smart City” (Education Bureau 2015). Despite significant progress in the 2017 Smart City Blueprint (Education Bureau 2023), more empirical studies are needed to understand how different stakeholders in education can be empowered to use AI for different purposes in real-life contexts.

In the course, students were invited to engage with AI practices by using ChatGPT as part of a larger university-level project, the 6P model, which consists of “Plan writing, Prompting questions and text, Previewing draft(s), Producing an assignment, Peer review, and Portfolio tracking”. These six phases, interactive and iterative in nature, aim to promote the use of AI-enabled text-generating tools such as ChatGPT for the development of critical and/or reflective thinking by students. In the course, PSTs were required to submit an 8,000-word literature review on a topic in the field of ELT as the most important part of their final assessment, following the 6P process. This pedagogical model corresponds to self-regulated learning, in which students set goals and plan their way of writing forward in the first step (Zimmerman 2002), apply the writing strategies and AI knowledge that they learn in class and monitor their own writing progress in steps two to five (Guo 2022), and reflect upon their writing and learning processes and formulate strategies for future writing and learning tasks involving AI (Bavlı 2023).

3.3 Data collection and analysis

Guidance was provided in various AI-facilitated learning-to-teach activities (see Appendix A). Both authors speak Mandarin as their mother tongue, sharing the ethnic and linguistic backgrounds of the participants. The eight selected participants were invited to a one-on-one Zoom techno-reflective narrative interview (TRNI; Zhang 2023, 2024a) after the course, in April 2024, conducted in Mandarin, the mother tongue shared by the interviewer (a mainland Chinese research assistant of the corresponding author) and the interviewees. The TRNIs, lasting 60–105 minutes, explored L2 learners’ investment in English language and literacy practices in EMI settings, revisiting their use of ChatGPT and related digital literacies practices and reflecting on their classroom engagement with AI. The first author, in the role of research assistant, analyzed the interview transcripts and conducted an initial analysis of all the learner artifacts and teaching materials provided by the second author.

We conducted a thematic analysis of the eight interview transcripts and learner artifacts, generating initial codes, identifying emergent themes, and detecting patterns. The data were coded using NVivo 12, first following an inductive approach with line-by-line coding to capture both implicit and explicit meanings, and later mapped onto the model of L2 investment (Darvin and Norton 2023) to generate themes. For instance, “with ChatGPT being so powerful, I wonder if there’s still a place for us as English teachers” was assigned the code “teacher identity crisis”, as this utterance expressed worry and concern about how ChatGPT’s language ability may threaten English teachers’ (especially NNETs’) perceived expertise in context. Codes were later triangulated by cross-referencing different data sources (interviews and learner artifacts) to confirm or refine the emergent themes, in an iterative process of analysis and reflexivity. The coding scheme is attached in Appendix B.
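To make the coding logic concrete, the sketch below shows, in schematic form only, how inductively generated codes can be grouped under the investment-informed themes listed in Appendix B. It is not the authors’ actual NVivo workflow; the source names and code labels are reproduced from the examples and coding scheme in this article, and the grouping itself is purely illustrative.

```python
# Rough sketch of the code-to-theme mapping described above (illustrative only;
# the actual analysis was carried out in NVivo 12, not in Python).
from collections import defaultdict

# Each coded excerpt pairs a data source with an inductively assigned code.
coded_excerpts = [
    ("May-interview", "Resistant attitude"),
    ("Luna-interview", "Teacher identity crisis"),
    ("Emma-learner artifact", "Selective adoption of ChatGPT output"),
]

# In a later, deductive pass, codes are mapped onto themes informed by the
# model of L2 investment (identity, capital, ideology); cf. Appendix B.
code_to_theme = {
    "Resistant attitude": "AI literacies development",
    "Teacher identity crisis": "Identity negotiation",
    "Selective adoption of ChatGPT output": "AI literacies development",
}

# Group excerpts by theme for cross-referencing across data sources.
themes = defaultdict(list)
for source, code in coded_excerpts:
    themes[code_to_theme[code]].append((source, code))

for theme, entries in themes.items():
    print(f"{theme}: {entries}")
```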

4 Results and discussion

4.1 Developing AI literacies: perceptual change, prompt literacy, and critical practices

Participants’ development of AI literacies followed a complex trajectory that integrated changes in their perceptions of AI and its use, the development of prompt literacies as a part of AI literacies, and critical AI practices.

Initially, many participants exhibited either unfamiliarity with or strong resistance to AI tools, rooted not merely in technological ignorance but also in deeply held beliefs and biases about academic traditions and AI ethics. As May articulated,

I used to be very resistant to AI…Perhaps I’m a rather rigid person. In the past, I believed that doing academic work should be a professional process, and that tools like ChatGPT and Grammarly should be avoided. I felt it was kind of like cheating. But later on, …this semester, in various classes, teachers encouraged us to use these tools, saying that they can help us learn a lot. So I slowly started to accept them. (May-interview)

This initial negative attitude reflects how (non-)investment in AI practices is inherently tied to one’s understanding of legitimate participation in academic activities and imagined identity, which are mediated by societal beliefs and prejudice about AI and its use. Akin to how PSTs may invest in and divest from learning-to-teach practices (Zhang 2024b), PSTs in this study, such as May, divested from her AI literacies practices, holding the belief that using AI equals academic dishonesty. However, as she gained more exposure to ChatGPT over the course, she came to realize its strength and talked enthusiastically about its ability to generate linguistically professional output in an instant.

Interviewer: How do you think of the article generated by ChatGPT?

May: It’s very professional.

Interviewer: Why do you think it is professional?

May: Its wording and sentences, grammar, structure, and supporting ideas were all very convincing. When we wrote the article by ourselves under time constraints, we were mentally tense. And when you write, you usually come up with one idea at a time, jot it down, and then refine the language. But GPT could generate a fully structured essay almost instantly, with sophisticated vocabulary and well-crafted sentences. If we were writing ourselves, there’s no way we could come up with so many high-level words all at once. Honestly, I feel like its abilities are way beyond what an average person can do. (May-interview)

This emphasis on ChatGPT’s strength in producing native-like English turned out to be a prominent perception among PSTs – when asked about the strengths of AI, they repeatedly referred to the way it produced “perfect” English, including “fancy vocabulary”, “excellent sentences”, and complex syntactic structures. Interestingly, this focus on ChatGPT’s advantage in English language quality was particularly pronounced during the interviews. In learner artifacts documenting participants’ peer- and self-evaluations of ChatGPT-revised and -generated articles, the focus was primarily on logic, argumentation, and coherence. For example, when Emma commented on Luna’s article, she noted “After ChatGPT has revised the version, the text becomes more logical. Explanation should be presented logically and briefly, and use more specific numbers to ensure persuasiveness” (Emma-learner artifact). This shows that while they were able to critically evaluate the content of ChatGPT’s output, they were more impressed by its ability to produce high-quality English. It suggests that, for pre-service English teachers, recognizing ChatGPT’s language strength may serve as a prerequisite for investing in AI practices and developing AI literacies.

PSTs’ development of AI literacies was also significantly supported by their growing mastery of prompt engineering and output management. Under detailed guidance provided by the teacher, participants were able to improve their abilities to generate detailed, appropriate, and targeted prompts and make modifications when necessary.

The task for writing the article was assigned during the last 20 minutes of the class, and we were already exhausted …Writing such an article at that point was challenging, cause our language use was quite basic, and we didn’t consider which vocabulary would be appropriate for children, our target students. The structure of our writing wasn’t clear either. However, GPT could tailor the output according to the instructions we provided. I remember the teacher required us to write prompts that included the age of the children who would read the article, the word limitation, and the genre. And ChatGPT generated a piece with a clear narrative structure, which was much better suited for children’s vocabulary. (Emma-interview)

From this excerpt, it is evident that Emma’s prompt literacy – the ability to generate precise prompts as input for AI systems, interpret the outputs, and iteratively refine prompts to achieve desired results (Hwang et al. 2023) – was cultivated within a situated problem-solving context. When assigned the writing task, she first identified her own weaknesses in writing the required teaching material for children, such as unsatisfactory language quality and, under fatigue, limited consideration of the target readers. The teacher then offered purposeful and targeted guidance on writing ChatGPT prompts that included genre, pedagogical purpose, and word limits, which helped Emma develop the skills for crafting prompts. This example therefore suggests that PSTs’ AI literacies, and prompt literacy in particular, can be effectively developed through a task-oriented approach that contrasts AI-assisted with non-AI-assisted writing and is accompanied by detailed guidance on prompt engineering.
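As a minimal, hypothetical illustration of the kind of structured prompt described above: the course itself used the ChatGPT web interface, and the field names and wording below are this sketch’s own rather than the course materials, but the template assembles the elements the teacher asked students to specify – target readers’ age, word limit, genre, and pedagogical purpose.

```python
# Minimal sketch of a structured prompt template of the kind described above.
# Hypothetical: field names and wording are illustrative, not taken from the course.

def build_prompt(topic: str, genre: str, reader_age: str,
                 word_limit: int, pedagogical_goal: str) -> str:
    """Assemble a targeted prompt specifying genre, audience, length, and purpose."""
    return (
        f"Write a {genre} about {topic} for {reader_age} readers. "
        f"Keep it under {word_limit} words, use vocabulary suitable for that age group, "
        f"and organise it with a clear narrative structure. "
        f"The text will be used to {pedagogical_goal}."
    )

if __name__ == "__main__":
    # Example of the kind of task described in Emma's excerpt.
    print(build_prompt(
        topic="a school sports day",
        genre="short narrative story",
        reader_age="8- to 10-year-old",
        word_limit=300,
        pedagogical_goal="introduce the simple past tense in a primary English class",
    ))
```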

Meanwhile, a significant finding emerged regarding PSTs’ critical practices in using ChatGPT. All the participants were able to identify ChatGPT’s limitations and elaborate on ways to collaborate with AI strategically and critically, as shown in the following excerpts.

GPT…might not fully understand the situation of my students. I need to adapt its suggestions according to my students’ current situation and my teaching progress. I won’t directly use its output. Instead, I use it to generate ideas, then modify and apply them. (Julie-interview)

Advice for improvement: if GPT’s text can highlight the features of an autobiography type, it will be beneficial for students to learn the features of this genre type. (Luna-learner artifact-evaluation of Julie’s work)

Sometimes the way ChatGPT writes is too advanced. The wording and the sentence pattern are too difficult… and my target students, like primary school students, won’t be able to read it… So in that case I would stick to my original one instead of using the ChatGPT-generated one, which might be suitable for sophomore or junior students in college. (Christine-interview)

When discovering ChatGPT’s limitations, participants treated them not as barriers, but learned to anticipate and work around them through improved prompting, critical evaluation, and tailored adaptation. This strategic and contextualized approach suggests that PSTs developed what could be termed “informed investment” in AI literacies practices – a sophisticated approach to collaborate with ChatGPT based on an understanding of its strengths and weaknesses, and equipped with both prompt literacy and awareness for critical adaptation.

Notably, their statements reflect an awareness of AI as a tool requiring human oversight. However, they do not critically interrogate whose knowledge systems and sociocultural structures underpin ChatGPT’s outputs, nor do they question whether its design inherently privileges specific ways of thinking and working. There appears to be limited explicit critique of power hierarchies, corporate control, or epistemic dominance, which lies at the center of critical digital literacies (Darvin 2024). A more explicit interrogation of power structures would question whose realities are encoded in AI, whose voices are missing, and how this affects pedagogical autonomy (Darvin 2024). Without such critical interrogation, learners may accept AI-generated content as neutral or universally applicable, overlooking the ways in which it reinforces existing biases and inequities (Dai and Hua 2025). This absence of scrutiny also limits their ability to challenge dominant ideologies and develop a more agentive stance in navigating AI-mediated educational environments. Encouraging deeper critical reflection on these issues would empower PSTs to actively negotiate and resist the epistemic and structural constraints embedded in the design of AI tools.

Together, the three critical dimensions of AI literacies that emerged from the data – perceptual change, prompt literacy, and critical practices – form a mutually reinforcing cycle in PSTs’ AI literacies development. Shifts in perception serve as the foundation: only when PSTs overcome their prejudices against AI and recognize its value did they invest in developing AI literacies. As May’s journey demonstrates, her transition from viewing AI as “cheating” to seeing it as a powerful tool enabled her subsequent investment in mastering prompt engineering. The development of such prompt literacy, in turn, reinforces positive perceptions. As participants became more adept at crafting effective prompts and managing outputs, they developed greater appreciation for AI’s potential. Emma’s experience with the writing task illustrates this: her successful generation of age-appropriate content through careful prompt engineering enhanced her recognition of AI’s pedagogical value. This virtuous cycle ultimately facilitates the development of critical practices. As both perceptions and prompt literacies mature, participants develop more sophisticated strategies for AI integration. Tom’s reflection exemplifies this culmination: his understanding of AI’s basic affordances combined with professional judgment to form an “informed investment” in AI practices.

This multi-dimensional development process extends current understanding of AI literacy development by highlighting its cyclical and interconnected nature, moving beyond linear models of technology adoption to recognize the complex interplay between perception, prompt literacy, and critical awareness. This interrelated development also aligns with investment theory, which suggests that meaningful investment in a social practice is made based on an understanding of “the material and symbolic context in which this choice is made” (Darvin and Norton 2023, p. 33). In this case, PSTs’ sustained investment in AI practices was initiated by learners’ recognition of the value of practices (perception), enabled by their growing ability to participate (prompt literacy), and stimulated by the development of strategic approaches (critical practices).

4.2 Negotiating LTI through enhanced AI literacies

Participants’ enhanced AI literacies catalyzed significant shifts in their professional identity construction. As pre-service English teachers, the participants demonstrated heightened awareness of the linguistic features present in ChatGPT’s outputs, often remarking on its perceived “native-like” language quality. As Elizabeth observed,

When ChatGPT helped me revise my article, I realized that some of my expressions were still ‘Chinglish,’ and there’s actually a significant difference compared to how native speakers write. For ChatGPT, many of its sentences use passive voice, which makes the output seem more sophisticated. The constant variation in sentence structures also makes the text less monotonous and dull. It makes the article feel richer and less generic in terms of sentence patterns. (Elizabeth-interview)

This heightened attention to syntactic and rhetorical features reflects their dual identity as both language learners and future teachers, especially in contexts where language learning and teaching prioritize performance-oriented practices that emphasize adherence to established linguistic standards as markers of proficiency and legitimacy. Moreover, contrasting ChatGPT’s “perfect” English with their own “Chinglish” writing showcased their insecurities about their own language proficiency as “non-native” speakers. For them, proficiency in English is not merely a skill but a core component of their professional legitimacy (Park 2012). Hence, engaging with AI magnified their anxiety about the inadequacies of their English skills, inadvertently reinforcing deficit perspectives and simultaneously positioning them as insufficient English users and perennial English learners. Luna explicitly mentioned that throughout the activity of letting ChatGPT generate and modify the text, she realized that her English was not good enough and felt that she needed to work harder. As a “non-native” speaker, she positioned herself in a powerless way against ChatGPT’s seemingly flawless language output (Norton and Toohey 2011).

This intensified sense of inadequacy thus produced complex and sometimes contradictory effects on their LTI construction. While all the participants expressed a sense of crisis triggered by ChatGPT’s production of “flawless” English, some of them felt threatened and worried about their own professionality and competitiveness as English teachers.

With ChatGPT being so powerful, I wonder if there’s still a place for us as English teachers. (Luna-interview)

Meanwhile, others saw it as an opportunity to reflect, grow and build alternative expertise and professionality as language teachers.

ChatGPT’s convenience gives us a strong sense of crisis because it can indeed accomplish a lot, even outperforming many of us ordinary teachers in certain tasks…When technology becomes so advanced, as a teacher, you have to think about your future position and what you need to do. It’s not like AI will completely replace you, but if you don’t move forward, there will eventually come a day when you’ll be eliminated. (May-interview)

I still believe that the role of a teacher is irreplaceable. Even if ChatGPT’s output is well-polished, you cannot teach students to polish like that through your teaching…I think it serves more as an auxiliary tool or functional aid, rather than replacing our expertise and knowledge. (Christine-interview)

Using GPT makes us reflect on how to work more efficiently. GPT improves efficiency by providing the basics, such as article structures or information, which serves as a guide, a prompt, or a foundational tool. However, as a professional, you still need to tailor it to your own context and focus on what’s truly important for your work. (Tom-interview)

Participants’ diversified responses towards AI integration and its impact on LTI construction reflect their different imagined identities and understandings of teachers’ expertise. While Luna may have imagined language teachers as “language experts” and thus felt threatened by the emergence of ChatGPT, May, Christine and Tom began envisioning new professional identities as AI-empowered educators who aim to facilitate students’ learning with the help of AI. Engaging with AI deepened their reflection on the nature of teaching and teachers’ roles, stimulating a reimagination of LTIs in their future careers. During the interviews, some participants even planned strategic uses of AI in their future teaching practices, such as writing teaching plans, designing role-play lines for classroom activities, and providing feedback on students’ assignments.

Thus, investing in AI literacy practices has the potential to empower NNETs to adopt new identities that extend beyond mere language instruction by equipping them with richer pedagogical resources and deeper reflection on education. However, these practices may also lead to feelings of insecurity, fear and concerns regarding PSTs’ perceived professional expertise. These observations resonate with findings from existing empirical studies on the dual influence of AI on EMI and EFL teachers’ identity, expertise, confidence, and shift of roles (Ghiasvand and Seyri 2025; Tsou et al. 2024; Zaman et al. 2024; Zhong et al. 2023). This suggests that future research should focus more on the identity shifts experienced by PSTs within AI-facilitated pedagogy and provide appropriate guidance on AI utilization that considers the fluid nature of identity of PSTs (Guan et al. 2025a).

4.3 Transforming capital and power relations through AI literacies practices

In light of PSTs’ renegotiation of their LTIs, they accumulated new forms of capital that created opportunities for challenging existing power structures and traditional hierarchies in their language education contexts. For example, AI’s capacity to instantly generate native-like English greatly alleviated their linguistic insecurities as NNETs and equipped them with crucial linguistic capital that built up their professional expertise. Furthermore, their growing confidence in mastering AI tools such as ChatGPT demonstrated an accumulation of technical capital, while their newly gained critical understanding of AI’s limitations and advantages constituted important cultural capital in the digital age.

These various forms of capital – especially the linguistic capital – are particularly important and empowering given the ideological realities faced by PSTs. Under the entrenched ideology of native-speakerism shaping ELT, which privileges native-speaker English as the ultimate model of English learning and invalidates “non-native” English, NNETs are pressured to achieve native-like English to establish professional legitimacy (Creese et al. 2014). Specifically, for PSTs raised in the Chinese mainland, English learning journeys are deeply influenced by performance-oriented practices that prioritize conformity to standardized, native-speaker norms (Adamson 2004; Fong 2021). Thus, they are more likely to develop heightened attention to “correctness” in English use, and consequently may experience linguistic insecurity when failing to achieve perceived standards (Mei 2024). Under such circumstances, the linguistic capital gained through investment in AI literacies practices was successfully transformed into symbolic power (Kramsch 2021), enhancing PSTs’ perceived professionality and helping them establish a more legitimate professional identity. However, this transformation also carries the risk of reinforcing the ideology of native-speakerism that appears to underlie ChatGPT’s output, as participants frequently adopted its “correct” English without critically reflecting on the biases and linguistic hegemony embedded in this process.

Another significant finding emerged regarding how AI interaction reshaped traditional power relations in PSTs’ professional development. Unlike conventional hierarchical relationships between teachers and students, or between “native” and “non-native” speakers, PSTs in the current study could freely experiment with and learn from AI without fear of judgment. As Victoria reflected,

If the question is related to English teaching, I can ask my English teacher. However, sometimes I feel intimidated by their authority or worry about bothering them, and communication with them isn’t always timely or convenient. But with GPT, it’s just a robot. It won’t laugh at me, and it can provide timely feedback. Although it’s not as professional as the teacher in some cases, as a companion it is already very good! (Victoria-interview)

In traditional hierarchical relationships, power dynamics are largely embodied through linguistic capital differentials – teachers or “native” speakers possess the more “legitimate” language expertise that PSTs strive to acquire. As Victoria’s reflection indicates, this hierarchical relationship often creates anxiety and hesitation. AI interaction, however, disrupts this traditional power structure by providing judgment-free, high-quality linguistic support. Unlike human interactions where language use is constantly subject to evaluation, AI offers a space where PSTs can freely experiment with and learn from linguistic models without fear of negative assessment. This democratization of access to linguistic resources fundamentally alters the power dynamics of professional development.

This interesting finding connects to broader sociocultural dynamics in East Asian contexts. In societies where people tend to have pervasive attentiveness to other people and a more interdependent construal of the self (Markus and Kitayama 1991), the hierarchical nature of teacher-student relationships can significantly constrain PSTs’ identity construction and professional development (Song 2016). The use of AI tools, therefore, may open up an important space for PSTs to seek help from non-hierarchical relationships and develop professional identities with greater agency. This enables PSTs to accumulate capital and establish LTIs without the constraints of traditional hierarchical relationships, potentially allowing for more autonomous and confident teacher identities to emerge.

Furthermore, this reshaping of power relations challenges traditional notions of expertise in language teacher education (Johnson 2009). Rather than a unidirectional transfer of knowledge from expert to novice, AI interaction enables a more dynamic and autonomous approach to professional development where PSTs can actively construct their expertise through strategic engagement with technological resources.

Overall, the study underscores the empowering and transformative potential of AI integration in interrogating established power structures in language education, resonating with research on critical digital literacies (Darvin 2024; Jiang and Gu 2023). Future research can take on a more explicit focus on critical AI literacies to reimagine how AI technology can be leveraged to disrupt native-speakerism and create more empowering pathways for “non-native” PSTs’ professional development.

5 Conclusions

This study revealed how PSTs’ AI literacies were developed through a complex interplay of recognition of AI tools’ strength, enhanced prompt literacy, and development of critical awareness of AI’s limitations and strategic use. These crucial dimensions offer important insights for future pedagogy on AI integration in PST education, emphasizing the usefulness of a problem-based approach, hands-on experience, and detailed guidance on prompt engineering and critical reflection.

The current study also demonstrated how the development of AI literacies enabled PSTs to construct more legitimate LTIs by enhancing their perceived professionality. Investment in AI practices helped PSTs gain prompt literacy and a critical understanding of AI tools, and allowed them to generate native-like English output effortlessly, which contributed to their accumulation of technical, cultural, and linguistic capital. Against the ideological backdrop of native-speakerism, these various forms of capital were effectively transformed into symbolic power that compensated for their insecurities about their perceived lack of professionality as NNETs and PSTs. Specifically, AI’s capacity to generate high-quality, native-like English content significantly transformed their identity positions from inadequate English users to more competent, legitimized language teachers. Future research should further investigate the ethical implications of AI use in teacher education, particularly in relation to critical engagement with issues of bias, epistemic inequality, and the reinforcement of dominant ideologies such as linguistic hegemony and the native/non-native division. By fostering critical AI literacies, teacher education programs can empower PSTs to navigate these ethical complexities and promote more equitable and inclusive practices in AI-mediated classrooms.

The specific sociocultural and educational context of Hong Kong SAR, China provides a distinctive lens for examining AI literacies and professional identity construction. Its hierarchical teacher-student relationships, performance-oriented education system, and the pervasive influence of native-speakerism in ELT significantly shaped how participants perceived and engaged with AI tools. For example, AI’s capacity to democratize learning by bypassing traditional hierarchies holds particular significance in East Asian contexts, where such power dynamics are more pronounced. Similarly, participants’ strong emphasis on linguistic accuracy and native-like proficiency reflects regional ideologies that prioritize standardized norms rooted in native-speaker ideals.

These contextual characteristics enrich our understanding of PSTs’ engagement with AI tools and offer transferable insights for regions with similar sociocultural dynamics. They also highlight the need for context-sensitive approaches to AI integration in teacher education, empowering PSTs to navigate and challenge the sociocultural and ideological constraints embedded in their professional contexts. Future research should explore how PSTs in diverse settings engage with AI tools, particularly in contexts with differing educational hierarchies and linguistic ideologies. Comparative studies would be valuable in identifying shared and context-specific dynamics in developing AI literacies and professional identities (e.g., Tsou et al. 2024).

Regarding the limitations, this study focused primarily on ChatGPT, limiting the understanding of how various AI tools influence PSTs’ engagement and practices. Exploring the combined and comparative use of multiple AI tools could provide a broader perspective on AI literacy development in teacher education. Additionally, while the study emphasizes participants’ use of AI, there was an insufficient emphasis on fostering explicit critical reflection regarding the biases, stereotypes and hegemony inherent in AI-generated outputs, highlighting the need for more robust pedagogical guidance to equip PSTs with the skills to critically evaluate and navigate these issues.

Rather than seeking demographic representativeness, this qualitative inquiry emphasizes transferability and trustworthiness, making multifaceted contributions to the fields of language teacher education, AI literacy, and ELT. First, it offers a practical solution for developing AI literacies in PST education programs, emphasizing the role of guided, hands-on prompt engineering and critical practices. Second, it sheds light on AI’s empowering potential by illustrating how ChatGPT can be leveraged to bridge the gap between perceived linguistic inadequacies and professional legitimacy for NNETs. Meanwhile, it also underscores the importance of critically guiding PSTs to identify and address the potential linguistic hegemony embedded in AI-generated outputs. Third, the study illuminates the democratizing potential of AI, fostering a more equitable environment for professional identity development in traditionally hierarchical educational settings. Pedagogically, a holistic approach to AI integration in language education can be adopted to maximize AI’s technological benefits (Guan et al. 2025b). Future studies may also consider analysing human-AI interactions using discourse analysis or exploring pedagogical endeavours to develop PSTs’ critical digital literacy involving AI (Darvin 2025). Taken together, this study lays the groundwork for future research on AI’s transformative impact on PSTs’ professional development.


Corresponding author: Yue Zhang, The Education University of Hong Kong, B4-1/F-21, Hong Kong SAR, China, E-mail:

About the authors

Yilin You

Yilin You obtained her master’s degree in linguistics and applied linguistics from the School of English and International Studies, Beijing Foreign Studies University. Her research interests include identity and investment in language learning, social inequality, Global Englishes, and AI literacies. She is the recipient of the 2024 AILA Solidarity Award from the International Association of Applied Linguistics (AILA).

Yue Zhang

Yue Zhang is an Assistant Professor in the Department of English Language Education, The Education University of Hong Kong. She received her M.Phil. and Ph.D. in Applied English Linguistics at The Chinese University of Hong Kong. Her research focuses on language identity and investment, critical pedagogy, CALL, and AI literacies. She has published in Asia-Pacific Journal of Teacher Education, ELT Journal, ReCALL, RELC, Journal of Language, Identity & Education, Language Awareness, Computer-Assisted Language Learning, Journal of Multilingual and Multicultural Development, Computers and Education: Artificial Intelligence, TESOL Quarterly, System, and Research Methods in Applied Linguistics.

Acknowledgments

We would like to thank the participants involved in this study.

  1. Informed consent: Informed consent was obtained from all individuals included in this study.

  2. Author contributions: All authors have accepted responsibility for the entire content of this manuscript and approved its submission.

  3. Competing interests: Authors state no conflict of interest.

  4. Research funding: This project is funded by the Beijing-Hong Kong Universities Alliance (BHUA) Fund 2024-2025 (University Grants Committee, Hong Kong) and The Education University of Hong Kong and the Institute of Education, University of London collaborative fund (EdUHK-IOE Collaboration Seed Fund Project). The funding organization(s) played no role in the study design; in the collection, analysis, and interpretation of data; in the writing of the report; or in the decision to submit the report for publication.

  5. Ethical approval: The research involving human participants complied with all relevant national regulations and institutional policies, was conducted in accordance with the tenets of the Helsinki Declaration, and has been approved by the authors’ Institutional Review Board.

Appendix A: Task instructions

Individually, identify one potential topic for your group review article. Then complete the following steps:

  1. Plan-Prompt-Preview: follow the Plan-Prompt-Preview procedure using ChatGPT to generate an 800-word review of two theoretical articles and two empirical studies. You may check the output using Google Scholar and submit the outcome to both your journal and the Moodle forum (L3).

  2. Produce: use ChatGPT and one more GenAI tool to generate content for your group review.

  3. Peer review: polish your part and discuss with your group members to ensure the quality and accuracy of the content and supporting references.

  4. Portfolio-tracking: reflect on your group production and submit a group reflection on the process.
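For illustration only, the following minimal Python sketch shows how the Plan-Prompt-Preview cycle above could, in principle, be scripted rather than carried out in the ChatGPT web interface. It is not part of the original task materials: it assumes the OpenAI Python SDK (v1.x), an OPENAI_API_KEY environment variable, and a hypothetical model name and topic; participants in the study worked directly in the ChatGPT interface.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Plan: the student fixes a topic and states what the review must contain.
topic = "Translanguaging in EFL writing feedback"  # hypothetical topic
plan_prompt = (
    "I am an M.Ed. student preparing an 800-word review of two theoretical "
    f"articles and two empirical studies on {topic}. "
    "Suggest candidate sources that I will verify on Google Scholar."
)

# Prompt: send the planned prompt to the model.
messages = [{"role": "user", "content": plan_prompt}]
first = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
suggestions = first.choices[0].message.content
print(suggestions)

# Preview: after verifying sources, the student follows up with a refined,
# more specific prompt before drafting and post-editing the review.
messages += [
    {"role": "assistant", "content": suggestions},
    {"role": "user", "content": (
        "Using only the two theoretical and two empirical sources I have "
        "verified, draft an 800-word review in an academic register suitable "
        "for a course journal."
    )},
]
second = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(second.choices[0].message.content)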

Appendix B: Coding scheme

Theme: AI literacies development
  Sub-theme: Pre-course beliefs and attitudes about AI
    Codes: Unfamiliarity; Resistant attitude; Concerns for AI ethics; Traditional assumptions about legitimate AI use
  Sub-theme: Recognition of AI strengths
    Codes: Convenience; Rich content with diversified perspectives; Quick response (generate ideas in an instant); Language quality and English proficiency; High efficiency without burnout; Non-hierarchical help-seeking experience
  Sub-theme: Recognition of AI limitations
    Codes: Irrelevance of content; Too advanced vocabulary; Inappropriate genre; Low-quality academic content; Requiring human oversight; Cannot handle complicated content; Requiring more time in post-editing; Inconvenience caused by equipment use; Limited application scenarios
  Sub-theme: Prompt engineering
    Codes: Growing awareness of the importance of prompts; Enhanced ability to write targeted, appropriate, specific prompts; Effective cultivation of prompt engineering ability under a guided, problem-solving pedagogical context
  Sub-theme: Critical practices and strategic use of AI
    Codes: Selective adoption of ChatGPT output; Contextual adaptation of ChatGPT output; Follow-up AI interaction to tailor content focus; Strategically use AI to “form the basis”; Strategically use AI for writing; Critical awareness of how to engage with ChatGPT output; Independent judgment of the relevance and correctness of ChatGPT output

Theme: Identity negotiation
  Sub-theme: Challenging existing identities
    Codes: Increased feelings of inadequacy in English skills; Teacher identity crisis; Anxiety about diminished professional expertise
  Sub-theme: Emerging identities
    Codes: Educator with AI-enhanced expertise; Mentors who cultivate students’ discernment of AI-generated content

Theme: Accumulated capital
  Sub-theme: Linguistic capital
    Codes: Native-like English-language output; Reduced language burden as English teachers; Decreased feeling of linguistic insecurity
  Sub-theme: Technical capital
    Codes: Growing mastery of AI tools in the AI age; Enhanced prompt engineering skills
  Sub-theme: Cultural capital
    Codes: Critical understanding of AI use; Discernment of AI-generated outputs; Adaptive strategies of AI use for pedagogical purposes; Reconsideration of teachers’ role

Theme: Power structures and ideological contexts
  Sub-theme: Dominant language ideologies
    Codes: Native-speakerism in ELT; Performance-oriented tradition in English teaching and learning; Language teacher expertise as intertwined with native-like language proficiency
  Sub-theme: Teacher-student relationship
    Codes: Concerns about asking teachers for help; Judgment-free interaction experience with AI

References

Adamson, Bob. 2004. China’s English: A history of English in Chinese education. Hong Kong: Hong Kong University Press.

Aneja, Geeta A. 2016. (Non) native speakered: Rethinking (non) nativeness and teacher identity in TESOL teacher education. TESOL Quarterly 50(3). 572–596. https://doi.org/10.1002/tesq.315.

Ayanwale, Musa A., Owolabi P. Adelana, Rethabile R. Molefi, Olalekan Adeeko & Adebayo M. Ishola. 2024. Examining artificial intelligence literacy among pre-service teachers for future classrooms. Computers and Education Open 6. 100179. https://doi.org/10.1016/j.caeo.2024.100179.

Ayanwale, Musa A., Ismaila T. Sanusi, Owolabi P. Adelana, Kehinde D. Aruleba & Solomon S. Oyelere. 2022. Teachers’ readiness and intention to teach artificial intelligence in schools. Computers and Education: Artificial Intelligence 3. 100099. https://doi.org/10.1016/j.caeai.2022.100099.

Bavlı, Bünyamin. 2023. Learning from online learning journals (OLJs): Experiences of postgraduate students. Interactive Learning Environments 31(10). 7040–7052. https://doi.org/10.1080/10494820.2022.2061005.

Beauchamp, Catherine & Lynn Thomas. 2009. Understanding teacher identity: An overview of issues in the literature and implications for teacher education. Cambridge Journal of Education 39(2). 175–189. https://doi.org/10.1080/03057640902902252.

Bernat, Eva. 2008. Towards a pedagogy of empowerment: The case of ‘impostor syndrome’ among pre-service non-native speaker teachers in TESOL. English Language Teacher Education and Development 11(1). 1–8.

Bourdieu, Pierre. 1986. The forms of capital. In John Richardson (ed.), Handbook of theory and research for the sociology of education, 241–258. New York: Greenwood Press.

Collins, James & Richard Blot. 2003. Literacy and literacies: Texts, power, and identity. Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9780511486661.

Creese, Angela, Adrian Blackledge & Jaspreet K. Takhi. 2014. The ideal ‘native speaker’ teacher: Negotiating authenticity and legitimacy in the language classroom. The Modern Language Journal 98(4). 937–951. https://doi.org/10.1111/modl.12148.

Curran, Nathaniel M. 2023a. “More like a friend than a teacher”: Ideal teachers and the gig economy for online language learning. Computer Assisted Language Learning 36(7). 1288–1308. https://doi.org/10.1080/09588221.2021.1976801.

Curran, Nathaniel M. 2023b. Discrimination in the gig economy: The experiences of Black online English teachers. Language and Education 37(2). 171–185. https://doi.org/10.1080/09500782.2021.1981928.

Dai, David W. & Zhu Hua. 2025. When AI meets intercultural communication: New frontiers, new agendas. Applied Linguistics Review 16(2). 747–751. https://doi.org/10.1515/applirev-2024-0185.

Darvin, Ron. 2024. Critical digital literacies. In Encyclopedia of Applied Linguistics, vol. Literacy, 2nd ed. Hoboken, New Jersey: Wiley. Available at: https://www.researchgate.net/publication/384902958_Critical_digital_literacies (Epub ahead of print).

Darvin, Ron. 2025. The need for critical digital literacies in generative AI-mediated L2 writing. Journal of Second Language Writing 67. 101186. https://doi.org/10.1016/j.jslw.2025.101186.

Darvin, Ron & Christoph Hafner. 2022. Digital literacies in TESOL: Mapping out the terrain. TESOL Quarterly 56(3). 865–882. https://doi.org/10.1002/tesq.3161.

Darvin, Ron & Bonny Norton. 2023. Investment and motivation in language learning: What’s the difference? Language Teaching 56(1). 29–40. https://doi.org/10.1017/s0261444821000057.

De Roock, Roberto S. 2024. To become an object among objects: Generative artificial “intelligence,” writing, and linguistic white supremacy. Reading Research Quarterly 59(4). 590–608. https://doi.org/10.1002/rrq.569.

Du, Hua, Yanchao Sun, Haozhe Jiang, A. Y. M. Atiquil Islam & Xiaoqing Gu. 2024. Exploring the effects of AI literacy in teacher learning: An empirical study. Humanities and Social Sciences Communications 11(1). 1–10. https://doi.org/10.1057/s41599-024-03101-6.

Duff, Patricia A. 2014. Case study research on language learning and use. Annual Review of Applied Linguistics 34. 233–255. https://doi.org/10.1017/s0267190514000051.

Education Bureau. 2015. Information technology in education in Hong Kong. Available at: https://www.edb.gov.hk/attachment/en/edu-system/primary-secondary/applicable-to-primary-secondary/it-in-edu/Policies/4th_consultation_eng.pdf.

Education Bureau. 2023. Curriculum modules on innovation and technology education. Circular memorandum, No. 39/2021. Available at: https://applications.edb.gov.hk/circular/upload/EDBCM/EDBCM23109E.pdf.

ElSayary, Areej. 2024. An investigation of teachers’ perceptions of using ChatGPT as a supporting tool for teaching and learning in the digital era. Journal of Computer Assisted Learning 40(3). 931–945. https://doi.org/10.1111/jcal.12926.

Fong, Emily T. Y. 2021. English in China: Language, identity and culture. London: Routledge. https://doi.org/10.4324/9781003001225.

Gee, James P. 2000. Identity as an analytic lens for research in education. Review of Research in Education 25. 99–125. https://doi.org/10.2307/1167322.

Ghiasvand, Farhad & Haniye Seyri. 2025. A collaborative reflection on the synergy of Artificial Intelligence (AI) and language teacher identity reconstruction. Teaching and Teacher Education 160. 105022. https://doi.org/10.1016/j.tate.2025.105022.

Ghimire, Asmita. 2025. Utilizing ChatGPT to integrate world English and diverse knowledge: A transnational perspective in critical artificial intelligence (AI) literacy. Computers and Composition 75. 102913. https://doi.org/10.1016/j.compcom.2024.102913.

Gonzales, Wilkinson D. W. & Yue Zhang. 2025. Reshaping language learners’ languaging habitus: A world-Englishes-informed critical pedagogy. RELC Journal 1–22. https://doi.org/10.1177/00336882251313702.

Guan, Lihang, John C.-K. Lee, Yue Zhang & Mingyue M. Gu. 2025b. Investigating the tripartite interaction among teachers, students, and generative AI in EFL education: A mixed-methods study. Computers and Education: Artificial Intelligence 8. 100384. https://doi.org/10.1016/j.caeai.2025.100384.

Guan, Lihang, Yue Zhang & Mingyue M. Gu. 2024. Examining generative AI-mediated informal digital learning of English practices with social cognitive theory: A mixed-methods study. ReCALL 1–17. https://doi.org/10.1017/S0958344024000259 (Epub ahead of print).

Guan, Lihang, Yue Zhang & Mingyue M. Gu. 2025a. Pre-service teachers’ preparedness for AI-integrated education: An investigation from perceptions, capabilities, and teachers’ identity changes. Computers and Education: Artificial Intelligence 8. 100341. https://doi.org/10.1016/j.caeai.2024.100341.

Guo, Lin. 2022. Using metacognitive prompts to enhance self-regulated learning and learning outcomes: A meta-analysis of experimental studies in computer-based learning environments. Journal of Computer Assisted Learning 38(3). 811–832. https://doi.org/10.1111/jcal.12650.

Holliday, Adrian. 2006. Native-speakerism. ELT Journal 60(4). 385–387. https://doi.org/10.1093/elt/ccl030.

Hong, Wilson C. H. 2023. The impact of ChatGPT on foreign language teaching and learning: Opportunities in education and research. Journal of Educational Technology and Innovation 5(1). 2790–2110. https://doi.org/10.61414/jeti.v5i1.103.

Huang, Xinyi, Di Zou, Gary Cheng, Xieling Chen & Haoran Xie. 2023. Trends, research issues and applications of artificial intelligence in language education. Educational Technology & Society 26(1). 112–131.

Hwang, Yohan, Jang Ho Lee & Dongkwang Shin. 2023. What is prompt literacy? An exploratory study of language learners’ development of new literacy skill using generative AI. arXiv: 2311.05373. https://doi.org/10.48550/arXiv.2311.05373.

Ji, Hyangeun, Insook Han & Yujung Ko. 2022. A systematic review of conversational AI in language education: Focusing on the collaboration with human teachers. Journal of Research on Technology in Education 55(1). 48–63. https://doi.org/10.1080/15391523.2022.2142873.

Jiang, Lianjiang & Michelle M. Gu. 2023. Toward a professional development model for critical digital literacies in TESOL. TESOL Quarterly 56(3). 1029–1040. https://doi.org/10.1002/tesq.3138.

Johnson, Karen E. 2009. Second language teacher education: A sociocultural perspective. London: Routledge. https://doi.org/10.4324/9780203878033.

Kandlhofer, Martin, Gerald Steinbauer, Sabine Hirschmugl-Gaisch & Petra Huber. 2016. Artificial intelligence and computer science in education: From kindergarten to university. In 2016 IEEE Frontiers in education conference (FIE). Erie, PA, USA: IEEE Press. https://doi.org/10.1109/FIE.2016.7757570.

Kanno, Yasuko & Christian Stuart. 2011. Learning to become a second language teacher: Identities-in-practice. The Modern Language Journal 95(2). 236–252. https://doi.org/10.1111/j.1540-4781.2011.01178.x.

Kramsch, Claire. 2021. Language as symbolic power. Cambridge: Cambridge University Press. https://doi.org/10.1017/9781108869386.

Lan, Yanzhen. 2024. Through tensions to identity-based motivations: Exploring teacher professional identity in Artificial Intelligence-enhanced teacher training. Teaching and Teacher Education 151. 104736. https://doi.org/10.1016/j.tate.2024.104736.

Leander, Kevin M. & Sarah K. Burriss. 2020. Critical literacy for a posthuman world: When people read, and become, with machines. British Journal of Educational Technology 51(4). 1262–1276. https://doi.org/10.1111/bjet.12924.

Li, Jian & Jin-Song Huang. 2020. Dimensions of artificial intelligence anxiety based on the integrated fear acquisition theory. Technology in Society 63. 101410. https://doi.org/10.1016/j.techsoc.2020.101410.

Liang, Jia-Cing, Gwo-Jen Hwang, Mei-Rong A. Chen & Darmawansah Darmawansah. 2023. Roles and research foci of artificial intelligence in language education: An integrated bibliographic analysis and systematic review approach. Interactive Learning Environments 31(7). 4270–4296. https://doi.org/10.1080/10494820.2021.1958348.

Liu, Guangxiang L., Ron Darvin & Chaojun Ma. 2024. Exploring AI-mediated informal digital learning of English (AI-IDLE): A mixed-method investigation of Chinese EFL learners’ AI adoption and experiences. Computer Assisted Language Learning 1–29. https://doi.org/10.1080/09588221.2024.2310288.

Loftus, Mary & Michael G. Madden. 2020. A pedagogy of data and Artificial Intelligence for student subjectification. Teaching in Higher Education 25(4). 456–475. https://doi.org/10.1080/13562517.2020.1748593.

Long, Duri & Brian Magerko. 2020. What is AI literacy? Competencies and design considerations. In Proceedings of the 2020 CHI conference on human factors in computing systems. Honolulu, HI, USA: Association for Computing Machinery, Inc. https://doi.org/10.1145/3313831.3376727.

Ma, Qing, Peter Crosthwaite, Daner Sun & Di Zou. 2024. Exploring ChatGPT literacy in language education: A global perspective and comprehensive approach. Computers and Education: Artificial Intelligence 7. 100278. https://doi.org/10.1016/j.caeai.2024.100278.

Markus, Hazel R. & Shinobu Kitayama. 1991. Culture and the self: Implications for cognition, emotion, and motivation. Psychological Review 98(2). 224–253. https://doi.org/10.1037/0033-295x.98.2.224.

Mei, Yunbo. 2024. Navigating linguistic ideologies and market dynamics within China’s English language teaching landscape. Linguistics Vanguard 10(1). 439–451. https://doi.org/10.1515/lingvan-2024-0024.

Ng, Davy T. K., Jac K. L. Leung, Kai W. S. Chu & Maggie S. Qiao. 2021a. AI literacy: Definition, teaching, evaluation and ethical issues. Proceedings of the Association for Information Science and Technology 58(1). 504–509. https://doi.org/10.1002/pra2.487.

Ng, Davy T. K., Jac K. L. Leung, Kai W. S. Chu & Maggie S. Qiao. 2021b. Conceptualizing AI literacy: An exploratory review. Computers and Education: Artificial Intelligence 2. 100041. https://doi.org/10.1016/j.caeai.2021.100041.

Norton, Bonny. 2016. Learner investment and language teacher identity. In Gary Barkhuizen (ed.), Reflections on language teacher identity research, 80–86. New York: Routledge.

Norton, Bonny & Kelleen Toohey. 2011. Identity, language learning, and social change. Language Teaching 44(4). 412–446. https://doi.org/10.1017/s0261444811000309.

Park, Gloria. 2012. “I am never afraid of being recognized as an NNES”: One teacher’s journey in claiming and embracing her nonnative-speaker identity. TESOL Quarterly 46(1). 127–151. https://doi.org/10.1002/tesq.4.

Pennington, Martha C. & Jack C. Richards. 2016. Teacher identity in language teaching: Integrating personal, contextual, and professional factors. RELC Journal 47(1). 5–23. https://doi.org/10.1177/0033688216631219.

Rajak, Leena, Sangeeta Chauhan & Sonu Bara. 2024. Transforming English pedagogy with Artificial Intelligence: Enroute to enhanced language learning. In Tahmeena Khan, Manisha Singh & Saman Raza (eds.), Artificial intelligence: A multidisciplinary approach towards teaching and learning, 216–241. Singapore: Bentham Science Publishers. https://doi.org/10.2174/9789815305180124010013.

Song, Juyoung. 2016. Emotions and language teacher identity: Conflicts, vulnerability, and transformation. TESOL Quarterly 50(3). 631–654. https://doi.org/10.1002/tesq.312.

Sperling, Katarina, Carl-Johan Stenberg, Cormac McGrath, Anna Åkerfeldt, Fredrik Heintz & Linnéa Stenliden. 2024. In search of artificial intelligence (AI) literacy in teacher education: A scoping review. Computers and Education Open 6. 100169. https://doi.org/10.1016/j.caeo.2024.100169.

Tang, Lin & Yu-Sheng Su. 2024. Ethical implications and principles of using artificial intelligence models in the classroom: A systematic literature review. International Journal of Interactive Multimedia and Artificial Intelligence 8(5). 25–36. https://doi.org/10.9781/ijimai.2024.02.010.

Tsou, Wenli, Angel M. Y. Lin & Fay Chen. 2024. Co-journeying with ChatGPT in tertiary education: Identity transformation of EMI teachers in Taiwan. Language, Culture and Curriculum 37(4). 529–543. https://doi.org/10.1080/07908318.2024.2362326.

Yuan, Rui & Icy Lee. 2015. The cognitive, social and emotional processes of teacher identity construction in a pre-service teacher education programme. Research Papers in Education 30(4). 469–491. https://doi.org/10.1080/02671522.2014.932830.

Zaman, Samina, Muhammad S. Hussain & Memoona Tabassam. 2024. Use of artificial intelligence in education: English language teachers’ identity negotiation in higher education. Journal of Asian Development Studies 13(3). 861–869. https://doi.org/10.62345/jads.2024.13.3.70.

Zhang, Yue. 2023. L2 investment and techno-reflective narrative interviews. TESOL Quarterly 58(2). 991–1006. https://doi.org/10.1002/tesq.3211.

Zhang, Yue. 2024a. Researching L2 investment in EMI courses: Techno-reflective narrative interviews. Research Methods in Applied Linguistics 3(2). 100115. https://doi.org/10.1016/j.rmal.2024.100115.

Zhang, Yue. 2024b. Investing in and divesting from learning-to-teach practices: A critical ethnography of a teacher of English in China. Asia-Pacific Journal of Teacher Education 53(2). 153–171. https://doi.org/10.1080/1359866x.2024.2431021.

Zhang, Yue. in press. Novice non-native English-speaking teacher investment: Identities, positioning and agency. In Mostafa Nazari (ed.), Novice non-native English language teachers navigating agency: International perspectives. New York: Routledge.

Zhang, Yue & Ron Darvin. 2025. Negotiating gender ideologies and investing in teacher identities: The motivation and investment of EFL pre-service teachers. System 1–15. https://doi.org/10.1016/j.system.2025.103669 (Epub ahead of print).

Zhang, Yue & Wilkinson D. W. Gonzales. 2025. Investing in critical digital literacies: A case study of university students in Hong Kong. In AAAL 2025. Denver: American Association for Applied Linguistics.

Zhang, Yue & Jing Huang. 2024. Learner identity and investment in EFL, EMI, and ESL contexts: A longitudinal case study of one pre-service teacher. Journal of Language, Identity and Education 1–14. https://doi.org/10.1080/15348458.2024.2318423 (Epub ahead of print).

Zhang, Yue & Wilkinson D. W. Gonzales. 2024. World Englishes pedagogy: Constructing learner identity. ELT Journal 1–12. https://doi.org/10.1093/elt/ccae053 (Epub ahead of print).

Zhong, Yuchun, Davy T. K. Ng & Samuel K. W. Chu. 2023. Exploring the social media discourse: The impact of ChatGPT on teachers’ roles and identity. In Proceedings of the 31st international conference on computers in education. Matsue, Shimane, Japan: Asia-Pacific Society for Computers in Education.

Zimmerman, Barry J. 2002. Becoming a self-regulated learner: An overview. Theory Into Practice 41(2). 64–70. https://doi.org/10.1207/s15430421tip4102_2.

Received: 2025-02-02
Accepted: 2025-05-07
Published Online: 2025-06-16

© 2025 the author(s), published by De Gruyter and FLTRP on behalf of BFSU

This work is licensed under the Creative Commons Attribution 4.0 International License.
