Article Open Access

Enhancing EAP students’ academic writing and AI literacies: practice at a transnational university in China

  • Huimin He

    Huimin He, Language Lecturer in EAP at the English Language Centre, Xi’an Jiaotong-Liverpool University, is currently the deputy module leader of a Y1 EAP course with over 500 students. She has been teaching EAP at XJTLU for over five years and has received multiple grants for conducting research on GenAI for EAP teaching and learning. Her papers have been presented at international conferences and published in Frontiers in Psychology and English for Academic Purposes in the EMI Context in Asia. Her research interests include computer-assisted language learning, second language academic writing and EAP pedagogy. She received an MA in applied linguistics from Sun Yat-sen University and is a fellow of the Higher Education Academy.

    and Joseph Tinsley

    Joseph Tinsley is an Educational Developer within the Educational Development Unit, Academy of Future Education, Xi’an Jiaotong-Liverpool University. At XJTLU he helps to deliver the PGC402 Pedagogic Research to Enhance Professional Practice module for the Postgraduate Certificate in Learning and Teaching in Higher Education and coordinates the Assessment and Feedback Community of Practice. He has worked as a teacher, curriculum developer and teacher trainer at a range of transnational institutions in China over the past 10 years. His interests lie in dialogic feedback, feedback literacy, and discipline-specific assessment and feedback practices. He has an MSc in Educational Research and is a fellow of the Higher Education Academy.

Published/Copyright: May 29, 2025

Abstract

As Generative AI is increasingly used by language learners for academic writing in higher education, it becomes essential for educators to integrate AI literacies into the writing curriculum to guide students to use Generative AI effectively, critically and ethically. We aim to share our practice in relation to AI literacy training for English academic writing among EAP learners studying at a joint-venture EMI university in China. Specifically, learning activities were designed and implemented for students to understand GenAI’s advantages and pitfalls, engineer effective prompts, enhance assessment literacies, exercise critical thinking and safeguard academic integrity during the process of using GenAI for academic writing. In addition, the pedagogical considerations and rationale behind the practice are illustrated. Finally, the article concludes with reflections on the practice and further suggestions for delivering AI literacy training for language learning purposes.

1 Introduction

The development of Generative AI (GenAI) has led to new possibilities in the teaching of English for Academic Purposes (EAP). Given the centrality of academic writing to EAP teaching and assessment, GenAI is likely to have a significant impact on the discipline. Unlike other tools which give limited corrections or translations, GenAI tools are able to produce large amounts of plausibly academic text in real time. The ease with which students and teachers can produce such texts will likely influence the way academic writing processes are taught and learned.

Academic writing (AW) is a complex task which involves a range of competencies such as idea generation, idea development, responding to feedback and revising drafts (Flower and Hayes 1981). Although these competencies are challenging for any writer, they are perhaps more so for second language learners (dos Santos et al. 2023; Zhang 2023). The academic writing process is especially challenging as writing is an epistemic tool which allows writers to construct and develop knowledge patterns (Chen 2019). Essentially, AW enhances writers’ thinking, reasoning and understanding of knowledge. The skills required for EAP AW, such as summarising, paraphrasing and synthesising, are those which GenAI is able to imitate quickly and accurately (Ngo and Hastie 2025). Many of the tasks performed by students and assessed in AW courses are those which could easily be replicated by GenAI tools, particularly as GenAI chatbots are “all-in-one” tools able to assist with all stages of the writing process at once (Yan 2023).

However, the benefits of EAP AW courses for students are not just in enhancing linguistic or writing skills, but in enhancing their thinking and reasoning competencies as mentioned above. This presents challenges for EAP AW teaching as students may be unaware of the limitations of GenAI, or may use GenAI tools uncritically (Pack and Maloney 2024; Walter 2024). Although it may be desirable for students to use GenAI to contribute to their own learning (Kim et al. 2024; Rowland 2023), the challenge for EAP is facilitating this usage in a way that allows practitioners to develop and assess students’ linguistic skills (Roe et al. 2024). Based on this, we decided to implement a number of interventions to explore and enhance students’ AI literacies in an EAP AW context. Although the potential risks of using GenAI are well documented (Farrokhnia et al. 2023), we recognised the need to leverage the educational affordances of GenAI tools (Kostka and Toncelli 2023) and the need to reflect on classroom usage of GenAI inclusively with students (Farrokhnia et al. 2023). Additionally, in our context as a Sino-British EMI institution, it is crucial for students to demonstrate critical AI literacies within EAP classes in their first year at university, as these are competencies students will need throughout their degree courses, particularly in relation to disciplinary AW, and in future academic and professional contexts (Chan 2023).

Although previous studies among EAP students revealed that students are aware of the affordances and disadvantages of using GenAI and of the need for interventions to enhance students’ AI literacies (Du and Alm 2024; Liu et al. 2024), few empirical studies with targeted interventions have been conducted in EAP settings. A key empirical study (Ngo and Hastie 2025) found that targeted teaching and curriculum interventions led to increased AI literacies among students on an EAP pre-sessional course in the UK. Specifically, GenAI interventions enhanced students’ confidence in using GenAI and deepened their understanding of GenAI. Our study aims to determine whether similar benefits can be observed in a foundation-year EAP course at a Sino-British institution in China.

2 Teaching context

The practices described in this article were implemented with forty first-year Chinese undergraduate students at a Sino-British EMI university. The students’ overall English language proficiency was CEFR B1+ when they enrolled on the EAP course, where their lecturer trained them in how to use GenAI for academic writing. Specifically, the lecturer made minor adaptations to the existing writing curriculum by integrating training on EAP-specific AI literacies, which may or may not have been considered or delivered by other lecturers on the same module or course. This was because GenAI application to higher education (HE) was still at an early phase at the time: although general institutional policies planned the implementation of GenAI within certain modules and departments, there were not yet specific guidelines or a clear stance on incorporating GenAI into the EAP modules or their teaching and assessment materials. However, since GenAI tools such as ChatGPT became available, many students were already trying and using them to help with their academic assignments, to varying extents and with differing levels of appropriateness. It was therefore essential for teachers on different modules to start exploring best practices for teaching AI literacies specific to their disciplines or subjects, to ensure ethics, fairness and quality.

In the context of our EAP module, the students were required to complete a source-based discursive essay discussing the advantages and disadvantages of a provided topic, including a formative first draft and a final assessed draft. Since the EAP course took an integrated approach, covering reading, listening, writing and speaking skills, the time devoted to writing was about one third of the total class time of six contact hours weekly.
Accordingly, the class time used for training on AI literacies for academic writing was about 20–30 min per week, given the need to deliver other content on writing processes and techniques. In the designated lesson time, the lecturer used self-produced PowerPoint slides, a worksheet and the institution’s GenAI platform, XIPU AI, to introduce, demonstrate and guide the use of GenAI in helping students with their first drafts. Students had the chance to work on their drafts with GenAI, reflect on the process and receive feedback from their peers and lecturer on their use of GenAI.

3 Theoretical framework

In this project, we aimed to explore and ultimately enhance students’ AI literacies. Although various definitions of AI literacies have been proposed (Ng et al. 2021), we understand these as not just the skills students need to have to make use of GenAI tools, but also the competencies necessary for critically evaluating and using GenAI tools in an ethical and responsible way.

Although AI literacies have been conceptualised as a subset of digital or information literacies (Tinmaz et al. 2022), AI literacies are beginning to emerge as a distinct concept (Ng et al. 2021). This is driven in part by greater recognition of the benefits of GenAI for teaching and learning (Kim et al. 2024; Rowland 2023), and also by a recognition of the need to prepare graduates for future employment and academic contexts in which they will be expected to be proficient in using GenAI (Rowland 2023). If certain tasks can be easily automated by GenAI tools, students need to be able to evaluate the quality of automated outputs (Bearman et al. 2023) and make judgements about how and when to automate tasks (Dawson 2020).

Many attempts at creating AI literacy scales are mapped to the levels of Bloom’s Taxonomy (Almatrafi et al. 2024; Ng et al. 2021; Weber et al. 2023). One of the most widely studied is the framework by Ng et al. (2021), which divides AI literacies into four aspects: (1) know and understand AI, (2) use and apply AI, (3) evaluate and create AI and (4) AI ethics. We attempted to relate the four aspects of Ng et al.’s (2021) model to EAP AW, and specifically to our students as foundation year EAP students at a transnational institution in China (see Figure 1). We found that these particular aspects of EAP AW were challenging for students even before the emergence of GenAI, and the widespread availability and ease of use of GenAI tools have intensified these challenges. There also seemed to be some overlap between EAP AW literacies and AI literacies, as GenAI is a language-based medium, and engaging in dialogue with a chatbot draws on many EAP AW skills such as revision, paraphrasing and summarising (Ngo and Hastie 2025).

Figure 1: 
Applying Ng et al.’s (2021) AI literacy framework to an EAP AW curriculum.

3.1 Prompt writing for EAP AW

A key aim of EAP AW courses is for students to gain an awareness of different genres and discourse patterns. Students often enter AW courses with a “one size fits all” approach to writing tasks which does not take genre conventions or task appropriateness into consideration. Students also tend to be unfamiliar with process writing as many high school writing assessments are short, timed writing tasks conducted in exam conditions. Students may therefore attempt to apply familiar strategies to the range of academic writing tasks they are expected to complete at university. In the same way that writing an essay without planning, developing or generating ideas is unlikely to lead to the production of a task-appropriate response, copying an AW writing prompt into a GenAI tool without careful prompting will likely lead to an output which is unsatisfactory (Ngo and Hastie 2025). Effective prompting, as with academic writing, requires students to consider the audience, purpose and style of a desired text (Rowland 2023). Given the commonalities between AW and AI prompting, we aimed to utilise activities using GenAI tools to explore these competencies with students. These activities were chosen to align with the “know and understand AI” and “use and apply AI” aspects in the framework by Ng et al. (2021).

3.2 Enacting EAP AW assessment and feedback literacies

A further challenge we had experienced was in helping students understand learning outcomes and rubrics, or enhancing their assessment literacies more broadly (Rust et al. 2005). Due to a lack of familiarity with EAP rubrics, our students initially produced writing in the style of a high school essay or an IELTS writing task. After exposure to and engagement with AW rubrics and learning outcomes, students produced more task-appropriate texts. However, before properly engaging with marking rubrics and learning outcomes, there is a danger that students may assume a GenAI tool can automatically produce a task-specific response of appropriate quality. We aimed to encourage dialogue between students, artefacts, and GenAI to enhance students’ assessment literacies. These dialogues around standards and rubrics were chosen to align with the “know and understand AI” and “use and apply AI” aspects in the framework by Ng et al. (2021).

3.3 Critical thinking for EAP AW

Our EAP AW module, as part of a UK-style HE course, sets significant expectations for the level of criticality and autonomy students are required to demonstrate. Students who are used to a more teacher-led style of teaching and feedback can initially find adapting to a more student-led style of education challenging. There is the danger that students could rely on GenAI to avoid some of these challenges, but we also recognised that GenAI could potentially help to reduce them, if its use is scaffolded carefully. We aimed to promote student agency by supporting their use of GenAI and providing carefully structured support involving teacher modelling and comparison tasks. Previous studies within EAP settings have suggested that students enjoy critically analysing GenAI output and comparing it to their own compositions (Kostka and Toncelli 2023). We took this as a starting point for building students’ evaluative judgement around both their own AW texts (Tai et al. 2018) and GenAI output (Bearman et al. 2023). These activities aimed to support the third aspect of the framework by Ng et al. (2021): evaluate and create AI.

3.4 Exploring academic integrity within EAP AW

Our EAP AW course is usually students’ first exposure to the concept of academic integrity. In particular, the foundation year is the first time when students have been asked to perform source-based writing tasks in which they use and synthesise texts by different authors. EAP courses are the place where many students are exposed to ideas around academic integrity and are particularly important for establishing habits which students draw on throughout their academic lives (Perkins et al. 2020). Given the ease with which GenAI tools can produce large amounts of coherent and seemingly authentic text, concerns have been raised about the issue of “AI-giarism”. This is where students use GenAI to write entire essays and present these as their own work (Chan 2024). Plagiarism is a difficult concept to define and is influenced by cultural and moral perspectives around authorship and ownership (Howard 1995). Determining how much a GenAI tool has been used in a student submission and how much usage constitutes plagiarism is a challenging task, and institutional policies and approaches will differ (Luo 2024). The reasons why students perform any kind of plagiarism are complex, and intention to deceive is not always the primary motivation (Hu and Lei 2012). As such, we wanted to avoid taking a punitive approach to GenAI and academic integrity and aimed to have an open and inclusive dialogue with students involving the discussion of case studies and our own modelling of appropriate uses of GenAI. This focus on GenAI and academic integrity aligned with the fourth level of the AI literacies model by Ng et al. (2021): AI ethics.

4 Teaching practice and rationale

4.1 Understanding GenAI: opportunities and pitfalls for academic writing

The first practice was to guide students to know and understand AI in the AW context. Before starting university, students may have had some experience using different GenAI platforms, including ChatGPT, for daily and academic purposes. However, few have used them to write an academic essay, and some may not have used GenAI ethically or effectively (Črček and Patekar 2023).

The students were first briefly introduced to GenAI technology, including its history, how it works and common applications. After that, the XIPU AI platform, a GPT-based GenAI tool created by the institution, was introduced to the students, covering access, the interface and basic functions. Meanwhile, students were asked to log in to the platform and interact with it to gain some hands-on experience, especially those who had not previously used any GenAI platform. The EAP lecturer offered basic technical support during the process and invited the students to share their impressions of the platform, particularly regarding problem-solving. A guide produced by the IT support team was shared with the students to help them further explore the tool, including its advanced functions and usage limitations.

After that, it was important for the students to understand what opportunities GenAI could offer for their EAP writing. First and foremost, students should be aware of the purpose of using GenAI for academic writing. Specifically, they should be clear why they were trained to use GenAI for their writing and how GenAI could help them beyond what they could learn from the course and the writing assistance available from non-GenAI tools such as Grammarly and Marking Mate. This is because many studies have found that motivation or willingness is a significant factor influencing users’ intention to adopt GenAI for educational purposes (Chan and Hu 2023). To help students explore the potential benefits, we provided a few possible areas in which they could try asking XIPU AI for help. These areas included brainstorming ideas on a writing topic, structuring a paragraph or essay, receiving feedback and improving language (Pack and Maloney 2024). In the controlled interaction practice, the students were provided with sample texts and prompts designed to elicit more desirable outputs from XIPU AI; they were then asked to compare their own thoughts with the outputs of XIPU AI and those of Grammarly or similar tools, and to discuss the advantages of using GenAI to help with the writing process.

Additionally, the students were asked to reflect on the downsides of using the GenAI tool for academic writing compared with producing the writing entirely by themselves or with the help of classmates or teachers. Possible downsides include inaccurate information (off-topic responses to the writing task), fake or “hallucinated” references, and language below or above the students’ target level. More specifically, one way for them to notice the restrictions or challenges of using GenAI for academic writing was to compare the GenAI outputs with what was taught on the EAP course. For example, the students were tasked with checking whether the structure of a paragraph generated by XIPU AI contained all the structural components taught in class, and whether the paragraph was cohesively organised using the cohesive devices they had learned from the course. At this stage, with the relatively simple sample prompts, students might find that the text produced by GenAI did not meet the requirements of the course; they should therefore act as “gatekeepers” and evaluate carefully before taking up any GenAI-created content.

At the end of the semester, after the students had used and learned more about GenAI, they reported perceiving GenAI as more useful and intelligent for academic writing than before. Meanwhile, they had become more cautious about whether its outputs aligned with the course expectations.

4.2 Prompt engineering for process-based writing

The students were also trained in how to craft effective prompts for different stages of the writing process to save time and generate more desirable outputs (Ngo and Hastie 2025; Walter 2024). The writing process was divided into brainstorming on the essay topic, reading and analysing sources, making an outline, writing the draft, receiving feedback and revising the draft (Flower and Hayes 1981). This aligned with the process-based writing instruction in the existing EAP writing curriculum, so integrating AI literacy training on how to engineer prompts for the various steps above was seamless.

Before the students were taught how to craft prompts specific to receiving assistance in the academic writing process, they were given some general tips on prompting GenAI. We encouraged the students to treat GenAI like a naïve child: be clear and specific when giving a prompt, because vague prompts may lead to vague answers. The general tips we provided are as follows:

  1. Provide XIPU AI with a specific context or background of what you are working on and what you need to help with.

  2. Narrow down requests and questions to focus XIPU AI on the task at hand. Avoid vague prompts such as “improve this paragraph for me” and “identify problems of my essay”. Provide some example outputs if needed.

  3. You may ask follow-up questions since XIPU AI sometimes needs clarification. It may be necessary to have a back-and-forth dialogue with XIPU AI.

  4. Limit the word count of your prompts to 150–250 words since lengthy prompts could confuse XIPU AI. Similarly, you may need to ask XIPU AI to review one paragraph of your essay at a time.

  5. You may need to adjust and refine your prompt (e.g., rephrasing it) based on XIPU AI’s response if your initial prompt does not work.
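To illustrate, tips 1, 2 and 4 above could be operationalised as a simple prompt template. The following Python sketch is our own illustration, not part of the course materials; the function names and the word-count check are assumptions for demonstration purposes only.

```python
def build_prompt(context: str, request: str, example_output: str = "") -> str:
    """Combine background context, a narrowed request and an optional
    example output into one prompt (tips 1 and 2)."""
    parts = [
        f"Context: {context}",
        f"Task: {request}",
    ]
    if example_output:
        parts.append(f"Example of the kind of output I need: {example_output}")
    return "\n".join(parts)


def within_word_limit(prompt: str, high: int = 250) -> bool:
    """Tip 4: flag prompts longer than roughly 250 words, since lengthy
    prompts could confuse the tool. Short prompts are acceptable."""
    return len(prompt.split()) <= high
```

A student might, for instance, build a prompt with the context “Y1 EAP discursive essay on urbanisation” and the task “Suggest three cohesive devices I could use between paragraphs 2 and 3”, then check its length before submitting it to XIPU AI.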

Once the students were familiar with the general protocol of prompting and interacting with GenAI, writing-process-based prompt engineering was implemented, which foregrounds student-centred learning, open and transparent communication practices, and students’ “hands-on” experience using relevant AI tools (Ngo and Hastie 2025). For each stage of the writing process, the students were asked to brainstorm what GenAI could help them with specifically; the lecturer then elicited feedback from the students and provided some suggested areas. The students then tried creating their own prompts to receive outputs from XIPU AI for assistance in the suggested and other areas. For example, in the stage of reading and analysing sources, students practised using XIPU AI to help with source comprehension and analysis, and to identify the main advantages and disadvantages relating to the essay topic in one of the provided source articles.

Sample prompts were not provided at first, so that students could independently craft prompts based on the general tips, review the results and refine their prompts. Subsequently, the students shared their dialogues with GenAI among themselves and agreed on the most effective prompt(s); through this peer learning, they saw how different ways of prompting XIPU AI for the same purpose could lead to a variety of responses. The lecturer then demonstrated a relatively holistic and effective prompt which yielded a satisfactory output, and the students were asked to improve their prompts, where needed, by following the key features of the sample prompt to generate new outputs. By observing and modelling prompts for process-based writing, students were more likely to identify the features of effective interactions with GenAI at different points of the writing process.
After the semester-long training, the learners thought the provided prompt banks were useful but still hoped to enhance their prompt engineering skills for more advanced and spontaneous interaction with GenAI in order to receive desirable outputs.

4.3 Enhancing assessment literacies

The third aspect of our practice was elevating learners’ AW assessment literacies. In order for students to use GenAI more effectively for their academic writing, it is important that they are clear about what the final writing product should look like, which means they need to be familiar with the task specifications and assessment criteria. In that way, students would be able to create targeted prompts which instruct GenAI to generate outputs that are consistent with the course learning outcomes and standards for academic writing (Rust et al. 2005). In addition, with the course-specific AW assessment literacies, students are more likely to consistently make informed evaluative judgement of GenAI outputs (Bearman et al. 2023).

One practice we conducted was providing a feedback checklist, designed based on the marking rubric, for students to self-evaluate their drafts, and asking them to craft prompts incorporating the specific evaluation items in the checklist. First, it was essential that students understood the substance of each checklist item. The lecturer checked the students’ understanding by asking questions and invited them to refer back to the teaching materials. For example, one of the statements in the checklist was “You have used a variety of cohesive features within and between paragraphs”. The students needed to know what “cohesive features” means and how these can be used within and between paragraphs to enhance textual cohesion, which was covered in the course materials. If students were unclear about an item, they were to review the class content or ask for clarification, because they would need to evaluate GenAI’s feedback on this aspect at a later stage.

After that, we asked the students to reflect on and identify their needs for GenAI feedback. They considered which checklist items they would prompt GenAI to give feedback on, and for which items they would trust their self-evaluation more, or find it more efficient to evaluate and correct manually based on what they had learnt. During this process, the students engaged with the assessment criteria in more depth by reviewing the evaluative statements in the checklist and anticipating GenAI’s ability to provide certain kinds of feedback. The students were then invited to engineer prompts for receiving feedback on the items they chose; some sample prompts showed how the checklist statements could be incorporated into prompts. They were also asked to assess whether the feedback provided was aligned with the specific requirements in the marking rubric and the teaching materials, and how they would address the GenAI feedback.
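The mapping from checklist items to targeted prompts can be sketched as follows. This Python fragment is our own illustration: the cohesion statement is quoted from the article’s checklist, while the second checklist item and the prompt wording are hypothetical.

```python
# Checklist statements keyed by the aspect they evaluate. The "cohesion"
# statement comes from the article; "structure" is a hypothetical example.
CHECKLIST = {
    "cohesion": "You have used a variety of cohesive features within and between paragraphs.",
    "structure": "Each body paragraph contains all the structural components taught in class.",
}


def feedback_prompt(draft: str, selected: list[str]) -> str:
    """Build a prompt asking for feedback only on the checklist items the
    student has chosen to delegate to GenAI."""
    criteria = "\n".join(f"- {CHECKLIST[key]}" for key in selected)
    return (
        "Act as an EAP writing tutor. Evaluate the draft below against "
        "these criteria only, quoting specific examples from the draft:\n"
        f"{criteria}\n\nDraft:\n{draft}"
    )
```

Keeping the unselected items out of the prompt mirrors the reflection step described above: students decide which aspects to delegate to GenAI and which to evaluate themselves.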

Another practice was guiding the students to use GenAI to help them comprehend and respond to teacher feedback on their drafts (Carless and Boud 2018). First, the lecturer gave the students time to read through their individual feedback, try to understand it and make a plan for revising their drafts accordingly. During this process, some students had difficulty comprehending and interpreting the teacher feedback or thinking of ways to address it to improve their writing (Winstone et al. 2017). To help with this, the lecturer asked the students to prompt GenAI to explain the feedback they found challenging to understand or to unpack the terminology used in it. For example, “lack of unity” in the feedback seemed abstract and unfamiliar to some students, so they asked GenAI for further explanation in relation to their own writing, requesting specific examples from their drafts that demonstrated the “lack of unity”. For practical reasons, teachers may not be able to provide sufficient examples for a feedback comment, but asking GenAI to locate text in the draft corresponding to the teacher feedback facilitates students’ understanding of the feedback and provides a reference for how they could make revisions. After fully comprehending the feedback, students also used GenAI for suggestions on revision, especially when they were unsure how to make changes according to the feedback. When doing this, the students were reminded to keep control over the whole revision process and to evaluate GenAI outputs before adopting any of its suggestions. The students were also encouraged to use GenAI to receive extra feedback on their revisions to supplement the teacher feedback.

4.4 Critical thinking

In addition, we trained the students to critically evaluate GenAI outputs for AW. Faced with such a powerful tool, learners are likely to become dependent on it, especially for assignments or assessments they are required to complete after class at their own pace. However, over-relying on the tool to produce writing may not help EAP learners, especially EFL learners, to learn the target language and make progress (Barrot 2023). Therefore, we offered learning opportunities where the students could enhance their critical use of GenAI by analysing, reflecting on and improving GenAI writing outputs.

One example learning activity guided the students to critically interact with and reflect on GenAI’s responses to a prompt which required it to detect non-academic vocabulary, such as vague and colloquial words, and revise it into academic vocabulary. First, the lecturer reviewed knowledge about academic writing style and common informal language with the students. Then, the students chose a possible prompt from a provided prompt bank for identifying informal words in their draft paragraph (Appendix A). After that, they input the prompt into XIPU AI and received a response which they were asked to analyse by considering: 1) whether the informal words recognised by XIPU AI were the same as or similar to those reviewed beforehand, or consistent with the ten rules of academic writing style taught in class, and 2) whether XIPU AI was helpful in making the draft paragraph more academic and enhancing its overall quality. At the end of this step, some students reported that XIPU AI was not comprehensive enough to identify all the informal language mentioned in the EAP class, and that it flagged some academic vocabulary in the original draft as non-academic.

Subsequently, in order for XIPU AI to produce more effective outputs aligned with the course requirements, we guided the students to refine their prompts by adding some examples or types of informal words that were taught in class (Appendix B). Next, they were asked to discuss with their peers whether the responses differed from those in the previous step and whether the outputs were more aligned with the teaching materials. A similar procedure was used for the students to prompt XIPU AI to revise the identified informal language, ask follow-up questions and critically evaluate the GenAI outputs. At the end of the task, students were required to decide for themselves whether they would revise their paragraph based on XIPU AI’s advice, and to provide a rationale. This practice created an opportunity for the students to learn that there may be a gap between GenAI’s knowledge base and the course content, especially when the tool has not been trained to be consistent with the teaching materials; they would therefore need to “teach” GenAI what is covered in the course and review its responses carefully and critically to produce an end-product that meets the needs of the writing assessment. In this way, the students applied what they had learnt to assess the information provided by GenAI, exercising their evaluative judgement (Bearman et al. 2023).
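This refinement step can be sketched programmatically. The Python fragment below is a minimal illustration of enriching a vocabulary-check prompt with course-taught examples, as described for Appendices A and B; the base prompt wording and the list of informal expressions are hypothetical, not reproductions of the actual appendix materials.

```python
# Hypothetical base prompt of the kind a student might pick from the
# prompt bank (Appendix A analogue).
BASE_PROMPT = (
    "Identify any informal or non-academic vocabulary in the paragraph "
    "below and suggest more academic alternatives."
)

# Hypothetical examples of informal expressions taught in class.
INFORMAL_EXAMPLES = ["a lot of", "stuff", "get", "big"]


def refined_prompt(paragraph: str) -> str:
    """Refine the base prompt (Appendix B analogue) by 'teaching' the tool
    which expressions the course treats as informal."""
    examples = ", ".join(f'"{w}"' for w in INFORMAL_EXAMPLES)
    return (
        f"{BASE_PROMPT} Treat expressions such as {examples} as informal, "
        "following the rules of academic style taught in our EAP course.\n\n"
        f"Paragraph:\n{paragraph}"
    )
```

Embedding the course-specific examples in the prompt is one concrete way of closing the gap, noted above, between the tool’s general knowledge base and the course content.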

This aspect of the AI literacies training seemed successful, since many learners stated that they usually evaluated GenAI outputs before deciding whether to adopt them in their writing drafts.

4.5 Academic integrity

The fifth aspect relates to AI ethics: learners should respect academic integrity when using GenAI for AW. When we explored what GenAI could do for the students, we made it clear that they should not use GenAI to "write" a paragraph or an essay for them, nor even chunks (e.g., sentences) of text. We adopted a relatively strict approach because students should be able to apply what they learnt in the EAP classes to their writing, independently developing their writing skills and critical thinking ability. Specifically, ethical learning practices were showcased through case analysis, writing sample analysis and use of Turnitin, a tool for checking plagiarism and collusion.

For the case analysis, students were provided with various scenarios in which learners used GenAI appropriately or inappropriately, and were invited to discuss with others whether the users in the cases were aware of academic integrity and why or why not (Ngo and Hastie 2025; Rowland 2023). Generally speaking, if users retain control over the core ideas of the writing and maintain "ownership" of the written product, their use of GenAI can be considered ethical. A helpful analogy is to think of GenAI as a more intelligent writing tutor than Grammarly or Marking Mate, one which can act as a helper for, rather than the author of, students' academic writing (Kim et al. 2024).

Regarding writing sample analysis, students were shown two sample paragraphs, one generated by GenAI and the other written by the module, and were asked to discuss which one they thought was produced by GenAI, as well as whether GenAI was reliable and met the standards of the course and of academic integrity in that case. One of the most noteworthy academic integrity issues was that GenAI seemed to fabricate data, usually with vague citations (e.g., "research shows that…").

Another way to address academic integrity was to use Turnitin. At the time the students were trained to use GenAI for academic writing, the "AI detection" function on Turnitin was still under development, could sometimes be unstable or unreliable, and was not officially used by the institution to penalise unethical or otherwise inappropriate use of GenAI in students' assessments. Despite that, it was still helpful for making the students aware of the potential risk of being caught if they did not follow the rules of academic integrity. Moreover, in the GenAI-created sample paragraph mentioned above, Turnitin did highlight a few chunks of text that were plagiarised from other sources, and these were not necessarily closely related to the main idea of the paragraph. The Turnitin report was given to the students to analyse and to reflect on the pitfalls of GenAI and the importance of being the overseer and owner of their own writing products.

By the end of the training, most learners were fully aware of the concept of authorship, stating that their essays should be written by themselves and that they should retain control over the core ideas and textual structure.

5 Conclusion and implications

In conclusion, this paper has described our practice aimed at developing students' AI literacies in AW, namely understanding AI's affordances and challenges in assisting AW, engineering effective prompts, enhancing assessment literacies, advancing critical evaluation and upholding academic integrity. The practice has had a positive impact on the students' GenAI use.

The AI literacies for academic writing framework was essential for students to understand and use GenAI properly during the writing process. Whilst students may be familiar with the functionality of GenAI, they need ongoing guidance on its use within a specific subject to ensure that they are using it effectively, critically and ethically in accordance with disciplinary norms and expectations. It is recommended that AI literacies be integrated into existing curricula by aligning AI use with core discipline-specific competencies, in order to leverage the technology to help students achieve course learning outcomes effectively and innovatively (Ngo and Hastie 2025).

One challenge in practice is that some students may not be motivated to use, or to learn to use, GenAI for their academic writing, especially at the early stage of GenAI development when institutional policies are not yet in place to explicitly encourage, guide and regulate the use of this technology for academic studies (Chan 2023). Therefore, prior to AI literacy training, teachers should help students appreciate the value of GenAI for academic writing, and policy support from the institution, departments and modules is important in this regard. For instance, instead of regarding GenAI use itself as a violation of academic integrity, new academic integrity policies should accept such use and require students to cite any GenAI-produced content properly. In addition, teachers may also assess how students interact with and critically evaluate what they cite in their work. This emphasises the evaluative judgement exercised by students when they assess GenAI outputs: students should not only be familiar with the course content and assessment criteria, but should also apply that knowledge to constantly and critically evaluate and make decisions about GenAI output (Bearman et al. 2023). As Bearman et al. (2023) argue, this kind of evaluative judgement can only be exercised in line with a contextualised understanding of quality. This may require curriculum designers to dedicate more time and opportunities to students' deeper interaction with GenAI and to provide more targeted teacher feedback in this area (Ngo and Hastie 2025). Institutional support could also be provided to enhance faculty expertise in guiding students' discipline-specific GenAI use, such as partnering discipline teachers with AI departments or offering instructors micro-credentials in using GenAI for discipline-specific teaching and assessment practices.

The practices described in this paper are specific to our teaching context within a foundation year EAP module at a transnational institution in China and may not be generalisable to other contexts. However, several of the practices we implemented came from a study by Ngo and Hastie (2025) conducted in a foundation year programme in Scotland. Our successful replication of parts of their framework in our teaching context suggests that these practices can be replicated at other higher education institutions in China.


Corresponding author: Joseph Tinsley, Educational Development Unit, Academy of Future Education, Xi’an Jiaotong-Liverpool University, Dushu Lake Science and Education Innovation District, Suzhou, Jiangsu, 215123, The People’s Republic of China, E-mail:

About the authors

Huimin He

Huimin He, Language Lecturer in EAP at the English Language Centre, Xi’an Jiaotong-Liverpool University, is currently the deputy module leader of a Y1 EAP course with over 500 students. She has been teaching EAP at XJTLU for over five years and has received multiple grants for conducting research on GenAI for EAP teaching and learning. Her papers have been presented at international conferences and published in Frontiers in Psychology and English for Academic Purposes in the EMI Context in Asia. Her research interests include computer-assisted language learning, second language academic writing and EAP pedagogy. She received an MA in applied linguistics from Sun Yat-sen University and is a fellow of the Higher Education Academy.

Joseph Tinsley

Joseph Tinsley is an Educational Developer within the Educational Development Unit, Academy of Future Education, Xi’an Jiaotong-Liverpool University. At XJTLU he helps to deliver the PGC402 Pedagogic Research to Enhance Professional Practice module for the Postgraduate Certificate in Learning and Teaching in Higher Education and coordinates the Assessment and Feedback Community of Practice. He has worked as a teacher, curriculum developer and teacher trainer at a range of transnational institutions in China over the past 10 years. His interests lie in dialogic feedback, feedback literacy, and discipline-specific assessment and feedback practices. He has a MSc in Educational Research and is a fellow of the Higher Education Academy.

Appendix A: Sample Prompts Selected by Students

  1. I’m writing an EAP essay at the university level. Can you find any informal words or expressions from this paragraph: [copy and paste your body paragraph]

  2. I’m writing an academic essay at the university level and I need to make sure that all the vocabulary I use is formal and academic. Can you find any informal words or expressions from this paragraph: [copy and paste your body paragraph]

Appendix B: A Refined Prompt Example

I’m writing an academic essay at the university level and I need to make sure that all the vocabulary I use is formal and academic. Informal words include personal pronouns (our, my), vague words (good, bad, thing), proverbs, imperative sentences, absolute words. Can you find any informal words or expressions from this paragraph: [copy and paste your body paragraph].
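To make the categories enumerated in this refined prompt concrete, the detection task it delegates to the GenAI can be sketched as a toy rule-based checker. This is purely illustrative: the word lists below are our own assumptions rather than the module's actual teaching materials, and in the classroom the detection was performed by XIPU AI, not by code.

```python
import re

# Illustrative categories mirroring the refined prompt in Appendix B.
# The word lists are assumptions for demonstration, not the course materials.
INFORMAL_CATEGORIES = {
    "personal pronoun": {"i", "we", "our", "my", "you", "your"},
    "vague word": {"good", "bad", "thing", "stuff", "nice"},
    "absolute word": {"always", "never", "all", "every"},
}

def find_informal_words(paragraph: str) -> list[tuple[str, str]]:
    """Return (word, category) pairs for each informal word found, in order."""
    hits = []
    for token in re.findall(r"[A-Za-z']+", paragraph.lower()):
        for category, words in INFORMAL_CATEGORIES.items():
            if token in words:
                hits.append((token, category))
    return hits

# Example: the checker flags a pronoun, a vague word and an absolute word.
print(find_informal_words("Our results are good and never fail."))
```

A GenAI model consults far richer, contextual knowledge than fixed word lists, which is precisely why the refined prompt must spell the categories out: without them, the model's implicit notion of "informal" may diverge from what the course teaches.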

References

Almatrafi, Omaima, Aditya Johri & Hyuna Lee. 2024. A systematic review of AI literacy conceptualization, constructs, and implementation and assessment efforts (2019–2023). Computers and Education Open 6. 100173. https://doi.org/10.1016/j.caeo.2024.100173.

Barrot, Jessie S. 2023. Using ChatGPT for second language writing: Pitfalls and potentials. Assessing Writing 57. 100745. https://doi.org/10.1016/j.asw.2023.100745.

Bearman, Margaret, Joanna Tai, Phillip Dawson, David Boud & Rola Ajjawi. 2023. Developing evaluative judgement for a time of generative artificial intelligence. Assessment & Evaluation in Higher Education 49(6). 893–905. https://doi.org/10.1080/02602938.2024.2335321.

Carless, David & David Boud. 2018. The development of student feedback literacy: Enabling uptake of feedback. Assessment & Evaluation in Higher Education 43(8). 1315–1325. https://doi.org/10.1080/02602938.2018.1463354.

Chan, Cecilia K. Y. 2023. A comprehensive AI policy education framework for university teaching and learning. International Journal of Educational Technology in Higher Education 20(8). https://doi.org/10.1186/s41239-023-00408-3.

Chan, Cecilia K. Y. 2024. Students’ perceptions of ‘AI-giarism’: Investigating changes in understandings of academic misconduct. Education and Information Technologies 30. 8087–8108. https://doi.org/10.1007/s10639-024-13151-7.

Chan, Cecilia K. Y. & Wenjie Hu. 2023. Students’ voices on generative AI: Perceptions, benefits, and challenges in higher education. International Journal of Educational Technology in Higher Education 20(43). https://doi.org/10.1186/s41239-023-00411-8.

Chen, Ying-Chih. 2019. Writing as an epistemological tool: Perspectives from personal, disciplinary, and sociocultural landscapes. In Vaughan Prain & Brian Hand (eds.), Theorizing the Future of Science Education Research. Contemporary Trends and Issues in Science Education, 115–132. Cham: Springer. https://doi.org/10.1007/978-3-030-24013-4_8.

Črček, Nikola & Jakob Patekar. 2023. Writing with AI: University students’ use of ChatGPT. Journal of Language and Education 9(4). 128–138. https://doi.org/10.17323/jle.2023.17379.

Dawson, Phillip. 2020. Cognitive offloading and assessment. In Margaret Bearman, Phillip Dawson, Rola Ajjawi, Joanna Tai & David Boud (eds.), Re-Imagining University Assessment in a Digital World, 37–48. Cham: Springer. https://doi.org/10.1007/978-3-030-41956-1_4.

dos Santos, Alessandra Elisabeth, Larisa Olesova, Cristiane Vicentini & Luciana C. de Oliveira. 2023. ChatGPT in ELT: Writing affordances and activities. TESOL Connections 2023. 1–6.

Du, Jinming & Antonie Alm. 2024. The impact of ChatGPT on English for academic purposes (EAP) students’ language learning experience: A self-determination theory perspective. Education Sciences 14(7). 726. https://doi.org/10.3390/educsci14070726.

Farrokhnia, Mohammadreza, Seyyed Kazem Banihashem, Omid Noroozi & Arjen Wals. 2023. A SWOT analysis of ChatGPT: Implications for educational practice and research. Innovations in Education and Teaching International 61(3). 460–474. https://doi.org/10.1080/14703297.2023.2195846.

Flower, Linda & John R. Hayes. 1981. A cognitive process theory of writing. College Composition and Communication 32(4). 365–387. https://doi.org/10.2307/356600.

Howard, Rebecca Moore. 1995. Plagiarisms, authorships, and the academic death penalty. College English 57(7). 788–806. https://doi.org/10.58680/ce19959094.

Hu, Guangwei & Jun Lei. 2012. Investigating Chinese university students’ knowledge of and attitudes toward plagiarism from an integrated perspective. Language Learning 62(3). 813–850. https://doi.org/10.1111/j.1467-9922.2011.00650.x.

Kim, Jinhee, Seongryeong Yu, Rita Detrick & Na Li. 2024. Exploring students’ perspectives on Generative AI-assisted academic writing. Education & Information Technologies 30. 1265–1300. https://doi.org/10.1007/s10639-024-12878-7.

Kostka, Ilka & Rachel Toncelli. 2023. Exploring applications of ChatGPT to English language teaching: Opportunities, challenges, and recommendations. TESL-EJ 27(3). 1–19. https://doi.org/10.55593/ej.27107int.

Liu, Yanhua, Jaeuk Park & Sean McMinn. 2024. Using generative artificial intelligence/ChatGPT for academic communication: Students’ perspectives. International Journal of Applied Linguistics 34. 1437–1461. https://doi.org/10.1111/ijal.12574.

Luo, Jiahui (Jess). 2024. A critical review of GenAI policies in higher education assessment: A call to reconsider the “originality” of students’ work. Assessment & Evaluation in Higher Education 49(5). 651–664. https://doi.org/10.1080/02602938.2024.2309963.

Ng, Davy Tsz Kit, Jac Ka Lok Leung, Samuel Kai Wah Chu & Maggie Shen Qiao. 2021. Conceptualizing AI literacy: An exploratory review. Computers & Education: Artificial Intelligence 2. 100041. https://doi.org/10.1016/j.caeai.2021.100041.

Ngo, Thu Ngan & David Hastie. 2025. Artificial intelligence for academic purposes (AIAP): Integrating AI literacy into an EAP module. English for Specific Purposes 77. 20–38. https://doi.org/10.1016/j.esp.2024.09.001.

Pack, Austin & Jeffrey Maloney. 2024. Using artificial intelligence in TESOL: Some ethical and pedagogical considerations. TESOL Quarterly 58(2). 1007–1018. https://doi.org/10.1002/tesq.3320.

Perkins, Mike, Ulas Basar Gezgin & Jasper Roe. 2020. Reducing plagiarism through academic misconduct education. International Journal of Educational Integrity 16(3). https://doi.org/10.1007/s40979-020-00052-8.

Roe, Jasper, Mike Perkins & Yulia Tregubova. 2024. The EAP-AIAS: Adapting the AI assessment scale for English for academic purposes. arXiv preprint. https://doi.org/10.48550/arXiv.2408.01075.

Rowland, David R. 2023. Two frameworks to guide discussions around levels of acceptable use of generative AI in student academic research and writing. Journal of Academic Language and Learning 17(1). 31–69.

Rust, Chris, Berry O’Donovan & Margaret Price. 2005. A social constructivist assessment process model: How the research literature shows us this could be best practice. Assessment & Evaluation in Higher Education 30(3). 231–240. https://doi.org/10.1080/02602930500063819.

Tai, Joanna, Rola Ajjawi, David Boud, Phillip Dawson & Ernesto Panadero. 2018. Developing evaluative judgement: Enabling students to make decisions about the quality of work. Higher Education 76. 467–481. https://doi.org/10.1007/s10734-017-0220-3.

Tinmaz, Hasan, Yoo-Taek Lee, Mina Fanea-Ivanovici & Hasnan Baber. 2022. A systematic review on digital literacy. Smart Learning Environments 9(21). https://doi.org/10.1186/s40561-022-00204-y.

Walter, Yoshija. 2024. Embracing the future of artificial intelligence in the classroom: The relevance of AI literacy, prompt engineering, and critical thinking in modern education. International Journal of Educational Technology in Higher Education 21(15). https://doi.org/10.1186/s41239-024-00448-3.

Weber, Patrick, Marc Pinski & Lorenz Baum. 2023. Toward an objective measurement of AI literacy. In PACIS 2023 Proceedings 60. Nanchang: Pacific Asia Conference on Information Systems.

Winstone, Naomi E., Robert A. Nash, James Rowntree & Michael Parker. 2017. ‘It’d be useful, but I wouldn’t use it’: Barriers to university students’ feedback seeking and recipience. Studies in Higher Education 42(11). 2026–2041. https://doi.org/10.1080/03075079.2015.1130032.

Yan, Da. 2023. Impact of ChatGPT on learners in a L2 writing practicum: An exploratory investigation. Education & Information Technologies 28. 13943–13967. https://doi.org/10.1007/s10639-023-11742-4.

Zhang, Limei. 2023. Empowering Chinese college students in English as a foreign language writing classes: Translanguaging with translation methods. Frontiers in Psychology 14. 1118261. https://doi.org/10.3389/fpsyg.2023.1118261.

Received: 2025-01-15
Accepted: 2025-04-15
Published Online: 2025-05-29

© 2025 the author(s), published by De Gruyter and FLTRP on behalf of BFSU

This work is licensed under the Creative Commons Attribution 4.0 International License.
