Article Open Access

Unlocking AI for language education: mastering prompts, critical evaluation of AI responses, and implications for language teaching and learning

  • Yılmaz Köylü

    Yılmaz Köylü is an Assistant Professor of Language Education in the Center for Language Education at The Hong Kong University of Science and Technology. He has a PhD in Linguistics and Second Language Studies from Indiana University, Bloomington. His interests lie in the domains of first and second language acquisition, syntax, semantics, pragmatics, Turkish linguistics, as well as Teaching English to Speakers of Other Languages (TESOL), and second language (L2) writing. He is particularly interested in the syntax-semantics interface, the structure and meaning of noun phrases, genericity, kind reference, and mass/count noun distinction across languages.

Published/Copyright: August 21, 2025

Abstract

This article examines the transformative role of Artificial Intelligence (AI), Generative AI (GenAI), and Large Language Models (LLMs) like ChatGPT and DeepSeek in language teaching and learning. It highlights their capabilities, including personalized content generation, automated assessment, real-time feedback, and administrative efficiency, while emphasizing the necessity of prompt engineering to optimize outputs. The PROMPT (Persona, Requirements, Organization, Medium, Purpose, Tone) framework is introduced as a structured approach to crafting effective prompts. Case studies, such as DeepSeek’s Python code generation, demonstrate practical applications. However, the article critically addresses limitations like bias, factual inaccuracies, and ethical concerns, advocating for rigorous fact-checking and balanced human-AI collaboration. By synthesizing research and practical examples, this article underscores AI’s potential to enhance language education while urging educators to adopt critical literacy and ethical frameworks to mitigate risks and ensure equitable, human-centered learning experiences.

1 AI, GenAI, LLMs and DeepSeek

Artificial Intelligence (AI) refers to computer systems that are designed to perform tasks that typically require human intelligence. These systems can learn from experience, recognize patterns, make decisions, and solve problems (Nah et al. 2023). Generative AI (GenAI) is an AI technology that automatically generates content in response to prompts written in natural-language conversational interfaces. The content can appear in formats that comprise all symbolic representations of human thinking: texts written in natural language, images (including photographs, digital paintings and cartoons), videos, music and software code. A large language model (LLM) is a type of AI that is trained on a large amount of text data and can generate new text. It is used in applications such as language translation, text summarization, and content creation. ChatGPT is a language model that allows people to interact with a computer in a more natural and conversational way. GPT stands for “Generative Pre-trained Transformer” and is the name given to a family of natural language models developed by OpenAI (Nah et al. 2023). ChatGPT uses natural language processing to learn from Internet data, providing users with artificial intelligence-based written answers to questions or prompts. These models are trained on large text datasets to predict the next word in a sentence and, from that, generate coherent and compelling human-like output (Ouyang et al. 2022).

The advantages of AI tools for language teaching and learning have been demonstrated in various studies (see Ma et al. 2024 for an overview). Such tools help language learners with new vocabulary and grammar (Baskara and Mukarto 2023; Bezirhan and Davier 2023), enhance language comprehension thanks to glossaries and translations (Jiao et al. 2023), facilitate conceptualization of new material (Kohnke et al. 2023), and reinforce learning through tailored exercises (Kasneci et al. 2023). Ma et al. (2024) maintain that such AI tools also help teachers with language teaching. Teachers can craft novel learning environments based on individual student needs (Baidoo-Anu and Ansah 2023), create texts and dialogues (Crosthwaite and Baisa 2023), and design lessons and interactive activities (Kasneci et al. 2023). Some other uses of AI tools include assessing student work (Kasneci et al. 2023), generating questions (Kohnke et al. 2023), automating essay writing, grading, and providing feedback (Li et al. 2023; Mizumoto and Eguchi 2023), creating language practice with immediate feedback (Fryer and Carpenter 2006), and motivating students (Ali et al. 2023).

LLMs can perform various tasks (see Belkina et al. 2025 for a review focusing on educational applications). First, they can generate human-like text on a variety of topics, such as writing prompts, summaries, and explanations (Ma et al. 2024). This can be useful for generating content for lesson plans, assignments, and assessments. LLMs can also answer questions on a variety of topics, which can be useful for providing students with quick and accurate answers or for generating discussion prompts (Kohnke et al. 2023). LLMs can provide recommendations on a variety of topics, such as books, articles, and videos, which can be useful for giving students personalized suggestions (Ma et al. 2024). LLMs can translate text from one language to another. Finally, LLMs can condense long pieces of text into shorter, more concise summaries. Figure 1, taken from the website of OpenAI (the company that created ChatGPT), illustrates various other tasks LLMs can perform, ranging from writing Python code to making a sandwich using ingredients from a kitchen.

Figure 1: Various capabilities of LLMs.

In late 2024, Hangzhou DeepSeek Artificial Intelligence Basic Technology Research Co., Ltd., a Chinese AI company based in Hangzhou, Zhejiang, and focused on research and development in Artificial General Intelligence (AGI), released its LLM DeepSeek. DeepSeek has fascinating capabilities, one of which is writing detailed computer code. The prompt below illustrates how to guide DeepSeek to generate the Python code for the snake game.

The prompt: I am a student at the Hong Kong University of Science and Technology. I major in Biotechnology. I am taking a Python course. Our instructor asked us to create the ‘snake’ game in Python. Can you write the whole code for me?

For those who are not familiar, the snake game is a classic arcade-style video game where the player controls a growing snake that moves around a confined space, collecting food (represented as dots) while avoiding collisions with walls or the snake’s own body. In response to the prompt, DeepSeek outputs detailed Python code to generate the snake game. The code covers installing the necessary packages, initializing the game, defining colors and dimensions, and so on. The DeepSeek output to the prompt above is illustrated in Figure 2.

Figure 2: Python code to create the snake game.
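The full code from Figure 2 is not reproduced here, but the core mechanics behind such an implementation can be sketched in a few lines. The following is a minimal, graphics-free sketch of the snake logic (a complete version, like DeepSeek’s, would add a drawing and input loop, typically with the pygame library); the function and variable names are illustrative, not DeepSeek’s actual output:

```python
import random

GRID = 20  # the board is GRID x GRID cells

def step(snake, direction, food):
    """Advance the game one tick; return (snake, food, alive).

    snake is a list of (x, y) cells with the head first; direction is a
    unit vector such as (1, 0) for moving right.
    """
    head_x, head_y = snake[0]
    dx, dy = direction
    new_head = (head_x + dx, head_y + dy)
    # The snake dies on hitting a wall or its own body.
    if not (0 <= new_head[0] < GRID and 0 <= new_head[1] < GRID):
        return snake, food, False
    if new_head in snake:
        return snake, food, False
    snake = [new_head] + snake
    if new_head == food:   # eat: grow and respawn the food
        food = (random.randrange(GRID), random.randrange(GRID))
    else:                  # move: drop the tail
        snake = snake[:-1]
    return snake, food, True
```

Each call to step advances the snake one cell, growing it when the new head lands on the food and ending the game on a wall or self-collision.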

DeepSeek can even explain the code it generates in very simple language. Figure 3 demonstrates how DeepSeek breaks down the code so that the user understands each part.

Figure 3: DeepSeek’s explanation of the Python code to create the snake game.

Given these vast functionalities, AI and LLMs can be indispensable tools for teaching and learning, particularly in language education (Ma et al. 2024). First, AI tools can enhance lesson planning and curriculum development. By quickly providing relevant and accurate information, they save time on research and preparation, freeing teachers to focus on teaching and supporting student learning. Moreover, AI tools can analyze student data, student interests, and learning objectives to provide personalized recommendations for lesson planning and curriculum development, helping teachers create more effective and engaging learning experiences that meet the needs of individual students (Huang et al. 2023). They can also identify areas where students may need additional support or where the curriculum may need adjustment, leading to more targeted lesson plans. AI tools can likewise streamline administrative tasks. They can automate grading by analyzing student responses and providing accurate and consistent feedback, saving time and effort that teachers can redirect to instruction. They can provide personalized feedback to students based on their individual learning needs and performance (Crompton et al. 2024). Finally, AI tools can help teachers respond to an email or create a newsletter, saving hours of time and helping them craft consistent, professional communication (see Belkina et al. 2025 for a review focusing on educational applications).

2 Prompt engineering

To benefit from all these functionalities of AI tools and LLMs such as ChatGPT and DeepSeek, it is essential to know prompt engineering. Prompt engineering is the process of crafting clear, specific prompts that guide the AI language model to generate relevant and accurate responses (Korzynski et al. 2023). The goal of prompt engineering is to ensure that the AI language model generates responses that meet the desired objectives and are on target. Prompt engineering is, at its core, the ability to ask the right questions, and it is a critical skill for using any LLM. There are some key principles in prompt engineering. First, we need to identify the purpose of the prompt and determine the main reason for writing it. For example, the purpose could be to generate creative writing, to facilitate classroom discussions, or to support research projects. This ensures that the prompt is tailored to the specific learning objective and target audience. Next, we need to understand the target audience, knowing the age group, learning level, and/or cultural background, in order to craft prompts that are relevant and engaging (Ciampa et al. 2023). Moreover, we need to use clear and concise language so that the prompts are easy to understand and follow. Finally, we should provide sufficient contextual and background information: relevant details about the topic, examples, or links to additional resources that help the AI language model generate accurate and relevant responses. By following these tips, teachers can craft effective prompts that guide the AI language model to generate accurate and relevant responses (Korzynski et al. 2023).

2.1 Example prompts

Based on the principles of prompt engineering, we could craft prompts to carry out various tasks on a variety of topics. Below are some example prompts:

  1. Generate a list of [number] objectives for my [grade and subject] class on [topic]. Each objective should begin with [sentence structure].

  2. Create a list of open-ended, critical thinking questions for [insert topic, subject, and grade].

  3. Generate a list of discussion topics for [subject and grade level] class on [topic] with [low/medium/high] complexity level.

  4. Create [number] essay prompts related to [topic] for [grade level] students.

  5. Generate [number] vocabulary words related to [topic or subject] and create a [matching, multiple choice, and/or fill-in-the-blank] quiz with two versions: a blank, student version and a teacher version with answers.

  6. Generate a list of [number] formative assessment ideas related to [topic] for my [grade level and subject] students.

  7. Create a list of strategies for dealing with [issue] in my classroom. The issue happens in my [grade, subject, time of day].

  8. Write an email to remind parents about our upcoming parent-teacher conference. Include the date: [date], time: [time], location: [location], and the following instructions: [instructions for how to schedule a meeting].

  9. Create a response to this email [email content or issue raised in email] that provides this response [response]. Keep the tone professional and friendly and write from the point of view of [teacher, administrator, etc.]

  10. Improve this content [content] by [changing the tone, fixing grammar, making it more concise, and/or making it more engaging].

2.2 The PROMPT framework to create effective prompts

Although there is a plethora of prompt frameworks one could utilize, the PROMPT framework from the Pennsylvania State University Library (https://guides.libraries.psu.edu/berks/ai) is one of the most effective ones to optimize output from GenAI platforms. PROMPT is an acronym that stands for:

  1. Persona

  2. Requirements

  3. Organization

  4. Medium

  5. Purpose

  6. Tone

Here is what one needs to do to craft effective prompts based on this framework, taken verbatim from the Pennsylvania State University Library.

  1. Persona

    Assign a role. Example: “You are a [literary critic/compliance officer/patent attorney/etc.].”

  2. Requirements

    Define the parameters for output. Examples: “Topical content to include/exclude, number of responses, word count/limit, reading level, standards compliance, etc.”

  3. Organization

    Describe the structure of the output. Examples: “Alphabetical, chronological, table, bulleted or numbered list, step-by-step instructions, etc.”

  4. Medium

    Describe the format of the output. Examples: Prose, social media post, computer code, spreadsheet, website, slide deck, audio/visual, recipe, dialogue script, survey, interview, etc.

  5. Purpose

    Identify the rhetorical purpose and intended audience. Examples: Explain, summarize, pitch, entertain, college students, English language learners, investor, first date, etc.

  6. Tone

    Specify the tone of the output. Examples: Academic, professional, snarky, funny, inspirational, sentimental, foreboding, etc.
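Taken together, the six components can be assembled mechanically. As a hypothetical illustration (the class and field names below are mine, not part of the framework), a small helper that builds a PROMPT-style prompt might look like:

```python
from dataclasses import dataclass

@dataclass
class PromptSpec:
    """The six PROMPT components: persona, requirements, organization,
    medium, purpose, tone."""
    persona: str
    requirements: str
    organization: str
    medium: str
    purpose: str
    tone: str

    def render(self) -> str:
        # Concatenate the components into one prompt string.
        return (
            f"You are a {self.persona}. "
            f"Requirements: {self.requirements}. "
            f"Organize the output as {self.organization}, "
            f"in the form of {self.medium}. "
            f"Purpose and audience: {self.purpose}. "
            f"Use a {self.tone} tone."
        )

spec = PromptSpec(
    persona="university English instructor",
    requirements="10 vocabulary items on climate change, B2 level",
    organization="a numbered list",
    medium="a quiz with an answer key",
    purpose="practice for first-year students",
    tone="professional",
)
print(spec.render())
```

Filling in the six fields and calling render() yields a single prompt that covers persona, requirements, organization, medium, purpose, and tone in one pass.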

3 Critical evaluation of AI responses

3.1 Limitations of LLMs

Although LLMs have many capabilities and we can craft the perfect prompt to get the desired output, it is important to understand their limitations (see Ma et al. 2024 for an overview). First, LLMs can be biased based on the data they were trained on (Baskara and Mukarto 2023). If the training data is biased, LLMs may generate biased or inaccurate responses. Next, many LLMs can make mistakes or provide inaccurate information (Alkaissi and McFarlane 2023). Thus, it is essential to use LLMs in conjunction with other teaching and learning resources to ensure that students are receiving accurate and reliable information. Moreover, LLMs may not always understand the context of a question or statement, which can lead to inaccurate or irrelevant responses (Mogavi et al. 2024). What is more, LLMs may not have a deep understanding of complex topics or concepts, which can limit their usefulness (Kocoń et al. 2023; Mogavi et al. 2024). In addition, LLMs are trained on vast amounts of data, some of which might not have been obtained consensually (Liesenfeld et al. 2023). When scraping data from the internet, LLMs have been known to ignore copyright licenses, plagiarize written content, and repurpose proprietary content without getting permission from the original owners or artists. LLMs may collect and store data from user interactions, which can raise privacy concerns for educators and students. Lack of familiarity with LLMs is also an issue (Athanassopoulos et al. 2023). Finally, overreliance on LLMs poses a risk of impairing learners’ critical thinking and problem-solving skills (Lo 2023).

3.2 Overcoming the limitations of LLMs

Given these limitations, it is crucial to check the output from LLMs (Cohen 2023; Cutler 2023). Table 1 below provides information regarding how we can overcome the limitations of LLMs.

Table 1:

Overcoming the limitations of LLMs through various fact-checking strategies.

Factual verification: Ask “Is this information true or false?” Use lateral reading: open a new tab and search for the facts from the response. Ensure you are checking the information against a source with expertise on that topic.
Logic checks: AI tools are not experts in logic and can make errors, as you can see by asking an AI tool to answer riddles. When asking AI anything puzzling, look for logical inconsistencies.
Citation checks: AI tools can sometimes make up sources that do not exist. Search for sources on Google, Google Scholar, or library websites to confirm they exist.
Bias exploration: Are there other perspectives that might be missing from the AI’s response? Read the response with a critical eye and consider ideas that might be missing.
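The citation checks described in Table 1 can be partly mechanized. As a rough sketch (the regular expressions are simplified and the function is my own illustration, not a standard tool), a short script can pull the DOIs and URLs out of an AI-generated answer so that each one can be opened and verified through lateral reading:

```python
import re

# Simplified patterns: a DOI starts with "10." followed by a registrant
# code and a slash; a URL starts with http(s)://.
DOI_RE = re.compile(r"\b10\.\d{4,9}/[^\s\"<>]+")
URL_RE = re.compile(r"https?://[^\s\"<>]+")

def citations_to_check(ai_response: str) -> dict:
    """Collect DOIs and URLs from an AI response for manual verification."""
    clean = lambda s: s.rstrip(".,;)")  # drop trailing punctuation
    dois = [clean(d) for d in DOI_RE.findall(ai_response)]
    urls = [clean(u) for u in URL_RE.findall(ai_response)
            if not DOI_RE.search(u)]    # avoid listing DOI links twice
    return {"dois": dois, "urls": urls}

answer = ("See Kasneci et al. 2023, "
          "https://doi.org/10.1016/j.lindif.2023.102274, "
          "and the overview at https://example.org/llm-review.")
print(citations_to_check(answer))
```

Every extracted identifier still has to be looked up by a human; the script only makes sure no fabricated-looking source slips past unexamined.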

The first point to pay attention to when critically evaluating the output from any LLM is factual verification. Figure 4 indicates how LLMs can provide misguided responses. Although there is no Turkish folk singer named Yılmaz Köylü (the name of the author of this article), the LLM confidently provided such a response.

Figure 4: An inaccurate LLM response that requires factual verification.

AI outputs should also be scrutinized with logic checks. I asked an LLM: “I live in Kennedy Town, Chinese Hong Kong. How can I go to the Queen Mary Hospital taking the MTR?” Anyone living in Hong Kong, China knows that Kennedy Town is the westernmost MTR station. The best and fastest way to get to the Queen Mary Hospital from Kennedy Town station is to take a minibus or a taxi, which should take about 5 min. The AI output, however, is extremely illogical: it mentions MTR lines that do not exist and inflates the 5-min travel time to 40 min. The AI output is illustrated in Figure 5.

Figure 5: An illogical LLM response regarding transportation in Hong Kong, China.

It is also crucial to subject AI responses to citation checks. I asked an LLM to provide a list of citations on using artificial intelligence and large language models in language teaching and learning. It provided names, articles, and links, some of which simply did not exist. This is illustrated in Figure 6.

Figure 6: An inaccurate LLM response with names, articles, and non-existent links.

As discussed earlier, AI models can be biased, which makes it necessary to explore such biases. I asked an LLM to list all the universities in Chinese Hong Kong. Figure 7 provides the AI output.

Figure 7: AI output as a response to “List all the universities in Chinese Hong Kong”.

I then asked the AI why it listed The University of Hong Kong, which is consistently ranked as the top university in the city, first on the list. It replied that it was not biased. Yet the ranking of universities in Chinese Hong Kong according to Times Higher Education shows that the list in the AI response mirrors that ranking exactly. This is indicated in Figure 8.

Figure 8: Ranking of universities in Chinese Hong Kong according to Times Higher Education.

4 Implications for language teaching and learning

Having mastered prompts and being equipped with the knowledge to critically evaluate AI responses, teachers can use various LLMs for language teaching purposes (Belkina et al. 2025; Crompton et al. 2024; Huang et al. 2023). An example could be to use LLMs to create a rubric using the following prompt.

The prompt: I am a lecturer at The Hong Kong University of Science and Technology. This semester, I am teaching a course titled “Professional Speaking for the Workplace”. Students are supposed to give a 5-minute individual presentation on anything they are passionate about. I have never assessed such a speaking task before. Can you create a rubric in a table? The rubric should have 5 criteria and 5 proficiency levels (e.g., excellent, proficient, etc.) and descriptions for each criterion.

The prompt above can easily help teachers create a very detailed rubric they could utilize in their courses. We can also use LLMs to create various assessments. The following prompt shows such a usage.

The prompt: I am a lecturer at The Hong Kong University of Science and Technology. This semester, I am teaching a course titled “Advanced English Grammar”. I need to assess students’ use of definite and indefinite articles. Can you create an advanced level quiz for this purpose? I need 20 fill-in-the-blank questions with instructions and an example.
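A quiz like the one requested above can also be drafted or spot-checked locally. The following sketch (my own illustration, not an LLM output) blanks out the articles in a sentence to produce a fill-in-the-blank item together with its answer key:

```python
import re

# Match the definite and indefinite articles as whole words.
ARTICLE_RE = re.compile(r"\b(a|an|the)\b", re.IGNORECASE)

def make_cloze(sentence: str):
    """Return (cloze_item, answer_key) for one sentence: every article
    is replaced by a blank, and the removed words form the key."""
    answers = ARTICLE_RE.findall(sentence)
    item = ARTICLE_RE.sub("____", sentence)
    return item, answers

item, key = make_cloze("The student wrote an essay about a topic.")
print(item)  # ____ student wrote ____ essay about ____ topic.
print(key)   # ['The', 'an', 'a']
```

Running this over twenty source sentences yields the student version (the blanked items) and the teacher version (the answer keys) that the prompt asks the LLM to produce.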

AI tools and LLMs can also be effectively used by language learners. One way to utilize such tools is to get instant, personalized support. The following prompt demonstrates how AI tools can be prompted to create an outline.

The prompt: I am a first-year student at The Hong Kong University of Science and Technology. I study Mathematics. I am currently taking a course titled “Advanced Academic English for University Studies”. I am supposed to write a reflection on my performance so far in the course but I have never written a self-reflection before. What are some suggestions that you can give me to get started on my reflection? Can you create an outline with bullet points for me?

Figure 9 shows the output.

Figure 9: The AI output for the reflection outline.

LLMs can also be used as a personal tutor. The following prompt indicates how AI can act as a grammar tutor.

The prompt: I am a first-year student at The Hong Kong University of Science and Technology. I major in Global China Studies. I am taking a course titled “Intensive English Language for University Studies”. We have a number of writing assignments and whenever I submit a draft, my instructor highlights my grammar mistakes. The instructor repeatedly said that I have problems in noun clauses and relative clauses. I really don’t know the difference and I don’t know where to start. Can you be my tutor and teach me noun clauses and relative clauses? You can ask me questions, or ask me to create sentences and give me feedback on my responses.

Figure 10 shows the output.

Figure 10: The AI output showing how it acts as a grammar tutor.

LLMs can support virtually any language learning activity. The prompt below and the following paragraph show how a learner can get feedback from AI to improve coherence in writing.

The prompt: I am a first-year student at The Hong Kong University of Science and Technology. I major in Physics. I am taking a course titled “Intensive English Language for University Studies”. We have a number of writing assignments and whenever I submit a draft, my instructor has a lot of recommendations. My grammar is almost perfect and he says that I have no issues there. My instructor tells me I have to improve coherence in my writing. I don’t know what that means. Here is a piece of self-reflection I wrote for the course. Can you tell me how to improve coherence in the following paragraph?

The paragraph: I believe that I contributed to my group’s discussions well and communicated my opinions on what we should do during the seminar presentation and e-magazine process. My group and I did well in giving our input on the challenges we faced and finding a compromise when we wanted to do different things. To improve my discussion and leadership skills, I should prepare more in advance. I believe that my synthesis paper turned out okay but could be better. It was due during the same weeks as my midterms, so I couldn’t spend as much time on it as I would’ve hoped, and with extra time on it, I could’ve done better. My seminar presentation turned out well regarding the information I presented and my script, but my audience engagement definitely could’ve been better. Finally, I believe that my group’s e-magazine turned out pretty well. The main feedback I have received is to have more audience awareness during presentations, which I don’t have much experience in, so I will need to improve on that in the future. During the seminar and e-magazine process, I learned a lot about the importance of communicating well with my group members. Even though we were working on separate sub-topics, we still had to ensure good group cohesion. The biggest challenge my group and I faced was deciding the topic and audience of our e-magazine, because everyone wanted to do what was best for their own sub-topic, but we had to communicate well to come to a compromise and find a topic everyone was happy with!

Figure 11 below indicates the output.

Figure 11: The AI output showing how the language learner can improve coherence in writing.

5 Conclusions

In conclusion, AI and LLMs like ChatGPT and DeepSeek offer transformative opportunities for language education, enabling personalized learning, efficient resource creation, and instant feedback (Ma et al. 2024). Tools such as automated rubric design and grammar tutoring exemplify their pedagogical value, supported by structured prompt engineering strategies like the PROMPT framework. However, challenges including bias, factual errors, and ethical dilemmas necessitate vigilant and critical evaluation by educators and learners. This article underscores the importance of verifying AI-generated content through fact-checking, logic validation, and bias exploration to ensure reliability. Successful integration of AI in education requires balancing technological innovation with human oversight. By prioritizing ethical guidelines, transparency, and continuous refinement of AI tools, we can harness their potential to complement but not replace traditional pedagogy, ultimately enriching language education (Belkina et al. 2025).


Corresponding author: Yılmaz Köylü, Center for Language Education, School of Humanities and Social Science, The Hong Kong University of Science and Technology, Office 3023, Clearwater Bay, Kowloon, Hong Kong, China, E-mail:


  1. Research ethics: Not applicable.

  2. Informed consent: Not applicable.

  3. Author contributions: The author has accepted responsibility for the entire content of this manuscript and approved its submission.

  4. Competing interests: The author states no conflict of interest.

  5. Research funding: None.

References

Ali, Jamal K. M., Muayad A. A. Shamsan, Taha A. Hezam & Ahmed A. Q. Mohammed. 2023. Impact of ChatGPT on learning motivation: Teachers and students’ voices. Journal of English Studies in Arabia Felix 2(1). 41–49. https://doi.org/10.56540/jesaf.v2i1.51.Search in Google Scholar

Alkaissi, Hussam & Samy I. McFarlane. 2023. Artificial hallucinations in ChatGPT: Implications in scientific writing. Cureus 15(2). https://doi.org/10.7759/cureus.35179.Search in Google Scholar

Athanassopoulos, Stavros, Polyxeni Manoli, Maria Gouvi, Konstantinos Lavidas & Vassilis Komis. 2023. The use of ChatGPT as a learning tool to improve foreign language writing in a multilingual and multicultural classroom. Advances in Mobile Learning Educational Research 3(2). 818–824. https://doi.org/10.25082/amler.2023.02.009.Search in Google Scholar

Baidoo-Anu, David & Leticia Owusu Ansah. 2023. Education in the era of generative artificial intelligence (AI): understanding the potential benefits of ChatGPT in promoting teaching and learning. Journal of AI 7(1). 52–62. https://doi.org/10.61969/jai.1337500.Search in Google Scholar

Baskara, Risang & M. Mukarto. 2023. Exploring the implications of ChatGPT for language learning in higher education. Indonesian Journal of English Language Teaching and Applied Linguistics 7(2). 343–358. https://doi.org/10.21093/ijeltal.v7i2.1387.Search in Google Scholar

Belkina, Marina, Scott, Daniel, Sasha Nikolic, Rezwanul Haque, Sarah Lyden, Peter Neal, Sarah Grundy, Ghulam M. Hassan. 2025. Implementing generative AI (GenAI) in higher education: a systematic review of case studies. Computers and Education: Artificial Intelligence 100407. https://doi.org/10.1016/j.caeai.2025.100407.Search in Google Scholar

Bezirhan, Ümmügül & Matthias von Davier. 2023. Automated reading passage generation with OpenAI’s large language model. Computers and Education: Artificial Intelligence 5. https://doi.org/10.1016/j.caeai.2023.100161.Search in Google Scholar

Ciampa, Katia, Zora M. Wolfe & Briana Bronstein. 2023. ChatGPT in education: Transforming digital literacy practices. Journal of Adolescent & Adult Literacy 67. 186–195, https://doi.org/10.1002/jaal.1310.Search in Google Scholar

Cohen, Zak. 2023. Leveraging ChatGPT: practical ideas for educators. ASCD. https://www.ascd.org/blogs/leveraging-chatgpt-practical-ideas-for-educators.Search in Google Scholar

Crompton, Helen, Adam Edmett, Neenaz Ichaporia & Diane Burke. 2024. AI and English language teaching: affordances and challenges. British Journal of Educational Technology 1–27. https://doi.org/10.1111/bjet.13460.Search in Google Scholar

Crosthwaite, Peter & Vitek Baisa. 2023. Generative AI and the end of corpus-assisted data-driven learning? Not so fast!. Applied Corpus Linguistics 3(3). 100066. https://doi.org/10.1016/j.acorp.2023.100066.Search in Google Scholar

Cutler, David. 2023. Grappling with AI writing technologies in the classroom. Edutopia. https://www.edutopia.org/article/chatgpt-ai-writing-platforms-classroom.Search in Google Scholar

Fryer, Luke K. & Rollo Carpenter. 2006. Bots as language learning tools. Language, Learning and Technology 10(3). 8–14.10.64152/10125/44068Search in Google Scholar

Huang, Xinyi, Di Zou, Gary Cheng, Xieling Chen & Haoran Xie. 2023. Trends, research issues and applications of artificial intelligence in language education. Educational Technology & Society 26(1). 112–131.Search in Google Scholar

Jiao, Wenxiang, Wenxuan Wang, Jen-tse Huang, Xing Wang, Shuming Shi & Zhaopeng Tu. 2023. Is ChatGPT a good translator? Yes with GPT-4 as the engine. Computation and Language. https://doi.org/10.48550/arXiv.2301.08745.Search in Google Scholar

Kasneci, Enkelejda, Kathrin Sessler, Stefan Küchemann, Maria Bannert, Daryna Dementieva, Frank Fischer, Urs Gasser, Georg Groh, Stephan Günnemann, Eyke Hüllermeier, Stephan Krusche, Gitta Kutyniok, Tilman Michaeli, Claudia Nerdel, Jürgen Pfeffer, Oleksandra Poquet, Michael Sailer, Albrecht Schmidt, Tina Seidel, Matthias Stadler, Jochen Weller, Jochen Kuhn & Gjergji Kasneci. 2023. ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences 103. Article 102274. https://doi.org/10.1016/j.lindif.2023.102274.Search in Google Scholar

Kocoń, Jan, Igor Cichecki, Oliwier Kaszyca, Mateusz Kochanek, Dominika Szydło, Joanna Baran, Julita Bielaniewicz, Marcin Gruza, Arkadiusz Janz, Kamil Kanclerz, Anna Kocoń, Bartłomiej Koptyra, Wiktoria Mieleszczenko-Kowszewicz, Piotr Miłkowski, Marcin Oleksy, Maciej Piasecki, Łukasz Radliński, Konrad Wojtasik, Stanisław Woźniak & Przemysław Kazienko. 2023. ChatGPT: Jack of all trades, master of none. Information Fusion 99. Article 101861. https://doi.org/10.1016/j.inffus.2023.101861.

Kohnke, Lucas, Benjamin L. Moorhouse & Di Zou. 2023. ChatGPT for language teaching and learning. RELC Journal 1–14. https://doi.org/10.1177/00336882231162868.

Korzynski, Pawel, Grzegorz Mazurek, Pamela Krzypkowska & Artur Kurasinski. 2023. Artificial intelligence prompt engineering as a new digital competence: analysis of GenAI technologies such as ChatGPT. Entrepreneurial Business and Economics Review 11(3). 25–37. https://doi.org/10.15678/eber.2023.110302.

Li, Yuheng, Lele Sha, Lixiang Yan, Jionghao Lin, Mladen Raković, Kirsten Galbraith, Kayley Lyons, Dragan Gašević & Guanliang Chen. 2023. Can large language models write reflectively? Computers and Education: Artificial Intelligence 4. Article 100140. https://doi.org/10.1016/j.caeai.2023.100140.

Liesenfeld, Andreas, Alianda Lopez & Mark Dingemanse. 2023. Opening up ChatGPT: tracking openness, transparency, and accountability in instruction-tuned text generators. In Proceedings of the 5th International Conference on Conversational User Interfaces, 1–6. https://doi.org/10.1145/3571884.3604316.

Lo, Chung Kwan. 2023. What is the impact of ChatGPT on education? A rapid review of the literature. Education Sciences 13(4). 410. https://doi.org/10.3390/educsci13040410.

Ma, Qing, Peter Crosthwaite, Daner Sun & Di Zou. 2024. Exploring ChatGPT literacy in language education: a global perspective and comprehensive approach. Computers and Education: Artificial Intelligence 7. Article 100278. https://doi.org/10.1016/j.caeai.2024.100278.

Mizumoto, Atsushi & Masaki Eguchi. 2023. Exploring the potential of using an AI language model for automated essay scoring. Research Methods in Applied Linguistics 2(2). Article 100050. https://doi.org/10.1016/j.rmal.2023.100050.

Mogavi, Reza Hadi, Chao Deng, Justin Juho Kim, Pengyuan Zhou, Young D. Kwon, Ahmed Hosny Saleh Metwally, Antonio Bucchiarone, Sujit Gujar, Lennart E. Nacke & Pan Hui. 2024. ChatGPT in education: a blessing or a curse? A qualitative study exploring early adopters’ utilization and perceptions. Computers in Human Behavior: Artificial Humans 2(1). Article 100027. https://doi.org/10.1016/j.chbah.2023.100027.

Nah, Fui-Hoon F., Ruilin Zheng, Jingyuan Cai, Keng Siau & Langtao Chen. 2023. Generative AI and ChatGPT: applications, challenges, and AI-human collaboration. Journal of Information Technology Case and Application Research 25(3). 277–304. https://doi.org/10.1080/15228053.2023.2233814.

Ouyang, Long, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike & Ryan Lowe. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35. 27730–27744.

Received: 2025-05-01
Accepted: 2025-07-15
Published Online: 2025-08-21

© 2025 the author(s), published by De Gruyter and FLTRP on behalf of BFSU

This work is licensed under the Creative Commons Attribution 4.0 International License.
