
Generative AI and the future of writing for publication: insights from applied linguistics journal editors

  • Benjamin Luke Moorhouse

    Benjamin Luke Moorhouse (SFHEA) is an Associate Professor in the Department of English, City University of Hong Kong. He has extensive experience as a teacher educator and primary school English-language teacher. Benjamin’s research has appeared in international journals, including System, TESOL Quarterly, TESOL Journal, RELC Journal, and ELT Journal.

  • Sal Consoli

    Sal Consoli is Lecturer (research) in Language Education at the Institute for Language Education at the Moray House School of Education. Previously, Dr Consoli worked at the University of Warwick, Newcastle University in the UK and at the Hong Kong Polytechnic University in China.

  • Samantha M. Curle

    Samantha M. Curle (DPhil, FHEA, FRSA) is a Reader in Education (Applied Linguistics), Director of all MRes programmes in the Faculty of Humanities and Social Sciences, the Institutional Academic Lead for the South-West Doctoral Training Partnership (SWDTP, University of Bath), and Associate Member of the English Medium Instruction Oxford Research Group (University of Oxford).

Published/Copyright: July 23, 2025

Abstract

The emergence of Generative Artificial Intelligence (GenAI) is reshaping academic writing and publishing practices. As knowledge curators, applied linguistics journal editors need to respond to GenAI developments. Yet, little is known about their perspectives on GenAI in academic writing and publishing. These perspectives could influence their editorial decisions and journal policies – potentially defining how scholars write for publication. Through in-depth semi-structured interviews, this study explored the perceptions of ten applied linguistics journal editors towards GenAI in academic writing for publication. Analysis shows that the development of GenAI is putting additional strain on an editorial process that is already struggling. It highlights that current publisher and journal policies on GenAI are ambiguous, leading to confusing and questionable research practices. Editors are cautious about the use of GenAI in applied linguistics research and writing, with only the use of these tools to improve writing quality deemed universally acceptable. Transparency is seen as essential. The findings highlight a pressing need for discipline-specific guidance on the acceptable uses of GenAI in academic publishing and for the development of methodological models that detail ways GenAI can be integrated into the field’s rich and diverse research traditions.

1 Introduction

The advent of Generative Artificial Intelligence (GenAI) is reshaping academic writing and publishing, offering both transformative potential and significant challenges (Bagenal 2024). GenAI tools can streamline core elements of the research and writing processes – such as literature reviews, tool development, data analysis, manuscript drafting, and editing – enabling increased productivity and academic output (Naeem et al. 2025). However, this rapid technological advancement raises critical questions about authorship, transparency, and ethical research practices (Farangi and Nejadghanbar 2024; Kendall and Teixeira da Silva 2024; Stahl and Eke 2024). Recent reports of journals accepting articles with substantial evidence of GenAI involvement, and the appearance of AI-modified visuals in publications, underscore the urgency of addressing these developments within academic communities (McFall-Johnson 2024).

GenAI developments have come at a time when academia is increasingly defined by the ‘publish or perish’ culture, where publication metrics drive career progression, funding allocation, and institutional rankings (Horta and Li 2022; Yeo et al. 2022). Unrelenting publication pressure has been identified as a key driver of various Questionable Research Practices (QRPs), including publishing in predatory journals (Nejadghanbar et al. 2023) and the inappropriate use of GenAI (Moorhouse and Nejadghanbar 2025; Szudarski 2025). The ability of Large Language Models (LLMs) to create human-like texts and perform many tasks associated with scholarly research and publication means our knowledge dissemination processes are vulnerable to contamination by AI-generated data and content (Stahl and Eke 2024; Szudarski 2025). At the extreme end, this could take the form of AI-fabricated data and AI-fabricated manuscripts. At the other end, it could take the form of ‘cultural nudging’, where AI subtly nudges authors, reviewers, or journal editors towards decisions that represent one specific world view, unintentionally influencing published works and thereby contaminating our knowledge base (Farangi et al. 2024; Hetzscholdt 2024). In the field of applied linguistics, a mixed-methods study by Farangi and Khojastermehr (2024) found that applied linguistics scholars were engaging in AI-related QRPs, and that these were, in part, related to insufficient guidance and policies on the appropriate use of AI.

At the same time, GenAI is being integrated into researchers’ knowledge production and dissemination processes (Farangi et al. 2024) and is being utilised by journal reviewers to support their review processes (Ebadi et al. 2025). The ability of AI to automate research tasks (e.g., Morgan 2023; Naeem et al. 2025), the advent of specialist research tools and databases (e.g., OpenAI’s Deep Research; SCOPUS AI), and the integration of AI-assistive features into major computer-assisted qualitative data analysis software (CAQDAS) popular amongst applied linguistics researchers, such as MAXQDA and NVivo, create a complex terrain for authors, reviewers, and journal editors to navigate (Tlili et al. 2025). Indeed, the development and adoption of these tools have outpaced both our understanding of their ethical appropriateness and empirical studies exploring their efficacy for applied linguistics research (Belcher 2025).

Applied linguistics, a discipline characterised by diverse research methodologies and publication formats, currently lacks established norms for the ethical integration of GenAI in research and writing practices (Belcher 2025; Moorhouse and Nejadghanbar 2025; Szudarski 2025). While researchers are beginning to use these tools, evidence suggests that their usage often goes unreported in manuscripts, potentially compromising research integrity through questionable practices such as incomplete methodology disclosure or inadvertent deception (Farangi and Nejadghanbar 2024; Larsson et al. 2023). Moreover, the implications of GenAI are likely to be discipline-specific, as its applications interact with unique epistemological and methodological traditions (Chapelle et al. 2024).

On the front line of GenAI’s impacts are applied linguistics journal editors. They need to decide what a journal’s GenAI policy is, how to communicate it to authors, reviewers, and readers, whether GenAI has been used in a manuscript, whether it has been used appropriately, and what actions to take if they believe the use of GenAI was inappropriate (Casal and Kessler 2023; Tlili et al. 2025). Editorships can often feel isolating (Roth 2006; Rushby 2015; Stankiewicz 2017), as editors frequently act solo or with a small editorial team. It is valuable, then, to bring their voices together to understand their perceptions of and reactions to the development of GenAI and its role in research, writing, and editorial processes. The study addresses the following research question: “How do applied linguistics journal editors perceive and react to the use of Generative Artificial Intelligence (GenAI) in academic writing for publication?” By examining the views of applied linguistics journal editors, this research aims to inform the development of discipline-specific norms and standards for GenAI use in academic writing for publication. Such standards are critical not only to fostering transparency and ethical practice but also to shaping the field’s response to technological innovation in a way that advances knowledge production responsibly (Szudarski 2025; Tlili et al. 2025). This paper addresses an urgent gap in the literature, offering timely insights into how applied linguistics journals can navigate the evolving landscape of GenAI to ensure its potential is harnessed ethically and effectively.

2 Literature review

2.1 The role of journal editors

Journal editors and editorial staff, encompassing roles such as (co-)editors-in-chief, managing editors, associate editors, section editors, and book review editors, are pivotal to the academic publishing process (Silver et al. 2023). The Committee on Publication Ethics (COPE), whose membership includes numerous journals, has outlined the essential responsibilities of editors in its Code of Conduct and Best Practice Guidelines for Journal Editors (COPE 2011). This document establishes that editors bear ultimate accountability for all content published in their journals. To fulfil this obligation, editors are required to:

  1. Strive to meet the needs of readers and authors;

  2. Continuously enhance their journals;

  3. Implement processes to ensure the quality of published material;

  4. Champion freedom of expression;

  5. Maintain the integrity of the academic record;

  6. Prevent business needs from compromising intellectual and ethical standards;

  7. Publish corrections, clarifications, retractions, and apologies as necessary.

Beyond these mandatory standards, the guidelines delineate best practices, underscoring the multifaceted nature of editorial responsibilities (COPE 2011). Editors serve as gatekeepers, ensuring rigorous quality control by filtering out substandard submissions, selecting expert reviewers, and making publication decisions. They oversee the peer review process, liaising with reviewers and authors to enhance manuscript quality. Additionally, editors collaborate with publishers, societies, and editorial boards to set the journal’s strategic direction, including the formation of editorial boards, articulation of journal vision and mission, and development of policies governing manuscript preparation, declarations, and peer review. These responsibilities mean they cannot avoid the impact of GenAI on academic publishing (Casal and Kessler 2023). Publisher and journal policies about AI use, drafted after the realisation of ChatGPT’s impact on academic publishing, reaffirm editors’ responsibility for safeguarding the curation process. For example, Elsevier’s AI policy for journals, while explicitly stating that editors cannot use AI for any aspect of their editorial duties, also states that editors are “responsible and accountable for the editorial process, the final decision and the communication thereof to the authors” (Elsevier 2025, para. 4). These responsibilities make editors accountable not only for devising and implementing GenAI policies but also for ensuring authors abide by the publisher’s and journal’s policies on the use of GenAI. If questionable uses of AI are found in published papers, this could tarnish the journal’s reputation and possibly lead to retractions, leading the field to question the editor’s competence. The number of retractions of AI-generated articles is rising (Van Noorden 2023). Whether applied linguistics journal editors feel confident and competent in this responsibility is not well understood.

The perspectives of applied linguistics journal editors on GenAI are not widely known. One survey study, by Silver and colleagues (2023), explored their views regarding research ethics and academic publishing. Their findings identified a shortage of expert reviewers as the most pressing ethical issue, with additional concerns including plagiarism, redundant publications, and inappropriate citations. Lesser issues included editorial interference and image manipulation. Interestingly, the survey revealed gaps in editors’ awareness of specific ethical challenges, such as unacknowledged writers and inappropriate acknowledgements, indicating the need for more comprehensive guidance. Notably, the study by Silver et al. (2023) did not address the role of artificial intelligence (AI) in academic publishing, as its emergence as a significant factor was just beginning. A recent editorial invited various technology education journal editors to share their voices on academic integrity in the GenAI era (Tlili et al. 2025). While highlighting the complex picture AI creates for editors, their main call was for more coherent and practical guidance for authors, reviewers, and editors on the use of GenAI, rather than what they currently see as ‘vague advice’. In addition, they called for journals to develop guidelines and regulations that reflect the nature of their research and writing conventions.

2.2 Artificial intelligence and academic publishing

Generative Artificial Intelligence (GenAI) represents a transformative subset of AI technologies explicitly designed for generative tasks (Chan and Colloton 2024). Large Language Models (LLMs), such as ChatGPT, can perform a diverse range of natural language processing tasks without explicit pre-training, offering users unprecedented speed and contextual interactivity (Ali et al. 2024; Moorhouse 2024). This makes them capable of performing many tasks within the research and writing process that would traditionally have required humans. GenAI is increasingly embedded in widely used digital platforms such as word processors (e.g., Google Docs), academic databases (e.g., SCOPUS AI), and data analysis software (e.g., MAXQDA) (Mizumoto and Teng 2025). Specialist tools like Consensus, an AI-driven academic search engine, illustrate the growing sophistication of these technologies in advancing research efficiency.

The capabilities and rapid adoption of GenAI have disrupted established norms in academic publishing, particularly concerning integrity and trustworthiness (Tlili et al. 2025). The primary focus has been on potential risks, including misconduct and the erosion of academic integrity in the research and publication processes. For instance, Casal and Kessler (2023) found that applied linguistics scholars and journal editors struggled to differentiate between ChatGPT-generated abstracts and human-authored ones, underscoring the challenges of detecting AI-generated content. However, there have also been concerns about bias and the inability of LLMs to discern nuances within academic texts – potentially reducing diversity within writing for publication (Kuteeva and Andersson 2024) and leading to ‘cultural nudging’, where some world views are more evident in AI responses than others (Hetzscholdt 2024). Scholars such as Stahl and Eke (2024) believe the ethical risks of GenAI outweigh the benefits it affords. Issues such as ‘hallucinations’, where GenAI tools create factually inaccurate information, could undermine our knowledge base. Indeed, GenAI tools do not understand the data they are accessing or creating, and can present misleading, nonsensical, or biased information (Dumit and Roepstorff 2025). In fact, their reliance on statistical probability means they can amplify bias or misinformation where these dominate their training data (Kuteeva and Andersson 2024). While efforts to detect GenAI content have been made, detection tools have proven unreliable. Perkins et al. (2024) demonstrated how simple strategies could bypass AI detectors, creating the potential for both false accusations and undetected breaches of publication ethics.

In response to the developments of GenAI, COPE issued a position statement on GenAI tools, stipulating that AI cannot meet the criteria for authorship because it cannot take responsibility for scholarly work (COPE 2023). The guidelines require authors to transparently disclose their use of AI tools in the Materials and Methods or equivalent sections of their manuscripts. They also hold authors accountable for the integrity of content generated with AI tools, emphasising the need for responsible use. Yet, the guidance from journals has been inconsistent (Tlili et al. 2025). Bagenal (2024) highlights this inconsistency, noting that only 24 % of the top 100 global academic publishers provide explicit guidance on GenAI use, and fewer than half of these require authors to disclose their use of AI. Similarly, Ganjavi et al. (2024) report that while 87 % of leading scientific journals offer guidance, only 43 % outline specific disclosure criteria for GenAI use.

Even where policies exist, ambiguity persists. Authors often fear that disclosing their use of GenAI may negatively impact their submissions. Farangi and Nejadghanbar (2024) identified this concern in their study of QRPs among Iranian applied linguists. Participants expressed reluctance to disclose their use of AI tools, citing fears of biased peer review and rejection. One participant noted that declaring AI-assisted data analysis led to prolonged peer review, highlighting the potential for editorial bias against GenAI usage. Tan and colleagues (2025) found that people judge texts they believe to be generated by GenAI more critically than texts they assume to be written by humans. This lack of clarity exacerbates ethical concerns, particularly around unreported GenAI use, which risks undermining trust in academic publishing (Belcher 2025). At the same time, the integration of GenAI into editorial work, such as screening, peer review, or aiding editorial decisions, could harm academic relationships, as editorial work is essential in shaping academic communities and norms (Hosseini and Horbach 2023).

The picture is not all negative. As has already been discussed, there are real and tangible benefits GenAI can bring to the research process. Studies by Morgan (2023) and Naeem et al. (2025) show that, with human oversight, GenAI can assist with qualitative data analysis. These tools could increase rigour by being tasked with secondary data coding alongside trained human coders. They may be able to analyse large amounts of data that humans would struggle to process (Mizumoto and Eguchi 2023). For example, they could be trained to review large numbers of abstracts or other data for keywords or patterns. They can accurately and contextually translate texts (Guo et al. 2024). Indeed, a few authors have begun to document their process of integrating GenAI into their research (see Moorhouse and Kohnke 2024). Regarding writing, these tools can help authors present their work more clearly and coherently, providing greater access to journals for less proficient users of English. They can also increase access to academic content through multimodal or multilingual representations (e.g., podcasts, infographics, translations) (Yeo et al. 2025). Ebadi et al. (2025) explored the perspectives of twelve journal reviewers and found tangible benefits of using GenAI in their peer review processes, including automating tasks such as preliminary screening, plagiarism detection, and language verification. These uses could reduce workload and enhance consistency in applying review standards. By acknowledging and integrating these advantages into editorial practices, journals could develop policies that balance mitigating risks with leveraging AI’s capabilities (Tlili et al. 2025).

2.3 Addressing the research gap

To navigate these challenges and capitalise on the advantages of GenAI, applied linguistics requires discipline-specific guidance on the ethical and practical use of GenAI in academic publishing (Farangi et al. 2024). This study addresses a critical gap in the literature by examining how editors of applied linguistics journals perceive and respond to these developments. By exploring their perspectives, this research aims to contribute to the development of transparent, equitable, and actionable policies that uphold academic integrity while embracing the opportunities presented by GenAI.

3 Methodology

This study utilised an exploratory qualitative research design to investigate the research question. The rapidly evolving nature of GenAI technologies and the limited existing literature on their role in scholarly publishing (Bagenal 2024) made an exploratory approach essential. This approach enabled in-depth insights from experts actively engaged in the academic publishing process. Additionally, the study aligns with calls for more research on journal editors’ perspectives in applied linguistics (Silver et al. 2023).

3.1 Participants

The study involved ten editors from applied linguistics journals. Participants were selected using purposeful sampling to target individuals with the most relevant expertise. The inclusion criteria were:

  1. Holding a senior editorial position (e.g., editor-in-chief, associate editor) in a peer-reviewed applied linguistics journal.

  2. Having a minimum of two years’ experience in editorial roles to ensure familiarity with editorial processes and exposure to submissions involving GenAI tools.

Recruitment was conducted via email, with invitations outlining the study’s purpose and procedures. The sampling strategy involved drawing on the three authors’ familiarity and connections with applied linguistics editors of various journals (Cohen et al. 2018). We started with our inclusion criteria and then brainstormed editors we knew professionally through our roles as editors, reviewers, and authors. We developed a list of 15 editors who represented a diversity of geographical locations, journal types, and prestige (indicated by Social Science Citation Index [SSCI] and SCOPUS indexing). We contacted all 15 editors. Of the fifteen, ten consented to participate. They represented various journal types, including academic societies, commercial publishers, and university presses (see Table 1 for demographic details). The journals they edited were based in North America, Europe, Australasia, and Asia. All but one were indexed in SCOPUS, with eight indexed in SSCI. Specific details of each editor’s journal are not provided to protect their anonymity (Cohen et al. 2018). As we progressed through the data collection process, we observed a similar pattern in the participants’ responses to our interview questions. Therefore, we believed the sample size was sufficient to achieve data saturation, a standard benchmark in qualitative research for identifying recurring themes (Cohen et al. 2018). However, in line with calls to make researcher subjectivity and life capital visible within applied linguistics research (Consoli 2022), we acknowledge that our own editorial and professional networks shaped participant recruitment and the types of perspectives captured in this study.

Table 1:

Participants’ demographic information.

Pseudonym Gender Editorial title Publisher type
Maggie F Editor Commercial Publisher
Anson M Editor Commercial Publisher
Gary M Co-editor University Press
Nova F Co-editor-in-chief Commercial Publisher
Ruth F Editor Commercial Publisher
Osman M Editor Commercial Publisher
Paul M Editor-in-chief Academic Society
Patrick M Editor Academic Society
Azzurra F Editor Commercial Publisher
Sit Wing M Associate Editor Commercial Publisher

3.2 Data collection

Data were collected through semi-structured interviews conducted via Zoom or Microsoft Teams between August and September 2024. Semi-structured interviews offered the flexibility to explore participants’ perspectives while ensuring that all key themes were addressed consistently across interviews. While other methods, such as reviewing journal guidelines, would have provided valuable insights (see Yin et al. 2025 for an example of a cross-sectional study of GenAI guidelines in medical journals, as well as Yin and Chapelle 2025 for an exploration of the guidelines of applied linguistics journals), we decided to deploy interviews, as they could provide deep insight into the editors’ thinking as well as their direct experience of GenAI’s impact.

The interview guide was developed based on the research question and relevant literature with a focus on the following themes:

  1. Awareness, understanding, and experiences of GenAI tools in academic writing and their editorial roles.

  2. Perceived benefits and challenges of GenAI use in research and publication.

  3. Appropriate and inappropriate uses of GenAI in research and publication.

  4. Journal editors’ roles in relation to GenAI in academic publishing.

  5. Policies or guidelines regarding GenAI within their journals.

To refine the guide, the lead author reviewed the relevant literature, consulted the two collaborating authors, and uploaded a draft to GPT-4o for feedback. The prompt used was: “I am conducting a research project exploring applied linguistic journal editors’ perspectives of the use of GenAI in academic publishing. Can you provide some feedback on the clarity of the questions and the comprehensiveness of the guide?” Suggestions regarding overlapping questions, diverse GenAI experiences, additional questions, and clarifying concepts were reviewed and incorporated. For example, it suggested several of the questions were overlapping, stating: “There is some overlap in questions about perceptions (e.g., Q3, Q4, Q10, Q11) and challenges (e.g., Q6, Q13). Streamlining these could make the guide more concise.” These were reviewed, and appropriate changes were made (see Table 2). Throughout, human judgement, accountability, and oversight were prioritised (Tlili et al. 2025). It also suggested adding a field-specific question: “Do you think the use of GenAI in applied linguistics journals differs from its use in other academic disciplines? If so, how?” This question was added to the final guide. The finalised interview guide can be found in Appendix 1.

Table 2:

Examples of GPT-4o suggestions and researchers’ modifications.

Original questions Revised questions
– What were your initial thoughts or reactions when you first heard about the use of GenAI in academic publishing?

– How do you currently perceive the use of GenAI in academic publishing? Has your perception changed over time?
– What were your initial thoughts or reactions to the idea of using GenAI in academic publishing when you first encountered it?

– Has your perception of the use of GenAI in academic publishing evolved over time? If so, how, and what factors influenced this change?

Interviews lasted between 40 and 60 min and were audio-recorded with participants’ consent. Given the semi-structured nature, each interview followed a different flow; however, we covered all aspects of our guide during each interview. Conducting interviews via video-conferencing software facilitated participation from geographically diverse editors and accommodated their schedules (Guo et al. 2024). Audio recordings were transcribed verbatim using machine transcription software, with transcripts cross-checked by the researchers against recordings to ensure fidelity. To maintain confidentiality, identifiable information was removed, and pseudonyms were assigned to participants.

3.3 Data analysis

The study employed thematic analysis as the methodological framework for analysing data (Cohen et al. 2018). This approach is well suited to identifying, interpreting, and reporting patterns within qualitative data and is widely utilised in exploratory research. To develop a coding scheme, the three researchers selected one interviewee’s transcript to code (Maggie’s). Each researcher read and re-read Maggie’s interview transcript and coded it independently. We used a combination of inductive coding – allowing themes to emerge organically from the data – and deductive coding guided by the study’s research question. We then met to discuss our coding, combined our emerging themes into a table, and compared them with each other. Discrepancies were discussed, and a final coding scheme was developed (see Table 3 for the initial coding scheme). However, we continued to use inductive coding on the remaining transcripts to ensure no relevant themes were overlooked.

Table 3:

Initial coding scheme.

Theme Example codes
Uncertainty around the permissible use of AI hinders adoption – Perceived potential of AI

– Ethical concerns

– Lack of clear guidelines

– Personal hesitation

– Support editorial processes (e.g., finding reviewers)
Potential threats AI poses to knowledge-creation – Suspicion of AI-generated manuscripts

– Reliability of detection tools

– Issues with fictitious references

– Impact on knowledge integrity

– Lack of evidence of author’s declaring AI use
Insufficient/inappropriate policies, guidance and professional support/development from publishers – Publishers’ inaction

– Disconnect between policies and practices

– Need for industry standards
Impact on the peer review process – Difficulty finding reviewers

– Suspected use of AI by reviewers

– Questioning the value of peer review
Revisiting academic authorship – Blurred authorship lines

– Over-reliance on AI by authors

– Generic content

– Equity of access
A consistent framework and collaborative guidelines across journals are essential – Collaboration among editors

– Developing guidelines

After the meeting, we assigned two researchers to each remaining transcript. This allowed us to compare our coding. We continued to adapt our coding scheme as we analysed the data and revisited previously coded transcripts to identify salient patterns across participants. We then met to identify our final themes (see Table 4 for the final themes with example codes and data extracts). The change from Table 3 to Table 4 shows how we engaged in a reflexive and iterative process throughout the data analysis stage (Braun and Clarke 2006). By incorporating reflexivity throughout the design and implementation of this study, from participant selection to theme development, we align with a growing recognition in applied linguistics that reflexivity is a hallmark of ethical and rigorous research practice (Consoli and Ganassin 2023).

Table 4:

Final themes after analysis.

Theme Sub-themes Example codes Example data extracts
Challenging context of academic publishing Surge of submissions

Overloaded editorial processes

Difficult to find peer reviewers
“We have an explosion in the number of articles… and then we don’t have enough reviewers. Every editor I’ve spoken to says now you have to approach 7, 8, 9 people before you can get two reviewers.”
The role of GenAI in exacerbating existing pressures GenAI allows for rapid article production

Low quality research

Suspected use of AI by reviewers
“I’m much more worried about the impact of generative AI on the actual research… From an editorial point of view, because there’s nothing to stop people looking to get published… Just generated from absolutely nothing”
Responses to GenAI in academic publishing Policies and practices of GenAI use

Ambiguity in declaration of GenAI use

Responses to address challenges
Perceived potential of AI

Ethical concerns

Lack of clear guidelines

Personal hesitation
“But it’s a little bit tricky for me to interpret the policy because… There’s a grey line in the interpretation of the policy, a grey zone, I would say”
Concerns about GenAI in academic publishing Generic content and reduced originality

Disengagement from the research process

Bias and confidentiality in AI use

Undetectable use of GenAI

Equity and access to GenAI tools
Suspicion of AI-generated manuscripts

Reliability of detection tools

Impact on knowledge integrity

Lack of evidence of author’s declaring AI use

Over-reliance on AI by authors

Generic content
“How do I know if the author of this paper has really understood the data, has really a great understanding of what is in there and has analyzed it with rigor and certainty”

“One of our reviewers was looking at an article… clicked on it, and actually it didn’t exist… we’ve had to instruct the admin staff to do random checks of the DOI because it’s not possible for an admin person to click through 50 references.”
The potential and acceptable utility of GenAI in academic publishing GenAI in manuscript preparation

GenAI in the editorial process

The need for innovation in academic publishing

Collaboration efforts for clearer guidelines
Support editorial processes (e.g., finding reviewers)

Improve quality of writing

Screening manuscripts
“I think some reviewers are using GPT or AI tools to do the review because it’s just so painful having to do reviews.”

4 Findings

The findings reveal that all the applied linguistics journal editors acknowledged the transformative impact of GenAI on academic writing for publication, along with an urgent need to address its implications.

4.1 The context of academic publishing

The editors consistently emphasised the need to situate GenAI within the broader challenges of academic publishing. The data indicate that systemic pressures – such as increasing institutional demands for publication output – are driving a surge in submissions, often at the expense of quality. Gary lamented, “The market has driven people crazy to do everything they can to get published instead of really producing good knowledge.” He highlighted that many of the submissions he reviewed “did not add anything new to the field.” Similarly, Osman observed “an increase in submissions but not an increase in quality,” reflecting broader concerns about the diminishing rigour in academic contributions.

The strain on journal operations was also a recurring theme. Editors described difficulties in managing rising submission volumes and finding appropriate reviewers. Maggie noted, “We have an explosion in the number of articles… and then we don’t have enough reviewers.” Gary attributed this pressure in part to an “explosion of open access journals,” which require additional peer review resources. Ruth critiqued a publication-driven academic culture, stating, “Scholarship is about churning out articles [which] has affected generating knowledge.”

Despite these challenges, some participants observed emerging efforts to shift the paradigm. Osman pointed to examples of some European universities that have “abolished the rankings,” signalling a gradual move away from metrics-driven publication pressures. Within this complex environment, the editors saw their roles as custodians of quality, tasked with maintaining rigour, managing the peer review process, and steering their journals’ strategic direction.

4.2 The role of GenAI in exacerbating existing pressures

Editors expressed concern about how GenAI could intensify existing challenges in academic publishing. The ease with which AI tools can generate content risks exacerbating the tension between quantity and quality. Patrick voiced apprehension, noting that “Other editors are concerned because they have seen an increase in submissions, and they know that some of that might be due to the assistance [or] over-reliance on AI tools.” As Gary noted, this influx of submissions risks “contaminating the knowledge pool,” making it harder to discern meaningful contributions.

Moreover, the speed at which GenAI can generate manuscripts raised fears of overwhelming the peer review process. Ruth questioned whether editors could manage the volume: “People are worried about the pace… if someone is just churning out a bunch of articles with AI, then can we possibly keep up with checking all of those?”

These findings underscore a pressing need to address the implications of GenAI in applied linguistics publishing. The data suggest that while the technology holds promise for enhancing productivity, it also risks perpetuating systemic issues and compromising academic integrity. Editors are caught in the tension between enabling innovation and safeguarding the quality and rigour that underpin scholarly knowledge production.

4.3 The responses to GenAI in academic publishing

All the editors recognised the necessity of responding to the rapid development of GenAI in applied linguistics writing for publication. While they acknowledged the increasing sophistication of these technologies and their potential generative capabilities, most editors had not yet integrated GenAI into their editorial workflows. However, many had explored its utility in other professional domains. For example, Nova described using ChatGPT to develop rubrics based on assignment guidelines, Anson highlighted its use for supporting text translations in collaborative research, and Maggie reported employing GenAI tools “on a daily basis” in her teaching practices. In his article review work for another journal, Sit Wing mentioned using an AI tool to refine reviewer feedback, which he found “looked more professional.”

4.3.1 Policies and practices for GenAI use

Editors expressed a general awareness of their publishers’ policies on GenAI use, which typically focused on narrow aspects of manuscript preparation and emphasised the need for transparent declarations. However, many found these policies ambiguous, leaving substantial room for interpretation. Paul observed that “journals scrambled to pull together statements” following the emergence of ChatGPT but noted these “haven’t been updated since.” He further critiqued these guidelines for their narrow focus on writing, with “very little related to whether the authors used AI within their research processes.” For some journals, these policies were dictated by the publisher with limited input from the editors themselves and little consideration of discipline-specific needs, such as those of applied linguistics. These policies aligned with some editors’ perceptions but not others. For example, Nova characterised her publisher’s policy as “very restrictive,” explaining that “authors are not allowed to do anything [with AI] except for grammar checking,” which she found incredibly frustrating. She linked the publisher’s reluctance to allow some uses of GenAI to possible concerns regarding medical journals, which may need to take a more cautious approach to GenAI use. She argued that in applied linguistics researchers might be able to take more risks, as a study might not be as “high-stakes” as one reporting on the efficacy of “a new drug treatment”.

In contrast, other editors (e.g., Azzurra) reported that their publishers allowed the use of GenAI in research methodology and data analysis, provided such use was declared. These policies aligned with the COPE guidelines on AI use, which emphasise transparency and author accountability for any content generated with AI tools (COPE 2023). Yet, they still struggled to interpret the policies for their specific journals. Notably, several editors highlighted the absence of specific guidelines for their editorial roles. Osman remarked that while authors had instructions, editors themselves had no formal policies to follow.

4.3.2 Ambiguity in declarations of GenAI use

Despite policies requiring transparency, editors reported limited evidence of authors declaring their use of GenAI. Many suspected that authors were employing these tools without disclosure. Ruth could recall only one instance where an author explicitly declared using GenAI. The other editors could not recall any examples. Nova pointed to ambiguities around what constitutes AI use and what should be declared. She stated, “I have not come across a manuscript where AI use had been declared.”

Several editors shared cases where they suspected GenAI involvement in manuscript preparation. Anson described encountering a manuscript that cited a fictitious publication supposedly co-authored by himself and a colleague. Further investigation revealed other “telltale signs,” such as the overuse of certain AI-generated phrases like “delved” and other fabricated references. Similarly, Maggie recounted a reviewer identifying a non-existent citation in a submitted manuscript. Paul noted that he had rejected submissions during initial screening when “it was clear that GenAI had been used to put together the whole thing,” though these were typically of low quality and would likely have been rejected regardless.

4.3.3 Responses to address challenges

To mitigate the risks of GenAI misuse, some journals have implemented measures such as requiring authors to include Digital Object Identifiers (DOIs) in their reference lists to verify the authenticity of citations. Patrick explained, “Some journals now request that authors submit the DOIs for each paper, and they do a random cross-checking of each paper before there’s any other review.” However, such measures increase the administrative workload for editorial teams, as Maggie noted.

Editors emphasised the importance of transparency in AI use, with Anson stating, “It’s very important for authors to declare the use of AI.” Participants agreed that clearer policies are needed to define acceptable uses of GenAI and to incentivise declarations during the submission process. Greater clarity would reduce ambiguity and foster trust between authors, editors, and readers.

4.4 Concerns about GenAI in academic publishing

The editors raised several concerns about the integration of GenAI into applied linguistics research and publishing. These included fears about a growing prevalence of generic content, reduced originality, disengagement from the research process, challenges to research integrity, and issues of equity. Additionally, there was caution about how GenAI might be used in editorial work.

4.4.1 Generic content and reduced originality

Editors consistently expressed apprehension that GenAI might exacerbate the trend toward generic and unoriginal submissions, a problem they noted had emerged even before the release of tools like ChatGPT. Paul highlighted cultural pressures that prioritise quantity over quality, stating, “There’s nothing to stop people looking to get published. And we know these people exist. There are thousands of people just padding their CVs with papers that are essentially just generated from absolutely nothing.” Gary echoed this concern, suggesting that a lack of originality could “contaminate the knowledge pool,” making it harder to identify meaningful contributions. Ruth contextualised the issue within modern metric-oriented appraisal systems: “It’s not really about building knowledge, you know, the science of our work. But we also know, come to annual appraisal, proposal, and promotions, you know. That’s what it’s about”. The focus on publishing over knowledge dissemination is likely to be made worse by the development of tools that can automate all or part of the research and writing process. The editors mentioned that this also makes it more difficult for them to recognise the studies that merit review and the opportunity to be published in their journals. GenAI may also shift research priorities in applied linguistics. For example, Paul, a corpus linguistics researcher, suggested that “Generative AI is both a natural progression from the work that I was doing, as well as, in a way, a perceived threat”. He recalled a recent conference where one topic of discussion was, “Do we still need this conference?” So as well as reducing originality, GenAI could deliver a shock to established disciplines and research domains, and it is likely to affect different disciplines and methodologies differently.

4.4.2 Disengagement from the research process

A recurring theme was the fear that reliance on GenAI might reduce researchers’ engagement with critical aspects of the research process, including data analysis and literature engagement. Sit Wing summarised this concern: “It will be a disaster if researchers become lazy and do not really engage with the data or read the literature.” Ruth raised a similar issue regarding automated data analysis, questioning, “How do you know what is representative of the data?” Paul further emphasised this risk, asking, “How do I know if the author of this paper has really understood the data, has really understood what is there and has analysed it with rigor and certainty?” These concerns suggest that the overuse of AI tools without meaningful human oversight could undermine scholarly rigour and reduce the authenticity of research contributions.

4.4.3 Bias and confidentiality in AI use

Concerns about biases embedded in GenAI models and confidentiality risks were also raised. Patrick highlighted that “[GenAI tools] carry certain biases and ideological assumptions that are built-in that can affect the way they interpret data.” He further noted that these tools could compromise participant privacy, adding, “When the data gets fed into a large language model, there are concerns about privacy and confidentiality. Participants may not know what they signed up for and that their data may be added to a larger pool of data.”

4.4.4 Undetectable use of GenAI

Several editors expressed unease about the inability to detect GenAI use in qualitative research processes, particularly in methods like thematic analysis or ethnography. Azzurra noted that in the field of applied linguistics and language teaching, “We rely on thematic analysis, ethnography, and qualitative data analysis that lends itself quite well to almost invisible use of AI; and it’s the invisible use that I’m a bit more worried about. It is impossible to see [the use of AI] unless the authors declare how they used it in the methodology.” These genres may differ from those of other academic fields: in applied linguistics, the human perspective and interpretation are central to the research methodology. Such undetectable uses can erode trust between authors, editors, and readers. Azzurra did note, however, that AI writing often uses the same “phraseology” and can make the writing feel “flat”. Anson added, “If we feel that an author has done more than improve the writing and they use AI to do the thinking,… then the editor should have the right to reject the paper.”

4.4.5 Equity and access to GenAI tools

Editors also highlighted disparities in access to GenAI tools, which reflect broader systemic inequalities in academia. Gary remarked, “The disparity in access to AI tools is not just a technical issue; it reflects broader systemic inequalities in academia.” Nova elaborated, “The disparities in access to AI tools can hinder collaboration and knowledge sharing among researchers from different backgrounds.” Osman called for initiatives to address these inequities, arguing, “There is a need for initiatives that provide equitable access to AI resources for all researchers.”

4.5 The potential and acceptable utility of GenAI in academic publishing

The editors’ responses highlighted considerable ambiguity regarding the potential and acceptable uses of GenAI in academic publishing. This uncertainty spanned various stages of the publishing process, including manuscript preparation, editorial work, and peer review.

4.5.1 GenAI in manuscript preparation

A consensus emerged among the editors that GenAI’s use for improving language quality is acceptable and even desirable, as it could address inequities arising from the dominance of English in scholarly publishing. Anson, while discussing the challenge of academic writing for many L2 applied linguistics researchers, remarked, “If the tools can help improve the quality of writing, then I would appreciate that kind of help a lot.” Similarly, Gary supported GenAI-assisted language editing and suggested that such usage does not necessarily require a formal declaration. Paul observed that the quality of writing in submissions had improved in recent years, which he attributed to advancements in AI-powered tools such as Grammarly.

Beyond language editing, editors diverged on other potential uses of GenAI in manuscript preparation. Nova and Maggie endorsed AI for brainstorming, citing their own positive experiences. Sit Wing argued that authors’ transparency about GenAI use was key, stating, “If the author declared they used AI in a certain way, then that would be legitimate as the publisher’s policy says that you have to declare and take responsibility.” Conversely, Osman suggested that instead of requiring a declaration, authors should include detailed descriptions of how GenAI was used in the methods section. He argued, “Transparency in how GenAI was used in the methodological process” would demonstrate its integration and value to the research design.

4.5.2 GenAI in the editorial process

While human oversight remained central to editors’ views on the editorial process, they identified potential applications of GenAI to alleviate administrative burdens. Maggie suggested GenAI could assist with the initial screening of articles to ensure they meet submission criteria. She also proposed that with sufficient training and testing, AI could play a role in peer review, particularly given the challenges of finding qualified reviewers in what she described as an “unsustainable” situation. Paul proposed using AI to identify suitable reviewers, although he cautioned that actual reviews should remain the domain of human experts.

Despite these possibilities, editors were sceptical about GenAI’s role in the peer review process itself, citing ethical concerns. There was speculation that some reviewers might already be using GenAI tools due to heavy workloads. Maggie noted, “I think some reviewers are using GPT or AI tools to do the review because it’s just so painful having to do reviews.” While she offered no explicit evidence for this claim, it likely stems from a general scepticism about authorship and whether a text (e.g., a manuscript or review report) was written by a human, a machine, or a human-machine collaboration.

4.5.3 The need for innovation in academic publishing

Editors recognised that GenAI could drive innovation in academic publishing, but they emphasised that its uses required further exploration. Nova suggested that “we’re stuck in the 20th century in terms of the journal article” and called for new genres and journals that could experiment with innovative ways of disseminating knowledge. Osman suggested GenAI could allow applied linguistics scholars to reach wider audiences by prompting LLMs to summarise research articles or identify implications for specific audiences. He stated, “Simplifying for public outreach or practitioners is key… you could put a research paper in a LLM and say what’s the implications for me as an English language teacher.” Journals could potentially embed AI chatbots into their websites for readers to engage with the content dialogically.

The data revealed that applied linguistics editors view the field as lacking clear expectations and guidelines on GenAI use. Publishers’ existing guidelines were described as ambiguous and inadequate, leading to reduced honesty and transparency in manuscript submissions. From the editors’ perspectives, this ambiguity underscores the urgent need for clarity. Paul argued, “There is now a pressing need for comprehensive policies… to address the evolving challenges posed by AI.”

4.5.4 Collaborative efforts for clearer guidelines

Several editors called for greater collaboration among journal editors to address the challenges posed by GenAI. Osman advocated for “more collegial conversations amongst editors rather than something organized by publishers” and expressed willingness to contribute by writing an opinion piece on GenAI in applied linguistics publishing. Nova proposed forming “a panel of journal editors who can come up with a code of conduct,” emphasising the need to restore transparency and trust in the system. She argued, “We need to figure out a way to create transparency in the system again.”

The editors’ responses collectively reflect their concerns about the ambiguity surrounding GenAI use and their commitment to establishing a concrete path forward. They underscored the importance of clearer, discipline-specific guidelines and greater collaboration within the applied linguistics community to create a framework that promotes transparency, equity, and innovation in academic publishing.

5 Discussion

This study explored the perceptions of applied linguistics journal editors regarding the use of GenAI in academic writing for publication. As gatekeepers to knowledge dissemination, journal editors hold ultimate accountability for the integrity of published work (COPE 2011). Their views and responses to GenAI have the potential to shape journal policies and how they interact with authors and reviewers regarding GenAI (Hosseini and Horbach 2023), and to influence the wider field’s attitudes towards using GenAI tools in research, writing, and publication processes. The findings provide insights that could help address researchers’ fears about declaring their use of GenAI, particularly given concerns about editors’ perceptions (Farangi and Nejadghanbar 2024).

5.1 Challenges of GenAI in an already-pressured landscape

The study shows that journal editors see the development of GenAI as exacerbating an already challenging publishing context. Applied linguistics journal editors are under considerable pressure due to increased submissions and the challenge of finding qualified reviewers. This has strained the manuscript curation process, but, as the editors note, it also risks diluting the knowledge pool as more and more articles are added to the field. Journal editors are concerned that within the “publish or perish” culture (Kendall and Teixeira da Silva 2024; Yeo et al. 2025), applied linguistics scholars are incentivised to speed up their research activities and manuscript writing using GenAI tools. Yet the study suggests that the editors’ concern is not whether GenAI tools are used; rather, they are concerned about how the tools are used, hold ethical concerns about their use, and prefer that any use be transparent.

At present, the findings suggest that publishers, journal policies, and the wider publishing sector are creating ambiguity around the use of GenAI, which leads authors to take a cautious or deceptive approach and not declare their use of the tools. Similar issues have been found in the higher education context (e.g., Gonsalves 2024; Tan et al. 2025). Indeed, although the editors had identified the use of GenAI in submitted manuscripts, they had not observed scholars declaring their use of GenAI tools in their submissions. The reluctance to declare use is somewhat understandable if a submitting author is concerned that their manuscript may not be treated fairly by the editor after an honest declaration (Farangi and Nejadghanbar 2024). However, it speaks to wider issues of questionable research practices that are becoming prevalent in academic debates about publishing (Nejadghanbar et al. 2023). The editors highlighted that applied linguistics may be particularly at risk of undeclared GenAI use due to the kinds of problems the field addresses and the methodologies used. If anything, this increases scepticism and mistrust of scholarship in the field, damaging its reputation (Moorhouse and Nejadghanbar 2025; Szudarski 2025). As such, we argue that clarity is essential to ensure transparent use.

5.2 Ethical concerns and questionable research practices

The journal editors were concerned about the ways researchers may use GenAI. Using tools to support language accuracy seems uncontroversial, despite scholars raising issues of language standardisation and limited diversity in LLM responses (Hetzscholdt 2024; Kuteeva and Andersson 2024). However, other uses of GenAI, such as synthesising literature for the literature review or analysing data (Belcher 2025), were met with more caution. Editors warned that these uses may reduce researchers’ engagement with the process, potentially increase risks to participants, and introduce system biases that could affect studies’ findings. The understanding that AI use requires human oversight was paramount to the editors (Chen et al. 2024; Cotton et al. 2023; Dumit and Roepstorff 2025).

The cases described by Anson and Maggie, where authors submitted manuscripts with fictitious references and inaccurate information, highlight the potential for questionable research practices to become more pervasive with GenAI. These examples underscore the urgency of delineating acceptable and unacceptable uses of GenAI in research and publishing. Editors called for strengthening ethical training for researchers, particularly focusing on GenAI’s appropriate applications. This training should emphasise how responsible use of GenAI can improve research quality while discouraging practices that compromise the integrity of scholarly work (Ebadi et al. 2025; Farangi et al. 2024).

5.3 A call for discipline-specific guidance

The study highlights the need for applied linguistics editors to collaborate on developing discipline-specific guidance or a code of conduct that reflects the field’s diverse research traditions. Such efforts could include colloquia, panel discussions, and opinion pieces that lead to formal position statements on GenAI use. The discussion of GenAI use in academic publishing needs to draw on diverse stakeholders and consider the diverse methodological traditions of the field (Tlili et al. 2025). Applied linguistics research has unique characteristics that should be reflected in any guidance. As Chapelle et al. (2024) suggest, the focus “on language in context from relevant technical perspectives is a distinct value that applied linguistics research provides beyond methodologies offered by other fields of study” (p. 1). Therefore, guidance needs to recognise the problems the field addresses and its human-centred orientation and methodological focuses, and to treat GenAI as a set of tools that can enhance study quality in the field, not just speed up or automate processes at the expense of research traditions and quality (Moorhouse and Nejadghanbar 2025).

In addition, there is a need for methodological models and protocols that detail how GenAI can be integrated into research and writing processes. These would provide authors with a way to document their process and give editors, reviewers, and readers the ability to assess the appropriateness of GenAI use in a submitted manuscript. Such models and protocols could lead to more fine-tuned guidance on using GenAI in applied linguistics research, which, in turn, could incentivise researchers to declare their use. In essence, this means moving from a deficit view of AI use to a more strategic one, treating AI as a way to enhance the quality and rigour of the research and writing processes (Morgan 2023; Naeem et al. 2025), while acknowledging the complexity of GenAI and writing and promoting ways AI could help open up research to broader audiences by demystifying publication practices (Kuteeva and Andersson 2024). By leading these developments, the field can establish principled uses of GenAI that align with its rich and diverse research traditions, rather than relying on publishers or external stakeholders to dictate the terms.

GenAI represents a transformative force that demands action. Applied linguistics must balance its potential benefits – such as enhanced efficiency and improved language quality – with the risks it poses to research integrity and originality. Editors and researchers alike must collaborate to establish norms and frameworks that ensure GenAI’s responsible use. By proactively addressing these challenges, the field can leverage GenAI’s potential to advance knowledge while safeguarding the rigour and ethics of academic publishing.

6 Conclusions

This study provides the first exploration of applied linguistics journal editors’ perceptions of Generative Artificial Intelligence (GenAI) in academic publishing. It highlights the complex and rapidly evolving context in which editors operate, characterised by increased submission pressures, ambiguous policy guidelines, and the challenges posed by GenAI integration. The findings underscore an urgent need for refined and clear guidance on the acceptable use of GenAI in the research and writing process that addresses the rich and diverse research traditions of the applied linguistics field. Transparency and trust are foundational to academic publishing, and developing robust guidelines to support these principles must be a priority.

While this study offers valuable insights, its scope and size present limitations. The data were collected from a small, conveniently sampled group of editors at a single point in time, and as the field evolves rapidly, their perceptions may have shifted. Although efforts were made to include a diverse range of editors, the findings are not intended to be generalisable. Future research should expand this inquiry to include the perspectives of other key stakeholders, such as authors and reviewers, to develop a more holistic understanding of GenAI’s effects on academic publishing in applied linguistics. Exploring these perspectives will be essential for creating comprehensive, discipline-specific frameworks that address the ethical and practical challenges posed by GenAI while leveraging its potential to enhance research and publishing processes. By advancing this dialogue, the applied linguistics community can take proactive steps to shape the principled integration of GenAI, ensuring that its adoption strengthens the rigour, transparency, and equity of academic knowledge dissemination.


Corresponding author: Benjamin Luke Moorhouse, Department of English, City University of Hong Kong, M8092, Creative Media Centre, Kowloon Tong, Hong Kong SAR, China, E-mail:

About the authors

Benjamin Luke Moorhouse

Benjamin Luke Moorhouse (SFHEA) is an Associate Professor in the Department of English, City University of Hong Kong. He has extensive experience as a teacher educator and primary school English-language teacher. Benjamin’s research has appeared in international journals, including System, TESOL Quarterly, TESOL Journal, RELC Journal, and ELT Journal.

Sal Consoli

Sal Consoli is Lecturer (research) in Language Education at the Institute for Language Education at the Moray House School of Education. Previously, Dr Consoli worked at the University of Warwick, Newcastle University in the UK and at the Hong Kong Polytechnic University in China.

Samantha M. Curle

Samantha M. Curle (DPhil, FHEA, FRSA) is a Reader in Education (Applied Linguistics), Director of all MRes programmes in the Faculty of Humanities and Social Sciences, the Institutional Academic Lead for the South-West Doctoral Training Programme (SWDPT, University of Bath), and Associate Member of the English Medium Instruction Oxford Research Group (University of Oxford).

  1. Conflict of interest: We have no conflicting interests to declare.

  2. Research funding: This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

  3. Declaration of generative AI in scientific writing: During the preparation of this work the author(s) used ChatGPT and Grammarly in order to receive feedback on the interview guide as part of the tool development, and get suggestions on language and style. After using this tool/service, the author(s) reviewed and edited the content as needed and take(s) full responsibility for the content of the published article. We detail the use of the tools in the Methods section of the manuscript.

References

Ali, O., P. A. Murray, M. Momin, Y. K. Dwivedi & T. Malik. 2024. The effects of artificial intelligence applications in educational settings: Challenges and strategies. Technological Forecasting and Social Change 199. 123076. https://doi.org/10.1016/j.techfore.2023.123076.

Bagenal, J. 2024. Generative artificial intelligence and scientific publishing: Urgent questions, difficult answers. The Lancet 403(10432). 1118–1120. https://doi.org/10.1016/s0140-6736(24)00416-1.

Belcher, D. 2025. The promising and problematic potential of generative AI as a leveler of the publishing playing field. Journal of English for Research Publication Purposes 4(1/2). 93–105. https://doi.org/10.1075/jerpp.00025.bel.

Braun, V. & V. Clarke. 2006. Using thematic analysis in psychology. Qualitative Research in Psychology 3(2). 77–101. https://doi.org/10.1191/1478088706qp063oa.

Casal, J. E. & M. Kessler. 2023. Can linguists distinguish between ChatGPT/AI and human writing? A study of research ethics and academic publishing. Research Methods in Applied Linguistics 2(3). 100068. https://doi.org/10.1016/j.rmal.2023.100068.

Chan, C. K. Y. & T. Colloton. 2024. Generative AI in higher education: The ChatGPT effect. Oxon: Routledge. https://doi.org/10.4324/9781003459026.

Chapelle, C. A., G. H. Beckett & J. Ranalli. 2024. GenAI in applied linguistics: Paths forward. In Exploring artificial intelligence in applied linguistics, 262–274. https://doi.org/10.31274/isudp.2024.154.15.

Chen, L., M. Zaharia & J. Zou. 2024. How is ChatGPT’s behavior changing over time? Harvard Data Science Review 6(2). https://doi.org/10.1162/99608f92.5317da47.

Cohen, L., L. Manion & K. Morrison. 2018. Research methods in education, 8th edn. London: Routledge. https://doi.org/10.4324/9781315456539.

Consoli, S. 2022. Life capital: An epistemic and methodological lens for TESOL research. TESOL Quarterly 56(4). 1397–1409. https://doi.org/10.1002/tesq.3154.

Consoli, S. & S. Ganassin. 2023. Reflexivity in applied linguistics. New York: Routledge. https://doi.org/10.4324/9781003149408.

COPE. 2011. Code of conduct for journal editors. https://publicationethics.org/files/Code_of_conduct_for_journal_editors_Mar11.pdf.

COPE. 2023. Authorship and AI tools. https://publicationethics.org/guidance/cope-position/authorship-and-ai-tools.

Cotton, D. R. E., P. A. Cotton & J. R. Shipway. 2023. Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innovations in Education and Teaching International 61(2). 228–239. https://doi.org/10.1080/14703297.2023.2190148.

Dumit, J. & A. Roepstorff. 2025. AI hallucinations are a feature of large language model design, not a bug. Nature 639. https://doi.org/10.1038/d41586-025-00662-7.

Ebadi, S., H. Nejadghanbar, A. R. Salman & H. Khosravi. 2025. Exploring the impact of generative AI on peer review: Insights from journal reviewers. Journal of Academic Ethics. 1–15. https://doi.org/10.1007/s10805-025-09604-4.

Elsevier. 2025. Generative AI policies for journals. https://www.elsevier.com/about/policies-and-standards/generative-ai-policies-for-journals (accessed 29 April 2025).

Farangi, M. R. & M. Khojastemehr. 2024. Iranian applied linguists’ (mis)conceptions of ethical issues in research: A mixed-methods study. Journal of Academic Ethics 22(2). 359–376. https://doi.org/10.1007/s10805-023-09489-1.

Farangi, M. R. & H. Nejadghanbar. 2024. Investigating questionable research practices among Iranian applied linguists: Prevalence, severity, and the role of artificial intelligence tools. System 125. 103427. https://doi.org/10.1016/j.system.2024.103427.

Farangi, M. R., H. Nejadghanbar & G. Hu. 2024. Use of generative AI in research: Ethical considerations and emotional experiences. Ethics & Behavior. 1–17. https://doi.org/10.1080/10508422.2024.2420133.

Ganjavi, C., M. B. Eppler, A. Pekcan, B. Biedermann, A. Abreu, G. S. Collins, I. S. Gill & G. E. Cacciamani. 2024. Publishers’ and journals’ instructions to authors on use of generative artificial intelligence in academic and scientific publishing: Bibliometric analysis. BMJ 384. e077192. https://doi.org/10.1136/bmj-2023-077192.

Gonsalves, C. 2024. Addressing student non-compliance in AI use declarations: Implications for academic integrity and assessment in higher education. Assessment & Evaluation in Higher Education. 1–15. https://doi.org/10.1080/02602938.2024.2415654.

Guo, D., R. L. M. Ramos & F. Wang. 2024. Qualitative online interviews: Voices of applied linguistics researchers. Research Methods in Applied Linguistics 3(3). 100130. https://doi.org/10.1016/j.rmal.2024.100130.

Hetzscholdt, P. 2024. Is AI giving us more than we can or even should handle? Learned Publishing 37(1). https://doi.org/10.1002/leap.1593.

Horta, H. & H. Li. 2022. Nothing but publishing: The overriding goal of PhD students in mainland China, Hong Kong, and Macau. Studies in Higher Education 48(2). 263–282. https://doi.org/10.1080/03075079.2022.2131764.

Hosseini, M. & S. P. Horbach. 2023. Fighting reviewer fatigue or amplifying bias? Considerations and recommendations for use of ChatGPT and other large language models in scholarly peer review. Research Integrity and Peer Review 8(1). 4. https://doi.org/10.1186/s41073-023-00133-5.

Kendall, G. & J. A. Teixeira da Silva. 2024. Risks of abuse of large language models, like ChatGPT, in scientific publishing: Authorship, predatory publishing, and paper mills. Learned Publishing 37(1). https://doi.org/10.1002/leap.1578.

Kuteeva, M. & M. Andersson. 2024. Diversity and standards in writing for publication in the age of AI – Between a rock and a hard place. Applied Linguistics 45(3). 561–567. https://doi.org/10.1093/applin/amae025.

Larsson, T., L. Plonsky, S. Sterling, M. Kytö, K. Yaw & M. Wood. 2023. On the frequency, prevalence, and perceived severity of questionable research practices. Research Methods in Applied Linguistics 2(3). 100064. https://doi.org/10.1016/j.rmal.2023.100064.

McFall-Johnson, M. 2024. An AI-generated rat penis highlights a growing crisis that’s plaguing the publishing industry. Business Insider. https://www.businessinsider.com/fake-science-crisis-ai-generated-rat-giant-penis-image-2024-3.

Mizumoto, A. & M. Eguchi. 2023. Exploring the potential of using an AI language model for automated essay scoring. Research Methods in Applied Linguistics 2(2). 100050. https://doi.org/10.1016/j.rmal.2023.100050.

Mizumoto, A. & M. F. Teng. 2025. Large language models fall short in classifying learners’ open-ended responses. Research Methods in Applied Linguistics 4(2). 100210. https://doi.org/10.1016/j.rmal.2025.100210.

Moorhouse, B. L. 2024. Generative artificial intelligence and ELT. ELT Journal 78(4). 378–392. https://doi.org/10.1093/elt/ccae032.

Moorhouse, B. L. & L. Kohnke. 2024. The effects of generative AI on initial language teacher education: The perceptions of teacher educators. System 122. 103290. https://doi.org/10.1016/j.system.2024.103290.

Moorhouse, B. L. & H. Nejadghanbar. 2025. A response to Szudarski’s (2025) book review of ‘Vocabulary, corpus and language teaching. A machine-generated literature overview’. ELT Journal. https://doi.org/10.1093/elt/ccaf018.

Morgan, D. L. 2023. Exploring the use of artificial intelligence for qualitative data analysis: The case of ChatGPT. International Journal of Qualitative Methods 22. https://doi.org/10.1177/16094069231211248.

Naeem, M., T. Smith & L. Thomas. 2025. Thematic analysis and artificial intelligence: A step-by-step process for using ChatGPT in thematic analysis. International Journal of Qualitative Methods 24. 16094069251333886. https://doi.org/10.1177/16094069251333886.

Nejadghanbar, H., G. Hu & M. J. Babadi. 2023. Publishing in predatory language and linguistics journals: Authors’ experiences and motivations. Language Teaching 56(3). 297–312. https://doi.org/10.1017/S0261444822000490.

Perkins, M., J. Roe, B. H. Vu, D. Postma, D. Hickerson, J. McGaughran & H. Q. Khuat. 2024. Simple techniques to bypass GenAI text detectors: Implications for inclusive education. International Journal of Educational Technology in Higher Education 21(53). https://doi.org/10.1186/s41239-024-00487-w.

Roth, W. 2006. Editorial: On editing and being an editor. Cultural Studies of Science Education 1(2). 209–217. https://doi.org/10.1007/s11422-006-9024-y.

Rushby, N. 2015. Editorial: On being an editor. British Journal of Educational Technology 46. 681–683. https://doi.org/10.1111/bjet.12286.

Silver, R. E., E. Lin & B. Sun. 2023. Applied linguistics journal editor perspectives: Research ethics and academic publishing. Research Methods in Applied Linguistics 2(3). 100069. https://doi.org/10.1016/j.rmal.2023.100069.

Stahl, B. C. & D. Eke. 2024. The ethics of ChatGPT – Exploring the ethical issues of an emerging technology. International Journal of Information Management 74. 102700. https://doi.org/10.1016/j.ijinfomgt.2023.102700.

Stankiewicz, M. 2017. Editing as gardening. Studies in Art Education 58. 165–169. https://doi.org/10.1080/00393541.2017.1331092.

Szudarski, P. 2025. Vocabulary, corpus and language teaching. A machine-generated literature overview. ELT Journal. ccaf006. https://doi.org/10.1093/elt/ccaf006.

Tan, X., C. Wang & W. Xu. 2025. To disclose or not to disclose: Exploring the risk of being transparent about GenAI use in second language writing. Applied Linguistics. amae092. https://doi.org/10.1093/applin/amae092.

Tlili, A., M. Bond, A. Bozkurt, K. Arar, T. K. F. Chiu & P. ‘asher Rospigliosi. 2025. Academic integrity in the generative AI (GenAI) era: A collective editorial response. Interactive Learning Environments 33(3). 1819–1822. https://doi.org/10.1080/10494820.2025.2471198.

Van Noorden, R. 2023. More than 10,000 research papers were retracted in 2023 – A new record. Nature 624(7992). 479–481. https://doi.org/10.1038/d41586-023-03974-8.

Yeo, M. A., B. L. Moorhouse & Y. Wan. 2025. From academic text to talk-show: Deepening engagement and understanding with Google NotebookLM. TESL-EJ 28(4). https://doi.org/10.55593/ej.28112int.

Yeo, M. A., W. A. Renandya & S. Tangkiengsirisin. 2022. Re-envisioning academic publication: From “Publish or Perish” to “Publish and Flourish”. RELC Journal 53(1). 266–275. https://doi.org/10.1177/0033688220979092.

Yin, S. & C. A. Chapelle. 2025. A systematic examination of generative artificial intelligence (GenAI) use guidelines in applied linguistics journals. Research Methods in Applied Linguistics 4(3). https://doi.org/10.1016/j.rmal.2025.100227.

Yin, S., S. Huang, P. Xue, Z. Xu, Z. Lian, C. Ye, C. Li, M. Liu & P. Lu. 2025. Generative artificial intelligence (GAI) usage guidelines for scholarly publishing: A cross-sectional study of medical journals. BMC Medicine 23. 77. https://doi.org/10.1186/s12916-025-03899-1.


Supplementary Material

This article contains supplementary material (https://doi.org/10.1515/applirev-2025-0021).


Received: 2025-01-27
Accepted: 2025-07-14
Published Online: 2025-07-23
Published in Print: 2025-11-25

© 2025 Walter de Gruyter GmbH, Berlin/Boston
