
Permitting disclosed AI assistance in peer review: parity, confidentiality, and recognition

  • Anna Carobene
Published/Copyright: September 16, 2025

To the Editor,

Journals have moved swiftly to regulate authors’ use of artificial intelligence (AI), converging on three principles: AI systems are not authors; any use must be disclosed; and human accountability remains paramount [1], [2], [3]. By contrast, guidance for peer reviewers remains uneven. Many journals still require reviewers to certify that no AI tools were used, even as the same journals permit authors to use AI with disclosure. This asymmetry discourages transparency and may deter experts who could otherwise deliver timely, high-quality reviews responsibly augmented by modern tools.

My motivation is practical. Twice in recent months I was invited to review for high-impact journals and was required to declare that I had not used any AI tool to prepare my report. After completing each review, I informed the editors that I had used limited AI assistance for organization and language, uploaded no confidential material, verified all content, and accepted full responsibility, leaving them the option to accept the review or reassign it. This tension between policy and practice suggests the need for clear, enabling policies for reviewers.

Two existing lines of guidance point to a constructive path. First, publisher policies for peer reviewers, such as those of the Nature Portfolio, explicitly prohibit uploading manuscripts to generative AI systems and ask reviewers who used AI in any way to declare that use transparently in their report [1]. Second, the International Committee of Medical Journal Editors (ICMJE) 2025 update emphasizes confidentiality and accountability across submission and peer review and advises journals to require permission and safeguards when AI is used to facilitate a review [2]. Read together, these frameworks endorse disclosed and bounded assistance while protecting confidentiality and responsibility.

Within laboratory medicine, the debate is active and pragmatic. In Clinical Chemistry, Rifai urged the community not to “make [LLMs] a foe,” while highlighting confidentiality risks in peer review and the need for editor-level guidance; he also noted the equity dimension: language technologies may help colleagues with language-related disadvantages (e.g., dyslexia, non-native English), provided the expert remains fully responsible [4]. In parallel, Carobene et al. reviewed the rising use of AI in scientific publishing and drew attention to the under-recognition of reviewer labor, urging explicit acknowledgement and credit [5]. Together, these pieces argue for harmonized, transparent policies that support rigorous, human-led review enhanced by carefully constrained tools.

Beyond policy, the context of reviewing has changed. The number of invitations that many experts receive weekly has increased, while the spectrum of contributions submitted spans from highly selective journals to publications of uncertain reputation. Predatory or low-quality publishing practices complicate selection and increase the burden on legitimate editorial workflows [5]. Reviewers are expected to remain continuously current with fast-moving literature, to detect methodological flaws, and to verify reporting fidelity; these tasks are inherently labor-intensive. Structured approaches (e.g., reporting-guideline checklists) and validated tools can help reviewers allocate time to what matters most: study design, clinical validity, and interpretation [6]. Our recent CCLM opinion paper made this dual point clearly: AI offers efficiencies in drafting and reviewing but raises ethical and operational challenges; at the same time, reviewer contributions remain insufficiently recognized [5].

Accordingly, I respectfully propose that the Journal adopt and publicize an explicit policy enabling disclosed, limited AI assistance by peer reviewers, with three guardrails:

Strict confidentiality. Reviewers must not upload any manuscript text, figures, or confidential data to public generative AI systems. Where AI is used, it should be confined to the reviewer’s own notes, checklists, and wording of comments, or to publisher-vetted secure environments under contractual confidentiality [1].

Transparency and accountability. Each report should include a brief statement of AI use (tool, version, purpose, safeguards), with the reviewer accepting full responsibility for the content and judgments. Undisclosed use or confidentiality breaches should be handled as editorial ethics violations [2].

Scope and limits. Permissible assistance may include linguistic editing of the review text, reporting-guideline checklists, and bibliographic cross-checks; prohibited uses include uploading manuscript content to public AI tools, outsourcing evaluative judgments to AI, or introducing unverified citations [3], [7].

Such a policy would deliver three benefits.

  1. First, parity of standards: it aligns reviewer practice with author policy (disclosure plus human responsibility), closing an avoidable gap.

  2. Second, truthful disclosure: by permitting bounded use with disclosure, journals remove incentives for misrepresentation (e.g., inaccurate non-use attestations), shifting behavior from covert to transparent and auditable practice.

  3. Third, sustainability and quality: AI can reduce time spent on low-value tasks (formatting, wording, checklists), freeing scarce expert attention for the core of rigorous peer review (methodology, clinical relevance, and interpretation), in the context of growing submission volumes and concerns about predatory sources [3], [6].

Recognition and incentives are the other half of sustainability. Many publishers now offer reviewer-recognition pipelines (e.g., deposition of review activity to ORCID) and are expanding transparent peer review, which publicly archives the review file (anonymized or signed) [7]. The recent decision to make transparent peer review standard across Nature signals a broader move toward openness and accountability that is compatible with disclosed AI use under strict safeguards. These mechanisms acknowledge expertise, improve traceability, and can elevate the perceived value of high-quality reviewing.

As journals consider incentive models, the ERROR project offers a provocative proof-of-concept: a bug-bounty-style program that pays researchers to identify and document errors in published papers, with bonuses for verified findings [8]. While post-publication and not a substitute for editorial peer review, this model demonstrates that targeted, auditable rewards can mobilize expert attention toward quality assurance, which is exactly the outcome journals are seeking. Exploring bounded pilots (e.g., small honoraria for error-checking checklists, or recognition levels linked to verifiable quality indicators) could complement recognition programs and help align effort with impact.

In sum, enabling ethical, disclosed, and secure AI assistance for reviewers, while expanding recognition and carefully designed incentives, would harmonize policies across authorship and peer review, reduce incentives for concealment, and strengthen the rigor and sustainability of the scientific literature. Given the Journal’s leadership in this discussion, adopting explicit reviewer guidance and piloting secure workflows would provide a valuable signal to the field.


Corresponding author: Anna Carobene, Laboratory Medicine, IRCCS San Raffaele Scientific Institute, Via Olgettina 60, 20132, Milan, Italy, E-mail:

  1. Research ethics: Not applicable. This manuscript does not report studies involving human participants or animals and did not require Institutional Review Board approval.

  2. Informed consent: Not applicable.

  3. Author contributions: The author has accepted responsibility for the entire content of this manuscript and approved its submission.

  4. Use of Large Language Models, AI and Machine Learning Tools: OpenAI’s ChatGPT (GPT-5; accessed 8 September 2025) assisted with language editing and terminology checks. No confidential third-party, personal, or patient data were provided. The author reviewed, verified, and takes full responsibility for the content.

  5. Conflict of interest: The author states no conflict of interest.

  6. Research funding: None declared.

  7. Data availability: Not applicable.

References

1. Nature Portfolio (Springer Nature). Peer review policy and guidance: use of AI by peer reviewers – do not upload manuscripts; disclose any AI support. https://www.nature.com/nature-portfolio/editorial-policies/ai [Accessed 29 August 2025].

2. International Committee of Medical Journal Editors (ICMJE). Recommendations for the conduct, reporting, editing, and publication of scholarly work in medical journals. 2025. www.icmje.org/icmje-recommendations.pdf [Accessed 29 August 2025].

3. Flanagin, A, Bibbins-Domingo, K, Berkwits, M, Christiansen, SL. Nonhuman “authors” and implications for the integrity of scientific publication and medical knowledge. JAMA 2023;329:637–9. https://doi.org/10.1001/jama.2023.1344.

4. Rifai, N. Large language models for scientific publishing: please, do not make them a foe. Clin Chem 2024;70:468–70. https://doi.org/10.1093/clinchem/hvad219.

5. Carobene, A, Padoan, A, Cabitza, F, Banfi, G, Plebani, M. Rising adoption of artificial intelligence in scientific publishing: evaluating the role, risks, and ethical implications in paper drafting and review process. Clin Chem Lab Med 2024;62:835–43. https://doi.org/10.1515/cclm-2023-1136.

6. Cukier, S, Helal, L, Rice, DB, Pupkaite, J, Ahmadzai, N, Wilson, M, et al. Checklists to detect potential predatory biomedical journals: a systematic review. BMC Med 2020;18:104. https://doi.org/10.1186/s12916-020-01566-1.

7. Springer Nature Group. Transparent peer review now standard for Nature (press release). 2025. https://group.springernature.com/gp/group/media/press-releases/transparent-peer-review-now-standard-for-nature/27788498 [Accessed 29 August 2025].

8. Elson, M. Pay researchers to spot errors in published papers. Nature 2024;629:730. https://doi.org/10.1038/d41586-024-01465-y.

Received: 2025-08-29
Accepted: 2025-09-08
Published Online: 2025-09-16

© 2025 Walter de Gruyter GmbH, Berlin/Boston
