Open Access Article

The FAIR framework: ethical hybrid peer review

An erratum to this article can be found here: https://doi.org/10.1515/jpm-2025-0658
Published/Copyright: September 23, 2025

Abstract

Objectives

Traditional peer review faces critical challenges, including systematic bias, prolonged delays, reviewer fatigue, and lack of transparency. These failures violate the ethical obligations of beneficence, justice, and autonomy, hinder scientific progress, and cost billions annually in academic labor. Our objective is to propose an ethically guided hybrid peer review system that integrates generative artificial intelligence with human expertise while addressing the fundamental shortcomings of current review processes.

Methods

We developed the FAIR Framework (Fairness, Accountability, Integrity, and Responsibility) through systematic analysis of peer review failures and integration of AI capabilities. The framework employs standardized prompt engineering to guide AI evaluation of manuscripts while maintaining human oversight throughout all stages.

Results

FAIR addresses bias through algorithmic detection and standardized evaluation protocols, ensures accountability via transparent audit trails and documented decisions, maintains integrity through secure local AI processing and confidentiality safeguards, and upholds responsibility through ethical oversight and constructive feedback mechanisms. The hybrid model automates repetitive tasks including initial screening, methodological verification, and plagiarism detection while preserving human judgment for novelty assessment, ethical evaluation, and final decisions.

Conclusions

The FAIR Framework offers a principled solution to peer review inefficiencies by combining AI-enabled consistency and speed with essential human expertise. This hybrid approach reduces review delays, reduces systematic bias, and enhances transparency while maintaining confidentiality and editorial control. Implementation could significantly reduce the estimated 100 million hours of global reviewer time annually while improving review quality and equity across diverse research communities.

Introduction

The traditional peer review system remains central to academic publishing, intended to safeguard the validity, originality, and significance of scientific work. However, it is increasingly criticized for inefficiencies, lack of transparency, and structural bias, which delay dissemination and impair equity and scientific progress [1], [2]. Reviewer fatigue and inconsistent standards have further strained the system, contributing to publication bottlenecks and eroding trust.

Delays and unconstructive rejections can harm researchers’ careers and hinder timely medical advancements, violating beneficence-based obligations of the journals to contributors [2], [3]. Biases against authors from non-Western countries, early-career scientists, or non-native English speakers compromise justice-based obligations to all authors [4], [5]. To address these concerns, an ethical framework should guide peer review.

The current peer review system faces critical challenges. Peer reviewers presently donate their services without reimbursement. An estimated 20 % of researchers conduct up to 94 % of all peer reviews, with the top 5 % alone contributing 30 % of total review hours, amounting to 18.9 million hours in 2015 and billions of dollars of “free” donations by peer reviewers [6], [7], [8]. This disproportionate burden, combined with surging submission volumes and increasing specialization, has created persistent bottlenecks in the review process [6], [7], [9], [10]. Editors frequently struggle to identify qualified reviewers willing to provide timely, high-quality reviews, leading to prolonged turnaround times and delays in the dissemination of important findings [11], [12]. These inefficiencies directly impact scientific progress, particularly in fast-moving fields like medicine, where delayed publication may hinder clinical innovation or evidence-based policy decisions [2].

Recent innovations in generative artificial intelligence (GAI), particularly large language models like ChatGPT, Claude, Gemini, and Perplexity, offer promising support for peer review by streamlining repetitive tasks, potentially removing bias, enhancing consistency, and enabling faster initial evaluations [13], [14], [15]. Studies show that AI can reliably detect methodological inconsistencies and provide structured feedback, particularly during technical verification [16]. However, current models lack the depth to assess novelty, interdisciplinary relevance, or ethical nuance without human oversight [16], [17], [18], [19].

In this paper we propose a novel ethically guided peer review process. The FAIR framework (Fairness, Accountability, Integrity, and Responsibility) is an ethically sound hybrid model that leverages GAI to augment rather than replace human peer review. By integrating ethically guided AI tools and standardized prompt engineering with expert human judgment, FAIR aims to create a more equitable, transparent, and efficient peer review system while preserving the core values of academic publishing.

Failure of beneficence-based obligations in the traditional peer review process

The ethical obligation of beneficence requires that peer review actively promotes scientific progress and human welfare by ensuring timely dissemination of valid research. However, the lack of reliability in traditional peer review introduces unnecessary harm and undermines trust in the scientific process. Current review cycles usually stretch over months, delaying medical advancements that could improve or save lives. Peer review inefficiencies cost the scientific community billions of dollars annually in wasted academic labor and lost opportunities for medical progress [3], [7], [8].

“Reviewer fatigue” compounds the problem, with journals now sending many invitations to secure a single reviewer. These inefficiencies prevent critical knowledge from reaching clinicians and researchers who rely on it to make informed medical decisions. Peer review should prevent harm by ensuring rigorous, consistent, and unbiased evaluation of scientific work. However, inconsistency and bias remain among its most glaring failures. Identical manuscripts submitted to different reviewers often receive contradictory evaluations, and 30–40 % of published papers contain methodological flaws that should have been identified during review [20]. Arbitrary rejections and inconsistent standards damage careers, particularly for early-career researchers who may be penalized by subjective or unconstructive feedback [21].

Failure of justice-based obligations in the traditional peer review process

The ethical obligation of justice demands fair and equitable treatment for all researchers, irrespective of personal characteristics, institutional affiliation, or geographic location. Yet, peer review remains riddled with bias [22], with submissions from non-Western countries being rejected at significantly higher rates than equivalent studies from Western institutions, even when controlling for quality metrics [23]. Similarly, female researchers and early-career scientists face longer review times and harsher critiques than their male or established counterparts. Double-blind review was intended to mitigate bias, but reviewers often infer the origin of the authors [24]. These ingrained biases create an exclusionary system where scientific merit is secondary to institutional and demographic privilege. By failing to ensure justice, traditional peer review distorts the global scientific landscape, suppressing diverse perspectives and slowing innovation.

Failure of autonomy-based obligations in the traditional peer review process

The ethical obligation of respect for autonomy requires that researchers maintain intellectual sovereignty over their work and have transparency in the review process. However, opaque editorial decisions, unsubstantiated reviewer demands, and extended waiting periods limit authors’ ability to make informed decisions about their publications and careers [22], [25]. Furthermore, delays in peer review can prevent researchers from meeting grant deadlines, securing promotions, or establishing credibility in their fields [26]. In some cases, reviewers impose ideological or methodological biases that force authors to make substantial changes that alter their intended contributions. These power imbalances erode scientific independence and limit the diversity of thought necessary for innovation.

The FAIR framework: an outline for a hybrid ethical peer review

Several recent publications have explored the potential and challenges of integrating AI into peer review, offering valuable insights into how large language models might assist, augment, or at times complicate human judgment in scientific evaluation [11], [12], [27]. We propose a method to ethically improve the peer review process: FAIR, the Framework for AI-Integrated Hybrid Review (Fairness, Accountability, Integrity, and Responsibility), addresses critical ethical and practical shortcomings of traditional peer review by creating a structured system that combines artificial intelligence capabilities with human expertise. FAIR encompasses the core ethical principles that guide this approach, resulting in an equitable review process leveraging the strengths of both human expertise and GAI.

Fairness: The fairness component of FAIR focuses on eliminating systematic biases and ensuring equitable treatment of all manuscripts regardless of author characteristics or institutional affiliations. Bias in peer review is not an incidental failure but a structural problem requiring systemic solutions. FAIR establishes clear, unbiased evaluation metrics and audits outcomes to ensure that review decisions do not systematically disadvantage any group or individual. By leveraging GAI to detect patterns of bias, FAIR prevents reviewers from inadvertently favoring or disfavoring specific institutions, regions, demographics, or individuals. Through standardized evaluation protocols and algorithmic bias detection, FAIR creates a level playing field where research is judged solely on its scientific merit: by automating initial screening and ensuring consistent application of review criteria, it minimizes subjective influences related to an author’s identity, institutional affiliation, or geographic origin.
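To illustrate the kind of equity audit such algorithmic bias detection could perform, the sketch below is our own illustration rather than part of the FAIR specification: it compares group-level acceptance rates against the overall rate, and the `equity_audit` function name and the 10-percentage-point flagging threshold are hypothetical choices.

```python
from collections import defaultdict

def equity_audit(decisions, threshold=0.10):
    """Flag author groups whose acceptance rate deviates from the
    overall rate by more than `threshold` (absolute difference).

    `decisions` is a list of (group, accepted) pairs, e.g.
    ("region:non-Western", False)."""
    totals, accepts = defaultdict(int), defaultdict(int)
    for group, accepted in decisions:
        totals[group] += 1
        accepts[group] += int(accepted)
    overall = sum(accepts.values()) / len(decisions)
    flags = {}
    for group, n in totals.items():
        gap = accepts[group] / n - overall  # positive = favored group
        if abs(gap) > threshold:
            flags[group] = round(gap, 3)
    return overall, flags
```

In practice such an audit would also control for manuscript quality and use proper statistical tests rather than a fixed gap threshold; the sketch only shows the accounting step.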

Accountability: Transparency is critical in ethical peer review. Accountability establishes transparent processes where every review decision can be traced, documented, and justified to relevant stakeholders. It creates comprehensive audit trails and clearly defines responsibilities between AI systems and human reviewers to ensure that outcomes are explainable and defensible. FAIR ensures that every decision is traceable, documented, time-stamped, and justified. AI-assisted reviews provide structured, itemized feedback that editors and authors can evaluate, reducing arbitrary or opaque decision-making. Medical research and public health depend on timely access to validated findings. The COVID-19 pandemic highlighted the urgency of rapid knowledge dissemination, yet traditional peer review prolonged this process. FAIR addresses this inefficiency by streamlining initial screening, accelerating review times, and reducing reviewer burden while maintaining rigorous oversight. Faster, more consistent review processes lead to quicker publication of groundbreaking research, directly benefiting patient care and scientific progress.
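To make the audit-trail idea concrete, here is a minimal sketch of an append-only log in which every AI or human decision is time-stamped and attributed. This is our illustration; the class and field names are hypothetical, not prescribed by FAIR.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class AuditEntry:
    manuscript_id: str
    actor: str        # "ai" or "human" -- role distinction is explicit
    stage: str        # e.g. "initial_screening", "final_decision"
    decision: str
    rationale: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

class AuditTrail:
    """Append-only: entries can be recorded and read, never altered."""
    def __init__(self):
        self._entries = []

    def record(self, entry):
        self._entries.append(entry)

    def history(self, manuscript_id):
        """Chronological record of every decision on one manuscript."""
        return [e for e in self._entries
                if e.manuscript_id == manuscript_id]
```

Freezing the entries (`frozen=True`) and exposing only `record` and `history` mirrors the requirement that decisions be documented and justified but never silently rewritten.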

Integrity: Integrity preserves the confidentiality and ethical handling of manuscript data throughout the review process. FAIR employs secure, institution-specific technology and strict data governance protocols to maintain the trust that authors place in the scientific publishing system: secure AI models operate within publisher and journal infrastructure to prevent data breaches, and human reviewers oversee AI-assisted recommendations to preserve ethical integrity. The subjective nature of current peer review at times leads to unnecessary revisions and contradictory feedback that can derail promising research. FAIR standardizes reviewer criteria, ensuring that feedback is structured, specific, and aligned with ethical guidelines. This reduces arbitrary demands for revisions that do not improve the manuscript’s quality but instead reflect personal biases or preferences.

Responsibility: John Gregory, the 18th-century physician-ethicist, argued that medicine should be a public trust; in the same spirit, FAIR recognizes that peer review owes ethical obligations to both the scientific community and society at large. It establishes comprehensive guidelines and regular training for both authors and reviewers to ensure that all participants understand their role in maintaining scientific standards while fostering innovation and progress. Ethical oversight mechanisms ensure that AI tools enhance review processes without compromising author autonomy or imposing rigid standardization that limits scientific diversity. Authors deserve a transparent review process that respects their intellectual contributions. FAIR ensures that review criteria are clearly communicated, decisions are justifiable, and feedback is constructive. Authors benefit from a system where manuscript evaluations are not only fair but also designed to help improve their work rather than arbitrarily delay its publication.

Why the FAIR model?

The hybrid peer-review FAIR model represents an improved approach to peer review because it combines complementary strengths of humans and AI. Humans provide nuanced judgment on significance, innovation, and ethical implications that AI currently cannot fully assess. AI excels at mechanical aspects such as checking citations, methods consistency, and statistical analysis with greater speed and accuracy than human reviewers alone. Together as a hybrid, they address the growing complexity of research while maintaining human judgment where it adds the most value. Proper implementation with secure AI systems preserves confidentiality while enhancing the review process’s quality, speed, and fairness.

Potential risks and limitations of FAIR

Generative artificial intelligence (GAI) systems such as ChatGPT, Claude, Gemini, and Perplexity are increasingly being discussed and considered for integration into scientific workflows to support tasks such as language generation, document analysis, peer reviews, and structured evaluation of scholarly content [28], [29], [30]. In the peer review process, GAI can efficiently assess manuscripts by analyzing textual components against standardized criteria, offering enhanced speed, consistency, and scalability compared to traditional manual review alone [31], [32].

While FAIR addresses ethical shortcomings of traditional peer review, the integration of AI also introduces potential risks that extend beyond confidentiality. These include algorithmic opacity, which may obscure how decisions are made; AI hallucinations that could produce inaccurate assessments; and the risk of de-skilling human reviewers through overreliance on automated tools. To mitigate these concerns, FAIR includes safeguards such as human oversight of all AI-generated outputs, transparent prompt design, bias monitoring in training data, and structured reviewer training to preserve and enhance human evaluative judgment. These measures ensure that AI remains a tool to augment, not replace, human ethical responsibility in peer review.

Generative artificial intelligence (GAI) systems and standardized prompts in the FAIR framework

To effectively harness these capabilities, GAI systems must be guided by carefully crafted prompts: structured, context-aware instructions that direct the AI to evaluate specific elements of a manuscript [33], [34]. A standardized prompt design is a cornerstone of the FAIR approach, with the goal of promoting transparency, minimizing bias, and supporting reproducible peer review outcomes [35]. Within the FAIR framework, prompts serve as the operational bridge between human editorial judgment and AI-supported analysis, ensuring that evaluations align with journal standards, ethical principles, and disciplinary expectations [12]. To achieve both rigor and relevance, FAIR employs a library of modifiable prompts tailored to each journal’s scope and format. These prompts guide the AI through systematic assessments of key manuscript components, including methodological soundness, data presentation, interpretive balance, and conclusion validity, while remaining in alignment with editorial policies [11]. The prompt development process is both structured and adaptive, balancing uniformity with the flexibility needed to accommodate diverse research traditions [36]. Each element of the FAIR model (Fairness, Accountability, Integrity, and Responsibility) underpins specific aspects of prompt engineering, ensuring that AI-assisted peer review remains ethically grounded and scientifically robust [37].

In the FAIR framework, prompt engineering ensures that each prompt aligns with core ethical principles and supports transparent, unbiased, and constructive peer review.

  1. Fairness of prompt engineering begins with inclusive language and unbiased evaluation criteria that avoid privileging certain methods or regions; for instance, prompts assess “methodological appropriateness” rather than “statistical rigor” to respect diverse research approaches.

  2. Accountability in prompt engineering is ensured through modular, trackable prompts that provide itemized feedback and transparent criteria, allowing issues to be clearly identified and addressed.

  3. Integrity in prompt engineering is maintained by designing prompts that operate locally, flag ethical concerns, and preserve confidentiality while distinguishing tasks requiring human oversight.

  4. Responsibility in prompt engineering emphasizes developmental and respectful feedback, using prompts that offer constructive alternatives and avoid prescriptive or culturally narrow judgments, ensuring authors retain agency and benefit from a process that supports both quality and diversity in scholarship.

Journals implementing the FAIR framework may develop and continuously improve a library of standardized prompts that are tested and refined. These prompt libraries could include:

  1. Initial screening prompts that assess technical compliance, completeness, and adherence to journal guidelines and similarity indices to detect potential plagiarism

  2. Methodological evaluation prompts tailored to different research approaches

  3. Results verification prompts that check statistical analyses, validate the appropriateness of statistical methods, and verify data presentation

  4. Discussion assessment prompts that evaluate how findings are contextualized within existing literature

  5. Conclusion validity prompts that examine whether claims are supported by the presented evidence

Standardized prompts enhance consistency while allowing sufficient flexibility to accommodate diverse research traditions. By engineering prompts that systematically address each component of FAIR, the framework ensures that AI assistance enhances rather than undermines the ethical foundations of peer review (Table 1).

Table 1:

Application of FAIR pillars to AI prompt engineering.

Fairness
  Focus: Equitable treatment of all manuscripts; quick turnaround
  Prompt: Inclusive, bias-aware language; considers diverse methods and global contexts
  Safeguards: Bias detection algorithms; regular equity audits

Accountability
  Focus: Transparent and traceable decision-making
  Prompt: Modular prompts; structured, itemized AI feedback; audit trails
  Safeguards: Clear documentation of criteria; role distinction between AI and human reviewers

Integrity
  Focus: Confidentiality and ethical data handling
  Prompt: Local AI processing; conflict of interest and ethics flagging
  Safeguards: Encrypted data handling; non-transmission of content to external servers

Responsibility
  Focus: Constructive, author-respecting feedback
  Prompt: Balanced tone; suggestions instead of prescriptions; highlights both strengths and issues
  Safeguards: Cultural/disciplinary sensitivity cues; training for ethical feedback
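A prompt library of this kind could be as simple as a mapping from review stage to a parameterized template. The sketch below is our own illustration under the assumption that journals store modifiable templates per stage; the stage keys follow the list above, but the wording of each template and the `build_prompt` helper are hypothetical.

```python
# Hypothetical stage-keyed prompt library; the template wording is
# illustrative only and would be refined by each journal.
PROMPT_LIBRARY = {
    "initial_screening": (
        "Check this manuscript for technical compliance with the "
        "guidelines of {journal}: completeness of sections, reference "
        "format, and the reported similarity index."),
    "methodology": (
        "Assess the methodological appropriateness (not a single "
        "preferred standard of rigor) of the following {study_type} "
        "study, respecting diverse research traditions."),
    "results": (
        "Verify that the statistical methods are appropriate for the "
        "data and that results are presented consistently."),
    "discussion": (
        "Evaluate how the findings are contextualized within the "
        "existing literature; note both strengths and gaps."),
    "conclusions": (
        "Examine whether each claim in the conclusions is supported by "
        "the presented evidence; offer suggestions, not prescriptions."),
}

def build_prompt(stage, **params):
    """Fill a stage template; unknown stages fail loudly (auditability)."""
    return PROMPT_LIBRARY[stage].format(**params)
```

Keeping the templates in one versioned structure makes the accountability pillar easy to honor: every prompt actually sent to the model can be traced back to a library entry and its parameters.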

FAIR implementation

The implementation of FAIR may follow a structured approach focused on the four pillars and ongoing optimization. It could begin with pilot testing that compares traditional and FAIR reviews using metrics such as speed, consistency, and bias, informed by feedback from editors and reviewers [38]. Training and onboarding efforts provide clear guidance and standardized prompts tailored to each journal’s needs, reinforcing human oversight of AI tools [39]. Technological infrastructure supports secure, local AI processing, robust encryption, and comprehensive audit mechanisms [40]. Continuous improvement is maintained through prompt library updates, user feedback, and scheduled bias audits [41]. An emerging element of FAIR is editorial screening, where journals may soon use AI to generate structured preliminary quality assessments to support faster, fairer editorial decisions – while preserving transparency and editorial judgment [42], [43].
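Such a pilot comparison could track a handful of simple metrics per arm. As one sketch of how "speed" and "consistency" might be operationalized (our assumption; FAIR does not prescribe these measures), the function below computes mean turnaround time and the average spread of reviewer scores per manuscript:

```python
from statistics import mean, pstdev

def pilot_metrics(reviews):
    """`reviews`: list of dicts with 'days' (turnaround time) and
    'scores' (per-reviewer scores for the same manuscript).
    A lower mean score spread indicates more consistent reviewing."""
    return {
        "mean_days": mean(r["days"] for r in reviews),
        "mean_score_spread": mean(pstdev(r["scores"]) for r in reviews),
    }
```

Running this over the traditional arm and the FAIR arm of a pilot gives directly comparable numbers for the speed and consistency criteria mentioned above; bias would need the separate group-level audit.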

Accelerating FAIR peer review while potentially lowering burden and cost

The FAIR model offers a transformative opportunity to reduce delays, cut costs, and improve the integrity of peer review. By automating repetitive tasks such as screening and formatting checks, as well as including plagiarism detection prompts that assess similarity indices against existing publications and flag manuscripts exceeding predefined thresholds (e.g., >20 % similarity), FAIR shortens review cycles, allowing faster dissemination of valid research [42]. Its standardized prompts and AI-assisted structure promote consistency and reduce arbitrary rejections – saving valuable time for editors and reviewers [27]. In doing so, FAIR addresses the growing reviewer workload and reallocates resources toward editorial development and reviewer training [12]. Importantly, by embedding ethical safeguards and transparency at each stage, FAIR enhances trust and accountability while preserving confidentiality [11]. These efficiencies, combined with human oversight from start to finish, position FAIR as an ethically sound and financially responsible evolution of scientific publishing.
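The thresholding logic of such a plagiarism-screening prompt can be illustrated with a toy shingle-overlap measure. Journals in practice rely on commercial similarity indices, so the functions below are purely a hedged sketch, with the 20 % cut-off taken from the example figure in the text:

```python
def shingles(text, k=5):
    """All k-word windows of a text, lower-cased."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def similarity_pct(manuscript, source, k=5):
    """Share of manuscript shingles also found in `source`, as a percent."""
    a = shingles(manuscript, k)
    if not a:
        return 0.0
    return 100.0 * len(a & shingles(source, k)) / len(a)

def screen(manuscript, corpus, threshold=20.0):
    """Flag the manuscript if any corpus text exceeds the threshold."""
    worst = max((similarity_pct(manuscript, s) for s in corpus), default=0.0)
    return ("flag_for_editor", worst) if worst > threshold else ("proceed", worst)
```

Consistent with FAIR's human-oversight requirement, a flag routes the manuscript to an editor rather than triggering an automatic rejection.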

Peer review is estimated to consume over 100 million hours of researcher time globally each year, with the top 5 % of reviewers alone contributing over 18.9 million hours, equivalent to roughly $1.5 billion USD in unpaid academic labor [6], [7]. If FAIR reduces the average review cycle by just 7–10 days per manuscript, journals could process thousands of submissions faster annually, freeing up significant editorial capacity without compromising quality. These measurable efficiencies position FAIR not only as an ethical upgrade but also as a fiscally responsible innovation for the future of scientific publishing.
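The cited figures imply a back-of-envelope hourly value of reviewer labor that can be used to estimate savings. The arithmetic below is illustrative only: it inherits all the uncertainty of the underlying estimates, and the example journal volume and hours saved per manuscript are hypothetical inputs, not data from the text.

```python
# Figures cited above (estimates, not measurements)
top5_hours = 18.9e6        # hours reviewed by the top 5 % of reviewers
top5_value_usd = 1.5e9     # cited unpaid-labor value of those hours

implied_rate = top5_value_usd / top5_hours  # roughly $79 per reviewer hour

def annual_savings_usd(manuscripts, hours_saved_each, rate=implied_rate):
    """Value of reviewer time freed if each manuscript needs fewer hours."""
    return manuscripts * hours_saved_each * rate

# Hypothetical: a journal handling 2,000 submissions, saving ~3
# reviewer-hours each, frees several hundred thousand dollars of labor.
example = annual_savings_usd(2_000, 3)
```

Note that the 7–10 days cited above are calendar time, not reviewer hours; converting between the two requires further assumptions about effort per review.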

Addressing confidentiality concerns

Critics have argued that using generative AI in peer review may breach confidentiality; however, this concern overlooks critical safeguards and misrepresents how FAIR operates. First and foremost, humans remain in full control of the peer review process from prompt development to oversight of AI outputs to final editorial decisions. AI is never autonomous, never unsupervised, and never replaces human judgment. The FAIR framework mandates that all AI tools be locally integrated within journal systems using secure, non-transmitting infrastructure, ensuring that manuscript content is not shared externally or stored by third parties. Moreover, the concern ignores the reality that human reviewers already consult external tools and even colleagues, often without triggering equivalent scrutiny. When implemented with proper safeguards, AI can enhance, not undermine, confidentiality by standardizing data handling and limiting unnecessary human access. Finally, the ethical use of AI in peer review can improve transparency, reduce reviewer workload, and support more consistent and timely decisions, particularly when guided by strict human-led governance.
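One concrete safeguard consistent with this design is to strip author-identifying metadata before any text reaches the locally hosted model, limiting what the AI (and any log) ever holds. The sketch below is our illustration; FAIR does not prescribe these field names.

```python
# Hypothetical submission record; only non-identifying fields reach the AI.
IDENTIFYING_FIELDS = {"authors", "affiliations", "emails", "acknowledgments"}

def redact_for_ai(submission):
    """Return a copy of the submission with identifying fields removed,
    so the local model evaluates content, not provenance."""
    return {k: v for k, v in submission.items()
            if k not in IDENTIFYING_FIELDS}
```

Because the redaction happens before prompting, it serves fairness (the model cannot condition on author identity) and confidentiality (identifying data never leaves the editorial system) at the same time.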

Conclusions

The FAIR model offers a principled and practical response to the ethical, logistical, and quality-related shortcomings of traditional peer review. By combining AI-enabled efficiency with human judgment, it ensures that peer review is not only faster and more consistent but also fair, transparent, and ethically sound. At every stage, from prompt design to final editorial decision, human reviewers remain in full control, using AI as a tool, not a substitute. This hybrid approach upholds the core values of beneficence, justice, and respect for autonomy while reducing unnecessary delays and enhancing scientific rigor.

As academic publishing evolves, the focus is on how to integrate AI in a responsible manner. FAIR offers a structured and ethically grounded hybrid approach that maintains and enhances the integrity of peer review for the benefit of science and society.


Corresponding Author: Amos Grünebaum, MD, Northwell Health, Zucker School of Medicine, Hempstead, NY, USA, E-mail:

Acknowledgments

The authors thank the editorial teams and peer reviewers who provided insights into current challenges in academic publishing that informed the development of this framework.

  1. Research ethics: Not applicable.

  2. Informed consent: Not applicable.

  3. Author contributions: A.G., J.D., and F.A.C. contributed to the concept and design of the FAIR framework. A.G. and F.A.C. drafted the initial manuscript. J.D. provided critical revision for important intellectual content regarding ethical considerations in peer review. All authors have accepted responsibility for the entire content of this manuscript and approved its submission.

  4. Use of Large Language Models, AI and Machine Learning Tools: Generative AI tools (ChatGPT-4, Claude) were used to refine language clarity and assist with literature organization during manuscript preparation. All AI-generated content was reviewed, edited, and verified by the authors. The AI tools were not used for conceptual development, analysis, or generation of the core framework principles.

  5. Conflict of interest: The authors state no conflict of interest.

  6. Research funding: None declared.

  7. Data availability: Not applicable.

References

1. Tennant, JP, Vasilevsky, N, Graef, W, Jacobs, A, Pikas, C, Crick, T, et al. What are the barriers to transparency in peer review? F1000Research 2017;6:1151. https://doi.org/10.12688/f1000research.12037.1.

2. Guthrie, J, Jeffries, V, Knott, C, Cooper, C, Wilson, P. Why we need a fundamental re-think of the peer review process. J R Soc Med 2023;116:7–10. https://doi.org/10.1177/01410768221146416.

3. Smith, R. Peer review: a flawed process at the heart of science and journals. J R Soc Med 2006;99:178–82. https://doi.org/10.1177/014107680609900414.

4. Amano, K, Ali, K, Aminu, K, Alsheikh-Ali, A, Burchmore, R, Sheikh, A, et al. Research assessment systems may disadvantage local health researchers in low-income countries: a scoping review. BMJ Glob Health 2021;6:e006118. https://doi.org/10.1136/bmjgh-2021-006118.

5. Dadze, JK, Hoosen, A, Jafta, N, Mkhize, Z, Banda, S, Ncube, M, et al. Decolonising peer review: reflections of early career researchers from the global south. BMJ Glob Health 2023;8:e010795. https://doi.org/10.1136/bmjgh-2022-010795.

6. Kovanis, NK, Trice, AL, Cotton, RJ, Blom, JW, Gussekloo, J. The prevalence of non-author reviewers in scholarly publishing. PLoS One 2015;10:e0121013. https://doi.org/10.1371/journal.pone.0121013.

7. Aczel, B, Szaszi, B, Anderson, CJ, Kirby, M, Moore, S, Koudou, B, et al. A survey of guest editors reveals systemic flaws in the peer review process. PLoS One 2021;16:e0259849. https://doi.org/10.1371/journal.pone.0259849.

8. LeBlanc, J, Chen, J, Hardisty, MA, Ertl, M, Zickler, P, Naumann, M, et al. Estimating the total cost of researchers’ time spent on peer review in the United Kingdom. PLoS One 2022;17:e0266318. https://doi.org/10.1371/journal.pone.0266318.

9. Fox, CW, Burns, JM, Munshaw, KG. The contributions of ecologists to peer review. Bull Ecol Soc Am 2016;97:61–7. https://doi.org/10.1002/bes2.1224.

10. Diaba, B, Spielberg, CD, Sweet, K, Balunga, RE, Lwanga, J, Kambarami, R, et al. Characteristics of peer reviewers and their self-reported training needs: results of a global survey. BMJ Open 2021;11:e048174. https://doi.org/10.1136/bmjopen-2020-048174.

11. BaHammam, AS, Menezes, EV, Thomas, A. Artificial intelligence in precision health. Amsterdam: Elsevier; 2023.

12. Biswas, D, Sarkhel, JK, Brazdil, P. Artificial intelligence in peer review: opportunities, challenges, and ethical considerations. J Assoc Inf Sci Technol 2023;14. https://doi.org/10.1002/asi.24872.

13. Tang, T, Bamakan, SMH, Erfani, SM, editors. Artificial intelligence and machine learning in smart healthcare. Boca Raton: Chapman and Hall/CRC; 2023.

14. Costa, H, Dreier, J, Romao, X. Can ChatGPT contribute to the scientific peer review process? An exploratory study. Account Res 2023;30:549–57. https://doi.org/10.1080/08989621.2023.2219813.

15. Saad, M, Salah, A, Al-Jaboori, M, Al-Khaleefa, L, Allahham, A, Shukri, M, et al. ChatGPT: a comprehensive literature review. arXiv preprint arXiv:2311.16625; 2023.

16. Kim, Y, Choi, J, Oh, J. Can artificial intelligence help in peer review? A case study using statistical methodology assessment. Scientometrics 2022;127:5771–88. https://doi.org/10.1007/s11192-022-04479-7.

17. Yilmaz, E, Jijkoun, V, Asadi, N, editors. Artificial intelligence for information retrieval. Cambridge: Cambridge University Press; 2023.

18. Hosseini, M, Uzuner, O, Caragea, C. A survey of artificial intelligence approaches in scholarly writing assistance. Artif Intell Rev 2023;56:6787–832. https://doi.org/10.1007/s10462-023-10444-2.

19. Lee, D-H, Resnik, DB, Koo, M-M. The ethics of using artificial intelligence in biomedical research publication. Sci Eng Ethics 2023;29:753–63. https://doi.org/10.1007/s11948-023-00755-w.

20. Francois, BS, Durieux, P, Harambat, J. Quality of reporting of observational studies in pediatric nephrology: a methodological systematic review. J Clin Epidemiol 2020;128:55–66. https://doi.org/10.1016/j.jclinepi.2020.08.007.

21. Silbiger, NJ, Rossi, PM. To improve gender equity in peer review, we need to measure it. PLoS One 2022;17:e0266366. https://doi.org/10.1371/journal.pone.0266366.

22. Tomkins, A, Zhang, M, Heavlin, B. Reviewer bias in single- versus double-blind peer review. Proc Natl Acad Sci U S A 2017;114:12708–13. https://doi.org/10.1073/pnas.1707323114.

23. Tavoletti, G, Manzini, R, Dal Fiore, A. Patterns of research evaluation practices and their effects on knowledge diffusion: evidence from management scholars. Scientometrics 2021;126:199–228. https://doi.org/10.1007/s11192-020-03733-4.

24. Wennerås, C, Wold, A. Nepotism and sexism in peer review. Nature 1997;387:341–3. https://doi.org/10.1038/387341a0.

25. McIntosh, HM, Wigginton, B, Anderson, C, Letcher, T, Brown, G, Shubber, Z, et al. Authorship transgressions in health research: problems and solutions. BMC Med 2023;21:1–13. https://doi.org/10.1186/s12916-023-02793-1.

26. Vuong, QH. The limitations of the peer-review process and the costs of pursuing impactful articles. SAGE Open 2022;12. https://doi.org/10.1177/21582440221087317.

27. Zhang, L, Romero, AE, Bird, S, editors. Peer review: past, present and future. Cambridge: Cambridge University Press; 2023.

28. van Dis, EA, de Graaf, DL, Ramadan, J, Bollen, J, Moerman, EJ, Bertil, H, et al. ChatGPT: five priorities for research. Nature 2023;613:654–5. https://doi.org/10.1038/d41586-023-00105-7.

29. Korinek, A. Language models as economic tools. arXiv preprint arXiv:2302.10362; 2023.

30. Naddaf, SY, Kheradpisheh, SR, Yazdani, A. ChatGPT in scientific writing: a preliminary exploration of potentials and limitations. arXiv preprint arXiv:2302.04446; 2023.

31. O’Connor, S, Wieteck, B, Brereton, P, Kitchenham, BA. Using machine learning to support systematic literature reviews: a systematic review. In: Proceedings of the 20th international conference on evaluation and assessment in software engineering; 2016:104–13.

32. Himmelstein, DS, Romero, AE, Levernier, JG, Clark, OA, Brinton, LT, Hidary, J. SciRate: a comprehensive scholarly paper rating system. PLoS One 2011;6:e25791. https://doi.org/10.1371/journal.pone.0025791.

33. Hewing, J, Sattler, KU, Wiedemann, P. ChatGPT and knowledge graphs: potentials, limitations, and future prospects. arXiv preprint arXiv:2307.00933; 2023.

34. Joshi, K, Pandya, S, Bhatt, C, Doshi, K. Navigating the landscape of radiology research: a comprehensive review of artificial intelligence applications, challenges, and ethical considerations. Radiol Artif Intell 2024;6:e230075. https://doi.org/10.1148/ryai.230075.Suche in Google Scholar

35. Wang, Y. Towards trustworthy artificial intelligence: a perspective from the journal of medical systems. J Med Syst 2023;47:42. https://doi.org/10.1007/s10916-023-01948-4.Suche in Google Scholar

36. Klinger, R, Howard, J, Allen, JD, Goldstein, D, Balsmeier, B, Yali, A, et al.. Systematic review of machine learning in biomedical text classification: 20 years of research. BMC Med Inform Decis Mak 2024;24:15. https://doi.org/10.1186/s12911-023-02416-1.Suche in Google Scholar

37. Latona, RR, Rajkomar, A, Rudin, C, Obermeyer, Z. Fairness in artificial intelligence for medicine. NEJM 2024;AI:1. https://doi.org/10.1056/aifo2300065.Suche in Google Scholar

38. Farber, NJ, Dodenhoff, K, Gröneberg, DA, Islam, M, Mandl, KD, Zoller, T, et al.. Validation of artificial intelligence–supported peer review: randomized controlled trial. JMIR Med Educ 2023;9:e45595. https://doi.org/10.2196/45595.Suche in Google Scholar

39. Sanchez, S, Lewinski, AA, Nadkarni, PM, O’Brien, PC, Desai, T. Evaluating a machine learning tool to assist editors in prioritizing abstracts for full-text review in systematic reviews. J Am Med Inform Assoc 2023;30:10–18. https://doi.org/10.1093/jamia/ocac184.Suche in Google Scholar PubMed PubMed Central

40. Ghosh, B, Li, J, Berendt, B, Shokri, R, Kerschbaum, F, Zhang, Y, et al.. Evaluating local differential privacy in text processing. Artif Intell 2023;324:104005. https://doi.org/10.1016/j.artint.2023.104005.Suche in Google Scholar

41. Verharen, J, Connor, AM, Adams, J, editors. Bias and inequality in scholarly peer review. Abingdon: Routledge; 2023.Suche in Google Scholar

42. Bauchner, H, Fontanarosa, PB. Using artificial intelligence to improve peer review. JAMA 2023;330:2263–4. https://doi.org/10.1001/jama.2023.24458.Suche in Google Scholar

43. Gao, X, Howard, DM, Lupu, Y, Guggenberger, C, Dreisbach, C, Gallagher, P, et al.. Assessing the feasibility of using artificial intelligence to support editorial decision-making. JAMA Netw Open 2023;6:e2348549. https://doi.org/10.1001/jamanetworkopen.2023.48549.Suche in Google Scholar

Received: 2025-05-25
Accepted: 2025-07-18
Published Online: 2025-09-23
Published in Print: 2025-10-27

© 2025 the author(s), published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.