Article Open Access

Charting the Landscape of Artificial Intelligence Ethics: A Bibliometric Analysis

  • Jiaxuan Qiu

    Jiaxuan Qiu is a research fellow at the Guanghua Law School, Zhejiang University, specializing in international law and digital law. Her research interests include international AI regulation, AI ethics, and digital law.

    , Le Cheng

    Le Cheng is Chair Professor of Law, and Professor of Cyber Studies at Zhejiang University. He serves as the Executive Vice Dean of Zhejiang University’s Academy of International Strategy and Law, Acting Head of International Institute of Cyberspace Governance, Editor-in-Chief of International Journal of Legal Discourse, Editor-in-Chief of International Journal of Digital Law and Governance, Co-Editor of Comparative Legilinguistics (International Journal for Legal Communication), Associate Editor of Humanities and Social Sciences Communications, former Co-Editor of Social Semiotics, and editorial member of Semiotica, Pragmatics & Society, and International Journal for the Semiotics of Law. As a highly-cited scholar, he has published widely in the areas of international law, digital law and governance, cyber law, semiotics, discourse studies, terminology, and legal discourse.

    and Jin Huang

    Jin Huang is a professor at Zhejiang University, focusing on cloud security, big data security, vulnerability discovery, and offensive-defense technologies. Previously Senior Vice President and Chief Smart City Security Officer at DBAPPSecurity, he has led large-scale R&D teams and published nearly 200 invention patents. He has also contributed to national and industry standards, received multiple honors – including the 20th National Youth Post Expert – and continues to bridge academic research with real-world cybersecurity applications.

Published/Copyright: April 18, 2025

Abstract

Using bibliometric methods, this study systematically analyzes 6,084 AI ethics-related articles from the Web of Science Core Collection (2015–2025), capturing both recent advances and near-future directions in the field. It begins by examining publication trends, disciplinary categories, leading journals, and major contributing institutions/countries. Subsequently, co-citation (journals, authors, references) and keyword clustering methods reveal the foundational knowledge structure and highlight emerging research hotspots. The findings indicate increasing interdisciplinary convergence and international collaboration in AI ethics, with core themes focusing on algorithmic fairness, privacy and data security, ethical governance in autonomous vehicles, medical AI applications, educational technology, and challenges posed by generative AI (e.g., large language models). Burst keyword detection further shows an evolutionary shift from theoretical debates toward practical implementation strategies and regulatory framework development. Although numerous global initiatives have been introduced to guide AI ethics, broad consensus remains elusive, underscoring the need for enhanced cross-disciplinary and international cooperation. This research provides valuable insights for scholars, policymakers, and industry practitioners, laying a foundation for sustainable and responsible AI development.

1 Introduction

Artificial intelligence (AI) has emerged as a transformative force in modern society, driving scientific innovation and powering industrial development across various sectors (Jiang et al. 2022). Although the foundational ideas of AI can be traced back to Alan Turing’s (1950) “imitation game” (commonly known as the Turing Test), the 1956 Dartmouth Conference is widely regarded as the formal birth of AI research (Moor 2006). Since then, technological advancements – particularly in machine learning and deep learning – have continuously pushed the boundaries of AI, expanding its applications in fields ranging from healthcare and finance to transportation and education (Abduljabbar et al. 2019; Bahrammirzaee 2010; Chen et al. 2020; Jiang et al. 2017). AI’s accelerated progress has led to the phenomenon of “Intelligence Emergence,” exemplified by powerful generative models (Yao 2024). These technologies offer substantial economic and social benefits but have also given rise to pressing concerns about issues such as data privacy, algorithmic bias, and accountability. As AI becomes increasingly integrated into economic and societal frameworks, ethical considerations surrounding its development and deployment have gained significant traction, shaping policy agendas and prompting proactive governance measures worldwide.

Recognizing the ethical implications of AI, various international organizations and national governments have proposed guidelines and frameworks to ensure responsible AI practices. In November 2021, UNESCO published the Recommendation on the Ethics of Artificial Intelligence, providing a comprehensive normative foundation for AI ethics by emphasizing human rights, global collaboration, and sustainability (UNESCO 2021). In China, the 2021 publication of the Ethical Norms for the New Generation of Artificial Intelligence by the National New Generation AI Governance Expert Committee highlighted core principles such as fairness, privacy, and security.[1] In the U.S., the Department of Defense released its Ethical Principles for Artificial Intelligence, focusing on military and defense contexts and emphasizing responsibility, fairness, traceability, reliability, and governability.[2] Within the European Union, the Ethics Guidelines for Trustworthy AI, drafted by the High-Level Expert Group on AI, propose seven key requirements: human agency and oversight, technical robustness and safety, privacy and data governance, transparency, diversity and fairness, societal and environmental well-being, and accountability.[3] While these initiatives signal growing awareness and tangible efforts to regulate AI’s impact, current approaches often focus on specific ethical dimensions (e.g., privacy, security, or bias). As a result, consensus on a universal ethical framework remains elusive, and scholars and policymakers alike continue to grapple with the complexities of AI governance.

Academic interest in AI ethics has surged in recent years, producing numerous studies on ethical principles, fairness, bias, security, and other critical concerns (Hu et al. 2021; John-Mathews et al. 2022; Nilsson 2014; Ntoutsi et al., n.d.). However, existing literature reviews are usually confined to particular subtopics and adopt qualitative methods, offering fragmented insights rather than an overarching perspective on the discipline as a whole (Hagendorff 2020; Jobin et al. 2019; Liu et al. 2021; Mehrabi et al. 2022; Ntoutsi et al., n.d.; Zhang et al. 2021). This piecemeal understanding makes it difficult to map broad trends, identify emergent themes, or assess collaboration networks within AI ethics research. To address these limitations, this study employs bibliometric methods to analyze the landscape of AI ethics research. Bibliometrics applies basic or advanced statistical techniques to systematically organize and analyze data from published research within a given discipline – including citations, authors’ affiliations, keywords, discussion topics, and methods used – to evaluate and monitor that discipline’s progress (Belter 2015; Wallin 2005). Bibliometric methods have seen widespread application across fields including management, law, and linguistics (Li and Hu 2022; Li, Kit, and Cheng 2024; Zupic and Čater 2014). Numerous studies employ such approaches to clarify methodological frameworks, identify prolific and influential scholars or institutions, map knowledge structures, categorize research domains over time, reveal geographic patterns, spotlight specific research topics, and evaluate their levels of maturity. This study focuses on publications from 2015 through 2025 in the Web of Science Core Collection, a timeframe chosen to capture both the recent surge in AI ethics discussions and projections of how the field may evolve in the near future. Where possible, data on forthcoming or in-press articles for 2024–2025 have been included to reflect the most up-to-date scholarly work. By leveraging techniques such as co-citation analysis, keyword clustering, and thematic evolution mapping, we aim to provide a systematic and quantitative overview of this rapidly expanding field.

This study adopts bibliometric analysis to offer a comprehensive perspective on AI ethics research, shedding light on how the field’s themes have evolved over the past decade. It identifies influential authors, institutions, and regions that shape the field through citation and keyword analyses, revealing emergent topics and areas warranting deeper investigation. Additionally, these findings can guide policymakers, industry stakeholders, and researchers in developing ethically sound AI systems. The remainder of the paper proceeds as follows: Section 2 explains the data collection procedures and outlines the bibliometric methodology, including the selection criteria for publications and the analytic techniques employed. Section 3 presents the descriptive bibliometric results, including the chronological development of the literature, leading journals, subject categories, and the key institutions and countries involved. Section 4 investigates the knowledge base in AI ethics by conducting co-citation analyses of journals, authors, and references. Section 5 then explores the field’s dynamic evolution through keyword clustering, offering a detailed examination of research themes across different time periods. Finally, Section 6 summarizes the study’s findings and main contributions, and provides an outlook on future research directions, emphasizing the significance of this bibliometric approach in shaping the ethical trajectory of AI research and practice.

2 Data and Methodology

This study’s dataset was retrieved from the Web of Science (WOS) Core Collection and comprises 6,084 articles authored by scholars affiliated with 201 institutions across 136 countries, covering publications from 2015 through 2025. We selected this time frame not only to capture the recent, dramatic surge in AI ethics discussions, largely driven by advancements in machine learning and big data analytics (Zhang et al. 2021), but also to gain preliminary insights into near-future directions. Where possible, data on forthcoming or in-press articles for 2024–2025 have been included. While such prospective data reflect the most current scholarship, we acknowledge the inherent limits of inferring future research trajectories from pre-publication information.

We then entered two search terms – “artificial intelligen*” and “ethic*” – in the topic field (encompassing article titles, abstracts, and keywords), combined with the AND operator. The topic field was chosen to balance breadth and relevance, capturing papers where AI ethics is a central theme rather than restricting the search to titles alone, which might miss relevant works (Wallin 2005). While alternative terms such as “moral*” or “responsible AI” exist, the chosen terms combined with a topic-field search were deemed sufficient to capture the core literature within the WOS database for this analysis. After obtaining the initial search results, we selected “Article” as the document type and “English” as the language, ultimately collecting 6,084 records, which were exported in plain-text format.
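For reproducibility, the query can be written in Web of Science advanced-search syntax, where TS= denotes the topic field. The helper below is a hypothetical illustration (not part of the original workflow) of how the search string is assembled:

```python
# Hypothetical helper illustrating the Web of Science advanced-search
# query used here; TS= is WoS's topic-field tag (titles, abstracts, keywords).
def build_wos_query(terms):
    """Join wildcard search terms into a single TS=(...) topic query."""
    joined = " AND ".join(f'"{t}"' for t in terms)
    return f"TS=({joined})"

query = build_wos_query(["artificial intelligen*", "ethic*"])
print(query)  # TS=("artificial intelligen*" AND "ethic*")
```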

Before the full CiteSpace analysis, the descriptive results presented in Section 3 draw on publication years, subject categories, publication titles, affiliations, and countries/regions as provided by the WOS website. The exported data then underwent cleaning: after applying CiteSpace’s “remove duplicates” function, we further filtered the dataset by category and publication date.

The primary methodology of this paper is to perform bibliometric analysis using the CiteSpace software on a large corpus of literature. CiteSpace, developed in Java, is an information visualization tool grounded in co-citation analysis theory and pathfinder network algorithms. By analyzing a set of publications in a specific field, CiteSpace reveals key paths and knowledge turning points in the evolution of that domain. Through a series of visual mappings, it also highlights underlying drivers of disciplinary development and locates emerging research frontiers (Chen et al. 2015).

CiteSpace is built around three core concepts: (1) Kleinberg’s burst detection algorithm for identifying emerging research frontiers (Kleinberg 2002), (2) Freeman’s betweenness centrality for highlighting pivotal points (Scott 2002), and (3) the use of heterogeneous networks. In CiteSpace’s framework, a knowledge domain is conceptualized as a mapping function between research frontiers and their knowledge base. This mapping provides a conceptual tool for addressing three practical questions: (1) identifying the nature of a research frontier, (2) labeling specializations, and (3) promptly detecting new trends or abrupt changes. CiteSpace extracts n-grams – single words or phrases up to four words long – from titles, abstracts, descriptors, and identifiers in cited references. The identification of frontier terms depends on sharp increases in their frequency. Two complementary views – cluster views and timeline (or timezone) views – are designed to analyze and visualize the two-dimensional co-citation network (Chen 2006).
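Freeman’s betweenness centrality, the second of these concepts, can be illustrated with a brute-force shortest-path count; this toy sketch assumes a small unweighted, undirected graph (CiteSpace itself uses optimized implementations):

```python
from collections import deque
from itertools import combinations

def shortest_paths(graph, s, t):
    """Enumerate all shortest paths from s to t via breadth-first search."""
    queue, paths, best = deque([[s]]), [], None
    while queue:
        path = queue.popleft()
        if best is not None and len(path) > best:
            continue  # longer than a known shortest path
        node = path[-1]
        if node == t:
            best = len(path)
            paths.append(path)
            continue
        for nb in graph[node]:
            if nb not in path:
                queue.append(path + [nb])
    return paths

def betweenness(graph):
    """Unnormalized betweenness: for each node pair, the fraction of
    shortest paths passing through each intermediate node."""
    score = {v: 0.0 for v in graph}
    for s, t in combinations(graph, 2):
        paths = shortest_paths(graph, s, t)
        if not paths:
            continue
        for v in graph:
            if v in (s, t):
                continue
            through = sum(1 for p in paths if v in p)
            score[v] += through / len(paths)
    return score

# A 'bridge' node B connecting clusters {A} and {C} gets all the credit.
g = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
print(betweenness(g))  # {'A': 0.0, 'B': 1.0, 'C': 0.0}
```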

In this study, we create cluster views, timeline views, timezone views, and burst detection charts for co-authorship, institutions, countries, keywords, cited references, cited authors, and cited journals. During the creation of these visualizations, we employ relevant metrics to ensure that the resulting maps are reliable and accurate. CiteSpace provides two key indicators to evaluate network structure and clustering clarity: the modularity (Q) value and the average silhouette (S) value. Generally, Q values lie within the interval [0, 1), and a Q value above 0.3 indicates that the community structure is notably distinct. When the S value exceeds 0.7, the clustering is considered highly efficient and convincing; an S value above 0.5 is still regarded as acceptable (Chen 2018). Additionally, CiteSpace’s built-in burst detection algorithm can detect rapid increases of interest in particular topics. These indicators shed light on persistent issues, ongoing themes, and burgeoning academic interests, collectively illustrating shifts within the research area and providing valuable insights for future investigations. Throughout the CiteSpace analyses, parameters for network generation (e.g., node selection criteria like g-index, top N%) and visualization (e.g., pruning algorithms like Pathfinder) were chosen based on established practices (Chen 2018) and preliminary testing to ensure the resulting maps were both informative and interpretable, balancing detail with clarity (Zupic and Čater 2014).
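As a minimal sketch of the first indicator, Newman's modularity Q can be computed directly from its definition, Q = (1/2m) Σ_ij [A_ij − k_i k_j / 2m] δ(c_i, c_j); the graph and partition below are illustrative, not drawn from the dataset:

```python
def modularity(edges, communities):
    """Newman modularity Q for a simple undirected graph, given an edge
    list and a mapping node -> community label."""
    m = len(edges)
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    q = 0.0
    for i in degree:
        for j in degree:
            a_ij = sum(1 for e in edges if set(e) == {i, j})  # adjacency term
            if communities[i] == communities[j]:
                q += a_ij - degree[i] * degree[j] / (2 * m)
    return q / (2 * m)

# Two triangles joined by a single bridge edge: a clearly modular network.
edges = [(1, 2), (2, 3), (1, 3), (4, 5), (5, 6), (4, 6), (3, 4)]
communities = {1: 0, 2: 0, 3: 0, 4: 1, 5: 1, 6: 1}
print(round(modularity(edges, communities), 3))  # 0.357 (> 0.3: distinct structure)
```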

It should be noted, however, that while CiteSpace offers robust quantitative findings, these results can benefit greatly from complementary qualitative assessments to ensure a more well-rounded perspective on the research domain. Therefore, in the discussion of co-citation and cluster analyses (Sections 4 and 5), interpretations are enriched by examining the content of highly cited, central, or bursty documents and the contributions of key authors identified within the networks, providing context to the quantitative patterns observed (Small 1973; Zupic and Čater 2015).

3 Descriptive Statistical Analysis

3.1 Yearly Distribution of AI Ethics Studies

All relevant publications in the Web of Science (WOS) database cover the period from 1991 to 2025. Figure 1 illustrates the annual publication trend of artificial intelligence (AI) ethics research indexed in WOS over this span, highlighting three distinct phases. In the initial phase (2000–2014), growth was modest and publication counts remained low. The subsequent phase (2015–2019) saw significant acceleration, with the annual number of AI ethics publications increasing from 11 in 2017 to 84 in 2021, a surge corresponding to heightened global attention to AI’s ethical implications. The final phase (2020–2025) sustained a high level of publications, peaking in 2023 and reflecting ongoing scholarly engagement with AI ethics. This progression underscores the growing recognition of AI’s societal impact and the imperative for ethical discourse in its development and application.

Figure 1: 
Annual publication of AI ethics studies on WoS.

3.2 Scientific Category

The interdisciplinary nature of AI ethics is evident in the distribution of articles across various scientific categories, as indexed by Web of Science. Figure 3 displays the publication counts for the top 20 categories. The ‘Computer Science Artificial Intelligence’ category leads by a wide margin with 562 publications, underscoring the technical core of the field in ethical discourse. However, substantial contributions also come from disciplines such as ‘Education Educational Research,’ ‘Ethics,’ ‘Computer Science Information Systems,’ and ‘Medicine General Internal,’ reflecting broad engagement across diverse sectors. Figure 2 provides a treemap visualization of these categories, further illustrating the varied academic landscape contributing to AI ethics and emphasizing the topic’s cross-cutting relevance, driven by the pervasive impact of AI technology across nearly all societal domains (Gao et al. 2024).

Figure 2: 
Visualization of top 20 categories.

Figure 3: 
Top 20 categories in AI ethics research.

3.3 Journal Distribution

Table 1 presents the top 20 journals that have published the largest number of AI ethics–related articles, alongside their article counts, countries of origin, and scientific categories (as defined in the Web of Science). As the data indicate, Germany, England, the United States, the Netherlands, Switzerland, and Canada all figure prominently, underscoring the global interest in ethical considerations of artificial intelligence. In addition, the journals span a wide spectrum of fields – from computer science and engineering to medicine, social sciences, and ethics – reflecting the inherently interdisciplinary nature of AI ethics research. Several entries at the top of the list, such as AI & Society, BMJ Open, and IEEE Access, illustrate diverse focal points on technology, healthcare, and broader societal impacts. This distribution showcases the multifaceted conversations around AI ethics, bringing together scholars from varying disciplines and regions to address the complex ethical challenges posed by AI technologies.

Table 1:

Top 20 journals publishing AI ethics articles.

Rank Journals Count Country Category
1 AI & Society 220 Germany Computer Science, Artificial Intelligence
2 BMJ Open 98 England Medicine, General & Internal
3 IEEE Access 85 USA Telecommunications; Computer Science, Information Systems; Engineering, Electrical & Electronic
4 Cureus Journal of Medical Science 76 USA Medicine, General & Internal
5 Ethics and Information Technology 71 Netherlands Information Science & Library Science; Philosophy; Ethics
6 Journal of Medical Internet Research 57 Canada Health Care Sciences & Services; Medical Informatics
7 Science and Engineering Ethics 56 Netherlands Engineering, Multidisciplinary; History & Philosophy Of Science; Multidisciplinary Sciences; Ethics
8 Education and Information Technologies 47 USA Education & Educational Research
9 Sustainability 44 Switzerland Environmental Sciences; Environmental Studies; Green & Sustainable Science & Technology
10 Journal of Medical Ethics 40 England Social Sciences, Biomedical; Medical Ethics; Social Issues; Ethics
11 Big Data & Society 38 USA Social Sciences, Interdisciplinary
12 Frontiers in Artificial Intelligence 37 Switzerland Computer Science, Artificial Intelligence; Computer Science, Information Systems
13 Applied Sciences-Basel 36 Switzerland Engineering, Multidisciplinary; Chemistry; Applied Physics; Materials Science
14 International Journal of Mobile Human Computer Interaction 32 USA Computer Science, Cybernetics
15 JMIR Medical Education 30 Canada Education, Scientific Disciplines
16 Minds and Machines 28 Netherlands Computer Science, Artificial Intelligence
16 Technology in Society 28 England Social Sciences, Interdisciplinary; Social Issues
18 Humanities Social Sciences Communications 26 England Social Sciences, Interdisciplinary; Humanities, Multidisciplinary
19 Journal of Business Ethics 25 Netherlands Ethics; Business
20 BMC Medical Ethics 24 England Social Sciences, Biomedical; Medical Ethics; Ethics
20 Computer Law & Security Review 24 England Law
20 Education Sciences 24 Switzerland Education & Educational Research

3.4 Institutions Distribution

Using CiteSpace with a timespan set from 2015 to 2025 and a slice length of one year, a total of 131 institutions (N = 131) and 174 connections (E = 174) were mapped to illustrate the collaborative landscape of AI ethics research (Figure 4). The analysis employed a g-index (k = 5) selection criterion (LRF = 2.5, L/N = 10, LBY = 5, e = 1.0) and used Pathfinder pruning to refine the network. The resulting density is 0.0204, and 119 nodes (90 %) belong to the largest connected component, indicating that most institutions are interlinked rather than isolated. The high Modularity Q value of 0.7566 suggests distinct clustering patterns, while the Weighted Mean Silhouette score of 0.9549 reflects well-defined clusters. As visualized in Figure 4, the node sizes and their spatial distribution highlight the centrality of prominent universities in driving AI ethics scholarship. Table 2 lists the top 20 most active institutions, led by the University of London, the University of Oxford, and Harvard University. Collectively, these leading centers underscore the expanding and collaborative nature of AI ethics research, spanning multiple disciplines and national boundaries. The prominence of institutions from the UK, USA, and Canada in Table 2 mirrors the country-level analysis that follows, indicating concentrated research leadership in these regions.
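As a quick consistency check, the reported densities can be reproduced from N and E, assuming the standard definition for a simple undirected network, density = 2E / (N(N − 1)):

```python
def density(n_nodes, n_edges):
    """Density of a simple undirected network: realized edges / possible edges."""
    return 2 * n_edges / (n_nodes * (n_nodes - 1))

print(round(density(131, 174), 4))  # 0.0204 (institutions network, Figure 4)
print(round(density(123, 170), 4))  # 0.0227 (country network, Figure 5)
```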

Figure 4: 
Visualization of institutions distribution.

Table 2:

Top 20 active institutions in AI ethics publication.

Institutions Count Location
University Of London 157 UK
University Of Oxford 143 UK
Harvard University 139 USA
University Of California System 119 USA
State University System Of Florida 85 USA
Harvard University Medical Affiliates 84 USA
University Of Toronto 84 Canada
University Of Texas System 76 USA
Harvard Medical School 74 USA
University College London 72 UK
Stanford University 70 USA
Technical University Of Munich 65 Germany
University Of Cambridge 65 UK
National University Of Singapore 63 Singapore
University Of Pennsylvania 59 USA
Pennsylvania Commonwealth System Of Higher Education (PCSHE) 58 USA
University System Of Ohio 58 USA
Monash University 55 Australia
Imperial College London 53 UK
Swiss Federal Institutes Of Technology Domain 52 Switzerland

3.5 Countries/Regions Distribution

Figure 5 (N = 123, E = 170, Density = 0.0227) depicts the co-country network of AI ethics research from 2015 to 2025, highlighting both established leaders like the United States, England, and China and a diverse array of emerging contributors. This network’s largest connected component encompasses 94 % of the nodes (116 out of 123), underscoring robust international collaboration within the domain. The node sizes and linkages reveal active collaborations that extend beyond regional clusters, reflecting the global nature of AI ethics scholarship. As shown in Table 3, some countries demonstrate marked “burst” periods, signifying surges in research output during specific intervals. These bursts, along with the breadth of international participation, point to a field that is rapidly evolving and deeply interconnected. The overall pattern underscores how AI ethics research benefits from the complementary expertise and cultural perspectives of various nations, reinforcing a collective drive to tackle the social, legal, and technological challenges posed by artificial intelligence.

Figure 5: 
Visualization of Co-countries network.

Table 3:

Top 10 most active countries.

Freq Burst Burst begin Burst end Country
1,033 4.23 2017 2018 USA
491 8.63 2016 2018 ENGLAND
364 0 / / PEOPLES R CHINA
353 0 / / GERMANY
274 0 / / AUSTRALIA
252 0 / / CANADA
232 0 / / INDIA
228 0 / / ITALY
225 0 / / SPAIN
200 6.17 2015 2021 NETHERLANDS

4 Knowledge Base of AI Ethics Research: Co-Citation Analysis

Co-cited documents appear together in the reference list of a third publication, indicating a thematic connection (Chen et al. 2010; Osareh 1996). Such interconnections form a co-citation network that illustrates relationships among journals, authors, and references. Although co-citation counts can sometimes be numerically lower than direct citation frequencies, they are especially valuable for identifying influential works that lie outside a core research area or that are tightly interconnected across multiple domains. Co-citation analysis thus provides a powerful means of constructing domain maps, monitoring a field’s scientific evolution, and assessing its interdisciplinary linkages (Small 1973). Consequently, the three main forms of co-citation analysis – journal, document, and author – shed crucial light on the structural and relational aspects of a research field.
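The underlying counting scheme is straightforward: two documents are co-cited once for each citing paper whose reference list contains both. A minimal sketch (the reference lists below are illustrative, not drawn from the dataset):

```python
from itertools import combinations
from collections import Counter

def co_citation_counts(reference_lists):
    """Count how often each pair of documents appears together in a
    citing paper's reference list."""
    counts = Counter()
    for refs in reference_lists:
        # sort so each unordered pair is counted under one canonical key
        for pair in combinations(sorted(set(refs)), 2):
            counts[pair] += 1
    return counts

# Illustrative citing papers, each represented by its reference list.
papers = [
    ["Bostrom 2014", "Floridi 2019", "Jobin 2019"],
    ["Bostrom 2014", "Floridi 2019"],
    ["Floridi 2019", "Jobin 2019"],
]
counts = co_citation_counts(papers)
print(counts[("Bostrom 2014", "Floridi 2019")])  # 2
```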

4.1 Journal Co-Citation Analysis

Figure 6 provides a high-resolution visualization of the journal co-citation network in AI ethics research (2015–2025) constructed via CiteSpace. It contains 277 nodes and 267 edges (Density = 0.007), with 84 % of the nodes clustered in the largest connected component, and 10 % of the nodes labeled after applying Pathfinder pruning. Journal Co-Citation Analysis (JCA) highlights how frequently certain journals appear together in reference lists, revealing interlinked knowledge structures and the most influential publication venues in the field (Baker 1990).

Figure 6: 
Visualization of Co-cited sources.

In Table 4, the top 10 most frequently co-cited journals are accompanied by four key metrics – Burst, Degree, Centrality, and Sigma – all of which help characterize a journal’s prominence and role in bridging research clusters (Chen 2004): Burst tracks sudden spikes in citation over defined intervals, indicating a surge of scholarly attention; Degree tallies how many direct links a journal shares with others, reflecting the breadth of its co-citation visibility; Centrality gauges whether a journal serves as a bridge between otherwise separate clusters; and Sigma integrates burst and centrality, spotlighting journals with sustained, transformative impact.
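Assuming CiteSpace's published definition, Sigma combines the latter two metrics as (centrality + 1)^burst. A quick sketch shows that the rounded values in Table 4 land in the right range (the exact table figures come from CiteSpace's unrounded internal values):

```python
def sigma(centrality, burst):
    """CiteSpace's sigma: (betweenness centrality + 1) raised to burst strength."""
    return (centrality + 1) ** burst

# A journal with no burst always has sigma 1, matching Table 4's zero-burst rows.
print(sigma(0.05, 0))  # 1.0

# Nature's rounded metrics (centrality 0.64, burst 23.84) give a value of the
# same order of magnitude as its reported sigma of 126,762.28.
print(round(sigma(0.64, 23.84)))
```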

Table 4:

Top 10 most Co-cited sources.

Freq Burst Burst begin Burst end Degree Centrality Sigma Journal Begin year
981 23.84 2016 2020 6 0.64 126,762.28 Nature 2016
819 44.28 2016 2022 6 0.3 107,756.37 Science 2016
684 0 / / 3 0.05 1 AI & Society 2019
638 7.11 2021 2022 3 0.02 1.19 Nature Machine Intelligence 2020
619 11.9 2019 2020 2 0.02 1.33 PLoS One 2019
571 30.84 2017 2022 5 0.27 1,633.81 Minds & Machines 2017
550 0 / / 6 0.12 1 IEEE Access 2020
533 0 / / 6 0.21 1 Journal of Medical Internet Research 2019
533 31.68 2019 2022 3 0.16 116.75 Science and Engineering Ethics 2019
475 0 / / 4 0.05 1 Nature Medicine 2020

Among the general science flagships, Nature (est. 1869; Freq = 981; Burst = 23.84, 2016–2020; Centrality = 0.64; Sigma = 126,762.28) and Science (est. 1880; Freq = 819; Burst = 44.28, 2016–2022; Centrality = 0.30; Sigma = 107,756.37) demonstrate exceptionally high citation frequencies and bursts, affirming their longstanding influence. This aligns with the institutional findings (Section 3.4), suggesting foundational and high-impact work often originates from leading research centers and is published in these widely recognized journals before disseminating to more specialized venues. Meanwhile, more specialized AI ethics outlets, such as AI & Society (est. c.1987; Freq = 684; Degree = 3; Centrality = 0.05) and Minds & Machines (est. 1991; Freq = 571; Burst = 30.84, 2017–2022; Sigma = 1,633.81), reflect the field’s intellectual core on social, philosophical, and cognitive dimensions of AI. Notably, Minds & Machines exhibits both a strong citation surge and a relatively high degree, indicating its notable traction among AI ethics scholars.

The presence of interdisciplinary and open-access venues – PLoS One (est. 2006; Freq = 619; Burst = 11.9, 2019–2020) and IEEE Access (est. 2013; Freq = 550; Degree = 6; Centrality = 0.12) – underscores AI ethics’ broad appeal across various research communities, from basic science to applied engineering. Journal of Medical Internet Research (est. 1999; Freq = 533; Centrality = 0.21) further extends these discussions into healthcare, highlighting the ethical nuances of digital and AI-driven health technologies. Science and Engineering Ethics (est. 1995; Freq = 533; Burst = 31.68, 2019–2022) confirms the growing emphasis on regulatory, social, and moral concerns that cut across AI applications. Finally, newer Nature sub-journals – Nature Machine Intelligence (est. 2019; Freq = 638; Burst = 7.11, 2021–2022) and Nature Medicine (est. 1995; Freq = 475; Degree = 4) – demonstrate the rapid emergence of specialized AI content within premier publications, focusing on cutting-edge machine learning, robotics, and medical innovations.

Overall, Figure 6 and Table 4 reveal a dynamic scholarly ecosystem where high-impact general science journals share prominence with domain-specific outlets. Surges (bursts) often correspond to pivotal debates or breakthroughs in AI ethics, while bridging functions (high centrality) mark those journals that synthesize insights across diverse subfields. By mapping these co-citation relationships, we observe how foundational research in flagship journals converges with specialized, interdisciplinary discussions – together shaping the trajectory of AI ethics research (Tables 5 and 6).

Table 5:

Top 20 most co-cited authors.

Count Centrality Begin year Author
479 0.48 2018 Floridi L
337 0.27 2020 Jobin A
245 0.81 2015 Bostrom N
237 0.15 2022 Dwivedi Yk
212 0.02 2019 Russell S
206 0.51 2021 Hagendorff T
191 0 2021 Morley J
184 0.02 2019 Coeckelbergh M
184 0.05 2020 Obermeyer Z
179 0.1 2020 Topol Ej
164 0.06 2020 Mittelstadt B
147 0.03 2021 Stahl Bc
147 0 2022 UNESCO
143 0.32 2023 OpenAI
136 0 2022 Braun V
133 0.03 2020 OECD
110 0 2020 World Health Organization
108 0 2020 Dignum V
97 0 2019 Mccarthy J
97 0.73 2018 Mittelstadt Bd
Table 6:

Lists of top 20 keywords with high-frequency and high-centrality.

Rank Count Centrality Year Keywords
1 2,456 0.13 2015 Artificial intelligence
2 382 0.02 2018 Machine learning
3 187 0.03 2019 Technology
4 184 0.07 2018 Big data
5 183 0.2 2015 Ethics
6 172 0.03 2017 ai ethics
7 170 0.08 2019 ai
8 170 0.03 2019 Artificial intelligence (ai)
9 121 0.36 2016 Challenges
10 119 0.08 2019 Health
11 118 0.05 2019 Future
12 109 0 2023 Generative ai
13 105 0.04 2019 Impact
14 101 0.12 2019 Deep learning
15 97 0.05 2023 Higher education
16 94 0.05 2019 Management
17 92 0.28 2017 Care
18 92 0 2020 Model
19 90 0.21 2020 Decision making
20 86 0.03 2023 Large language models

4.2 Author Co-Citation Analysis

In the field of AI ethics, author co-citation analysis (ACA) of 220 highly cited authors and their co-citation relationships reveals the academic structure and influence of core scholars, providing a clear picture of AI ethics research (Jeong et al. 2014). The analysis yields a network modularity Q of 0.8583 and a weighted mean silhouette S of 0.9629, reflecting a high degree of structural clarity and consistency in the clustering results (Chen and Song 2019). Cluster analysis identifies 16 semantic clusters spanning diverse dimensions of AI ethics – philosophical foundations, global governance, emerging technologies, algorithmic ethics, and sustainable development intertwined with medical practices – which collectively offer vital guidance for technological innovation and policy-making.
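The pairwise counting that underlies ACA is straightforward to sketch. The snippet below is a minimal illustration with hypothetical reference lists, not CiteSpace's implementation (which additionally normalizes counts and computes network metrics):

```python
from collections import Counter
from itertools import combinations

def cocitation_counts(reference_lists):
    """Count how often each pair of cited authors appears together
    in a single paper's reference list (author co-citation)."""
    pairs = Counter()
    for refs in reference_lists:
        # Each unordered pair of distinct cited authors is one co-citation
        for a, b in combinations(sorted(set(refs)), 2):
            pairs[(a, b)] += 1
    return pairs

# Hypothetical reference lists from three citing papers
papers = [
    ["Floridi L", "Jobin A", "Bostrom N"],
    ["Floridi L", "Jobin A"],
    ["Bostrom N", "Russell S"],
]
links = cocitation_counts(papers)
```

Authors that are co-cited frequently become strongly linked nodes; clustering the resulting weighted network is what produces the semantic clusters reported above.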

Within this rich academic landscape, philosopher and futurist Nick Bostrom and philosopher of information Luciano Floridi emerge as central figures. Bostrom, celebrated for his deep exploration of existential risks associated with AI, authored Superintelligence: Paths, Dangers, Strategies (2014), a work cited 245 times with a centrality of 0.81. Floridi, a pioneer in AI governance and ethical frameworks, led the creation of the Ethics Guidelines for Trustworthy AI (2019), cited 479 times with a centrality of 0.48, exerting influence across Europe and beyond in shaping global AI policy.

The European Union’s regulatory stance also plays a prominent role, with the European Commission’s Ethics Guidelines for Trustworthy AI (2019) cited 256 times as a cornerstone for AI governance, complemented by AI ethicist and policy analyst Jess Morley’s Ethics as a Service (2021), which applies these principles to medical AI’s ethical challenges and has garnered 191 citations.

On the global stage, social scientist and ethicist Anna Jobin’s The Global Landscape of AI Ethics Guidelines (2019), cited 337 times with a centrality of 0.27, meticulously organizes worldwide AI ethics principles. Computer scientist and ethicist Thilo Hagendorff’s The Ethics of AI Ethics: An Evaluation of Guidelines (2020), cited 206 times with a centrality of 0.51, critiques the gaps in their implementation (Figure 7).

Figure 7: 
Visualization of co-cited authors.

The advent of generative AI introduces fresh ethical dilemmas, addressed by information systems scholar Yogesh K. Dwivedi’s A Multidisciplinary Perspective on Generative Conversational AI (2023), which explores societal impacts of technologies like ChatGPT and has been cited 237 times with a centrality of 0.15, capturing cutting-edge trends. Technical transparency remains a crucial focus, with computer scientist Alejandro Barredo Arrieta’s Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges (2020), cited 96 times, proposing frameworks to enhance AI interpretability.

Sustainability and medical applications further enrich AI ethics, as seen in philosopher of technology Mark Coeckelbergh’s AI Ethics (2020), cited 184 times, which examines AI’s long-term societal impact, and information systems ethicist Bernd Carsten Stahl’s Artificial Intelligence for a Better Future (2021), cited 147 times, advocating responsible research and innovation. Data ethicist Brent Mittelstadt’s The Ethics of Algorithms: Mapping the Debate (2016), cited 97 times with a centrality of 0.73, maps the central debates in algorithmic ethics. Cardiologist and digital medicine researcher Eric J. Topol’s Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again (2019), cited 179 times, underscores AI’s transformative potential in medicine.

Together, these scholars and their works form the backbone of AI ethics, bridging philosophical inquiry, global norms, technical clarity, and medical practice, providing both theoretical depth and practical direction to navigate the ethical complexities of AI’s rapid evolution.

4.3 Document Co-Citation Analysis

Document Co-citation Analysis (DCA) is a valuable methodology for understanding the intellectual structure and evolution of a research field by examining the relationships between co-cited publications. Through the lens of DCA, researchers can identify key publications, emerging trends, and intellectual turning points in the field. This analysis offers a deeper understanding of scholarly knowledge dissemination, highlights influential works, and aids in recognizing underexplored areas of research that may otherwise go unnoticed (Chen 2012).

As shown in Figure 8, the co-citation network of documents over the past decade (2015–2025) consists of 292 nodes (documents) and 5,307 co-citation relationships, with a density of 0.0072, indicating a relatively sparse network of connections between documents. The modularity score of 0.863 reflects the high level of clustering within the network, while the silhouette score of 0.964 suggests a well-defined cluster structure. This clustering indicates the presence of multiple distinct research themes, each contributing to the overall development of the field.
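The modularity Q reported here is Newman's measure of how much intra-cluster linkage exceeds what random wiring would produce. Below is a minimal sketch for an unweighted, undirected network with a toy edge list (CiteSpace itself operates on weighted co-citation links):

```python
from collections import Counter

def modularity(edges, community):
    """Newman modularity Q: the fraction of edges inside clusters minus
    the fraction expected if edges were rewired at random by degree."""
    m = len(edges)
    degree = Counter()
    intra = Counter()  # edges whose endpoints share a cluster
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
        if community[u] == community[v]:
            intra[community[u]] += 1
    cluster_degree = Counter()
    for node, d in degree.items():
        cluster_degree[community[node]] += d
    return sum(intra[c] / m - (cluster_degree[c] / (2 * m)) ** 2
               for c in cluster_degree)

# Two disconnected triangles form two perfectly separated clusters
edges = [("a", "b"), ("b", "c"), ("a", "c"),
         ("d", "e"), ("e", "f"), ("d", "f")]
labels = {"a": 0, "b": 0, "c": 0, "d": 1, "e": 1, "f": 1}
q = modularity(edges, labels)
```

Values of Q approaching 0.86, as observed in our network, indicate strongly separated thematic clusters.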

Figure 8: 
Visualization of co-cited documents.

The co-citation network reveals several key clusters that reflect important themes and research trajectories. Cluster #0, for instance, primarily focuses on algorithmic bias, with a central concern for the ethical implications of artificial intelligence (AI). High-impact papers by Topol (2019) and Obermeyer et al. (2019) discuss the challenges of AI in medicine, particularly issues related to bias in machine learning algorithms and the implications for fairness and equity in healthcare (Obermeyer et al. 2019; Topol 2019). These works form the foundation of this cluster, showcasing the increasing attention to the ethical use of AI in healthcare decision-making (Figure 9).

Figure 9: 
Top 10 burst references in chronological evolution.

Cluster #1 and Cluster #2 are heavily focused on generative artificial intelligence and large language models, respectively. These clusters represent the rapid expansion of AI technologies capable of generating human-like content, such as OpenAI’s GPT models. Important documents in these clusters examine both the technical aspects of building large-scale AI systems and their societal implications, including concerns over bias, accountability, and the broader effects on various industries. These clusters reveal a shift toward the practical applications of AI, particularly in fields such as natural language processing, content generation, and automated decision-making.

An essential feature of DCA is the identification of citation bursts, which highlight documents that experience a significant increase in citations within a specific time frame, often indicating emerging trends or hot topics in the field. From the citation burst data, it is evident that certain key documents have driven attention and research focus in recent years (Chen 2006).

For example, the work by Floridi et al. (2018) on AI ethics experienced a citation burst between 2020 and 2023, reflecting the growing interest in ethical considerations and governance in AI research (Floridi et al. 2018). Similarly, Cathy O’Neil’s Weapons of Math Destruction (2017), which discusses the dangers of big data and its role in perpetuating inequality, saw a citation surge starting in 2020, underlining the increasing concern over the societal impacts of AI and data analytics. These documents, along with others such as Rudin (2019) and Kaplan and Haenlein (2019), have become pivotal references, shaping the current discourse on the ethical, social, and political implications of AI technologies (Kaplan and Haenlein 2019; Rudin 2019).

In terms of centrality and citation counts, several works stand out as highly influential. The most frequently cited node in the co-citation network corresponds to the general concept of artificial intelligence itself, followed by critical papers that have shaped the understanding of AI ethics, including works by Jobin et al. (2019) and Bender et al. (2021). These works are not only frequently cited but also serve as central nodes in the co-citation network, connecting various subfields of AI research and driving ongoing academic discussion on topics such as algorithmic fairness, transparency, and the regulation of AI technologies.

This document co-citation analysis of the AI ethics literature provides valuable insights into the intellectual structure of the field. It reveals significant clusters of co-cited documents, highlighting key research themes such as AI ethics, generative AI, and large language models, while the citation bursts emphasize the emergence of critical issues, particularly the societal impacts and ethical dimensions of AI. By focusing on the influential publications within these clusters, the analysis offers a comprehensive view of ongoing developments in AI research and identifies the core documents that have shaped the field’s direction. This examination of co-cited documents yields a clearer understanding of how AI-related research has evolved over the past decade, which issues currently command significant academic attention, and where future research may head.

5 Evolutionary Trends in AI Ethics Research: Thematic Cluster Analysis

5.1 Keyword Clustering

Keyword analysis in bibliometrics tracks the evolution of research trends through the frequency of node words, centrality, and citation metrics (Chen 2018). The clusters in this study, labeled with CiteSpace’s log-likelihood ratio (LLR) algorithm, represent distinct thematic areas in the artificial intelligence (AI) landscape. Figure 10 illustrates 16 prominent clusters with an average silhouette value of 0.87, indicating a robust structure within the dataset (N = 297, E = 347, Density = 0.0089, S = 0.9632). The analysis below groups the most prominent of these clusters into three broad themes – Frontiers of AI Innovation, Socio-Ethical Dimensions of AI, and Transformative AI Applications Across Sectors – each capturing a distinct aspect of AI and its societal implications. Table 7 presents details for the most prominent keyword clusters identified using the LLR algorithm, including cluster size, silhouette score, mean year, and representative labels; the following discussion highlights the key thematic areas these clusters reveal (visualized in Figures 10 and 11).
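The LLR scores attached to the cluster labels in Table 7 come from Dunning's log-likelihood ratio over a 2 × 2 contingency table of a term's occurrences inside versus outside a cluster. A minimal sketch of that statistic, with illustrative counts that are not drawn from our data:

```python
from math import log

def _entropy(*counts):
    """The quantity N·ln N − Σ x·ln x over the nonzero cells."""
    n = sum(counts)
    return n * log(n) - sum(x * log(x) for x in counts if x)

def llr(k11, k12, k21, k22):
    """Dunning log-likelihood ratio for a term/cluster 2x2 table:
    k11 = term occurrences inside the cluster, k12 = term elsewhere,
    k21 = other terms inside the cluster, k22 = other terms elsewhere."""
    row = _entropy(k11 + k12, k21 + k22)
    col = _entropy(k11 + k21, k12 + k22)
    mat = _entropy(k11, k12, k21, k22)
    return max(0.0, 2.0 * (row + col - mat))

# A term spread evenly across clusters scores ~0; a term concentrated
# inside one cluster scores high and becomes a candidate label.
even = llr(10, 10, 10, 10)
concentrated = llr(20, 5, 5, 20)
```

Terms with the highest LLR against a cluster are chosen as its labels, which is why highly cluster-specific phrases such as “academic integrity” surface for cluster #9.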

Figure 10: 
Keywords in major clusters.

Table 7:

Clusters of keywords co-occurrence.

ClusterID Size Silhouette mean (Year) Label (LLR)
0 25 0.961 2021 Artificial intelligence (ai) (98.52, 1.0E-4); ethical ai (35.24, 1.0E-4); decision making (30.44, 1.0E-4); ethics- medical (18, 1.0E-4); trustworthy ai (15.03, 0.001)
1 20 0.878 2021 Large language model (38.3, 1.0E-4); prompt engineering (20.85, 1.0E-4); technology acceptance model (20.46, 1.0E-4); machine learning (14.41, 0.001); model safety (12.29, 0.001)
2 19 0.846 2020 Human-robot interaction (17.92, 1.0E-4); robot ethics (15.92, 1.0E-4); smart city (13.14, 0.001); intelligent systems (11.68, 0.001); care ethics (11.68, 0.001)
3 19 0.961 2022 Mental health (41.17, 1.0E-4); science (18.43, 1.0E-4); sustainable development (18.39, 1.0E-4); artificial intelligence in education (17.13, 1.0E-4); artificial intelligence ethics (12.59, 0.001)
4 18 0.922 2020 Autonomous vehicles (25.05, 1.0E-4); risk (23.29, 1.0E-4); digital transformation (23.06, 1.0E-4); ai ethics (13.39, 0.001); youth (11.64, 0.001)
5 18 0.961 2021 Deep learning (76.2, 1.0E-4); human rights (33.89, 1.0E-4); medical education (26.37, 1.0E-4); computer vision (25.37, 1.0E-4); data privacy (24.65, 1.0E-4)
6 18 0.874 2023 Predictive model (31.68, 1.0E-4); predictive analytics (28.52, 1.0E-4); artificial intelligence (27.75, 1.0E-4); predictive system (25.34, 1.0E-4); practical model (25.34, 1.0E-4)
7 17 0.902 2020 Machine learning (192.12, 1.0E-4); big data (29.96, 1.0E-4); generative ai (17.99, 1.0E-4); classification (13.71, 0.001); deep learning (13.46, 0.001)
8 17 0.904 2021 Systematic review (30.29, 1.0E-4); ai regulation (24.86, 1.0E-4); internet of things (22.28, 1.0E-4); virtual reality (21.18, 1.0E-4); metaverse (18.92, 1.0E-4)
9 17 0.947 2022 Higher education (106.03, 1.0E-4); generative ai (103.01, 1.0E-4); generative artificial intelligence (102.35, 1.0E-4); large language models (82.74, 1.0E-4); academic integrity (81.91, 1.0E-4)
Figure 11: 
Timeline view of keyword clusters.

5.1.1 Frontiers of AI Innovation

5.1.1.1 Cluster #0: Artificial Intelligence

The largest and most comprehensive cluster (#0) centers on artificial intelligence itself, spanning advances in AI technologies, their applications, and public perceptions. The cluster is heavily populated with terms like “AI,” “decision making,” and “trustworthy AI,” emphasizing the growing focus on building reliable AI systems that can be integrated into sensitive domains such as healthcare and autonomous systems. The silhouette value of 0.961 suggests strong internal coherence in the cluster’s research direction. Notable articles, such as Nasir et al. (2024) on AI’s role in healthcare, highlight the growing need for frameworks that ensure AI technologies are transparent and trustworthy. The keyword “ethical AI” appears frequently, pointing to ongoing concerns about algorithmic bias and fairness in AI systems. The research trajectory suggests that AI is increasingly viewed not merely as a tool but as a technology requiring ethical governance and responsible deployment.

5.1.1.2 Cluster #5: Deep Learning

Cluster #5 focuses on deep learning models, particularly their applications in fields like medical diagnostics and industrial defect detection. The presence of keywords like “deep learning,” “computer vision,” and “medical education” points to the diverse applications of deep learning technologies, where they are used to improve accuracy in diagnostics and automate processes across various industries. The major citing article in this cluster, Hatherley et al. (2024), discusses how deep learning techniques can enhance interpretability in medical AI, which aligns with the growing trend toward explainable AI (XAI). Deep learning’s increasing prominence in AI research is evident, especially in its medical applications. This trend reveals how AI is becoming integral to sectors requiring high precision, such as healthcare, where deep learning models are utilized to detect defects in medical images or predict the onset of diseases. The growing focus on “human rights” within this cluster indicates that researchers are mindful of the ethical implications, especially in areas where AI decisions could impact human lives directly.

5.1.2 Socio-Ethical Dimensions of AI

5.1.2.1 Cluster #13: AI Ethics

Cluster #13 is dedicated to the ethical considerations surrounding artificial intelligence, with a strong focus on moral agency, machine ethics, and the governance of AI. This cluster’s silhouette value of 1.0 suggests a well-defined research focus. Key terms such as “AI ethics,” “moral agency,” and “machine ethics” highlight the philosophical and practical challenges researchers face when designing ethical frameworks for AI systems. The articles within this cluster, such as those by Graves (2017), examine how AI can be aligned with human values and ethical principles. Research also explores how AI’s decision-making processes should be transparent and interpretable to avoid the “black box” problem, particularly in areas like healthcare where trust is paramount. The cluster’s focus on ethical issues, such as the potential for AI to act autonomously in a morally responsible manner, underscores the importance of establishing regulatory frameworks that ensure AI does not exacerbate social inequalities or harm vulnerable populations.

5.1.2.2 Cluster #14: Informed Consent

This cluster emphasizes the intersection of AI with healthcare, specifically around the concept of informed consent. With a focus on “informed consent,” “trust,” and “healthcare,” the cluster deals with how AI systems should be used in medical decision-making while respecting patient autonomy. Research such as Astromskė et al. (2021) tackles the challenges of applying AI in medical diagnostics and of ensuring that patients are fully informed about the implications of AI-driven decisions. The research trajectory in this cluster reflects a societal concern that AI’s use in healthcare must be governed by strict ethical standards. As AI systems begin to influence medical outcomes, ensuring transparency and patient involvement in decision-making is critical. The rising importance of “trust” highlights that public acceptance of AI in healthcare hinges not only on technological capability but also on ethical assurances.

5.1.3 Transformative AI Applications Across Sectors

5.1.3.1 Cluster #9: Higher Education

Cluster #9 focuses on the application of AI in higher education, with keywords like “higher education,” “generative AI,” and “academic integrity” indicating a growing interest in how AI can reshape the learning environment. The major citing articles in this cluster, including Dwivedi et al. (2023), discuss AI’s role in transforming education by facilitating personalized learning and supporting the professional development of educators. However, challenges related to “academic integrity” and the use of generative AI models in cheating detection are also addressed. The cluster’s emphasis on “continuous training” and “pedagogical challenges” signals a recognition that AI’s impact on education requires careful planning and management. Generative AI’s ability to automate content creation (e.g., essays, tutoring) is reshaping educational practices. Still, it raises concerns about fairness and the potential for cheating, making it crucial to develop ethical guidelines for AI’s use in academic settings.

5.1.3.2 Cluster #4: Autonomous Vehicles

Cluster #4’s research revolves around autonomous vehicles, with keywords like “autonomous vehicles,” “risk,” and “digital transformation” indicating the cluster’s focus on the societal and ethical implications of self-driving technology. Autonomous vehicles are seen as a crucial application of AI, but their integration into public life presents challenges in terms of safety, regulation, and public trust. The significant focus on “AI ethics” and “youth” in this cluster suggests an emerging discourse on how autonomous systems can be designed to serve the public good while minimizing risks. The research trajectory emphasizes the importance of aligning technological advances with societal values, including public safety and ethical decision-making. Articles like Nasir et al. (2024) on the ethical implications of autonomous vehicles point to the growing need for comprehensive legal and ethical frameworks that guide the development of these technologies.

The clusters analyzed above reveal how AI research is evolving across various dimensions, from technological advancements like deep learning to the pressing ethical and social implications tied to its use in society. The focus on AI Ethics and Informed Consent shows an increasing awareness of the potential risks AI poses, particularly in sensitive domains like healthcare and education. Meanwhile, the technological clusters such as Deep Learning and Autonomous Vehicles demonstrate the ongoing drive to push the boundaries of AI capabilities in practical and impactful ways. These trends reflect a broader shift towards integrating AI into real-world applications while carefully considering its ethical dimensions. Researchers are navigating the complexities of balancing innovation with responsibility, and as AI continues to shape different sectors, ethical concerns will remain a central point of focus (Figure 12).

Figure 12: 
Top 25 burst keywords in chronological evolution.

5.2 Evolution of the Keyword Bursts

Burst detection reveals sudden surges in scholarly focus over specific periods, reflecting how AI research priorities shift as new challenges and opportunities arise (Kempe et al. 2003). By examining the top 25 strongest bursts (2015–2025), three overlapping phases emerge, illustrating the field’s progression from initial ethical considerations to more recent emphases on human-AI collaboration, transparency, and accountability.
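Formally, CiteSpace derives burst strength from Kleinberg's state-machine model over yearly frequencies. As a deliberately simplified proxy, one can flag years in which a keyword's count far exceeds its baseline rate; the sketch below uses hypothetical counts and is not the actual algorithm:

```python
def burst_years(yearly_counts, baseline_years, ratio=2.0):
    """Flag years whose keyword frequency exceeds `ratio` times the
    mean frequency over `baseline_years` -- a crude stand-in for
    Kleinberg-style burst detection."""
    base = sum(yearly_counts[y] for y in baseline_years) / len(baseline_years)
    return [year for year, count in sorted(yearly_counts.items())
            if count > ratio * base]

# Hypothetical yearly counts for a keyword such as "big data"
counts = {2015: 2, 2016: 3, 2017: 2, 2018: 10, 2019: 12}
flagged = burst_years(counts, baseline_years=[2015, 2016, 2017])
```

The real model additionally estimates how long the elevated state persists, which is what yields the burst intervals (e.g. 2018–2022) reported below.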

5.2.1 Formative Stage (2015–2017)

Early bursts on ethics (strength = 8.8; 2015–2021) and machine ethics (strength = 9.59; 2017–2022) set a foundational tone, highlighting the moral and philosophical dimensions of AI. Scholars questioned whether AI could bear moral responsibility and how best to mitigate potential harms. For instance, Nasir et al. (2024) emphasizes the role of ethical frameworks in guiding AI-driven healthcare decisions, while Graves (2017) explores shared moral development between humans and AI agents. These inquiries paved the way for ongoing debates on human-centered AI, underlining the importance of responsible innovation from the outset.

5.2.2 Transition Stage (2018–2021)

During this phase, big data recorded the highest burst strength (16.65; 2018–2022), reflecting the promise and complexity of large-scale datasets in powering advanced algorithms (Dwivedi et al. 2023). This expansion of AI capabilities went hand in hand with discussions on robot ethics (strength = 5.29; 2018–2021) and autonomous vehicles (strength = 4.83; 2018–2021), where safety, liability, and social acceptance became key concerns (Nasir et al. 2024). Concurrently, privacy and data ethics rose to prominence (Astromskė et al. 2021), illuminating how rapid AI innovation must be balanced with the protection of personal autonomy and data security. Within healthcare, for example, Hatherley et al. (2024) highlight the risk of algorithmic bias in clinical diagnoses, suggesting that interpretability and fairness became pivotal issues as AI began permeating life-critical domains.

5.2.3 Recent Surge (2022–2023)

Building on earlier ethical discourses, the latest bursts reveal a marked shift toward ensuring AI operates equitably and responsibly in complex social environments. Human-robot interaction (strength = 4.02; 2022–2023) emphasizes how AI-driven devices integrate into workplaces, schools, and hospitals, necessitating robust design principles for mutual trust and collaboration (Methnani et al. 2021). Meanwhile, algorithmic bias (strength = 3.62; 2022–2023) points to an intensified drive to identify and rectify systemic disparities in AI-driven decisions (Dwivedi et al. 2023). Parallel bursts on accountability (strength = 3.62; 2022–2023) and transparency (strength = 3.55; 2022–2023) further underscore a maturing AI landscape committed to explainable, auditable systems that uphold ethical standards. In practice, these developments manifest in the push for transparent governance structures and regulatory guidelines – particularly relevant as generative AI, large language models, and autonomous systems become increasingly common in society and industry.

These burst trends chart the field’s journey from foundational ethical reflection in the mid-2010s to technical expansion and real-world deployment in the late 2010s, culminating in renewed calls for transparency, accountability, and equity in the early 2020s. Collectively, they highlight a guiding principle in contemporary AI: technology-driven progress must remain continuously aligned with the public good, ensuring that innovation and responsibility advance in tandem.

6 Conclusions

This study conducts a comprehensive bibliometric analysis of artificial intelligence (AI) ethics literature, revealing both the evolutionary trajectory and the cross-disciplinary impact of AI ethics research. The findings demonstrate that AI ethics research extends beyond the traditional domain of computer science, encompassing diverse fields such as medicine, sociology, and philosophy, thus creating a multidimensional and rich research landscape. Countries like the United States, Germany, and the United Kingdom exhibit the highest research productivity, while journals such as AI & Society and IEEE Access play pivotal roles in advancing the academic discourse surrounding AI ethics. Through co-citation and cluster analyses, this research identifies critical focal areas in AI ethics, including ethical AI, machine ethics, algorithmic bias, data privacy, and the social responsibility of AI. These themes underscore the intricate relationship between technological advancements in AI and the ethical considerations they provoke, particularly in high-stakes domains such as healthcare, education, and transportation.

From the perspective of technological application, rapid advancements in deep learning and autonomous driving technologies have introduced significant ethical challenges. Deep learning, as a core AI technology, has seen widespread adoption in sectors like medical diagnostics and industrial inspections, greatly enhancing operational efficiency and diagnostic accuracy. However, these advancements have ignited intense debates about the need for algorithmic transparency, fairness, and societal implications. For instance, in healthcare, ethical discussions increasingly focus on how AI systems can make decisions while protecting patient privacy and respecting individual autonomy. As AI decision-making capabilities continue to grow, the balance between technological advancement and ethical accountability – particularly in mitigating social inequalities – has become a central concern for both academic researchers and policymakers (O’Neil 2017).

AI ethics research has evolved to explore not only technical considerations but also the profound societal, cultural, and individual impacts of AI technologies. Ethical dilemmas associated with autonomous driving and human-AI interaction have highlighted the urgent need for robust legal and ethical frameworks to ensure that technological progress aligns with public welfare. Key unresolved issues include how autonomous systems should make moral decisions, especially those involving human safety, and how to reconcile societal risks with technological innovation (Selvaggio et al. 2021). Additionally, the rise of generative AI technologies, such as large language models, has amplified AI’s influence in areas like education and the creative industries, sparking debates about “academic integrity” and “content creation fairness.” Ensuring the responsible deployment of these emerging technologies to prevent misuse and address potential inequities remains a critical challenge for AI ethics (Crawford 2021; Jobin et al. 2019).

Finally, this study reveals the interdisciplinary nature and global collaborative trends within AI ethics research. As AI technologies proliferate, international academic cooperation has become more pronounced, with nations demonstrating strong efforts to develop ethical frameworks that guide the responsible development and deployment of AI. Institutions such as UNESCO, alongside national governments and academic bodies, provide critical policy guidance for AI governance. However, despite the growing number of ethical guidelines, achieving a universal consensus on AI ethics remains challenging, likely due to factors such as deep-rooted cultural differences, varying national priorities, the inherent complexity of translating abstract principles into concrete technical implementations, and the diverse perspectives brought by the interdisciplinary nature of the field itself (Cihon et al. 2020; Schmitt 2022).

Before concluding, it is important to acknowledge the limitations of this study. Bibliometric analyses are inherently dependent on the chosen database, potentially excluding relevant works from other sources or non-English publications. Keyword analysis can be affected by synonymy and polysemy, and co-citation analysis reflects perceived relationships rather than direct influence. Furthermore, given the rapid pace of AI development, any bibliometric snapshot represents the state of the field at a particular time and requires ongoing updates (Haustein and Larivière 2015).

Moving forward, AI ethics research must not only continue to advance technological innovation and refine ethical frameworks but also prioritize international collaboration and the accumulation of practical experience to address the global ethical challenges posed by AI (Floridi 2023; Hagerty and Rubinov 2019). The insights from this analysis – such as identifying key research fronts (e.g., generative AI fairness, XAI in healthcare), influential actors (authors, institutions), and evolving thematic bursts (accountability, transparency) – can directly inform research agendas, funding priorities, policy interventions, and the development of targeted ethical training and best practices within industry.


Corresponding author: Le Cheng, Guanghua Law School and School of Cyber Science and Technology, Zhejiang University, Hangzhou, China, E-mail:

Award Identifier / Grant number: 24BYY151

About the authors

Jiaxuan Qiu

Jiaxuan Qiu is a research fellow in Guanghua Law School at Zhejiang University, specializing in international law and digital law. Her research interests include International AI regulations, AI Ethics, and digital law.

Le Cheng

Le Cheng is Chair Professor of Law, and Professor of Cyber Studies at Zhejiang University. He serves as the Executive Vice Dean of Zhejiang University’s Academy of International Strategy and Law, Acting Head of International Institute of Cyberspace Governance, Editor-in-Chief of International Journal of Legal Discourse, Editor-in-Chief of International Journal of Digital Law and Governance, Co-Editor of Comparative Legilinguistics (International Journal for Legal Communication), Associate Editor of Humanities and Social Sciences Communications, former Co-Editor of Social Semiotics, and editorial member of Semiotica, Pragmatics & Society, and International Journal for the Semiotics of Law. As a highly-cited scholar, he has published widely in the areas of international law, digital law and governance, cyber law, semiotics, discourse studies, terminology, and legal discourse.

Jin Huang

Jin Huang is a professor at Zhejiang University, focusing on cloud security, big data security, vulnerability discovery, and offensive-defense technologies. Previously Senior Vice President and Chief Smart City Security Officer at DBAPPSecurity, he has led large-scale R&D teams and published nearly 200 invention patents. He has also contributed to national and industry standards, received multiple honors – including the 20th National Youth Post Expert – and continues to bridge academic research with real-world cybersecurity applications.

  1. Research ethics: Not applicable.

  2. Author contributions: The authors have accepted responsibility for the entire content of this manuscript and approved its submission.

  3. Conflict of interest: The authors state no conflict of interest.

  4. Research funding: This work was supported by the project of National Social Science Foundation (Grant No. 24BYY151) and the Fundamental Research Funds for the Central Universities, Zhejiang University.

  5. Data availability: The raw data can be obtained on request from the corresponding author.

References

Abduljabbar, Rusul, Hussein Dia, Sohani Liyanage, and Saeed Asadi Bagloee. 2019. “Applications of Artificial Intelligence in Transport: An Overview.” Sustainability 11 (1): 189. https://doi.org/10.3390/su11010189.

Astromskė, Kristina, Eimantas Peičius, and Paulius Astromskis. 2021. “Ethical and Legal Challenges of Informed Consent Applying Artificial Intelligence in Medical Diagnostic Consultations.” AI & Society 36 (2): 509–20. https://doi.org/10.1007/s00146-020-01008-9.

Bahrammirzaee, Arash. 2010. “A Comparative Survey of Artificial Intelligence Applications in Finance: Artificial Neural Networks, Expert System and Hybrid Intelligent Systems.” Neural Computing & Applications 19 (8): 1165–95. https://doi.org/10.1007/s00521-010-0362-z.

Baker, Donald R. 1990. “Citation Analysis: A Methodological Review.” Social Work Research and Abstracts 26 (3): 3–10. https://doi.org/10.1093/swra/26.3.3.

Belter, Christopher W. 2015. “Bibliometric Indicators: Opportunities and Limits.” Journal of the Medical Library Association 103 (4): 219–21. https://doi.org/10.3163/1536-5050.103.4.014.

Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–23. Canada: ACM. https://doi.org/10.1145/3442188.3445922.

Chen, Chaomei. 2004. “Searching for Intellectual Turning Points: Progressive Knowledge Domain Visualization.” Proceedings of the National Academy of Sciences 101 (suppl_1): 5303–10. https://doi.org/10.1073/pnas.0307513100.

Chen, Chaomei. 2006. “CiteSpace II: Detecting and Visualizing Emerging Trends and Transient Patterns in Scientific Literature.” Journal of the American Society for Information Science and Technology 57 (3): 359–77. https://doi.org/10.1002/asi.20317.

Chen, Chaomei. 2012. “Predictive Effects of Structural Variation on Citation Counts.” Journal of the American Society for Information Science and Technology 63 (3): 431–49. https://doi.org/10.1002/asi.21694.

Chen, Chaomei. 2018. “Visualizing and Exploring Scientific Literature with CiteSpace: An Introduction.” In Proceedings of the 2018 Conference on Human Information Interaction & Retrieval (CHIIR ’18), 369–70. New Brunswick, NJ: ACM. https://doi.org/10.1145/3176349.3176897.

Chen, Chaomei, Fidelia Ibekwe-SanJuan, and Jianhua Hou. 2010. “The Structure and Dynamics of Cocitation Clusters: A Multiple-Perspective Cocitation Analysis.” Journal of the American Society for Information Science and Technology 61 (7): 1386–409. https://doi.org/10.1002/asi.21309.

Chen, Chaomei, and Min Song. 2019. “Visualizing a Field of Research: A Methodology of Systematic Scientometric Reviews.” PLoS One 14 (10): e0223994. https://doi.org/10.1371/journal.pone.0223994.

Chen, Lijia, Pingping Chen, and Zhijian Lin. 2020. “Artificial Intelligence in Education: A Review.” IEEE Access 8: 75264–78. https://doi.org/10.1109/ACCESS.2020.2988510.

Chen, Yue, Chaomei Chen, Zeyuan Liu, Zhigang Hu, and Xianwen Wang. 2015. “The Methodology Function of CiteSpace Mapping Knowledge Domains.” Studies in Science of Science 33 (2): 242–53. https://doi.org/10.16192/j.cnki.1003-2053.2015.02.009.

Cihon, Peter, Matthijs M. Maas, and Luke Kemp. 2020. “Fragmentation and the Future: Investigating Architectures for International AI Governance.” Global Policy 11 (5): 545–56. https://doi.org/10.1111/1758-5899.12890.

Crawford, Kate. 2021. The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. New Haven: Yale University Press. https://doi.org/10.12987/9780300252392.

Dwivedi, Yogesh K., Nir Kshetri, Laurie Hughes, Emma Louise Slade, Anand Jeyaraj, Arpan Kumar Kar, Abdullah M. Baabdullah, et al. 2023. ““So What if ChatGPT Wrote It?” Multidisciplinary Perspectives on Opportunities, Challenges and Implications of Generative Conversational AI for Research, Practice and Policy.” International Journal of Information Management 71: 102642. https://doi.org/10.1016/j.ijinfomgt.2023.102642.

Floridi, Luciano. 2023. The Ethics of Artificial Intelligence: Principles, Challenges, and Opportunities. Oxford: Oxford University Press. https://doi.org/10.1093/oso/9780198883098.001.0001.

Floridi, Luciano, Josh Cowls, Monica Beltrametti, Raja Chatila, Patrice Chazerand, Virginia Dignum, Christoph Luetge, et al. 2018. “AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations.” Minds and Machines 28 (4): 689–707. https://doi.org/10.1007/s11023-018-9482-5.

Gao, Di Kevin, Andrew Haverly, Sudip Mittal, Jiming Wu, and Jingdao Chen. 2024. “AI Ethics: A Bibliometric Analysis, Critical Issues, and Key Gaps.” International Journal of Business Analytics 11 (1): 1–19. https://doi.org/10.4018/IJBAN.338367.

Graves, Mark. 2017. “Shared Moral and Spiritual Development Among Human Persons and Artificially Intelligent Agents.” Theology and Science. https://doi.org/10.1080/14746700.2017.1335066.

Hagendorff, Thilo. 2020. “The Ethics of AI Ethics: An Evaluation of Guidelines.” Minds and Machines 30 (1): 99–120. https://doi.org/10.1007/s11023-020-09517-8.

Hagerty, Alexa, and Igor Rubinov. 2019. “Global AI Ethics: A Review of the Social Impacts and Ethical Implications of Artificial Intelligence.” arXiv preprint. https://doi.org/10.48550/arXiv.1907.07892.

Hatherley, Joshua, Robert Sparrow, and Mark Howard. 2024. “The Virtues of Interpretable Medical AI.” Cambridge Quarterly of Healthcare Ethics 33 (3): 323–32. https://doi.org/10.1017/S0963180122000664.

Haustein, Stefanie, and Vincent Larivière. 2015. “The Use of Bibliometrics for Assessing Research: Possibilities, Limitations and Adverse Effects.” In Incentives and Performance: Governance of Research Organizations, edited by Isabell M. Welpe, Jutta Wollersheim, Stefanie Ringelhan, and Margit Osterloh, 121–39. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-319-09785-5_8.

Hu, Yupeng, Wenxin Kuang, Zheng Qin, Kenli Li, Jiliang Zhang, Yansong Gao, Wenjia Li, and Keqin Li. 2021. “Artificial Intelligence Security: Threats and Countermeasures.” ACM Computing Surveys. https://doi.org/10.1145/3487890.

Jeong, Yoo Kyung, Min Song, and Ying Ding. 2014. “Content-Based Author Co-Citation Analysis.” Journal of Informetrics 8 (1): 197–211. https://doi.org/10.1016/j.joi.2013.12.001.

Jiang, Fei, Yong Jiang, Hui Zhi, Dong Yi, Hao Li, Sufeng Ma, Yilong Wang, Qiang Dong, Haipeng Shen, and Yongjun Wang. 2017. “Artificial Intelligence in Healthcare: Past, Present and Future.” Stroke and Vascular Neurology 2 (4): 230–43. https://doi.org/10.1136/svn-2017-000101.

Jiang, Yuchen, Li Xiang, Hao Luo, Shen Yin, and Okyay Kaynak. 2022. “Quo Vadis Artificial Intelligence?” Discover Artificial Intelligence 2 (1): 4. https://doi.org/10.1007/s44163-022-00022-8.

Jobin, Anna, Marcello Ienca, and Effy Vayena. 2019. “The Global Landscape of AI Ethics Guidelines.” Nature Machine Intelligence 1 (9): 389–99. https://doi.org/10.1038/s42256-019-0088-2.

John-Mathews, Jean-Marie, Dominique Cardon, and Christine Balagué. 2022. “From Reality to World. A Critical Perspective on AI Fairness.” Journal of Business Ethics 178 (4): 945–59. https://doi.org/10.1007/s10551-022-05055-8.

Kaplan, Andreas, and Michael Haenlein. 2019. “Siri, Siri, in My Hand: Who’s the Fairest in the Land? On the Interpretations, Illustrations, and Implications of Artificial Intelligence.” Business Horizons 62 (1): 15–25. https://doi.org/10.1016/j.bushor.2018.08.004.

Kempe, David, Jon Kleinberg, and Éva Tardos. 2003. “Maximizing the Spread of Influence through a Social Network.” In Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 137–46. Washington, D.C.: ACM. https://doi.org/10.1145/956750.956769.

Kleinberg, Jon. 2002. “Bursty and Hierarchical Structure in Streams.” In Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 91–101. Edmonton, Alberta: ACM. https://doi.org/10.1145/775047.775061.

Li, Jian, and Xitao Hu. 2022. “Visualizing Legal Translation: A Bibliometric Study.” International Journal of Legal Discourse 7 (1): 143–62. https://doi.org/10.1515/ijld-2022-2067.

Li, Siyue, Chunyu Kit, and Le Cheng. 2024. “Unveiling the Landscape of Onomastics from 1972 to 2022: A Bibliometric Analysis.” Names: A Journal of Onomastics 72 (3): 40–64. https://doi.org/10.5195/names.2024.2576.

Liu, Ximeng, Lehui Xie, Yaopeng Wang, Jian Zou, Jinbo Xiong, Zuobin Ying, and Athanasios V. Vasilakos. 2021. “Privacy and Security Issues in Deep Learning: A Survey.” IEEE Access 9: 4566–93. https://doi.org/10.1109/ACCESS.2020.3045078.

Mehrabi, Ninareh, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. 2022. “A Survey on Bias and Fairness in Machine Learning.” ACM Computing Surveys 54 (6): 1–35. https://doi.org/10.48550/arXiv.1908.09635.

Methnani, Leila, Andrea Aler Tubella, Virginia Dignum, and Andreas Theodorou. 2021. “Let Me Take over: Variable Autonomy for Meaningful Human Control.” Frontiers in Artificial Intelligence 4 (September): 737072. https://doi.org/10.3389/frai.2021.737072.

Moor, James. 2006. “The Dartmouth College Artificial Intelligence Conference: The Next Fifty Years.” AI Magazine 27 (4): 87. https://doi.org/10.1609/aimag.v27i4.1911.

Nasir, Sidra, Rizwan Ahmed Khan, and Samita Bai. 2024. “Ethical Framework for Harnessing the Power of AI in Healthcare and Beyond.” IEEE Access 12: 31014–35. https://doi.org/10.1109/ACCESS.2024.3369912.

Nilsson, Nils J. 2014. Principles of Artificial Intelligence. San Francisco: Morgan Kaufmann.

Ntoutsi, Eirini, Pavlos Fafalios, Ujwal Gadiraju, Vasileios Iosifidis, Wolfgang Nejdl, Maria-Esther Vidal, Salvatore Ruggieri, et al. 2020. “Bias in Data-Driven Artificial Intelligence Systems – An Introductory Survey.” WIREs Data Mining and Knowledge Discovery 10 (3): e1356. https://doi.org/10.1002/widm.1356.

Obermeyer, Ziad, Brian Powers, Christine Vogeli, and Sendhil Mullainathan. 2019. “Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations.” Science 366 (6464): 447–53. https://doi.org/10.1126/science.aax2342.

O’Neil, Cathy. 2017. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Crown.

Osareh, Farideh. 1996. “Bibliometrics, Citation Analysis and Co-Citation Analysis: A Review of Literature I.” Libri – International Journal of Libraries and Information Services 46 (3): 149–58. https://doi.org/10.1515/libr.1996.46.3.149.

Rudin, Cynthia. 2019. “Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead.” Nature Machine Intelligence 1 (5): 206–15. https://doi.org/10.1038/s42256-019-0048-x.

Schmitt, Lewin. 2022. “Mapping Global AI Governance: A Nascent Regime in a Fragmented Landscape.” AI and Ethics 2 (2): 303–14. https://doi.org/10.1007/s43681-021-00083-y.

Scott, John. 2002. Social Networks: Critical Concepts in Sociology. Oxford: Taylor & Francis.

Selvaggio, Mario, Marco Cognetti, Stefanos Nikolaidis, Serena Ivaldi, and Bruno Siciliano. 2021. “Autonomy in Physical Human-Robot Interaction: A Brief Survey.” IEEE Robotics and Automation Letters 6 (4): 7989–96. https://doi.org/10.1109/LRA.2021.3100603.

Small, Henry. 1973. “Co-Citation in the Scientific Literature: A New Measure of the Relationship between Two Documents.” Journal of the American Society for Information Science 24 (4): 265–9. https://doi.org/10.1002/asi.4630240406.

Topol, Eric J. 2019. “High-Performance Medicine: The Convergence of Human and Artificial Intelligence.” Nature Medicine 25 (1): 44–56. https://doi.org/10.1038/s41591-018-0300-7.

Wallin, Johan A. 2005. “Bibliometric Methods: Pitfalls and Possibilities.” Basic and Clinical Pharmacology and Toxicology 97 (5): 261–75. https://doi.org/10.1111/j.1742-7843.2005.pto_139.x.

Yao, Jia. 2024. “人工智能的训练数据制度——以“智能涌现”为观察视角 [Training Data System for Artificial Intelligence: From the Perspective of “Intelligence Emergence”].” Guizhou Social Sciences 2: 51–7. https://doi.org/10.13713/j.cnki.cssci.2024.02.006.

Zhang, Yi, Mengjia Wu, George Yijun Tian, Guangquan Zhang, and Jie Lu. 2021. “Ethics and Privacy of Artificial Intelligence: Understandings from Bibliometrics.” Knowledge-Based Systems 222 (June): 106994. https://doi.org/10.1016/j.knosys.2021.106994.

Zupic, Ivan, and Tomaž Čater. 2014. “Bibliometric Methods in Management and Organization.” Organizational Research Methods 18 (3): 429–72. https://doi.org/10.1177/1094428114562629.

Received: 2025-04-04
Accepted: 2025-04-05
Published Online: 2025-04-18
Published in Print: 2025-04-28

© 2025 the author(s), published by De Gruyter on behalf of Zhejiang University

This work is licensed under the Creative Commons Attribution 4.0 International License.
