
Navigating the Ethical Landscape of AI Innovation: Challenges and Opportunities

  • Qiyan Mao
  • Ming Xu
Published/Copyright: June 11, 2025

Abstract

The rapid development of Artificial Intelligence (AI) is reshaping the global landscape; however, this progress has also given rise to profound ethical and legal challenges. Issues such as algorithmic bias, lack of transparency, and insufficient privacy protection not only threaten the credibility of the technology but also pose systemic risks to human rights protection, social stability, and even national security. Striking a dynamic balance between rapid technological innovation and effective risk regulation has therefore emerged as a critical issue in global AI governance. To investigate the synergistic evolution of technological innovation and normative constraints, and to facilitate the responsible development and sustainable governance of AI, this study integrates legal and ethical perspectives, conducting a comparative analysis and case studies of major global AI governance models and exploring strategies for establishing an adaptive regulatory framework amid the dynamic interplay of technological advancement and risk evolution. On this basis, the study proposes a Dynamic Interactive Double-helix Model for AI Governance, built on a “Technology-driven Innovation Axis” and an “Ethical-legal Constraint Axis”, which promotes the simultaneous enhancement of security and development through dynamic interaction. The model emphasizes both the intrinsic driving force of technological progress and the embedded guarantee of ethical values, offering the global AI governance system an innovative solution that combines theoretical depth with practical feasibility.

1 Introduction

The rapid innovation of Artificial Intelligence (AI) is driving profound transformations across various sectors of society. From autonomous driving (Alomari et al. 2021) to intelligent healthcare (Alahmari et al. 2022) and smart justice systems (Formosa and Ryan 2021), AI technology is constantly pushing the boundaries of traditional domains. The fundamental characteristic of AI lies not only in its ability to break through conventional technological paradigms through continuous and iterative innovation but also in its profound reshaping of human lifestyles, social structures, and governance models (Allen et al. 2025). While this disruptive innovation holds great potential for enhancing productivity and optimizing public services, it also raises significant challenges. As autonomous driving systems redefine traffic ethics, intelligent diagnostic systems intervene in life and health decisions, and judicial assistance systems influence fairness in legal judgments, the ethical externalities of technological innovation have moved beyond theoretical exploration to practical application, exerting a far-reaching impact on existing legal frameworks and social contracts.

At the current stage, AI not only inherits the ethical challenges of previous information technologies but also introduces new complexities due to inherent algorithmic biases, lack of transparency, challenges in interpretability (Doran et al. 2017), and tensions between data utilization and privacy protection. These factors may lead to various ethical risks, impacting fundamental human rights, social stability, and national security (Zhang et al. 2021). Consequently, effectively regulating the potential risks associated with AI-driven innovation has become a critical issue for the global governance system. However, addressing AI-related risks is not merely a technical challenge – it also requires the development of comprehensive governance frameworks and technological mechanisms to ensure the ethical and responsible use of AI systems (Mäntymäki et al. 2022).

AI governance encompasses a set of regulations, methodologies, procedures, and technical mechanisms designed to ensure that the development and deployment of AI technologies align with established strategies, principles, and objectives. Ethics, as a core component of AI governance, serves as a guiding principle in the design, deployment, and utilization of AI systems, aiming to uphold fairness, transparency, and accountability (Zhang and Zhang 2023). In this context, balancing ethical principles, legal regulations, and technical requirements has emerged as a key issue widely discussed by policymakers and scholars worldwide (Pierson et al. 2023).

The development of AI governance should follow a progressive trajectory, transitioning from ethical principles to the establishment of AI ethics frameworks and ultimately to legislation and technical standards (Welsch 1996). This process not only involves regulating AI technologies themselves but also assessing their societal impact and formulating enforceable governance frameworks to balance technological advancement with ethical risks. Against this backdrop, this study focuses on three critical issues in AI governance: algorithmic bias, algorithmic transparency, and privacy protection. These issues directly affect the fairness and interpretability of AI systems and raise significant questions about how to establish effective legal and ethical regulatory frameworks to ensure the sustainable development of AI technology.

Building upon the problem identification and analysis, this study introduces a Dynamic Interactive Double-helix Model for AI Governance, which intertwines two main axes: the “Technology-driven Innovation Axis” and the “Ethical-legal Constraint Axis”. This model seeks to transcend the limitations of traditional one-dimensional regulatory approaches by emphasizing the co-evolution of technological innovation iterations and ethical and legal constraints as its core logic. Through a dynamic tuning mechanism, this model aims to guide AI system innovation along the trajectory of “Technology for the Greater Good” (Cheng et al. 2023). By engaging in an in-depth discussion of the core issues in AI governance, this study aspires to contribute insights and strength toward the creation of a more intelligent, fair, and harmonious society.

2 Literature Review

Firstly, algorithmic bias is a central issue in AI ethics, concerning the potential unfair impacts that AI systems may produce during data training, model design, and real-world applications. While data serves as the foundation of AI systems, the key factor in the emergence of bias lies in how the system extracts patterns, trends, and relationships from vast amounts of data (Schmarzo and Borne 2020). Due to historical, geographical, and social structural disparities in data sources, AI systems often inherit and even amplify biases present in the real world (Ferrara 2023).

To ensure that AI systems are trusted and fairly applied in society, identifying biases and effectively mitigating their negative impacts is essential (Friedman and Nissenbaum 1996). However, there is no universally accepted definition of bias in mathematics or the social sciences. Bias in AI systems typically originates from multiple sources, including data collection methods, algorithm design, training processes, and sociocultural factors (Krumme et al. 2024). If these biases are not effectively identified and addressed, AI systems may exhibit unintended behaviors during operation, potentially leading to safety and ethical risks (Hickman and Petrin 2021). Academic discussions on algorithmic bias are primarily centered around three distinct theoretical frameworks: (i) the ISO’s technical definition of AI bias (ISO 2021); (ii) the bias management framework developed by the United States National Institute of Standards and Technology (NIST) (Schwartz et al. 2022); and (iii) the ethical perspective of Friedman and Nissenbaum (1996), grounded in Value-Sensitive Design (VSD). These frameworks explore the sources and impacts of bias from technical, regulatory, and ethical perspectives, respectively. While the ISO and NIST focus on mitigating bias through technological and managerial measures, Friedman and Nissenbaum (1996) emphasize the social nature of bias, arguing that human decision-making plays a crucial role in its formation.

Secondly, transparency is a fundamental pillar of AI ethics and governance, playing a crucial role in enhancing the interpretability, traceability, and regulatory oversight of AI decision-making. As AI systems become more deeply integrated into critical sectors such as healthcare, finance, and public safety, concerns over their decision-making transparency have intensified (Leke and Marwala 2019; Marwala 2018). The research underscores that AI transparency is not merely a technical necessity but also a key foundation for ensuring fairness, accountability, and regulatory compliance (Jobin et al. 2019). In response, governments worldwide are actively formulating standards, policies, and regulations aimed at improving the interpretability of AI technologies and ensuring their alignment with legal and ethical frameworks.

However, transparency is not merely a technical challenge; it is also deeply intertwined with ethical, legal, and social responsibilities. Therefore, to achieve true AI transparency, AI systems may need to provide further explanations by disclosing their decision-making logic, data sources, and potential biases. This would enhance public trust and ensure accountability (Larsson and Heintz 2020). Furthermore, transparency is essential for building user trust. Research indicates that trust in AI largely depends on users’ understanding of how the system operates and the reasoning behind its decisions (Von Eschenbach 2021). Conversely, opaque “black box” AI models can undermine user confidence and hinder widespread adoption. Additionally, transparency plays a critical role in error detection and correction: when AI systems make erroneous or biased decisions, a transparent framework allows for more efficient tracking and rectification, thereby reducing ethical and legal risks (Morley et al. 2020). Moreover, transparency is indispensable for data privacy and ethical compliance, especially as AI systems handle vast amounts of sensitive personal data. Ensuring that data processing aligns with ethical and legal standards not only helps users understand how their data is utilized but also strengthens public trust in AI decision-making (Ribeiro et al. 2016). Finally, in socially sensitive areas such as credit scoring, hiring, and criminal justice, a lack of transparency can lead to systemic discrimination and unfair outcomes, posing significant threats to social justice and fairness (Larsson and Heintz 2020).

Thirdly, data privacy is widely recognized as both a fundamental human right and a critical concern in AI governance (Veltmeijer and Gerritsen 2025). As AI systems increasingly rely on vast amounts of data, the processes of data collection, storage, and sharing have grown more complex, heightening concerns over privacy breaches and security risks (Shin 2024). Effective data governance must go beyond ensuring data quality – it must also prioritize privacy protection and security to mitigate biases and foster fair, transparent AI decision-making (Shams et al. 2023). Moreover, the robustness, interpretability, and accountability of AI systems are equally essential to ensure their reliability in complex environments and to strengthen public trust in AI-driven decisions (Wallach and Marchant 2018).

While data analytics has been a powerful driver of economic and social progress, the associated privacy risks have become an increasingly pressing challenge (Doran et al. 2017). The misuse of data can result in discriminatory decisions and unfair outcomes, ultimately eroding social trust (Hewage et al. 2024). Therefore, striking a balance between data utilization and privacy protection is paramount in AI governance. Achieving this balance requires more than just legal and regulatory frameworks – it demands an integrated approach that combines technological safeguards with ethical guidelines to establish a more transparent, secure, and sustainable AI governance framework.

Overall, algorithmic bias, algorithmic transparency, and privacy protection are the three core issues in current AI ethics research, and scholars in China and abroad have engaged in in-depth discussions of these topics. A key future challenge for AI governance is to balance fairness and transparency with privacy protection and technological innovation. This study builds upon the existing literature to further explore the dynamic interaction mechanisms of AI ethics regulation and to offer corresponding policy recommendations for promoting the responsible development of AI technology.

3 Methodology

This study explores the rapid development of AI and the ethical and legal challenges it raises in the process of innovation, with a particular focus on the corresponding regulatory mechanisms. The research follows a three-stage argumentation structure rooted in a problem-oriented approach: (i) problem identification; (ii) issue analysis; and (iii) solution proposal (Fensel and Motta 2002). This research path is conducive to offering valuable insights for building a more balanced and effective AI governance framework. After a literature review synthesizing existing domestic and international research on AI governance, this study combines comparative methods and case analysis to develop a comprehensive argumentative framework. Specifically, the comparative method examines AI governance laws and ethical regulations across jurisdictions, revealing governance experiences, regulatory differences, and best practices among countries. In addition, case analysis reveals both the commonalities and the particularities of AI governance.

Based on an extensive review of relevant literature and an analysis of the current regulatory landscape, this study focuses on the core proposition of ethical challenges and opportunities in AI innovation. It further raises a key question: how can a dynamically adaptive ethical regulation system be constructed while maintaining the momentum of technological advancements?

In terms of algorithmic bias, AI’s social attributes dictate that technology cannot be fundamentally neutral. The “Algorithmic Transparency Paradox” in AI suggests that excessive pursuit of interpretability may suppress innovation or lead to conflicts with fairness, security, privacy protection, and practical feasibility. As a result, “increasing transparency” does not always yield better governance outcomes and may even have adverse effects. At the same time, the dichotomy between privacy protection and data utilization reflects a deeper value trade-off between individual rights and the enhancement of social welfare.

To comprehensively assess the ethical and legal landscape of AI innovation, this study further conducts a comparative analysis of China’s legal policies and landmark cases on AI governance, juxtaposing them with governance models in major jurisdictions such as the European Union and the United States. As a global leader in AI regulation, the European Union has established a stringent legal framework through the Artificial Intelligence Act to ensure the compliant development of AI technologies. In contrast, the United States adopts a more flexible regulatory approach, promoting AI innovation through a combination of industry self-regulation and policy guidance. Meanwhile, China has been actively exploring AI governance by continuously refining its legal and regulatory framework, striving to balance technological innovation with the protection of social order and public interests. Through comparative analysis, this study aims to uncover the current state and trends of global AI regulation and provide a reference for building a more scientific and reasonable AI governance system.

4 Ethical Challenges in Current AI Innovation Practices

The continuous innovation and development of artificial intelligence cannot be separated from the synergistic drive of three elements: computing power, algorithms, and data (Cheng and Liu 2024). While promoting social change and industrial upgrading, the widespread application of AI technology has also brought unprecedented ethical challenges. At present, as AI is deeply embedded in high-risk scenarios such as finance, healthcare, and justice, the breeding and proliferation mechanisms of algorithmic bias, the governance dilemma of discriminatory decision-making, and the conflict between the boundaries of data utilization and privacy protection have become core issues that urgently need to be addressed.

Researchers from the Beijing Academy of Artificial Intelligence (BAAI) have collected more than 20 international proposals on AI ethical principles and analyzed their texts for keywords, identifying the core terms that appear most frequently, including “privacy”, “confidentiality”, “security”, “transparency”, “accountability”, and “fairness” (Zeng et al. 2018). These keywords reflect the main concerns of AI ethical governance.
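
A minimal sketch of this kind of keyword analysis is shown below; the corpus and term list here are illustrative stand-ins, not BAAI’s actual data.

```python
# Toy keyword-frequency count over AI-ethics proposal texts (illustrative data).
from collections import Counter
import re

documents = [
    "AI systems must ensure privacy, security and accountability.",
    "Transparency and fairness are prerequisites for public trust.",
    "Data confidentiality, privacy and security shall be protected.",
]
terms = ["privacy", "confidentiality", "security",
         "transparency", "accountability", "fairness"]

counts = Counter()
for doc in documents:
    tokens = re.findall(r"[a-z]+", doc.lower())      # crude tokenization
    counts.update(t for t in tokens if t in terms)   # count only tracked terms

print(counts.most_common())
```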

On this basis, this study focuses on the key ethical issues in AI innovation practice, traces the logic and root causes of their emergence, and then analyzes the practical progress of current governance paths and the institutional challenges they face, in an attempt to provide theoretical support and an institutional reference for building a responsible AI innovation system.

4.1 Biased AI: The Non-Neutrality of Technology Determined by Social Attributes

The social attribute of technology refers to the fact that the generation and application of technology is never an isolated or neutral process but is shaped and developed in a specific social background, cultural context and power structure (Weber 2019). Technology is not only a tool to satisfy human needs, but also a product that carries and reflects social values, institutional arrangements and power distribution (Howard and Borenstein 2018). As a controversial social technology, AI is designed, trained and deployed in close interaction with social factors.

Currently, as AI gradually evolves from weak AI toward strong AI and, potentially, super AI, its algorithmic structures and processing power have been significantly enhanced, and the “autonomy” of such systems is increasing. This transformation has sparked legal and ethical discussions across various fields. For example, in recruitment, the application of AI may raise concerns regarding fairness, privacy protection, and other related issues (Dattner et al. 2019). However, despite continuous iterations in AI technology, its underlying data foundation (e.g. gender ratios in historical employment data) and training goals (e.g. an efficiency-first recruitment model) are still set by humans (Shin 2024). Therefore, AI’s “autonomous learning” is not truly autonomous, but rather a conscious or unconscious projection of human bias: when training data reflects entrenched societal discrimination (e.g. male dominance in the computer industry), the algorithm not only fails to correct for it but automatically reinforces it, resulting in systemic discrimination and algorithmic bias. Machine learning models are essentially “probabilistic replicators” of historical data; their core goal is to find patterns and maximize prediction accuracy, not to actively challenge the social power structures behind the data. In other words, when “efficiency first” becomes the underlying logic of algorithm design, the model treats group characteristics in the existing data (e.g. the correlation between gender and occupation) as “objective laws” to be reinforced, thereby maintaining or even exacerbating existing data bias. This “technological black box” effect translates bias from implicit social structures into explicit decision-making outcomes (e.g. a preference for men in computing positions), creating a vicious cycle of “discriminate-reinforce-reproduce”.
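
The “probabilistic replicator” point can be illustrated with a minimal sketch, assuming entirely synthetic hiring data: a model trained to maximize accuracy on historically biased labels reproduces the bias even when skill is identical across groups.

```python
# Minimal sketch (synthetic data): a model trained on skewed historical
# hiring labels reproduces the skew it was trained on.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
gender = rng.integers(0, 2, n)             # 0 = female, 1 = male
skill = rng.normal(0, 1, n)                # identically distributed across groups
# Historical labels encode bias: equally skilled women were hired less often.
hired = (skill + 0.8 * gender + rng.normal(0, 1, n)) > 0.5

model = LogisticRegression().fit(np.column_stack([gender, skill]), hired)

# Probe two applicants with identical skill but different gender.
probe = np.array([[0, 0.0], [1, 0.0]])
print(model.predict_proba(probe)[:, 1])    # the male applicant scores higher
```

Nothing in the optimization objective asks the model to question the labels; accuracy maximization alone is enough to carry the historical skew forward.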

The existing governance system’s response to “biased AI” reveals the cognitive limitations of the technocentric paradigm. The current mainstream approach to anti-discrimination regulation still treats algorithmic bias as a fixable technical fault, ignoring its nature as a digital mapping of social structures. Numerous governance challenges therefore remain in mounting an effective response to algorithmic discrimination.

4.2 AI Algorithm Disclosure: Limits of Transparency

Transparency of AI systems usually refers to the degree to which relevant subjects can understand an algorithm’s decision-making logic, data input and output mechanisms, and potential risks (O’Neill 2012). With the wide application of AI in key fields such as healthcare, architecture, and public safety, its “black box” characteristics have become increasingly prominent: people are often confused by the logic of algorithmic operation and the basis of its decisions, which has triggered a strong demand for transparency and pushed academics and the public to call continuously for “algorithmic disclosure”. However, the disclosure of AI algorithms faces multiple contradictions and challenges.

On the one hand, the essence of AI transparency is the human subject’s cognitive breakthrough of the technological black box. Although algorithm disclosure is regarded as an important means of addressing the “black box” problem, beyond any deliberate concealment, the opacity of algorithms is fundamentally rooted in the complexity of the technology (Ding 2022). The purpose of algorithmic disclosure is to allow consumers, developers, and stakeholders to understand the internal mechanisms of AI models. However, for algorithms with complex structures and highly automated operation, especially deep learning models, even if a certain degree of disclosure can be achieved technically, the high threshold of professional knowledge, the sheer volume of information, and the fact that data operations rely on correlational rather than causal logic (Schölkopf 2022) mean that consumers and general stakeholders still cannot form a substantive understanding of the algorithm’s logic, let alone realize the original purpose of algorithm disclosure (Alsaigh et al. 2023). It may even reduce “technological democracy” to a mere formality, a symbolic display. On the other hand, in practice, the demand for algorithmic disclosure is often co-opted as a tool for responsibility avoidance. Take the COMPAS algorithm used in the United States judicial system as an example: when the system’s decisions were questioned for bias, the designers invoked “technical objectivity” to deflect ethical responsibility through a “black-box defense” (Engel et al. 2024), attributing responsibility to the model’s inherent non-interpretability. This phenomenon reflects the technological co-optation and instrumentalization of the demand for transparency, which essentially weakens the effectiveness of ethical accountability.
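
The gap between disclosure and understanding can be illustrated with a minimal sketch (synthetic data, assumed scikit-learn setup): “fully disclosing” a model yields thousands of uninterpretable parameters, while a post-hoc importance summary reports only predictive correlations, not the causal or value-laden choices behind the data.

```python
# Minimal sketch: disclosing raw parameters vs. a post-hoc importance summary.
# Synthetic data; model and tooling choices are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# "Full disclosure": thousands of split thresholds no lay reader can interpret.
n_nodes = sum(tree.tree_.node_count for tree in model.estimators_)
print(f"disclosed decision nodes: {n_nodes}")

# Post-hoc explanation: ranks features by predictive contribution only;
# it says nothing about why those correlations exist in the data.
imp = permutation_importance(model, X, y, n_repeats=5, random_state=0)
print(imp.importances_mean.argsort()[::-1][:5])  # five most influential features
```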

The deeper crisis lies in the fact that when the public blames algorithmic bias on “code defects”, the fundamental role of human subjectivity is overlooked. The value judgments embedded in data labeling, the power games in feature engineering, and the design preferences in the model’s objective function: the “human intervention” at these key links is obscured by the black-box effect (Burrell 2016). As a result of this cognitive inversion, technical governance often stays at the surface level of parameter disclosure and algorithm publicity, failing to touch the substantive reconstruction of the power structures and value mechanisms behind it.

4.3 Data Utilization and Privacy Protection: Value Trade-offs Dilemmas

Data has been described as a “new type of production factor” in the AI era, and its efficient circulation and deep utilization constitute the core support for algorithm iteration and model optimization (Cheng and Gong 2024). In this context, data security has transcended the traditional scope of technical protection and has become one of the most fundamental and universal normative requirements in the AI ethical system. The right to privacy is a product of the conflict and balance between interests, and its meaning continues to evolve with the exponential growth of computing power and the paradigm shift in modes of data utilization (Romanosky and Acquisti 2009). Today, the technical rationality represented by data utilization and the ethical values centered on privacy protection display an increasingly pronounced antagonistic tension, which not only permeates the operational stages of data collection, processing, and sharing, but also runs through the complete life cycle of AI systems from design and development to deployment and application (Hewage et al. 2024).

On the one hand, the precise evolution of AI systems relies on the continuous “feeding” of massive data: medical diagnostic models require the support of millions of medical records, financial risk-control systems rely on real-time trajectories of consumption behavior, and autonomous driving algorithms must ingest massive volumes of road-condition information (Regan 2003). This perpetual demand for “data fuel” drives technology giants to keep pushing past traditional data collection boundaries, achieving “ultra-granular” capture of personal information through sensor networks, biometrics, and other technologies. On the other hand, uncontrolled data collection and commercialization have given rise to the risks of privacy infringement, data abuse, and even digital “surveillance capitalism” (Saheb 2023), with the individual’s right to self-determination and human dignity facing systematic erosion. The essence of this dichotomy is that privacy protection takes the safeguarding of individual rights as its core demand, while data utilization points to the realization of public well-being through the efficient circulation of data and the transformation of its value. The two stand in structural opposition, reflecting rights-based versus efficiency-first values.

This paradox is further exacerbated by the ambiguity of data property rights. As a mainstream regulatory paradigm, “personal data empowerment” attempts to grant individuals data rights from the perspective of those on whom algorithms act, so as to strengthen individuals’ knowledge and control over their personal data (Ding 2022). In practice, however, this approach faces three challenges. First, when users are confronted with 10,000-word privacy agreements, the actual right to choose is virtually non-existent: users either passively accept the terms in order to access the service or face exclusion from it. When the technical architecture reduces “informed consent” to a binary option, the individual’s control over the flow of data deviates from the original purpose of empowerment (Chomanski and Lauwaert 2024). Second, the definition of personal data is contested, especially given the widespread use of anonymization and de-identification technologies, and whether data belongs to the category of “identifiable individuals” has become a focal point (Song and Mittal 2021). For example, behavioral data such as browsing records and location information may re-identify individuals once cross-referenced across multiple dimensions, blurring the boundary between “non-personal data” and “personal data”, making the scope of empowerment difficult to delineate, and leaving grey areas that enterprises can exploit to circumvent regulation (Podda 2024). Third, data infringement is often covert, technical, and difficult to detect, which makes it hard for users to obtain sufficient evidence even when they realize their rights and interests have been harmed (Schneider 2023). Moreover, the evidentiary burden, the delayed responsiveness of existing remedial mechanisms, and the high costs of individual rights protection further hinder the effectiveness of personal data empowerment mechanisms. As a result, these mechanisms frequently fail to fulfill their intended protective function, producing a dilemma of formalized rights without substantive enforcement.
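
The re-identification risk noted above can be made concrete with a toy linkage attack, sketched below with entirely hypothetical data: a “de-identified” behavioral dataset is joined with public auxiliary information on shared quasi-identifiers, re-attaching names to profiles.

```python
# Toy linkage attack on "anonymized" records (all data hypothetical).
import pandas as pd

# Released dataset: names stripped, but quasi-identifiers retained.
released = pd.DataFrame({
    "zip": ["310001", "310001", "310002"],
    "birth_year": [1990, 1985, 1990],
    "top_location": ["campus", "hospital", "mall"],
    "browsing_profile": ["tech", "health", "fashion"],
})

# Public auxiliary data an adversary can plausibly obtain.
auxiliary = pd.DataFrame({
    "name": ["Alice", "Bob", "Carol"],
    "zip": ["310001", "310001", "310002"],
    "birth_year": [1990, 1985, 1990],
})

# Joining on the quasi-identifier intersection re-attaches names to profiles.
reidentified = released.merge(auxiliary, on=["zip", "birth_year"])
print(reidentified[["name", "top_location", "browsing_profile"]])
```

Each record here is uniquely pinned down by just two quasi-identifiers; real behavioral data offers far more dimensions to intersect.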

5 Differentiated Practices and Trends in Global AI Governance Frameworks

In response to the intensification of core ethical issues such as algorithmic bias, algorithmic transparency, and data privacy in AI innovation practice, and recognizing the inherent uncertainties surrounding AI development, most countries are actively promoting the advancement and application of AI technologies while steering the direction of AI development through national policies, laws, and regulations. Summarizing the main paths of global AI law and policy and assessing their governance effectiveness can therefore lay the foundation for the institutional proposals for a paradigm shift that follow.

5.1 International AI Legal Governance

5.1.1 European Union: Ethical Leadership and the Rule of Law

With the rapid development of AI technology, the European Union has attached great importance to risk prevention and control and to value guidance at the ethical level, continuously improving the relevant legal and policy systems and gradually building a three-dimensional governance framework based on ethical principles, rule-of-law safeguards, and human-oriented values. On the whole, this presents a governance path of “ethical leadership safeguarded by the rule of law” (Smuha 2019), reflecting the European Union’s consistent commitment to integrating ethical norms into the whole life cycle of AI and to institutionalizing the implementation of European values and fundamental rights.

As early as 2019, the European Commission published the Ethics Guidelines for Trustworthy Artificial Intelligence,[1] the first document to systematically articulate the core principles of AI ethical governance (Hickman and Petrin 2021). The Guidelines explicitly state that AI should be developed in accordance with the principle of “Trustworthy AI”[2] and, accordingly, establish three fundamental requirements: first, AI technology must comply with legal regulations; second, AI systems must adhere to ethical principles and values; and third, AI should demonstrate technical robustness and social reliability. This ethical framework not only emphasizes core values such as human autonomy, fairness, and transparency but also prioritizes the protection of vulnerable groups, including children and people with disabilities. It aims to foster responsible AI development while safeguarding individual rights and interests.

Building upon the Ethics Guidelines for Trustworthy AI, the AI Act,[3] officially enacted by the European Union and taking effect in 2024, further reinforces the institutional framework for ethical AI governance and the rule of law, making it the world’s first comprehensive legal framework for AI regulation. The AI Act focuses on ensuring the safety, transparency, traceability, interpretability, and non-discrimination of AI systems, aiming to mitigate the risks AI poses to human rights, democratic values, and social stability while ensuring an enabling environment and regulatory safeguards for AI innovation and investment (Novelli et al. 2024; Veltmeijer and Gerritsen 2025). The Act adopts a risk-based tiered regulatory framework, classifying AI systems into four categories (prohibited, high-risk, limited-risk, and minimal-risk) based on their potential societal risks. It embodies a dynamic governance approach in which regulatory stringency increases proportionally with risk level, seeking to reconcile ethical safeguards with technological progress.
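
To make the tiered logic concrete, the sketch below expresses a risk-class-to-obligation lookup in Python. The mapping is a simplified, paraphrased assumption for illustration, not the statutory text of the AI Act.

```python
# Simplified sketch of a risk-tiered regulatory lookup (illustrative only;
# the obligations listed are paraphrases, not the AI Act's statutory text).
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited-risk"
    MINIMAL = "minimal-risk"

OBLIGATIONS = {
    RiskTier.PROHIBITED: ["may not be placed on the market"],
    RiskTier.HIGH: ["conformity assessment", "risk management system",
                    "human oversight", "logging and traceability"],
    RiskTier.LIMITED: ["transparency duties (e.g. disclose AI interaction)"],
    RiskTier.MINIMAL: ["no specific obligations; voluntary codes encouraged"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Regulatory stringency increases with the assessed risk tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.HIGH))
```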

Regarding these guidelines, Andrus Ansip, Vice-President of the European Commission responsible for the Digital Single Market, remarked that “AI that meets ethical standards will bring a win-win situation; it can be a competitive advantage for Europe, and Europe can become a trusted, people-centered AI leader”. Overall, the European Union’s AI governance has consistently adhered to the core “people-oriented” value orientation, insisted on the equal importance of ethical leadership and the rule of law, and committed itself to building a credible, controllable, and sustainable AI governance system through the synergistic promotion of ethical norms and the legal system (Roberts et al. 2024). This governance model not only reflects the European Union’s adherence to high ethical standards and social responsibility but also provides a systematic template and practical experience for global AI ethical governance.

5.1.2 The United States: Technology Prioritization and Ethical Embedding

The governance of artificial intelligence in the United States has an important demonstration effect in the global context, and its ethical regulation has always been centered on national strategies and social concerns. In 2019, Executive Order 13859, Maintaining American Leadership in Artificial Intelligence,[4] was issued, explicitly directing federal agencies to uphold the United States’ global leadership in AI. It also emphasized that technical standards should embody the core values of “promoting innovation and enhancing public trust” while advancing international standards to support these strategic priorities.

In response to the Executive Order, NIST issued U.S. Leadership in AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools[5] in 2019. The plan explicitly states that AI systems should be “reliable, robust, and trustworthy” and emphasizes that social and ethical considerations must be duly integrated into the development of AI standards (Hewage et al. 2024). The plan calls for active federal participation in shaping ethical standards for AI, fostering interdisciplinary research and cross-sector collaboration, and enhancing the understanding of AI’s socio-ethical implications to ensure that technological advancements remain aligned with fundamental human rights and core values.

Meanwhile, the application of ethical norms in the field of defense is particularly prominent. In 2019, the United States Department of Defense (DoD) adopted the Guidelines for Artificial Intelligence: Ethical Recommendations for the Use of Artificial Intelligence in the U.S. Department of Defense[6] proposed by the Defense Innovation Board (DIB), which establish five basic principles of AI ethics: responsible, equitable, traceable, reliable, and governable. These principles cover the entire process of development, deployment, and use of AI systems, aiming to prevent the misuse of AI technology in the military domain and to strengthen safeguards against key ethical issues such as algorithmic bias, unpredictable risks, and the attribution of responsibility. By institutionalizing ethical commitments, the United States seeks to hold the moral high ground in international military AI competition while safeguarding the legitimacy and transparency of technology applications.

Generally speaking, United States AI governance has long pursued a strategy of “prioritizing innovation and postponing regulation”, relying on a flexible, permissive institutional environment to stimulate technological innovation while paying moderate attention to the ethical risks raised by AI, particularly data privacy, algorithmic bias, and social justice. With the start of the second Trump administration, the path of AI governance in the United States may see important adjustments. On the one hand, given its emphasis on national security and technological dominance, the administration is expected to further strengthen the strategic deployment of AI in the military and security fields and to relax regulatory constraints on civilian AI so as to maximize corporate innovation potential. On the other hand, at the ethical level, regulatory efforts may contract, with the policy direction shifting toward “prioritizing industry self-regulation” to preserve free space for technological development by reducing government intervention (Drabiak et al. 2023). While reducing corporate compliance costs, such policy reconstruction may also intensify debates over algorithmic fairness and social responsibility, becoming a key variable in the future trajectory of AI governance in the United States.

5.2 Domestic AI Legal Governance

With the rapid development of AI technology, China’s legislative practice around AI governance has deepened, gradually building a multi-level, three-dimensional regulatory system covering algorithmic safety, ethical norms, data governance, and industry applications. On the whole, it shows a governance pattern driven by strategic planning and domain-specific advancement, forming a governance path with Chinese characteristics in which institutional norms and technical standards are synergistically linked.

In 2017, the State Council issued the Development Plan on the New Generation of Artificial Intelligence,[7] which put forward the strategic goal of “three steps”, set off a new wave of AI development, and explicitly proposed to “strengthen research on legal, ethical and social issues related to AI, and establish a legal, regulatory and ethical framework to ensure the healthy development of AI”. The Guiding Opinions on Strengthening Overall Governance of Internet Information Service Algorithms[8] makes clear that the goal of algorithm governance is to establish a comprehensive governance pattern with sound governance mechanisms, a complete regulatory system, and a well-regulated algorithm ecosystem; the Opinions on Strengthening the Governance of Ethics in Science and Technology[9] further advocates that ethics should come first, calling for science and technology ethics to run through the entire process of scientific research and technological development. In addition, documents such as A Next Generation Artificial Intelligence Development Plan,[10] the Ethical Norms for New Generation Artificial Intelligence,[11] and the Guiding Opinions on Accelerating Scenario Innovation to Promote High-Level Application of Artificial Intelligence for High-Quality Development of the Economy[12] collectively clarify the direction for China to seize the opportunities of AI development and promote high-level application, providing basic guidance and value orientation for the technology’s development through reinforced ethical norms.

China has also pursued sub-domain breakthroughs that respond precisely to key risks, implementing specialized regulations in key governance areas. In 2021, the Provisions on Administration of Algorithmic Recommendation in the Internet Information Service[13] addressed issues such as big-data-driven price discrimination and established mechanisms for algorithm registration, human intervention, and user autonomy. In 2023, the Administrative Provisions on Deep Synthesis in Internet-based Information Services[14] focused on deep synthesis services, enforcing content labeling and strengthening review mechanisms to ensure information authenticity and maintain public trust. The Interim Measures for the Management of Generative Artificial Intelligence Services,[15] issued in the same year, clarified specific measures and basic norms to promote the healthy development of generative AI, proposing to give equal weight to development and safety, to combine innovation with law-based governance, to adopt an inclusive, prudent, classified, and tiered regulatory approach, and to encourage technological innovation. Although it targets the sub-field of generative AI, its classified and tiered governance model also provides an important early demonstration for the future overall regulation of AI in China.

Overall, China’s AI governance system is committed to striking a balance between “safe and controllable” and “innovation-driven” (Cheng 2023), emphasizing that development and safety carry equal weight and that innovation is combined with law-based governance. It incorporates ethical principles throughout the entire process of AI technology development, striving to provide solid ethical and institutional safeguards for the healthy and sustainable development of AI by guiding the technology toward goodness while safeguarding technological progress (Ding 2025).

6 Paradigm Shift and Innovation Recommendations for AI Governance

Global AI governance is in a phase of exploration and contestation, and how to safeguard the ethical bottom line while releasing the positive momentum of technological innovation is a governance problem common to all countries. In this process, the importance of international cooperation has become increasingly prominent: the transnational influence of AI means that the regulatory model of a single country can hardly cover the ethical challenges brought about by technological development, and promoting the synergistic co-construction of global governance frameworks has become a practical necessity (Li et al. 2023). The European Union’s equal emphasis on ethical review and technical regulation, and the United States’ advocacy of interdisciplinary collaboration and industry self-regulation, provide useful references for the construction of China’s AI ethical governance system. On this basis, this study proposes a Double-helix governance model comprising the “Technology-driven Innovation Axis” and the “Ethical-legal Constraint Axis”, with a view to exploring a dynamically adaptive, synergistic, and symbiotic path of AI governance that absorbs international experience and local practice, promotes the coordinated evolution of ethics and technology, and realizes the sustainable and responsible development of AI.

6.1 Systematic Choice of Ethical Governance Pathways

In the realm of AI governance, two major theoretical orientations dominate academic discourse: the oppositional theory and the system theory (Dubber et al. 2020). The oppositional theory perceives technological development as inherently at odds with ethical constraints, advocating for strict regulatory measures to mitigate potential risks (Mäntymäki et al. 2022). In contrast, the system theory underscores the symbiotic relationship between technology and social structures, emphasizing the need to achieve a dynamic balance between innovation and ethical values.

Building upon prior discussions on AI’s social attributes, this study particularly highlights the necessity of embracing a systemic governance approach in the ethical dimension. Technological advancement and ethical oversight are not isolated forces; rather, they are deeply embedded within specific social structures, cultural contexts, and political systems, continuously influencing and shaping one another. Consequently, AI governance should break away from the binary of regulator and regulated. Instead, it should follow a more nuanced path grounded in social systems analysis (Neuberg 2003), shifting the logic of governance from confrontation to collaboration.

China’s AI governance practices in recent years have exemplified the importance of “systemic synergy”, implementing a regulatory framework that spans the entire AI lifecycle. This holistic approach integrates all relevant stakeholders – including individuals, enterprises, and organizations involved in AI management, research and development, supply, and usage – into a unified governance framework (Wang and Ding 2023). This demonstrates the state’s commitment to comprehensive coordination and regulation across the AI ecosystem. The emphasis on multi-stakeholder involvement and end-to-end governance lays a strong institutional foundation for constructing a systematic ethical governance model that ensures technological development aligns with societal values and interests (Schmarzo and Borne 2020).

6.2 Double-Helix Governance Model

In order to respond to the current dilemmas of AI governance, this study proposes the Double-helix Model for AI Governance, built on the “Technology-driven Innovation Axis” and the “Ethical-legal Constraint Axis”. The aim is to maintain the momentum of technological innovation while building a governance system with social responsibility and institutional constraints. The model does not treat technology and ethics as opposing dimensions; rather, it emphasizes that the two continuously shape each other in dynamic synergy, realizing the simultaneous enhancement of governance effectiveness and technological progress through spiraling interaction.

  1. The “Technology-driven Innovation Axis”

This axis focuses on promoting the independent innovation, industrialization, and application of AI technology, as well as the sustained improvement of economic benefits; its core driving forces cover key elements such as algorithmic breakthroughs, data sharing, computing power enhancement, and talent cultivation. As a highly complex socio-technical system, AI relies not only on the technical support of machine learning models and software components but is also deeply embedded in specific social organizations and institutional environments. Its evolution has wide-ranging impacts across different cultural, social, and political contexts, urgently requiring multidisciplinary, systematic analyses to comprehensively grasp the social changes triggered by AI’s technological advancement.

In addition, this axis of technological development should also actively respond to key issues such as talent cultivation, algorithmic discrimination, and the “technological black box”. Achieving a full understanding of the AI decision-making process, ensuring effective supervision, and establishing model monitoring and accountability mechanisms in place of blind trust in AI capabilities have become core propositions of AI governance (a minimal monitoring sketch follows the second axis below). Advancing interpretability research, enhancing the understanding of model logic, and gradually improving fairness algorithms are expected to provide more solid technical support and practical pathways for ethical and legal governance.

  2. The “Ethical-legal Constraint Axis”

The second axis aims to establish value guidance and behavioral boundaries for the development of AI technology through ethical norms, legal systems, and regulatory mechanisms, with the core mission of preventing the abuse of the technology and safeguarding social justice and basic human rights (Tsamados et al. 2022). In particular, in the context of increasing digitization, this governance axis should consistently adhere to a digital humanism stance, emphasizing that the development of AI must always serve human dignity and well-being and ensuring that technological progress does not deviate from the core value of “people-centeredness” (Wallach and Marchant 2018).

In terms of governance paths, it is possible to embed ethical principles into the entire life cycle of AI systems and to realize preventive regulation of high-risk technologies through unified ethical assessment standards and risk review mechanisms (Schmarzo and Borne 2020). At the same time, it is necessary to strengthen embedded human supervision in key decision-making processes, ensure that humans have a substantive right to supervise and intervene in the operation of AI systems, and effectively enhance the transparency and accountability of algorithms through the establishment of independent review organizations and professional evaluation teams. In addition, to address the technical and legal challenges of data rights and circulation, flexible mechanisms such as “scenario-based identification” (Ding 2022) and “data holding rights” (Gao 2023) can be explored to promote the rational use of data resources while safeguarding individual rights. Through prudent and flexible institutional design, the “Ethical-legal Constraint Axis” is expected to provide flexible yet powerful institutional support for the innovative development of AI while protecting basic human rights and social values.
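
As a minimal sketch of the monitoring and review mechanisms discussed along both axes (hypothetical data and an illustrative threshold, not a prescribed standard), a demographic-parity check can flag outcome gaps between groups for human review.

```python
# Minimal fairness-monitoring sketch (hypothetical data and threshold):
# compare positive-outcome rates across groups to flag parity gaps.
import numpy as np

def demographic_parity_gap(preds: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-outcome rates between two groups."""
    rate_a = preds[group == 0].mean()
    rate_b = preds[group == 1].mean()
    return abs(rate_a - rate_b)

rng = np.random.default_rng(1)
group = rng.integers(0, 2, 5000)                 # protected attribute (0 or 1)
# Simulated model decisions with a built-in disparity between groups.
preds = (rng.random(5000) < np.where(group == 1, 0.55, 0.40)).astype(int)

gap = demographic_parity_gap(preds, group)
print(f"parity gap: {gap:.3f}")
if gap > 0.1:                                    # illustrative review threshold
    print("flag for human review and model audit")
```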

6.3 Dynamic Interaction: Realizing the Security and Development Spiral

Given the evolving nature of AI, governance strategies should be adjusted prudently in response to different periods and contexts, ensuring an optimal allocation of regulatory resources while striving for a balanced governance ecosystem. Rather than adhering rigidly to static regulatory norms, AI governance should remain adaptable and responsive, continuously refining its approach in alignment with technological progress and societal feedback. By fostering a governance framework that emphasizes “Technology for the Greater Good” (Cheng et al. 2023), AI can be guided toward steady and responsible development. This approach aspires to ensure that AI serves as a tool for advancing social well-being while supporting the creation of a more intelligent, equitable, and sustainable governance system.

On this basis, the AI governance model is intended to facilitate the simultaneous enhancement of security and development through dynamic interaction, as illustrated in Figure 1.

Figure 1: Dynamic interactive double-helix model for AI governance.

In the face of ethical challenges and opportunities in the process of AI innovation, the proposed Dynamic Interactive Double-helix Model for AI Governance provides a feasible path for balancing security and development through the dynamic interaction of the “Technology-driven Innovation Axis” and the “Ethical-legal Constraint Axis”. The former promotes AI innovation, industrial upgrading, and economic growth, while the latter pursues fairness, transparency, responsibility, and social well-being. The synergy between the two helices fosters a governance system that is both inclusive and robust. This model enables AI to develop fully and safely while guiding society toward a smarter, fairer, and more sustainable future. It is also in line with the evaluation model for establishing a green, harmonious, and sustainable network content ecosystem (Cheng et al. 2023).

7 Conclusions and Implications

The rapid development of AI has brought unprecedented opportunities while also posing significant ethical and governance challenges, necessitating a framework that simultaneously fosters technological innovation and ensures ethical responsibility (Cui 2024). As AI becomes increasingly integrated into various sectors of society, issues such as algorithmic fairness, data security, accountability, and regulatory compliance have become more prominent. This study examines these critical issues and explores how global AI governance models have responded to them, with a particular focus on the European Union’s strong regulatory approach, the United States’ market-driven flexibility, and China’s unique advantages in policy continuity and full-lifecycle regulation. These comparisons reveal the core characteristics of different governance models in balancing technological advancement with ethical and legal constraints.

Building on this analysis, this study proposes the Dynamic Interactive Double-helix Model for AI Governance, offering a novel analytical framework for balancing technological progress and ethical-legal oversight. Unlike traditional governance models that predominantly emphasize either technological acceleration or regulatory intervention, this model highlights the dynamic interaction, interdependence and co-evolution between the “Technology-driven Innovation Axis” and the “Ethical-legal Constraint Axis”. By integrating global best practices with China’s AI governance experience, this study underscores a dynamic, multi-stakeholder collaboration mechanism that adapts to the evolving challenges posed by emerging ethical risks and disruptive technological changes.

Looking ahead, AI governance must continue to explore key areas such as global cooperation, embedding ethical principles into technology, risk-based regulatory classification, and data ownership and circulation (Shams et al. 2023). The future of AI should not be confined to technological breakthroughs and industrial upgrades; rather, it should adhere to the fundamental principle that AI is created by and for the people, ensuring that technological progress remains centered on human well-being (Zuiderwijk et al. 2021). As emphasized in China’s official position, AI development should be closely integrated with enhancing and safeguarding people’s livelihoods, driven by societal needs, and applied extensively in work, education, and daily life to foster a more intelligent, convenient, and just social environment. Only through the synergistic evolution of technology, law, and ethics can AI truly achieve human-centric development, laying a solid foundation for a smarter, fairer, and more sustainable society.


Corresponding author: Qiyan Mao, Guanghua Law School, Zhejiang University, Hangzhou, China, E-mail:

About the authors

Qiyan Mao

Qiyan Mao is a Ph.D. student at Guanghua Law School, Zhejiang University, specializing in digital law and artificial intelligence governance. Her research interests include AI ethics, data privacy, smart justice, and regulatory frameworks.

Ming Xu

Ming Xu is a lecturer at Zhejiang Sci-Tech University. Her research interests and publications include legal discourse, network governance, digital rule of law, corpus linguistics, and social semiotics.

  1. Research ethics: Not applicable.

  2. Informed consent: Not applicable.

  3. Author contributions: All authors have accepted responsibility for the entire content of this manuscript and approved its submission.

  4. Use of Large Language Models, AI and Machine Learning Tools: None declared.

  5. Conflict of interest: The authors state no conflict of interest.

  6. Research funding: None declared.

  7. Data availability: Not applicable.

References

Alahmari, N., S. Alswedani, A. Alzahrani, I. Katib, A. Albeshri, and R. Mehmood. 2022. “Musawah: A Data-Driven AI Approach and Tool to Co-create Healthcare Services with a Case Study on Cancer Disease in Saudi Arabia.” Sustainability 14 (6): 3313. https://doi.org/10.3390/su14063313.

Allen, D., S. Hubbard, W. Lim, A. Stanger, S. Wagman, K. Zalesne, and O. Omoakhalen. 2025. “A Roadmap for Governing AI: Technology Governance and Power-Sharing Liberalism.” AI and Ethics 5 (1): 1–23. https://doi.org/10.1007/s43681-024-00635-y.

Alomari, E., I. Katib, A. Albeshri, T. Yigitcanlar, and R. Mehmood. 2021. “Iktishaf+: A Big Data Tool with Automatic Labeling for Road Traffic Social Sensing and Event Detection Using Distributed Machine Learning.” Sensors 21 (9): 2993. https://doi.org/10.3390/s21092993.

Alsaigh, R., R. Mehmood, and I. Katib. 2023. “AI Explainability and Governance in Smart Energy Systems: A Review.” Frontiers in Energy Research 11: 1–28. https://doi.org/10.3389/fenrg.2023.1071291.

Burrell, J. 2016. “How the Machine ‘Thinks’: Understanding Opacity in Machine Learning Algorithms.” Big Data & Society 3 (1): 1–12. https://doi.org/10.1177/2053951715622512.

Cheng, L. 2023. “Legal Regulation of Generative Artificial Intelligence: A Perspective from ChatGPT.” Journal of Political Science and Law 4 (4): 69–80.

Cheng, L., and X. Gong. 2024. “Appraising Regulatory Framework towards Artificial General Intelligence (AGI) Under Digital Humanism.” International Journal of Digital Law and Governance 1 (2): 269–312. https://doi.org/10.1515/ijdlg-2024-0015.

Cheng, L., and X. Liu. 2024. “Unravelling Power of the Unseen: Towards an Interdisciplinary Synthesis of Generative AI Regulation.” International Journal of Digital Law and Governance 1 (1): 29–51. https://doi.org/10.1515/ijdlg-2024-0008.

Cheng, L., M. Xu, and C. Y. Chang. 2023. “Exploring Network Content Ecosystem Evaluation Model Based on Chinese Judicial Discourse of Digital Platform.” International Journal of Legal Discourse 8 (2): 199–224. https://doi.org/10.1515/ijld-2023-2010.

Chomanski, B., and L. Lauwaert. 2024. “Online Consent: How Much Do We Need to Know?” AI & Society 39 (6): 2879–89. https://doi.org/10.1007/s00146-023-01790-2.

Cui, Y. 2024. “Expert Interviews on AI and the Rule of Law Development.” In Blue Book on AI and Rule of Law in the World (2021), 405–23. Singapore: Springer Nature Singapore. https://doi.org/10.1007/978-981-99-9085-6_7.

Dattner, B., T. Chamorro-Premuzic, R. Buchband, and L. Schettler. 2019. “The Legal and Ethical Implications of Using AI in Hiring.” Harvard Business Review 97 (3): 1–7.

Ding, X. 2022. “On the Legal Regulation of Algorithms.” Frontiers of Law in China 17 (1): 88. https://doi.org/10.56397/SLJ.2022.12.03.

Ding, X. 2025. “On Legal Regulation of Mega Internet Platforms.” Science of Law (Journal of Northwest University of Political Science and Law) 43 (1): 94–108.

Doran, D., S. Schulz, and T. R. Besold. 2017. “What Does Explainable AI Really Mean? A New Conceptualization of Perspectives.” https://arxiv.org/abs/1710.00794.

Drabiak, K., S. Kyzer, V. Nemov, and I. El Naqa. 2023. “AI and Machine Learning Ethics, Law, Diversity, and Global Impact.” British Journal of Radiology 96 (1150): 20220934. https://doi.org/10.1259/bjr.20220934.

Dubber, M. D., F. Pasquale, and S. Das, eds. 2020. The Oxford Handbook of Ethics of AI. Oxford: Oxford University Press. https://doi.org/10.1093/oxfordhb/9780190067397.001.0001.

Engel, C., L. Linhardt, and M. Schubert. 2024. “Code is Law: How COMPAS Affects the Way the Judiciary Handles the Risk of Recidivism.” Artificial Intelligence and Law 32 (1): 1–22. https://doi.org/10.1007/s10506-024-09389-8.

Fensel, D., and E. Motta. 2002. “Structured Development of Problem-Solving Methods.” IEEE Transactions on Knowledge and Data Engineering 13 (6): 913–32. https://doi.org/10.1109/69.971187.

Ferrara, E. 2023. “Fairness and Bias in Artificial Intelligence: A Brief Survey of Sources, Impacts, and Mitigation Strategies.” Sci 6 (1): 3. https://doi.org/10.3390/sci6010003.

Formosa, P., and M. Ryan. 2021. “Making Moral Machines: Why We Need Artificial Moral Agents.” AI & Society 36 (3): 839–51. https://doi.org/10.1007/s00146-020-01089-6.

Friedman, B., and H. Nissenbaum. 1996. “Bias in Computer Systems.” ACM Transactions on Information Systems 14 (3): 330–47. https://doi.org/10.1145/230538.230561.

Gao, F. 2023. “Rights Allocation of Data Holders: Legal Implementation of Structural Separation of Data Property Rights.” Comparative Law Research 3: 26–40.

Hewage, C., L. Yasakethu, and D. N. K. Jayakody. 2024. “Data Protection Challenges and Opportunities Due to Emerging AI and ML Technologies.” Data Protection: The Wake of AI and Machine Learning 1: 1–27. https://doi.org/10.1007/978-3-031-76473-8_1.

Hickman, E., and M. Petrin. 2021. “Trustworthy AI and Corporate Governance: The EU’s Ethics Guidelines for Trustworthy Artificial Intelligence from a Company Law Perspective.” European Business Organization Law Review 22 (4): 593–625. https://doi.org/10.1007/s40804-021-00224-0.

Howard, A., and J. Borenstein. 2018. “The Ugly Truth about Ourselves and Our Robot Creations: The Problem of Bias and Social Inequity.” Science and Engineering Ethics 24 (5): 1521–36. https://doi.org/10.1007/s11948-017-9975-2.

Jobin, A., M. Ienca, and E. Vayena. 2019. “The Global Landscape of AI Ethics Guidelines.” Nature Machine Intelligence 1 (9): 389–99. https://doi.org/10.1038/s42256-019-0088-2.

Krumme, J., L. Wienands, and A. Teynor. 2024. “Never Mind the Codes of Conduct. DARE You to Tackle Ethics in Software Development for eHealth.” In Digital Health and Informatics Innovations for Sustainable Health Care Systems, 2–6. Amsterdam, The Netherlands: IOS Press. https://doi.org/10.3233/SHTI240330.

Larsson, S., and F. Heintz. 2020. “Transparency in Artificial Intelligence.” Internet Policy Review 9 (2): 1–16. https://doi.org/10.14763/2020.2.1469.

Leke, C. A., and T. Marwala. 2019. Deep Learning and Missing Data in Engineering Systems, 179. Berlin: Springer International Publishing. https://doi.org/10.1007/978-3-030-01180-2.

Li, J., X. Cai, and L. Cheng. 2023. “Legal Regulation of Generative AI: A Multidimensional Construction.” International Journal of Legal Discourse 8 (2): 365–88. https://doi.org/10.1515/ijld-2023-2017.

Mäntymäki, M., M. Minkkinen, T. Birkstedt, and M. Viljanen. 2022. “Putting AI Ethics into Practice: The Hourglass Model of Organizational AI Governance.” https://arxiv.org/abs/2206.00335. https://doi.org/10.1007/s43681-022-00143-x.

Marwala, T. 2018. Handbook of Machine Learning: Volume 1: Foundation of Artificial Intelligence. Singapore: World Scientific. https://doi.org/10.1142/11013.

Morley, J., C. C. Machado, C. Burr, J. Cowls, I. Joshi, M. Taddeo, and L. Floridi. 2020. “The Ethics of AI in Health Care: A Mapping Review.” Social Science & Medicine 260: 113172. https://doi.org/10.1016/j.socscimed.2020.113172.

Neuberg, L. G. 2003. “Causality: Models, Reasoning, and Inference, by Judea Pearl, Cambridge University Press, 2000.” Econometric Theory 19 (4): 675–85. https://doi.org/10.1017/s0266466603004109.

Novelli, C., F. Casolari, A. Rotolo, M. Taddeo, and L. Floridi. 2024. “Taking AI Risks Seriously: A New Assessment Model for the AI Act.” AI & Society 39 (5): 2493–7. https://doi.org/10.1007/s00146-023-01723-z.

O’Neill, P. H. 2012. “Truth, Transparency, and Leadership.” Public Administration Review 72 (1): 11–2. https://doi.org/10.1111/j.1540-6210.2011.02487.x.

Pierson, J., A. Kerr, S. C. Robinson, R. Fanni, V. E. Steinkogler, S. Milan, and G. Zampedri. 2023. “Governing Artificial Intelligence in the Media and Communications Sector.” Internet Policy Review 12 (1). https://doi.org/10.14763/2023.1.1683.

Podda, E. 2024. “Anonymization of Personal Data (Legal Perspective).” In Encyclopedia of Cryptography, Security and Privacy, 1–3. Cham, Switzerland: Springer. https://doi.org/10.1007/978-3-642-27739-9_1829-1.

Regan, P. 2003. “Privacy and Commercial Use of Personal Data: Policy Developments in the United States.” Journal of Contingencies and Crisis Management 11 (1): 12–8. https://doi.org/10.1111/1468-5973.1101003.

Ribeiro, M. T., S. Singh, and C. Guestrin. 2016. “Why Should I Trust You? Explaining the Predictions of Any Classifier.” In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–44. San Francisco, California, USA: Association for Computing Machinery (ACM). https://doi.org/10.1145/2939672.2939778.

Roberts, H., E. Hine, M. Taddeo, and L. Floridi. 2024. “Global AI Governance: Barriers and Pathways Forward.” International Affairs 100 (3): 1275–86. https://doi.org/10.1093/ia/iiae073.

Romanosky, S., and A. Acquisti. 2009. “Privacy Costs and Personal Data Protection: Economic and Legal Perspectives.” Berkeley Technology Law Journal 24 (3): 1061.

Saheb, T. 2023. “Ethically Contentious Aspects of Artificial Intelligence Surveillance: A Social Science Perspective.” AI and Ethics 3 (2): 369–79. https://doi.org/10.1007/s43681-022-00196-y.

Schneider, I. 2023. “Digital Sovereignty and Governance in the Data Economy: Data Trusteeship Instead of Property Rights on Data.” In A Critical Mind: Hanns Ullrich’s Footprint in Internal Market Law, Antitrust and Intellectual Property, 369–406. Berlin, Heidelberg: Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-662-65974-8_15.

Schölkopf, B. 2022. “Causality for Machine Learning.” In Probabilistic and Causal Inference: The Works of Judea Pearl, 765–804. Cham, Switzerland: Springer. https://doi.org/10.1145/3501714.3501755.

Schmarzo, B., and K. Borne. 2020. The Economics of Data, Analytics, and Digital Transformation: The Theorems, Laws, and Empowerments to Guide Your Organization’s Digital Transformation. Birmingham: Packt Publishing.

Schwartz, R., A. Vassilev, K. Greene, L. Perine, A. Burt, and P. Hall. 2022. Towards a Standard for Identifying and Managing Bias in Artificial Intelligence. Gaithersburg, MD: US Department of Commerce, National Institute of Standards and Technology. https://doi.org/10.6028/NIST.SP.1270.

Shams, R. A., D. Zowghi, and M. Bano. 2023. “AI and the Quest for Diversity and Inclusion: A Systematic Literature Review.” AI and Ethics 5 (1): 1–28. https://doi.org/10.1007/s43681-023-00362-w.

Shin, D. 2024. Artificial Misinformation: Exploring Human-Algorithm Interaction Online. London: Palgrave Macmillan. https://doi.org/10.1007/978-3-031-52569-8.

Smuha, N. A. 2019. “The EU Approach to Ethics Guidelines for Trustworthy Artificial Intelligence.” Computer Law Review International 20 (4): 97–106. https://doi.org/10.9785/cri-2019-200402.

Song, L., and P. Mittal. 2021. “Systematic Evaluation of Privacy Risks of Machine Learning Models.” In 30th USENIX Security Symposium (USENIX Security 21), 2615–32. Berkeley, CA, USA: USENIX Association.

Tsamados, A., N. Aggarwal, J. Cowls, J. Morley, H. Roberts, M. Taddeo, and L. Floridi. 2022. “The Ethics of Algorithms: Key Problems and Solutions.” AI & Society 37 (1): 215–30. https://doi.org/10.1007/s00146-021-01154-8.

Veltmeijer, E., and C. Gerritsen. 2025. “Legal and Ethical Implications of AI-Based Crowd Analysis: The AI Act and beyond.” AI and Ethics 5 (1): 1–11. https://doi.org/10.1007/s43681-024-00644-x.

Von Eschenbach, W. J. 2021. “Transparency and the Black Box Problem: Why We Do Not Trust AI.” Philosophy & Technology 34 (4): 1607–22. https://doi.org/10.1007/s13347-021-00477-0.

Wallach, W., and G. E. Marchant. 2018. An Agile Ethical/Legal Model for the International and National Governance of AI and Robotics. New York, USA: Association for the Advancement of Artificial Intelligence.

Wang, L., and X. Ding. 2023. “The Development and Improvement of Civil Law in the Digital Age.” Journal of East China University of Political Science and Law 26 (2): 6–21.

Weber, R. H. 2019. “Disruptive Technologies and Competition Law.” In New Developments in Competition Law and Economics, edited by K. Mathis and A. Tor, 223–40. Cham, Switzerland: Springer. https://doi.org/10.1007/978-3-030-11611-8_11.

Welsch, W. 1996. Vernunft: Die Zeitgenössische Vernunftkritik und das Konzept der Transversalen Vernunft. Frankfurt am Main: Suhrkamp.

Zeng, Y., E. Lu, and C. Huangfu. 2018. “Linking Artificial Intelligence Principles.” https://arxiv.org/abs/1812.04814.

Zhang, J., and Z. M. Zhang. 2023. “Ethics and Governance of Trustworthy Medical Artificial Intelligence.” BMC Medical Informatics and Decision Making 23 (1): 7. https://doi.org/10.1186/s12911-023-02103-9.

Zhang, Z., J. Zhang, and T. Tan. 2021. “Analysis and Strategy of AI Ethical Problems.” Bulletin of Chinese Academy of Sciences (Chinese Version) 36 (11): 1270–7.

Zuiderwijk, A., Y. C. Chen, and F. Salem. 2021. “Implications of the Use of Artificial Intelligence in Public Governance: A Systematic Literature Review and a Research Agenda.” Government Information Quarterly 38 (3): 1–19. https://doi.org/10.1016/j.giq.2021.101577.

Received: 2025-04-03
Accepted: 2025-04-03
Published Online: 2025-06-11
Published in Print: 2025-04-28

© 2025 the author(s), published by De Gruyter on behalf of Zhejiang University

This work is licensed under the Creative Commons Attribution 4.0 International License.
