Article · Open Access

How Digital Power Shapes the Rule of Law: The Logic and Mission of Digital Rule of Law

  • Xiaoxia Sun

    Xiaoxia Sun is the Dean of the Digital Law Research Institute of Zhejiang University. He has served in various leadership positions, including as the Dean of Guanghua Law School at Zhejiang University and later as Dean of the Law School at Fudan University. Sun is a Changjiang Scholar and enjoys a special allowance from the State Council. His research focuses on legal theory, philosophy of law, public law principles, and procedural theory. He is also an expert in digital law, leading research efforts in the field of digital governance and smart justice. As the founding Director of the Digital Law Research Institute, Xiaoxia Sun has spearheaded collaborative projects between academia and the judiciary, working towards digital transformation in China’s legal system. His numerous books and over 100 scholarly articles contribute significantly to the fields of legal theory and digital law.

    and Yang Xiao

    Yang Xiao is Research Fellow at Zhejiang University. His research interests lie in digital law, international law, and empirical legal studies.

Published/Copyright: October 28, 2024

Abstract

The rise of digital technologies has led to the emergence of digital private and public powers, which pose significant societal risks, challenge human rights, and reshape the rule of law. Digital power, as a new form of power, possesses inherent legal characteristics from both factual and normative perspectives. It is therefore crucial to integrate digital power into legal studies and frameworks. While subject to legal regulation, digital power also has the potential to address the limitations of human law, enhance human rights, and strengthen the rule of law. This study therefore argues for expanding legal studies from a focus on algorithm research to the broader study of digital power. It also highlights the unique mission of the digital rule of law: to harness digital power in shaping a future legal system that empowers and promotes societal well-being.

In traditional social order, state power has always been the sole dominating force. Consequently, the term “power” in legal contexts typically refers to state power, or public power, and the traditional concept of power order pertains to the division of state power. The notion of controlling power through law dates back to ancient Greek and Roman times. With the advent of modern revolutions, the principles of the rule of law and human rights emerged, establishing the rule of law as a mechanism to balance power and protect human rights. Thus, human rights, public power, and the rule of law have become the three fundamental elements of governance. Despite advancements in science and technology since the Industrial Revolution, no technological force has significantly altered this triangular order. Initially, the Internet was perceived merely as a tool for social communication, with sociologists expressing concerns that it might exacerbate social isolation and atomization. For example, in the first edition of his book “Sociology” published in 1989, Anthony Giddens discussed the impact of globalization and, by the fourth edition, he had included discussions on the new impacts of the Internet and information technology (Giddens and Birdsall 2001). Although Giddens viewed the Internet as a medium and communication tool similar to television, he expressed concerns about “losing our identity in cyberspace,” questioning whether “computer technology will dominate us rather than us controlling it.” This perspective was undoubtedly prescient, but did he foresee the future of technological development as it appears today?

In today’s digital age, individual freedom is often overshadowed by technological dominance. The rise of digital technologies has given birth to digital economies, societies, and governments. While these developments bring benefits to social life, they also exert a dominant influence on human rights, public power, and the rule of law. As we know, different categories of power and control have a great influence on law and legal discourse (Cheng and Machin 2022). In other words, technology is shaping new forms of power and new forms of the rule of law. This paper explores two core questions: First, why should digital power be incorporated into legal studies and become a legal concept with legal significance? Second, what are the logic and mission of digital rule of law? The first question establishes the foundation for the second, and the second logically follows from the first. To address these questions, this paper will examine the following five aspects:

  1. How does the application of algorithms lead to algorithmic power and expand into “digital power”?

  2. How do the two forms of digital power participate in and influence the triangular order composed of human rights, public power, and the rule of law?

  3. From a risk perspective, how are the two forms of digital power currently threatening human rights and shaping the rule of law?

  4. From a legal standpoint, should the rule of law adopt a purely cautious and passive regulatory approach towards technology? Is there potential for digital power to shape a benevolent rule of law?

  5. What is the mission of digital rule of law, and what vision does it hold for the future?

This paper aims to take a “wide-angle” perspective to comprehensively examine the positive and negative impacts of digital power on digital rule of law and to discuss the significance and mission of digital rule of law.

1 Transition: The Application of Algorithms Leading to Algorithmic Power and Expanding into “Digital Power”

Tracing the research trajectory from algorithms and algorithmic power to digital power reveals the significance and necessity of studying digital power. The exploration of digital technology and power in Chinese academia began in the late 20th century, focusing initially on digital television (Lu and Zhao 1999). By 2007, scholars recognized the independent reality of technological power (Liu 2007). The term “digital power” appeared in media studies in 2010 (Gao 2010), but legal academia only began focusing on algorithmic power in 2018 (Zhang 2018a). Scholars like Zheng Ge and Zhang Linghan emphasized the need for new rights protections due to intelligent algorithms (Zhang 2019; Zheng 2018). Since 2018, more than 50 articles on “algorithmic power” have been published on the China National Knowledge Infrastructure (CNKI). Among these articles, only a few scholars discuss algorithms from the perspective of philosophical subjectivity and rights protection (Duan 2018). However, the number of articles advocating for the governance of algorithms as a form of power is gradually increasing (Du and Wang 2023). These legal articles on “algorithmic power” generally address legal issues such as algorithmic black boxes, algorithmic discrimination, invasion of personal privacy, and procedural justice of algorithms.[1]

However, it must be acknowledged that legal scholars and judges often find themselves powerless when confronted with the algorithmic black box. They can only approach these issues through the traditional legal pathway, known as procedural thinking. This involves transforming technical problems into issues of due process for examination. In recent years, some scholars have astutely positioned algorithmic power as a matter of due process, thus defining the legal problem consciousness and “agenda” from the perspective of the “best mode of legal intervention.” This approach narrows the scope of legal research on this issue and objectively lightens the “burden” on legal scholars, acknowledging that technological issues often pose challenges beyond the capacity of legal scholarship. For instance, Chen Jinghui argues that “if the importance of algorithmic power is recognized, then algorithms are no longer merely speech or trade secrets but rather a matter of due process; correspondingly, breaking the algorithmic black box is not the optimal approach of intervention. Instead, the basic requirements of due process should be used to intervene in algorithms.” (Chen 2020) Algorithms “not only challenge the fundamental attribute of procedural constraints on arbitrariness but also impact the intrinsic values of participation, transparency, neutrality, and fairness in procedural justice.” Therefore, “the theory and institutional connotations of procedural justice need to be updated and advanced on the traditional basis.” (Guo and Yong 2023).

Limiting the issue of algorithmic power to the scope of due process regulation is feasible. However, the question arises: can legal scholarship focus solely on regulating digital technology issues from the perspective of procedural legitimacy? This is debatable. Such an approach may overlook significant risk areas in the societal application of digital technologies. As the scope of digital technology applications expands, the focus should not only be on the algorithms themselves but also on whether and how algorithms should be applied in certain fields – questions that even algorithm engineers find challenging to address. For instance, when applying algorithmic technology to automatic surveillance for crime prevention, engineers are responsible for ensuring the accuracy and transparency of the algorithms. However, they cannot make decisions about which street corner or building should be monitored. This is because algorithms are merely one aspect of digital technology, and algorithmic power is just one facet of digital power. The reach of digital power is broader and more extensive, with more complex manifestations. Therefore, it is necessary to shift the focus from the technology itself to its societal impacts, conducting a comprehensive and in-depth analysis of the actual changes that technological applications bring to society. This shift allows for a re-examination of what is referred to as “digital power.”

Firstly, social sciences should focus on the societal effects of digital technology applications, particularly their influence in various social domains. At their core, digital technology issues are technical problems, but the true research objective comes into focus only when the application of digital technology becomes the object of social science research. Domestically and internationally, the concept most frequently used in related research is “algorithm.” Therefore, when referring to the dominance of algorithms, we can continue to use the term “algorithmic power.” However, as algorithmic technology becomes widely applied in social life, the connotation of the term “algorithm” has evolved. In 2019, scholars in philosophy and social sciences reintroduced the term “digital power,” proposing that “on a micro level, digital power governs the behavior of actors and users within digital networks, converting their activity traces into big data networks. On a macro level, the big data networks formed by user behavior traces have become a new form of capital, surpassing industrial and financial capital, known as digital capital.” (Lan 2019) From 2020 to 2023, there has been an increase in papers mentioning the concept of “digital power,” with at least five papers using “digital power” as a keyword in their titles.[2] In 2023, legal scholars studying the application of digital technology on the 12345 government platform used “digital power” as a keyword in their research (Wang and Dong 2024). The “digital power” mentioned here refers to the special administrative power generated when the government uses digital technology to provide services in an administrative context. However, this does not capture the full scope of digital power. A broader examination within the social sciences reveals a more comprehensive perspective.
For example, some articles analyze and discuss the concept of digital power from political science or political sociology perspectives, asserting that “digital power, as a new form of power, has emerged and further transformed the organizational and legal forms of existing power structures.” (Li and Zhao 2023) Increasingly, social science scholars are tending to use the term “digital power” rather than “algorithmic power,” and this trend is expected to continue.

Internationally, social science scholars have indeed used the concept of “digital power” or similar terms. The concept of “data power” originally emerged in the 1980s in the context of library data storage on CDs. In recent years, international legal scholars have continued to employ related concepts. For example, in 2019, Orla Lynskey used the term “data power” in her influential article, “Grappling with ‘Data Power’: Normative Nudges from Data Protection and Privacy.” (Lynskey 2019) However, does “data power” merely refer to big data power? Clearly not. Scholar Isabel Hahn explicitly pointed out that “yet the ‘Data Power’ referred to in this article is not about Big Data: though Big Data certainly contributes to the establishing of Data Power, not all companies that employ the use of Big Data technologies have Data Power. In other words, Data Power is about more than the use of novel technologies to gain new insights into data: it is about the control over data flows between actors in the digital environment that certain companies have.” Hahn further emphasized that it is insufficient to focus only on concepts such as “gatekeepers” or “digital platforms” alone because they are too broad and do not address the notion that it is control over data which poses potential regulatory issues. The focus on Data Power, therefore, is an attempt at highlighting the problematic consequences for the individual and the broader digital ecosystem that such control may lead to (Hahn 2021). This argument effectively translates “data power” into “digital power,” shifting the focus from purely technical issues to social issues and revealing a “digital ecosystem” that cannot be fully understood through the lens of algorithmic power alone. This is a highly insightful perspective.
Isabel Hahn also identified three defining characteristics of data power: first, data power is ubiquitous in the digital environment; second, the vast and varied data leads to data power’s control over users; and third, data power refers to the ability to aggregate data across different datasets (Hahn 2021). This means that digital power has an extremely broad scope of pervasive control and a strong data aggregation capability. Isn’t this sufficient to constitute a form of digital power that rivals traditional state power?

Secondly, while algorithms form the foundation of digital technology, they are not its entirety. As digital technology continues to evolve, more technologies beyond algorithms are likely to emerge. Based on existing understanding, computing power refers to the ability to process information and data to produce predetermined outcomes (Wu 2023). Reasoning is a form of computation, with algorithms serving as the methods of reasoning. If the initial premise is incorrect, the reasoning results will also be flawed. In recent years, with the advent of technologies like AlphaGo, AlphaFold, and particularly large language models such as ChatGPT, artificial intelligence has demonstrated what are referred to as “emergent capabilities.” According to Professor Fei Wu, an AI research expert at Zhejiang University, the characteristics of artificial intelligence can be broadly summarized as follows: it possesses deep learning capabilities, can process sequential data like natural language using recurrent neural network models, performs sequence learning, utilizes the Transformer architecture centered on self-attention, and is supported by powerful computational resources, thereby exhibiting strong content synthesis abilities (Wu 2023). From this, it is evident that models based on algorithms possess content synthesis capabilities that surpass the mere computational or reasoning abilities of the past. 
Therefore, Professor Wu stated that ChatGPT has propelled artificial intelligence to a new level of content synthesis, “shaping a new paradigm of content production and becoming a powerful tool for intelligent digital communication,” and “promoting breakthroughs in areas such as language generation and conversational AI.” (Wu 2023) Unlike previous algorithms or reasoning, ChatGPT can enhance its learning through human feedback, meaning that “it incorporates human feedback on the content synthesized by the model as supervised information, fine-tuning the model’s parameters to improve the accuracy and fluency of its language responses.” (Wu 2023) It is evident that ChatGPT engages in learning, thinking, and dialogue in a manner similar to humans. Some have even boldly predicted that AI now has a capability called “grokking,” showing the ability to generalize from data it has never encountered before. In this context, ChatGPT can also be viewed as a “floating signifier,” a term whose meaning shifts based on its usage (Cheng and Liu 2024). Its evolving capabilities, powered by deep learning, enable it to generate diverse forms of content and adapt to various scenarios, making it more than just a static algorithmic tool. Simply put, ChatGPT does not rely solely on algorithms, as algorithmic technology is not the entirety of digital technology, and thus algorithmic power does not equate to digital power.
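The self-attention mechanism that Professor Wu identifies as the core of the Transformer architecture can be made concrete with a small numerical sketch. The toy code below is our own illustration, not drawn from the article or from any particular model; it implements the standard scaled dot-product formulation, and all function and variable names are ours.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the row max before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention for one token sequence.

    X          : (seq_len, d_model) token embeddings
    Wq, Wk, Wv : (d_model, d_k) learned projection matrices
    Returns (seq_len, d_k): each output row is a weighted mix of every
    position's value vector, so each token "attends" to the whole sequence.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # pairwise relevance of tokens
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                              # 4 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8)
```

The division by the square root of `d_k` keeps the dot products from growing with dimension, which would otherwise push the softmax into near-one-hot saturation; this is the sense in which every token's representation is recomputed in the context of the entire sequence at once.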

As a conservative social science discipline, law has always lagged behind technology in its development. Although law cannot keep pace with technology, it can anticipate the effects of technological applications. From the perspective of the metaverse, technology is not limited to digital technology but also includes brain science, cognitive science, and neural technologies, among others. Neural technologies combined with intelligent technologies pose a serious threat to human rights and freedoms (Li 2023). Biotechnology and neural technologies may not always be algorithmic technologies, but biotechnology inevitably involves digital technology. Thus, algorithms are not the only technological power that influences the world’s power structure. To some extent, we should still return to the broader concept of technology, where the power derived from technological applications constitutes technological power. However, this paper focuses solely on digital technology.

Ultimately, before algorithms are applied to social contexts, they are merely a form of technology and do not constitute what is termed “algorithmic power.” As some scholars have pointed out, algorithms are clearly not power in themselves; they are tools through which humans exercise power and must be combined with objectives to form algorithmic power (Ouyang 2023). It is only when algorithms are applied to human relationships and acquire social attributes that they evolve into forces with the characteristics of “power,” thereby forming a digital social ecosystem. At this point, “algorithms” are no longer purely technical but become the core force within “digital power,” necessitating legal constraints. Although algorithms are the core technological force within digital power, digital power is neither equivalent to nor limited to “algorithmic power.” Some scholars have rightly defined algorithmic power in legal terms as a procedural issue. The opacity of algorithmic black boxes is clearly a matter of due process; infringements on personal privacy can also be seen as due process issues. However, research by administrative law scholars indicates that algorithmic power is not just a procedural issue but also a substantive one. They argue that “algorithms are not commands made by computers themselves, nor are they codes written by computers autonomously. From the perspective of implementation, algorithms require human-set rules, which are then coded by technicians. Thus, algorithms inherently have a human element.” As early as 2001 to 2002, scholars noted that the operation of “digital government” has two major technical characteristics: the rigid proceduralization and rigid modeling of government operations. However, the overall human operation of “digital government” means that the government can still trample on or evade the inherent rigid regulations of “digital government” and abuse power (Liang 2001, 2002).
When algorithms are deeply integrated with administrative power, what needs to be regulated by law is not the algorithmic technology itself but algorithmic administrative power. This means shifting the focus from “technical regulation” to “power constraint” when studying algorithms. “Administrative algorithms require administrative agencies to set rules, which are then implemented by technical companies or specific personnel. The actors and responsible entities behind these actions are administrative agencies.” (Cha 2023) Thus, algorithmic power combined with state power is no longer merely a matter of procedure. If we equate digital power solely with algorithmic power and then equate algorithmic power with procedural issues, we risk overlooking many substantive issues. These include the legitimacy of pre-set rules in administrative algorithms (such as the allocation of power, rights, obligations, and responsibilities), monopolies created by artificial intelligence in finance and consumer sectors, infringements or control over public and personal safety interests by algorithms, the usurpation and sharing of state power by algorithms, biases in judicial processes caused by algorithms, and misjudgments in judicial decisions.

Algorithmic power is merely a subset of digital power, and digital power surpasses any previous form of technological power. During the era of the steam technology revolution, technological power did influence political power, but it did not reach the extent to which digital power impacts society and political power today. In summary, the reason I use the term “digital power” instead of “algorithmic power” stems from this distinction. So, what kind of new power does digital technology usher in? In a review of British scholar Jamie Susskind’s book “Future Politics: Living Together in a World Transformed by Tech,” it is noted that “algorithms, as a form of machine formula and language, are shaping the underlying logic of codified law and the new type of digital power.” (Cao 2023) Digital technology is the most extensive extension of human capability to date, being widely applied in various life and production scenarios and profoundly changing human life. Digital technology is predominantly used in business operations and state management. Under the influence of digital technology and driven by commercial and public interests, both commercial forces and state power have been significantly expanded. The “power” of digital technology, when combined with the administrative power of government governance, has given rise to a new governance technology and model, which some scholars refer to as “technocracy.”

Designers and users of digital technology often embed their own motivations, purposes, and values into the technology, transforming it into a tool that aligns with their requirements. This, in turn, enhances their influence and control over other entities through the technology. Thus, both the tool itself and the process of its application collectively constitute digital power. Digital power is a form of power that is self-generated and self-empowered through the use of digital technology. Currently, it is a concept within the realm of general social sciences rather than a legal concept, and certainly not a statutory power in the strict sense of positive law. However, technology, as a non-state form of dominion, has grown so powerful that it can rival state power, compelling individuals, markets, and societies, and even requiring states to harness this technological power. It is evident that digital power has acquired most of the characteristics of “power” in the legal sense, such as dominance, coercion, expansiveness, exclusivity, and authority. At this point in the discussion, we can define “digital power” as follows: it is a unidirectional coercive power derived from digital technology, centered on digital design, computation, and application. It is a generalized technical capability that, through digital technology, imposes binding obligations on members or units within collective organizational systems, thereby transforming the digital ecosystem into a rule of law order. Digital power is actively participating in and profoundly influencing the order constituted by human rights, public authority, and law.

As technology becomes increasingly integrated into society, “algorithmic power” has evolved into “digital power” with legal significance. Consequently, the focus of legal research has shifted, prompting the argument that “digital power” should be incorporated into the legal domain. To determine whether a “new power” in the social sciences should be included within the scope of legal power, and whether it possesses legal characteristics that could elevate it to a statutory legal concept, it is crucial to assess whether there is a need for more stringent and specialized regulatory measures to govern this new power. This involves elevating the regulation of enterprises to a higher level to establish a more optimized rule of law structure. Specifically, this assessment involves both factual and normative dimensions. From a factual perspective, does digital power exhibit the characteristics of power? Does it generate widespread social impact rather than merely affecting individual entities? From a normative perspective, does digital power involve new relationships of rights and obligations and issues of responsibility allocation? Does it necessitate new definitions of legal liability? These questions will be addressed in subsequent sections.

2 Power and Desire: Two Forms of Digital Power Impacting Human Rights

The rationale for incorporating digital power into the legal domain and recognizing it as a concept with legal significance stems from its factual characteristics as a form of power and its tangible societal impact. My argument is based on two typical application areas of digital technology: commerce and the state. Through these two domains, we can distinguish between two forms of digital power – digital private power and digital public power – each of which participates in and influences social order. Current realities demonstrate that digital technologies, including algorithms, when applied in commercial and state contexts, transcend their role as mere technical tools and become forms of social power. How, then, does digital power manifest? It is often said that today’s technology is experiencing unchecked growth, characterized by two mechanisms: first, market-driven technological development motivated by profit; second, technological advancement facilitated by public power and administration. Scholars examining algorithms have noted that the widespread application of algorithmic empowerment has led to an increase in social power and rights, but this increase is not evenly distributed, exacerbating the imbalance among private rights, public power, and private power (Zhou 2019). Recently, constitutional scholars have introduced the concept of “digital private power,” highlighting its features such as dominance and resource monopoly, quasi-regulatory roles, and quasi-state characteristics. These traits disrupt the public-private dichotomy in constitutional theory, the third-party effect of fundamental rights, and the concept of the nation-state (Yang 2021). Meanwhile, international scholars have observed a similar phenomenon and developed a comparable concept. 
In a paper published in 2021, D’Cunha argues that today’s tech giants undermine the rule of law by assuming the trappings of the state – one even has its own “supreme court” – while avoiding accountability. This “corporate power” has become the new Leviathan, reflected in mimicking state functions, digital colonialism, evading regulation, preventing collective action, and exerting ubiquitous control (D’Cunha 2021). Thus, digital private power and digital public power together constitute the digital power of our era.

Let’s first examine digital private power. When digital technology is utilized by online platforms, it gives rise to digital private power, which can also be referred to as commercial digital power or “digital capital power.” (Zhang 2021) The application of technology in the commercial realm has always been extensive because technology inherently carries commercial attributes, marked by market and profit incentives. Similarly, digital technology today has become a focal point for commerce and markets, with its primary applications spanning e-commerce, digital platform-dominated markets, the Internet of Things in daily life, cryptocurrencies, the use of robots in finance, blockchain applications in corporate governance, and big data applications in healthcare. Undoubtedly, these applications of digital technology can empower humanity and bring shared benefits to consumers’ lives, forming the market nature of technology supply and consumption. As Marta Infantino has noted, the widespread use of automated quantification and big data in digital insurance presents not only regulatory challenges and risks but also numerous consumer benefits (Infantino 2024).

What are the characteristics of the self-interest that drives digital private power? It can be said that self-interest represents a weak form of self-regulation. So, where does this self-interest originate? This question hinges on: whose interests does the digital power of commercial platforms ultimately serve? In the application and consumption of digital technology, who bears more responsibility and risk? Who enjoys greater benefits and advantages? Some scholars have pointed out that the application of digital technology may ultimately serve corporate interests rather than consumer interests. In their article “How Digital Assistants Can Harm Our Economy, Privacy, and Democracy,” Maurice E. Stucke, a professor at the University of Tennessee College of Law, and Ariel Ezrachi, the director of the Centre for Competition Law and Policy at the University of Oxford, argue that while digital assistants can certainly offer great value, a closer look reveals how – in an algorithm and data–driven world – a dominant digital assistant may ultimately serve the interests of corporations rather than consumers. Such assistants may be used to establish a controlled and manipulated personalized environment in which competition, welfare, privacy, and democracy give way to corporate interests (Stucke and Ezrachi 2017).

How is this self-interest realized? This involves the forms and mechanisms of platform power. Scholars have identified three forms of platform power: the first is “gatekeeper power,” which arises from the fact that some companies effectively serve as the infrastructure of the digital marketplace, controlling the technology that other businesses rely on to operate in the networked economy. The second form of power is leveraging. Platforms not only act as critical infrastructure but also integrate across markets. This allows a platform to use its dominant position to establish an advantageous position in an independent or ancillary market. Cross-market integration refers to platforms directly competing with companies that use their infrastructure, creating a core conflict of interest and incentivizing platforms to prioritize their own goods and services over those offered by third parties. For example, Amazon sells clothing designed by independent designers while also selling its own branded clothing. An independent study found that Amazon tends to prioritize promoting its own brand and restricts competitors’ access to prominent promotional areas on its website. The third form of power is the power to develop and utilize information. This power stems from the various forms of data that platforms collect about consumers and business users. In some cases, platforms also track user activity on third-party websites and applications. They can utilize this data in various ways, such as altering the information users see based on their profiles. For instance, Facebook might only show certain job advertisements to younger employees or certain housing advertisements to non-minority users. Platforms can also use this data to engage in first-degree price discrimination, charging each consumer a different price (Khan 2018). These three primary forms of platform power – gatekeeper power, leveraging power, and information power – grant platforms a dominant and controlling position.

Some scholars in China believe that Marx’s critique of capital power is fundamental to understanding the operation of digital capital power. In the mode of production under digital capitalism, digital capital continues the power attributes of traditional capital. Digital capitalists privatize data as a production factor and continuously exploit digital labor through methods such as monopolistic expansion, platform control, digital panopticon surveillance, and ideological control, thereby elevating their power above that of society as a whole (Xiang 2023). Other scholars argue that a few super digital platforms and the internet companies behind them possess a unique power distinct from that of traditional media and social organizations. Legally, only the government holds power, while private entities in society only enjoy rights. However, the embedding of digital platforms into the social structure and their integration of social resources have granted them a dominant position in practical terms, thereby acquiring what can be legally termed “Private Power.” In the practice of digital platforms, this power manifests as market access rights, resource allocation rights, and rule-making rights, forming a platform-dominated governance structure for content and services (Wang and Peng 2023). Based on this special nature of power, some scholars have analyzed the issue from the perspective of “behavior,” arguing that “data behavior” should be brought within the “economic regulation path.” They point to an important phenomenon: the dominance of digital technology over market entities and market order has reached a level that cannot be regulated by traditional civil law and existing economic law. Therefore, they emphasize the organic integration of “existing systems” and “new systems” (Zhang 2023).

Many scholars, both Chinese and international, regard the dominance of platforms that combine digital technology and commerce as a form of privatized power. Viewed from its coercive dominance grounded in digital technology, it can be considered “digital power”; viewed from its combination with commerce, it is essentially a form of “digital private power.” Digital private power infringes rights in a way that differs from traditional corporate power’s infringement on individual rights: it infringes on human rights, which are basic rights. The holders of digital power have become subjects capable of human rights violations, and the human rights violated here are the “digital human rights” formed under digital conditions. The private desires that drive digital private power are characterized by irrationality and weak self-control. However, when digital private power is controlled by governmental public power and law, it becomes a controllable power. This provides the first reason for incorporating digital power into the law, making it a legal and jurisprudential concept. It is the initial idea that justifies “digital human rights” and serves as the first logical starting point for proposing the “digital rule of law.”

Secondly, let us consider digital public power. In recent years, states have emphasized not only the regulation of digital technology but also its application in social governance. Both the administrative and judicial fields have seen digital technology assisting the operation of public power. In these areas, digital power is no longer just a technical force; its application depends on public power and thereby takes on the characteristics of state power. With the support of digital technology, state power has, on one hand, exhibited more intensified public power characteristics, a phenomenon often referred to as the “digital Leviathan.” For example, the use of digital technology by police agencies in some countries to prevent and investigate crimes is a typical combination of digital technology and public power. On the other hand, the use of digital technology can also counteract certain inherent attributes and drawbacks of public power. For instance, digital technology can promote the decentralization and flattening of power, avoiding inefficiency, inequality, and rent-seeking behaviors. It is conceivable that digital technology can both strengthen or optimize state power and decompose or weaken it. It may bring about improvements in governance efficiency, while also posing significant risks to individual rights or human rights. Currently, digital technology in China is primarily applied in administrative fields (such as electronic traffic police and smart city management), social management fields (such as facial recognition for community access control), and judicial fields (such as AI-assisted judicial systems). The main risks it brings include: the risk of digital power in automated public decision-making, the risk of digital power in government surveillance of individuals, the risk of auxiliary or advisory digital power in judicial AI, and the risk of substantive or decisive digital power in judicial AI.
In summary, the new type of public power generated by the state’s application of digital technology in the operation of public power can be termed “digital public power.”

Given that digital private power has private desires, does digital public power also have “public desires”? What are the characteristics of the “public desires” of digital public power? In principle, “public desires” are controllable, just as public power is controllable, because rule-of-law countries often have rational decision-making mechanisms and policy advancement mechanisms. However, the “public desires” of digital public power are exceptionally complex. First, it is well known that “management values speed,” a goal vigorously pursued in government administration; the more efficient government management becomes, the more it tends to expand its own power. Second, digital technology accelerates the realization of this desire for power expansion. Third, the personal desires of power holders are highly intertwined with the desires of public power. This is because “abstract public power must be concretized into specific positions and entrusted to power holders, establishing a ‘principal-agent’ relationship between the public and public officials.” Therefore, “the conflict between the public nature of public power and the subjective initiative of power holders is difficult to avoid at present, rooted in the separation of ownership and holding of public power” (Fan 2009). Fourth, the justifications for public power, such as the “safety” of the majority, become the most convenient excuse for the expansion of digital public power. For example, digital public power that involves surveillance of individuals is justified by the need for order and safety, especially in a world where terrorism is rampant and safety is all the harder to guarantee. In conclusion, in today’s digital age, digital safety and digital efficiency have become strong motivations and justifications for public power to utilize digital technology. This highlights the dual characteristics of controllability and complexity behind the “public desires” of digital public power.
Digital public power may turn the government into a “digital Leviathan.” Therefore, public discussion should not be limited to the new economic, work, and social relationship forms brought by digital technology but should also address the new digital government. This provides the second reason for incorporating digital power into the law, making it a legal and jurisprudential concept. It is the second idea that justifies digital human rights and the second logical starting point for proposing “digital rule of law.”

3 Risk: Two Types of Digital Power Are Threatening Human Rights and Reshaping the Rule of Law

Considering the social impact of digital power, has it already generated widespread potential social risks? Will these risks lead to a redistribution of power, rights, and responsibilities? This needs further argumentation. As previously mentioned, the self-control of private desires is poor, but as long as the government and the law are effective, external control can be exercised over them. “Public desires” are controllable because rule-of-law countries have rational decision-making mechanisms and policy advancement mechanisms. However, in situations where there is a conflict between public goals and means, “public desires” can also expand infinitely. Therefore, the risks brought by digital power can be divided into risks of digital private power based on private desires and risks of digital public power based on “public desires.”

3.1 Risks of Digital Private Power Based on Private Desires

First, digital private power poses a threat to consumers of digital technologies. This risk has garnered significant attention from legal scholars both domestically and internationally. The primary manifestations of this risk include privacy threats, black-box operations, discrimination or inequality, and big-data-enabled price discrimination (Greenstein 2022). These risks directly threaten the personal and property rights of consumers of digital technologies. Nicolas Suzor argues that the governance of digital platforms raises key constitutional issues. He believes that using the rule of law framework helps evaluate the legitimacy of online governance, allowing societies to set limits on the autonomy of these platforms. The principles of the rule of law offer the language needed to express these concerns and support the development of “digital constitutionalism,” which aims to establish and enforce appropriate standards for digital governance (Suzor 2018). The basic business model of the digital economy is to attract user attention in order to obtain and utilize data, leading to the extreme conclusion that “we are not users, but products” (Zheng 2022). Many scholars have realized that in the digital economy, “winner takes all” has become the new rule. Social status and wealth are no longer acquired solely through labor and investment but increasingly through the possession of information and data and the dominance of algorithms. The digital divide is gradually transforming into a gap in social stratification, making it difficult for those at the bottom to move upward. This situation has led to a reality where “all the data is generated by us, but the ownership does not belong to us” (Ma 2018). More seriously, the “information cocoon” effect has an imperceptible impact on people’s free will, and the power preferences of algorithm controllers further entrench this self-enclosed structure.
The process of digital individuals “spinning cocoons” is simultaneously a process of digital rights erosion. Information filtering mechanisms (including algorithmic recommendation) ensure that we see only what we want to see, hear only opinions we agree with, and befriend only like-minded people. These viewpoints, repeatedly reinforced, deepen over time, ultimately forming an “echo chamber” in which only one’s own voice is heard. The information cocoon effect continually strengthens people’s existing viewpoints, excluding the possibility of error and leading to cognitive biases and emotional responses. This, in turn, amplifies individual differences, which to some extent can undermine social consensus and cause social conflict and division (Yao 2022).
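The feedback loop behind the “information cocoon” described above can be illustrated with a deliberately simplified simulation: a recommender that always serves the category a user has engaged with most will, after a single initial click, never show anything else. The categories and engagement rule below are hypothetical, chosen only to make the dynamic visible:

```python
import random

# Toy simulation of the "information cocoon" feedback loop: a recommender
# that always serves the most-engaged category narrows exposure to a
# single viewpoint. Purely illustrative; categories are invented.

CATEGORIES = ["politics_a", "politics_b", "sports", "science"]

def recommend(history):
    """Serve the category the user has engaged with most often."""
    return max(CATEGORIES, key=lambda c: history.count(c))

def simulate(rounds=50, seed=0):
    rng = random.Random(seed)
    history = [rng.choice(CATEGORIES)]  # one essentially random first click
    for _ in range(rounds):
        history.append(recommend(history))
    return history

history = simulate()
# After the first click, a pure engagement-maximizing loop never leaves
# the initial category: exposure collapses to a single viewpoint.
```

Even this crude model shows why filtering that is individually “helpful” at each step can, in aggregate, produce exactly the self-enclosure the cited scholars describe.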

Second, digital private power poses a threat to national power and public security. This threat primarily manifests in digital technologies facilitating network fraud, cyber surveillance, and indiscriminate cyber attacks through the internet and the Internet of Things (IoT). These activities can lead to a loss of control over public security by national authorities. For example, cross-border cybercrime has severely impacted public security, characterized by a well-established online black market, the increasing use of advanced technological tools, and stronger links to illicit financial flows (Zhao and Cheng 2024). Furthermore, the amount of data controlled by governments is significantly less than that held by digital technology companies, and the computational power of governments is also weaker, placing them in an unequal position. In some cases, the use of digital technology products in everyday life can erode both individual rights and governmental power. For instance, one scholar argues that private digital currencies pose an unprecedented threat to monetary sovereignty, endangering systemic stability and ultimately undermining democratic decision-making (Martino 2024). Additionally, some scholars have highlighted that smart cars come with significant risks, including identity theft due to data breaches and the loss of autonomy for consumers due to surveillance by governments or companies. These risks are primarily borne by the consumers of smart cars. Given that consumers bear most of the costs, who then reaps most of the benefits? Clearly, it is the digital technology companies (Zhang 2018b).
Chinese scholars have pointed out that the inherent characteristics of the digital economy and its extensive penetration through capital operations in various social fields have led to a weakening trend in the micro-power of governments, negatively impacting the authority of public power and even interfering with the social order of fairness and justice (Xing 2021). If data cannot be adequately protected and is illegally exploited, it could harm national sovereignty, security, and development interests. Additionally, some scholars have recognized the threat digital private power poses to future politics. One scholar believes that “algorithmic politics is not just about left-right elections or manipulating public opinion; to a certain extent, it is also changing the form of future democracy. Algorithms, as a form of machine formula and language, are shaping the foundational logic of codified laws and new types of digital power. The integration of code language and power expression is creating a new type of power politics characterized by digitalization, privatization, and automation of force. A new power configuration mechanism, transitioning from a technological singularity to a political singularity, is taking shape.” (Cao 2023).

Third, existing laws find it difficult to regulate digital private power, and special legal measures are needed to address this risk. Why must the regulation of digital private power be special? It is already a consensus that digital private power must be constrained by traditional civil law; why, then, does it still require special legal regulation? Firstly, the voluntariness of civil acts as prescribed by law has been diminished by digital technology – people do not clearly realize that they have already given up certain rights and freedoms. In reality, it is the forced application of digital technology that produces this apparent voluntariness, autonomy, or self-governance. Secondly, digital private power not only infringes on personal data, information, and privacy but also poses risks to the established legal order, because digital platforms often occupy positions of technological monopoly, knowledge monopoly, information monopoly, and even market monopoly. For instance, several years after the emergence of the main cryptocurrency, Bitcoin, the European Central Bank explained how the current legal framework applies to cryptocurrencies; yet three years later, no meaningful measures had been taken by any EU institution, including the Parliament (Gikay 2018). This demonstrates the difficulty of regulating cryptocurrencies under existing laws. The same 2018 study of the legal treatment of cryptocurrencies in the EU accordingly proposed centralization and the creation of a state cryptocurrency as possible ways forward, examining their strengths and challenges (Gikay 2018). A well-known article in 2019 clearly argued that companies possessing data power should be regarded as “utilities” and subjected to special regulation (Lynskey 2019).

The legal responsibilities arising from the aforementioned three types of risks should primarily be borne by the users of the technology. However, technology providers also bear certain responsibilities. How to assign these responsibilities and how both parties should bear them requires legal allocation. Therefore, in the context of digital private power, the legal field needs to focus on deeper issues. Firstly, in which new areas might digital private power pose threats to rights? Secondly, which aspects of legal regulation of digital private power are ineffective? This involves examining how digital private power challenges traditional laws, whether traditional laws can fully address the risks posed by digital private power without undergoing transformation, and what effective rules can be developed to regulate digital private power. Thirdly, in the context of affirming and advocating for innovation in the digital economy, what attitude should legal policies adopt towards digital private power?

3.2 Risks of Digital Public Power Based on “Public Desire”

In this society where digital public power is increasingly strengthening, contradictions between public goals and means have emerged. This situation not only threatens a series of individual rights but also challenges the existing rule of law and the baseline of human rights. This has led to a paradox where the state must build a digital government while simultaneously defending against it. The government’s use of digital technology for social monitoring and governance can result in risks in three typical scenarios:

First, there are human rights risks arising from the application of new investigative technologies. Scholars have studied existing digital investigative methods in the criminal field, including network searches, personal location tracking, remote online extraction of electronic data, and the retrieval of biometric data from third parties. They have analyzed the specific risks associated with these methods, including threats to citizens’ privacy rights, personal information security, data rights, and equality rights. The study identifies two types of “public desire” behind digital public power: the expansionist desire of investigative authorities and the personal voyeuristic desire of individual investigators (Zheng 2023). The former represents a universal expansion of power, while the latter combines public power with the personal desires of power holders. When investigative agencies can easily obtain behavioral identification data from third parties (such as enterprises or social management departments), thereby avoiding resistance from data subjects, the degree of power restraint is significantly weakened – this is when “public desire” finds a “substitute.” (Zheng 2023) “Behavioral recognition would allow a city to safeguard large areas without the need for personnel, thus maximizing safety and minimizing employee costs.” (King 2014) These employee costs include the cost of hiring investigative personnel and government management staff. Reducing administrative costs is also a form of “public desire.”

In 2007, the city of Pittsburgh in the United States formulated a plan to connect CCTV cameras for monitoring different areas. This was still a traditional monitoring method, yet it raised concerns about future applications of “behavioral identification.” A law professor at the University of Pittsburgh wrote that behavioral identification technology shares similarities with the more widely known facial recognition technology. He argued that before implementing behavioral identification technology in Pittsburgh, rules for the application of this new technology must be established to address the general privacy concerns inherent in its use, which implicate the Fourth Amendment of the U.S. Constitution. The public should be aware that the use of such technology in public places must be restricted. Private enterprises and other third parties should only release the behavioral data they collect based on a valid search warrant issued to law enforcement. Without a search warrant, individuals should not be tracked over large geographic areas. Law enforcement should design procedural guidelines to instruct third-party companies on when and how the collected information may be used (King 2014). The risks arising from the digital public power generated by the government’s application of digital technology should, of course, be legally borne by the government.

Second, the lack of explainability in digital decision-making by the government may lead to a crisis of public trust and even the risk of power becoming uncontrollable. The black box of public power decision-making is further complicated by a technical barrier: the difficulty of explanation. Scholars have explored the automation of government decision-making and identified two types: pre-programmed rules and predictive inferencing. Pre-programmed rules involve complex expert systems understood by only a few, leading to low transparency. Predictive inferencing, based on machine learning, is even less transparent and harder to understand, making it potentially unsuitable for decisions that significantly affect individuals’ lives and freedoms (Zalnieriute, Moses, and Williams 2019). Scholars have developed a method called “counterfactual explanations,” which allows for explanations without opening the black box and is considered a new lightweight explanation method (Wachter, Mittelstadt, and Russell 2018). In the UK, several scholars, in collaboration with the Information Commissioner’s Office, have developed a new approach whereby the government can construct explanations for such digital decisions based on review clues or sources. In an experimental decision-making scenario, they instrument the decision pipeline to record review clues or sources, categorize relevant explanations according to audience and regulatory purpose, build a prototype for explanation generation, and deploy the entire system in an online demonstrator. However, it is reported that this project still carries new risks in terms of privacy, fairness, and bias (Huynh et al. 2021). If the biggest challenge in the study of digital decision-making is the explainability of decisions, this indicates that government power holders themselves find it difficult to operate digital power; consequently, digital power may escape the control of government power and, in turn, threaten human rights.
Here, the responsibility for explaining digital technology should be assigned to the technology application entity, i.e., the government, with joint liability borne by both the government and the technology provider.
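The “counterfactual explanations” approach mentioned in the preceding paragraph can be sketched in miniature: rather than opening the black box, it reports the smallest change to the input that would have flipped the decision. The eligibility rule, thresholds, and wording below are invented for illustration and are not drawn from Wachter, Mittelstadt, and Russell (2018):

```python
# Minimal sketch of a counterfactual explanation for an opaque
# benefit-eligibility decision. The decision function is treated as a
# black box: we only query it, never inspect it. All rules and
# thresholds here are hypothetical.

def decide(income: float, dependents: int) -> bool:
    """Opaque decision procedure (stands in for a black-box model)."""
    return income < 30000 or (income < 40000 and dependents >= 2)

def counterfactual(income: float, dependents: int, step: float = 500.0):
    """Find the smallest income reduction that flips a denial to an
    approval, by repeatedly querying the black box."""
    if decide(income, dependents):
        return None  # already approved; no counterfactual needed
    delta = 0.0
    while not decide(income - delta, dependents):
        delta += step
    return (f"Had your declared income been {income - delta:.0f} "
            f"instead of {income:.0f}, the application would have been approved.")
```

The appeal for public administration is evident even from this sketch: the explanation is meaningful to the affected person (“what would have changed the outcome”) without requiring the agency to disclose or even understand the model’s internals, which is exactly the trade-off the cited literature debates.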

Third, the risks stem from the government’s alternating use of digital public power and digital private power to exercise control. Some African scholars have found that in Africa, the joint actions of the state and social media companies have continuously assaulted people’s right to free speech online. Despite the advent of the digital age, the state has persistently employed measures to restrict free speech, including shutting down the internet, enacting repressive national security laws, censoring the internet, and conducting digital surveillance. Additionally, social media companies have unfettered power over user-generated content on their platforms, and this discretionary power to limit content continues to threaten people’s right to free speech (Ayalew 2021). This research specifically points out that, “Like sovereign states, social media companies are asserting leviathan power over user-generated content in their platforms. Their ever-increasing power is now evident from the arbitrary and controversial take down decisions they make in their spaces. They are applying unruly standards on user-generated content and invariably employ aggressive practices of content moderation” (Ayalew 2021). Chinese scholars have also recognized that some companies have increasingly embedded themselves in people’s daily lives, with the power they wield possessing quasi-public power characteristics or some attributes of public power (Liu 2020b; Qi 2018). Many institutions exhibit a blend of public and commercial characteristics, making it challenging to distinguish between the two (Ding 2020). This situation indicates that platforms, while holding digital private power, also act as substitutes for the state in exercising digital public power. Therefore, in such cases, both the government and platforms are responsible entities.

From the above analysis, it can be seen that at the regulatory level, digital public power involves new issues of power, rights, and responsibility distribution, and it necessitates redefining the subjects of legal accountability. Digital public power brings forth several significant legal questions: How can we prevent digital public power from infringing upon human rights or individual rights? How can we avoid the threat that digital power poses to the rule of law? What are the positive and negative impacts of digital technology on the legal system? Is it possible to achieve interdisciplinary integration between digital technology and law, leveraging technological capabilities to create more advanced forms of the rule of law, or even higher levels of legal governance, in certain areas?

4 Attitude: The Possibility of Digital Power Shaping a Benevolent Rule of Law

Due to the division of labor and specialization between technology and law, legal studies in the digital age have shown a concern for order and an unfamiliarity with technology. Consequently, most legal scholars adopt a cautious and vigilant attitude towards technology, which leads them to emphasize the need for stronger regulation of algorithmic and digital technologies, and to propose ways to regulate them. How do scholars from other social sciences view digital technology? Most of these scholars hold an optimistic attitude. For instance, economists have recognized the advantageous position of digital finance in China’s economy (Jing and Sun 2019; Zhang et al. 2019; Zhao, Zhang, and Liang 2020);[3] cultural scholars have highlighted the “digital empowerment” in the relationship between digital technology and cultural development, believing that “digitalization has become one of the main ways to protect and disseminate intangible cultural heritage” (Jiang 2021; Ma, Tula, and Xu 2019); educational scholars have concluded that “the empowerment of digital technology can effectively promote the high-quality development of higher education” (Wang 2023); and not to mention the advantages that digital new media communication technologies bring to the development of journalism. In short, the majority of social science scholars are optimistic about digital technology, with only a few expressing concern.

Thus, in China, compared to scholars in other disciplines, most legal scholars hold a general attitude of concern and precaution towards digital technology and digital power, and the majority of them accordingly suggest that digital power should be legally prevented and regulated. Only a minority of legal scholars take an optimistic attitude towards this issue. The reason lies in the academic responsibilities of legal scholars. However, is this precautionary attitude, which outweighs optimism, somewhat biased? Legal scholars are no more familiar with digital technology than scholars in other social sciences, yet they exhibit a far greater sense of concern about it. This approach, which focuses on “regulation” out of “concern” rather than on “development” out of “optimism,” is at least a collective bias.

So, what attitude should law take towards digital technology and digital power? A preliminary observation suggests the following spectrum of attitudes among legal scholars domestically and internationally: a resolute opposition-and-restriction camp, typified by European scholars; a general concern-and-regulation camp, typified by Chinese scholars; and, of course, a neutral camp and a camp that proactively promotes digital power. Should there also be a proactive camp that supports mitigating harm and promoting good, emphasizing the coexistence of regulation and guidance? The effectiveness and success of the future digital rule of law depend on our attitude. Persistent vigilance and caution can only lead to passive regulation, lacking active guidance towards good. While the coexistence of multiple attitudes is normal, Chinese legal studies should at least avoid European-style pessimism and caution. Therefore, this paper advocates a rational attitude that maintains both caution and optimism, and that views digital technology and digital power more from the positive goal of “mitigating harm and promoting good.”

Overall, the two types of digital power today bring certain risks and could pose even greater risks in the future. These digital powers have become dominant forces in the digital social order, jointly influencing individuals within society. While people today enjoy the immense benefits brought by digital technology in their lives and work, they also feel the oppression and fatigue that digital technology imposes on personal freedom and well-being. Can this be separated from the manipulation and control exerted by digital power? The coercive and dominant influence of digital power results in a highly asymmetrical encroachment on the rights of participants in the digital social order. Today, rather than treating platform digital power as an equal market right and allowing it to operate unchecked, we should reveal and be wary of its monopolistic nature. Moreover, digital power encompasses many aspects that we have not yet recognized, and we should be alert to the possibility that it could affect and control individual rights even more deeply. The extensive and deep application of emerging technologies in society has brought digital technology close to, or directly into contact with, every individual. With the continuous development of biomedical technology, information technology, big data technology, and internet technology, biomedical techniques such as organ transplantation, gene editing, and in vitro fertilization, together with personal data collection and utilization technologies such as facial recognition, iris recognition, and fingerprint recognition, not only may harm the human living environment but also threaten personal bodily integrity, freedom of movement, mental and physical health, and human dignity. This is a severe reality, and these issues are spreading in breadth and depth. Technology might destroy humanity’s unique, irreplaceable precious assets, including life, body, freedom, health, dignity, and wisdom.
If we do not carefully guard against and dispel the modern myths brought by technology, humanity’s technological creations might trap us in various self-made predicaments, where our precious assets could be destroyed through technological abuse. Therefore, if we do not reflect on these issues from a human-centered, humane perspective, and do not validate them from the legal values of humanism, the relationship between technology and humanity, law and human rights, and power and rights will become increasingly strained. However, the attitude of legal scholars towards digital power should not be solely one of concern; they should also optimistically see the other side of digital power – its innovative, inclusive, and universally beneficial public characteristics, and its positive and developmental aspects towards “good.” Hence, digital power has the potential to reshape a more “benevolent” rule of law.

First, the rights corresponding to digital power should be enriched with additional content. On July 6, 2012, the UN Human Rights Council adopted the Resolution on the Promotion and Protection of Human Rights on the Internet during its 20th session. This marked the beginning of global recognition of internet-related human rights and the establishment of protection mechanisms (United Nations Human Rights Council 2016). The resolution extends the rights from the Universal Declaration of Human Rights, the International Covenant on Civil and Political Rights, and the International Covenant on Economic, Social, and Cultural Rights to the Internet, emphasizing that “the rights of people offline must be protected online” (Korniienko et al. 2021). In China, some scholars have discussed digital rights, arguing that “digital rights are essentially a set of independent, emerging rights with new modes of empowerment, rights structures, and operational logic, rather than just ‘traditional rights with digital content.’” They propose a rights framework comprising the right to digital survival, digital personality rights, the right to algorithmic due process, and digital property rights. They argue that, from a connotative perspective, digital rights ultimately protect the individual’s interest in autonomous decision-making regarding the application of digital technology and the “freedom of action” in digital spaces. From an extensional perspective, digital rights are closely and complexly related to concepts such as digital human rights, data (information) rights, and algorithm rights. In a digital society, digital rights can be refined into three forms: “negative digital rights,” “positive digital rights,” and “instrumental digital rights.” These three forms respectively shape the choice function at the conceptual level of rights, the protection function at the discourse level of rights, and the normative function at the institutional level of rights (Luo 2023).
I agree with this perspective and also support the concept of digital human rights. “Digital human rights can be categorized into four secondary rights: the right to digital survival, the right to digital freedom, the right to digital equality, and the right to digital remedy, further developing into an open rights system.” (Gao 2023) In traditional legal domains, the relationship between power and rights is often zero-sum: a gain for one is a loss for the other. In the digital age, however, an interesting phenomenon is that as digital power expands, certain human rights also see significant expansion. This indicates that both the subjects of rights and the subjects of power are often beneficiaries of digital technology; therefore, the subjects of rights sometimes do not resent the intrusion and threats posed by digital power. Further observation reveals that beyond traditional rights, there exist some emerging rights that, while being restricted and diminished, also gain legal recognition and protection. This developmental characteristic is what defines digital rights.

Second, digital power can decompose and optimize overly centralized government power. Excessive government power sometimes leads to over-intervention in the market and excessive regulation of society, a phenomenon that the intervention of digital power can indeed alleviate. Both digital public power and digital private power can achieve this effect, because digital technology has a decentralizing function. Well-developed digital technology can therefore promote equal opportunities in society, achieve autonomy for individuals and society, share government management responsibilities, reduce the burden of government regulation, and improve government management efficiency. Digital private power can play a major role here. For example, in the emerging financial sector deeply integrated with blockchain technology, some scholars have noted that it is necessary to “introduce a decentralized autonomous organization governance model to adjust rigid regulatory thinking and collaborate with public power agencies through flexible means to achieve business regulation, data regulation, and technology regulation in this industry.” (Liu 2023) However, digital private power ultimately serves private interests: while it can do good, it can also do harm. Therefore, some scholars have pointed out that “technology giants and a few knowledge elites who control core artificial intelligence technologies and massive amounts of data, under the dominance of capital and data logic, have gained the power to dispose of individual information and even control political agendas.
The discourse power originally held by individuals dispersed in cyberspace is converging towards new centers of capital and knowledge, making the power structure in the AI era show obvious centralization characteristics.” (Ye and Xu 2019) Therefore, can we construct a new type of rule of law that adapts to the digital age, one that can both address the threats posed by digital private power and utilize digital power to decompose state power, forming a new order between digital power and state power, and alleviating issues of excessive centralization, over-intervention, and rent-seeking corruption?

Third, optimism about digital technology is also reflected in the potential for integrating technology and law to construct a more sophisticated framework than the traditional rule of law. Take the application of artificial intelligence in the judicial field as an example. OpenAI released ChatGPT in late 2022, and shortly afterwards a judge in Colombia used the freely available chatbot to draft a court ruling – reportedly the world’s first ruling drafted with ChatGPT. The ruling included the chatbot’s complete responses as well as the judge’s own insights into the applicable legal precedents. Although the judge stated that AI was used merely to “extend the arguments of the adopted decision,” it can indeed “speed up the drafting of rulings, and its responses are fact-checked.” (Klingensmith 2023) This indicates that AI is increasingly being applied in legal and judicial contexts. When asked to define legal terms contained in regulations, ChatGPT excelled, providing answers consistent with those of U.S. federal and state courts. In another test, ChatGPT assessed its own legal capabilities and acknowledged its limitations, stating, “It is unethical for me to provide legal advice because I am not a qualified legal professional.” (Klingensmith 2023) After conversing with ChatGPT, the tester noted that although some answers were “imperfect and sometimes problematic,” the chatbot, while perhaps not “ready for prime time” yet, “doesn’t seem far off.” The tester pointed out that “as a machine learning system, ChatGPT may lack the nuanced understanding and judgment of a human lawyer in interpreting legal principles and precedents.” (Klingensmith 2023) In 2023, we saw the emergence of large language models aspiring towards AGI (Artificial General Intelligence). We have yet to fully grasp their potential applications in the judicial field or the extent to which their technological capabilities might surpass previous models.
However, we can affirm that the research and development of judicial AI based on large language models hold considerable promise.

Additionally, towards the end of 2023, a noteworthy piece of news emerged from the United Kingdom. A cross-jurisdictional judicial panel in the UK developed a “Guideline” for the application of artificial intelligence in the judiciary, marking the first such guideline in human history. The UK judiciary’s official website released this “AI Judicial Guideline” on December 12, 2023. The first paragraph of the announcement stated, “The use of artificial intelligence across society continues to increase, and its relevance to the courts and tribunal system is also increasing. All judicial officers must be aware of the potential risks. As emphasized in the guideline, it is particularly important to be aware that the public versions of these tools are inherently open, and therefore private or confidential information should not be input into them.” The guideline is described as the first step in a series of future efforts to support the judiciary’s interaction with artificial intelligence: “All work will be reviewed as technology continues to develop,” and an “FAQ document to support the guidance” will also be published. (Judiciary of England and Wales 2023) This statement cautiously points out three layers of meaning: First, there is a growing relevance between the judiciary and artificial intelligence. Second, it emphasizes the potential risks of applying artificial intelligence in the judiciary, such as threats to information confidentiality. Third, it affirms the guideline’s support for “interaction between the judiciary and AI” and presciently notes that this is the “first step” in future work. The six-page guideline opens by stating, “This guidance has been developed to assist judicial office holders in relation to the use of Artificial Intelligence (AI). It sets out key risks and issues associated with using AI and some suggestions for minimising them.
Examples of potential uses are also included.” It also advises, “Before using any AI tools, ensure you have a basic understanding of their capabilities and potential limitations.” (Judiciary of England and Wales 2023) In a country as traditionally conservative as the UK, this guideline represents a significant breakthrough and a cautiously optimistic indication of the future.

Beyond the concept of “mitigating harm,” how can digital technology and digital power be harnessed to “promote good” and thereby enhance human welfare? This is a higher-level issue that digital rule of law faces as artificial intelligence enters the AGI era. The optimistic outlook on the future of digital technology stems from its immense potential; regulated technology can indeed “promote good,” enhancing human welfare and even advancing the “common good” of humanity. Therefore, digital technology can also support the implementation of the rule of law and the protection of human rights. Contemporary emerging technologies can benefit humanity by enhancing human capabilities and can create opportunities for independent innovation in Chinese legal studies. Our legal studies should embrace new technology with an open attitude.

The attitude of legal studies is closely linked to the rule of law’s stance on digital power. Legal scholars tend to be relatively rational, while government departments and local governments often exhibit irrational attitudes in their decision-making, influenced by “public desires.” Therefore, the government should heed the opinions of legal scholars. All forms of power should be confined within the cage of institutional regulation, and this applies to both digital private power and digital public power. The social nature of digital technology determines the origin of the rule of law for digital power, and the emergence of digital power necessitates bringing it within the order of the rule of law, underscoring the necessity of the digital rule of law. Thus, the concept of the “digital rule of law” is inevitable. The commonality between digital power and traditional power lies in their subjection to the control of the rule of law; their difference is that digital power, while subject to the control of the rule of law, can also improve the rule of law and benefit humanity. Utilizing digital power can establish a more optimized structure of the rule of law, shaping a legal system that can “empower and promote good.” For instance, generative AI presents novel challenges to the current legal framework. Its governance should uphold the principle of balancing safety with development, promoting an AI code of ethics centered on human values (Li, Cai, and Cheng 2023). This approach will facilitate the establishment of a legal system capable of guiding the responsible use of AI and fostering the application of generative AI within the context of the digital rule of law, particularly in judicial settings, ensuring that it “empowers and promotes the common good.”

5 Mission: The Significance of Digital Rule of Law

From the development trends of two types of digital power, we can deduce the future development trends of the rule of law and the inevitability and significance of the emergence of digital rule of law. From the perspective of different tasks and missions, digital rule of law, like the rule of law in education, finance, and taxation, is rapidly emerging as a field-specific rule of law in a new industry. However, digital power is ubiquitous and widely influences human social life. In every field, there is substantial involvement of digital technology and digital power. Therefore, digital rule of law comprehensively and ubiquitously overlays and integrates with traditional rule of law. This is the main reason why digital rule of law has rapidly risen with the development of digital technology. Currently, humanity faces a dilemma with the rapid rise of digital technology: we cannot abandon digital power, nor can we allow it to run unchecked. Thus, people are turning to law and the rule of law for solutions. So, what are the differences between digital rule of law and traditional rule of law? Or, what special issues should digital rule of law focus on? On one hand, based on the dual characteristics of the pros and cons of digital technology applications, digital power, like public power, needs to be confined within the cage of institutional regulations. This is the negative aspect of the rule of law path and model. On the other hand, the law can leverage digital technology to empower humanity or expand human welfare, which is the positive aspect of the rule of law path and model. Based on this, we can easily identify three major missions in an optimized rule of law order structure, ranked from lower to higher: restraining the misuse of technology, optimizing the rule of law model, and promoting the beneficial use of technology. It is necessary to elaborate on these three missions.

Restraining the misuse of technology refers to identifying the technological risk points of digital power and regulating them in a timely manner. This regulation of technology falls under the negative approach of “restraining the misuse of technology.” Such regulation can be achieved through laws or through digital technology itself. Digital risks differ from other traditional risks in that the subjects threatened or protected by these risks cannot directly perceive their existence. “Digital risks are very unique in nature as they do not physically compromise our survival – this can be seen with environmental, health, or security risks. On the contrary, digital risks affect people’s rights, political freedom (including the very functioning of democracy), and, ultimately, human dignity, in addition to data privacy and information security.” (Fernandez 2023) This unique nature of digital risks makes it more difficult for citizens to recognize them.

The risks of digital power must be identified from within digital technology itself. Here are four examples. Firstly, some scholars cite cases such as autonomous vehicle accidents to argue that the liability framework for algorithmic infringement should differ significantly from the framework for human infringement. On this basis, they suggest establishing a “reasonable algorithm” standard for the tortious acts of algorithmic decision-makers, analogous to the “reasonable person” or “reasonable professional” standards applied to human tortfeasors (Chagal-Feferkorn 2018). Secondly, with the drawbacks and risks of algorithmic power already identified, some scholars keenly observe that big data and data quality also pose risks, and argue that the law should provide a standard or framework for data quality. On this view, data power is closely related to the acquisition and utilization of knowledge and data; yet in debates about data power, the issue of data quality has rarely been mentioned, primarily because legal standards for data quality are currently lacking. The first regulatory attempts to address this issue are hidden in Article 6 of the EU Data Protection Directive and in Article 28b of the German Federal Data Protection Act (BDSG) concerning scoring. Thus, with the help of preliminary research in computer science and sociology, a provisional and fragmented framework for legal standards of data quality can be established (Hoeren 2017). Thirdly, the issue of causality in algorithms remains unresolved: there are core challenges related to causality and intent in the context of the AI black box problem. Can current legal principles be applied to this?
In 2018, Yavar Bathaee proposed a “Sliding Scale System” in his article “The Artificial Intelligence Black Box and the Failure of Intent and Causation.” He argued that this is a better method for addressing causality and intent tests – relaxing liability requirements when AI operates autonomously or lacks transparency, while retaining traditional intent and causality tests when humans supervise AI or when AI is transparent (Bathaee 2018). Fourthly, some international scholars have proposed more cutting-edge risks that are currently difficult to regulate. They believe that digital risks are not limited to infringing on our privacy but may also affect our deeper free will. “Digital technologies fight to capture our attention and can trap us in a certain ideological frame or ‘filter bubble.’ It is a ‘friendly Big Brother’ that knows us better than we know ourselves and can condition our thoughts and opinions. Even more invasive technologies are being developed.” (Fernandez 2023) It is evident that digital power, like a Leviathan, has the characteristic of continuous reinforcement, making the challenge of “mitigating harm” even more difficult.

Optimizing the rule of law means discovering and establishing new values within digital legal practice, updating the concept of the rule of law, and optimizing future legal systems. Digital legal practice has comprehensively permeated our lives; while we adhere to the traditional value system of the rule of law, should we also endow it with new value connotations? For example, a judge has discussed new principles for using artificial intelligence in the judiciary, arguing that the work of courts and judges should be constrained by appropriate procedural standards, as stipulated in Article 6 of the European Convention on Human Rights. Additionally, the Council of Europe has established ethical principles for the use of artificial intelligence in the judiciary. These five “ethical principles” include respect for fundamental rights, the principle of equal treatment, data security, transparency, and the principle that AI must remain under the control of users. Similarly, the article “Kafkaesque AI?” argues that establishing standards for the use of artificial intelligence in judicial decision-making requires reference to machine ethics, procedural justice theory, the rule of law, and due process principles. The article discusses values such as fairness, accountability, the right to be heard, and dignity and respect, and proposes potential solutions to existing problems. Its conclusion is therefore that AI decision-making concerning humans does not have to be Kafkaesque: many solutions can not only remedy the shortcomings of current AI technology but also enrich and strengthen the legal system (Kemper 2020). These new ethical principles in fact represent an update in the judicial philosophy of courts and judges, with the fundamental starting point being new human rights issues within the concept of the rule of law in the digital age. Consequently, the traditional concept of the rule of law will also evolve.

Should digital rule of law establish new fundamental principles based on human rights? Chinese scholars have keenly recognized that digital power concerns human rights. Professor Zhang Wenxian pointed out that “‘digital human rights’ have simultaneously become the most important emerging rights.” “‘Digital human rights’ demand that digital technology be people-centered and that the power and authority of human rights be used to strengthen the ethical constraints and legal regulations on the development and application of digital technology. Proposing the concept of ‘digital human rights’ is a strategic necessity to lead the new generation of human rights.” (Zhang 2020) This viewpoint has received responses from many scholars, though some argue that digital human rights are not a new generation of human rights (Liu 2021). Gao Yifei believes that “digital human rights are neither a new category parallel to existing human rights nor can they be simply classified as new content under a certain human rights value. Instead, they emphasize reaffirming the value of human autonomy in the application of technology in the digital age, under the premise of appropriately distinguishing between human rights and rights, and using human rights as a value criterion for evaluating or guiding the application of digital technology.” (Gao 2022) He further suggests that digital human rights should be transformed from a value concept into institutional norms integrated into the existing legal system. Other scholars argue that “the subjects of ‘digital human rights’ include both individuals and collectives, while the obligors primarily refer to enterprises and certain public institutions with digital power, with the two presenting a deeply intertwined relationship of confrontation and cooperation.” (Ding 2022) These studies focus on the values and fundamental principles of digital rule of law. 
Whether or not digital human rights constitute a new generation of human rights, the issue of digital human rights is an objective reality that we must pay attention to.

Promoting technology for good aims to enhance human welfare by fostering the positive use of digital technology through legal means and by optimizing the structure of the rule of law. Just as technology experts apply digital technology in the healthcare field, should not legal scholars and scientists, as digital technology continues to develop, consider how it can bring more benefits to the law and the people? In 2018, a scholar highlighted the weaknesses of current legal technology and envisioned its future. He argued that conventional legal technologies are often ineffective for ordinary citizens, who struggle to assert their legal rights against the powerful, who are better positioned to benefit from such technologies. Instead, he advocated focusing on transformative technologies: artificial legal cognition can support the powerless by automating the identification of legal issues and enabling collective legal action (Gowder 2018). Currently, some industries in China have started using digital technology for contract compliance reviews to predict legal risks. In 2017, international scholars proposed using artificial intelligence and algorithms to protect personal privacy (Scripa 2017). Scholars have likewise researched the application of digital technology in rule-making, such as making materials related to rule-making widely available to the public through online platforms, enabling large-scale public participation in the rule-making process (Moxley 2016). Additionally, scholars have studied the design of the Council of Europe’s electronic voting protocol, exploring issues of confidentiality and verifiability in electronic voting systems (Muller-Torok, Bagnato, and Prosser 2020). More cutting-edge research in this field includes studies on public participation in the rule-making process through electronic platforms.
Researchers in this project believe that electronic rule-making platforms facilitate public deliberation on policy or regulatory proposals, allowing decision-makers to weigh relevant considerations and promoting a digital consultation process for public review of policy or regulatory suggestions (Perez 2020).

If we boldly envision the future with a science-fiction mindset, what would be the greatest benefit to humanity brought by a more optimized legal order structure centered on digital governance? Starting from the inherent drawbacks of human legal systems, we can imagine this promising future. Traditional legal theory tells us to follow the “generality of law.” This principle emphasizes that rules are rules, characterized by a “one-size-fits-all” approach, which inevitably leads to unreasonable or even unjust outcomes in individual cases. This is the drawback of the formalist rule of law and the challenge for the substantive rule of law. How can we overcome these drawbacks of the formalist rule of law? How can we achieve individual justice in specific cases without inviting the abuse of a so-called substantive rule of law? This is a significant challenge in human law enforcement and adjudication, and it is possible to fundamentally address it using digital technology. Today, low-cost information collection and intelligent evaluation tools have emerged. As some scholars have analyzed, “scoring the daily behavior of the public” can “help incorporate more detailed behaviors into management.”[4] While this inevitably poses risks to individual rights, it also potentially brings benefits to the rule of law. Since 2019, the academic community has been discussing “Personalized Law,” with multiple related papers published in the University of Chicago Law Review.[5] The concept originates from the book “Personalized Law: Different Rules for Different People” by Professors Omri Ben-Shahar and Ariel Porat.
Professor Cary Coglianese from the University of Pennsylvania commented that the book starts from an important recognition of the limitations of general rules and depicts a hopeful future vision where these limitations are overcome through algorithmic systems, thereby making personalized decisions based on legal objectives. For example, to personalize the existing rule of “commercial airline pilots retiring at 65,” it could be modified to “commercial airline pilots must undergo regular health checks starting at age 40 and retire at 65.” Here, we can see that the authors envision a world with fewer universal rules and more personalized decisions. With the current advancements in predictive analysis tools, such as machine learning or artificial intelligence, these tools are beginning to be allowed to make more precise and personalized decisions in various environments (Coglianese 2022). Mireille Hildebrandt distinguishes between code-driven and data-driven regulation as novel forms of legal regulation and proposes the concept of ‘agonistic machine learning’ as a means to bring data-driven regulation under the rule of law (Hildebrandt 2018). Cary Coglianese concludes that Ben Shahar and Ariel Porat articulate a vision of establishing a more personalized, effective, and just legal system based on data-driven, finely-tuned legal obligations. Whether society can achieve this hopeful vision depends on its ability to overcome the challenges of personalized law – but not necessarily to completely overcome these challenges. Professor Dan L. Burk from the University of California, in his article “Algorithmic Fair Use,” believes that, “like other human artifacts, the marginal cost of law benefits from economies of scale; standardized, one-size-fits-all laws can be economically formulated and promulgated. The law may be like a tailored suit, with some adjustments made by courts or other adjudicators at the end of the supply chain. 
But even moderate judicial adjustments can greatly increase the cost of applying the law, and rare cases of customized regulation are even more socially costly. Technological advancements may significantly reduce the cost of customized regulation. The potential of ‘personalized law’ depends on the development of ubiquitous data collection and algorithmic data processing, as well as the significant reduction in real-time communication costs.” (Burk 2019) He also acknowledges that the extensive and growing literature on algorithmic regulation has warned us about the inherent pitfalls of relying on such technologies, including false objectivity, reduced decision-making transparency, and design biases, which will inevitably affect users’ expectations of their management processes. If there exist what Dan L. Burk calls reliable algorithms, then the pilot implementation of “personalized law” in certain areas of law allows us to boldly foresee a new framework of personalized and humanized digital governance or future rule of law. In this sense, digital governance already encompasses the full connotations of “computational law” and the “future rule of law.”

6 Conclusions

In summary, legal studies should focus not only on digital technology itself but also on the positive and negative effects digital technology produces in its social applications; only then can the substantive issues be uncovered. Legal studies must closely monitor the unique regulatory challenges posed by the socialization of technology – specifically, digital power in the commercial and state domains, referred to respectively as digital private power and digital public power. Digital power can pose risks to fundamental human rights and threaten to disrupt the legal order. As a new type of power, digital power is both a factual and a normative legal issue; it should therefore be included in the domain of legal studies and incorporated into the legal conceptual system so that it can be regulated by law. Hence the concept of the “digital rule of law” has emerged. Legal scholars should concern themselves not with technological progress itself but with how to use digital technology, and the two types of digital power it brings, to mitigate harm and promote good, thereby shaping a future rule of law that can “empower and promote good.” We have reason to believe that digital technology, when constrained by law and ethics, can enhance human welfare in the future. This is the important mission and promising future of China’s digital rule of law.

Additionally, two critical issues in the technological experimentation of the digital rule of law deserve our attention. First, this significant undertaking cannot proceed without collaboration between legal scholars and scientists. Without the participation of scientists, the design of digital legal tools is almost impossible to complete; without the participation of legal scholars, digital legal tools can be positioned only as auxiliary tools. Based on today’s generative language models such as ChatGPT, is it possible to design a more specialized, legally professional, and ethically aligned model? This depends on the depth of legal scholars’ intellectual involvement. Second, the development of the rule of law in China has certain characteristics, such as the need for top-down systemic promotion. Another opportunity for the development of the rule of law in China is the opportunity for the digital rule of law brought about by the government’s positive attitude towards digital technology (relative to Europe). If these opportunities and characteristics can be effectively combined, is it possible for China’s autonomous legal studies and rule of law to rise in the field of the digital rule of law?


Corresponding author: Yang Xiao, Guanghua Law School, Zhejiang University, Hangzhou, China, E-mail:

About the authors

Xiaoxia Sun

Xiaoxia Sun is the Dean of the Digital Law Research Institute of Zhejiang University. He has served in various leadership positions, including as the Dean of Guanghua Law School at Zhejiang University and later as Dean of the Law School at Fudan University. Sun is a Changjiang Scholar and enjoys a special allowance from the State Council. His research focuses on legal theory, philosophy of law, public law principles, and procedural theory. He is also an expert in digital law, leading research efforts in the field of digital governance and smart justice. As the founding Director of the Digital Law Research Institute, Xiaoxia Sun has spearheaded collaborative projects between academia and the judiciary, working towards digital transformation in China’s legal system. His numerous books and over 100 scholarly articles contribute significantly to the fields of legal theory and digital law.

Yang Xiao

Yang Xiao is Research Fellow at Zhejiang University. His research interests lie in digital law, international law, and empirical legal studies.

  1. Research ethics: Not applicable.

  2. Informed consent: Not applicable.

  3. Author contributions: All authors have accepted responsibility for the entire content of this manuscript and approved its submission.

  4. Use of Large Language Models, AI and Machine Learning Tools: None declared.

  5. Conflict of interests: The authors state no conflict of interest.

  6. Research funding: None declared.

  7. Data availability: Not applicable.

References

Judiciary of England and Wales. 2023. “Artificial Intelligence (AI) - Judicial Guidance.” https://www.judiciary.uk/guidance-and-resources/artificial-intelligence-ai-judicial-guidance/ (accessed February 21, 2024).

Ayalew, Y. E. 2021. “From Digital Authoritarianism to Platforms’ Leviathan Power: Freedom of Expression in the Digital Age Under Siege in Africa.” Mizan Law Review 15 (2): 455–92. https://doi.org/10.4314/mlr.v15i2.5.

Bathaee, Y. 2018. “The Artificial Intelligence Black Box and the Failure of Intent and Causation.” Harvard Journal of Law and Technology 31 (2): 889–938.

Burk, D. L. 2019. “Algorithmic Fair Use.” The University of Chicago Law Review 86 (2): 283–307.Search in Google Scholar

Busch, C. 2019. “Implementing Personalized Law: Personalized Disclosures in Consumer Law and Data Privacy Law.” The University of Chicago Law Review 86 (2): 309–31.10.2139/ssrn.3181913Search in Google Scholar

Cao, K. 2023. “Algorithmic Power and Future Democracy: The Political Effects and Regulation of Digital Technology—An Examination Based on ‘The Power of Algorithms: How Humans Coexist Together?’.” United Front Studies (2): 125.Search in Google Scholar

Casey, A. J., and A. Niblett. 2019. “A Framework for the New Personalization of Law.” The University of Chicago Law Review 86 (2): 333–58.Search in Google Scholar

Cha, Y. 2023. “The Administrative Law Attributes and Regulation of Algorithms.” Law and Social Development 6: 168–70.Search in Google Scholar

Chagal-Feferkorn, K. 2018. “The Reasonable Algorithm.” University of Illinois Journal of Law, Technology and Policy 2018 (1): 111–2.Search in Google Scholar

Chen, J. 2020. “The Legal Nature of Algorithms: Speech, Trade Secrets, or Due Process?” Comparative Law Research 2: 120–32.Search in Google Scholar

Cheng, L., and X. Liu. 2024. “Unravelling Power of the Unseen: Towards an Interdisciplinary Synthesis of Generative AI Regulation.” International Journal of Digital Law and Governance 1 (1): 29–51. https://doi.org/10.1515/ijdlg-2024-0008.Search in Google Scholar

Cheng, L., and D. Machin. 2022. “The Law and Critical Discourse Studies.” Critical Discourse Studies 20 (3): 243–55. https://doi.org/10.1080/17405904.2022.2102520.Search in Google Scholar

Coglianese, C. 2022. “Moving Toward Personalized Law.” The University of Chicago Law Review Online: 1–14.Search in Google Scholar

D’Cunha, C. 2021. “A State in the Disguise of a Merchant: Tech Leviathans and the Rule of Law.” European Law Journal 27 (1-3): 109–31. https://doi.org/10.1111/eulj.12399.Search in Google Scholar

Ding, X. 2020. “On the Legal Regulation of Algorithms.” China Social Sciences 12: 151.Search in Google Scholar

Ding, X. 2022. “On the New Characteristics of ’Digital Human Rights’.” Science of Law 6: 52.Search in Google Scholar

Du, Z., and B. Wang. 2023. “Algorithm Governance in the Age of Artificial Intelligence: Power Expansion and Risks.” Hunan Social Sciences 5: 84–93.Search in Google Scholar

Duan, W. 2018. “The Algorithmic Power of Data Intelligence and its Boundary Correction.” Exploration and Free Views 10: 94.Search in Google Scholar

Fan, J. 2009. “Absolute Power, Absolute Corruption.” Administration and Law (6): 13.Search in Google Scholar

Fernandez, J. V. 2023. “The Risk of Digitalization: Transforming Government into a Digital Leviathan.” Indiana Journal of Global Legal Studies 30 (1): 4–6.10.2979/gls.2023.a886160Search in Google Scholar

Gao, S. 2010. “From Centralization to Decentralization: A Review and Commentary on Media Transformation Studies.” Journal of Zhejiang University of Media and Communications 4: 21.Search in Google Scholar

Gao, Y. 2022. “Why Human Rights Are Important in the Digital Age: On Digital Human Rights as a Value System.” Modern Law Science 3: 150.Search in Google Scholar

Gao, Y. 2023. “Systematic Development of the Normative Structure of Digital Human Rights.” Law Research (2): 37.Search in Google Scholar

Giddens, A., and K. Birdsall. 2001. Sociology, 4th ed., 596–8. Cambridge: Polity Press.Search in Google Scholar

Gikay, A. A. 2018. “Regulating Decentralized Cryptocurrencies Under Payment Services Law: Lessons from European Union Law.” Case Western Reserve Journal of Law, Technology and the Internet 9 (1): 1–35.Search in Google Scholar

Gowder, P. 2018. “Transformative Legal Technology and the Rule of Law.” University of Toronto Law Journal 68, no. S1: 82–105. https://doi.org/10.3138/utlj.2017-0047.Search in Google Scholar

Greenstein, S. 2022. “Preserving the Rule of Law in the Era of Artificial Intelligence (AI).” Artificial Intelligence and Law 30 (3): 291–323. https://doi.org/10.1007/s10506-021-09294-4.Search in Google Scholar

Grigoleit, H. C., and P. M. Bender. 2021. “The Law Between Generality and Particularity. Chances and Limits of Personalized Law.” In Algorithmic Regulation and Personalized Law: A Handbook, edited by C. Busch, and A. De Franceschi, 115–36. Oxford: Bloomsbury Publishing PLC.10.17104/9783406779336-115Search in Google Scholar

Guo, C., and Q. Yong. 2023. “Procedural Justice of Algorithms.” Journal of China University of Political Science and Law 1: 164–80.Search in Google Scholar

Hahn, I. 2021. “Purpose Limitation in the Time of Data Power: Is There a Way Forward?” European Data Protection Law Review 7 (1): 33–44. https://doi.org/10.21552/edpl/2021/1/7.Search in Google Scholar

Hildebrandt, M. 2018. “Algorithmic Regulation and the Rule of Law.” Philosophical Transactions of the Royal Society A 376: 20170355. https://doi.org/10.1098/rsta.2017.0355.Search in Google Scholar

Hoeren, T. 2017. “Big Data and the Legal Framework for Data Quality.” International Journal of Law and Information Technology 25 (1): 26–37.10.1093/ijlit/eaw014Search in Google Scholar

Hu, L. 2019. “The Source of Power in the Digital Society: The Reproduction of Scores, Algorithms, and Norms.” Shanghai Jiao Tong University Law Review 1: 34.Search in Google Scholar

Huynh, T. D., N. Tsakalakis, A. Helal, S. Stalla-Bourdillon, and L. Moreau. 2021. “Addressing Regulatory Requirements on Explanations for Automated Decisions with Provenance—A Case Study.” Digital Government: Research and Practice 2 (2): 4–10. https://doi.org/10.1145/3436897.Search in Google Scholar

Infantino, M. 2024. “A Comparative Study of Automated Quantification in Digital Insurance.” International Journal of Digital Law and Governance 1 (1): 1–27. https://doi.org/10.1515/ijdlg-2024-0003.Search in Google Scholar

Jiang, X. 2021. “Technology and Culture in the Digital Age.” Social Sciences in China 8: 4.Search in Google Scholar

Jing, W., and B. Sun. 2019. “The Promotion of High-Quality Economic Development by the Digital Economy: A Theoretical Analysis Framework.” Economist 2: 66–73.Search in Google Scholar

Kemper, C. 2020. “Kafkaesque AI? Legal Decision-Making in the Era of Machine Learning.” Intellectual Property and Technology Law Journal 24 (2): 251.10.31228/osf.io/4jzk2Search in Google Scholar

Khan, L. M. 2018. “Sources of Tech Platform Power.” Georgetown Law Technology Review 2 (2): 325–34.Search in Google Scholar

King, J. D. 2014. “Behavioral Recognition: Computer Algorithms Alerting Law Enforcement to Suspicious Activity.” Pittsburgh Journal of Technology Law and Policy 15 (1): 101–2.10.5195/TLP.2014.159Search in Google Scholar

Klingensmith, M. W. 2023. “Let’s Talk, ChatGPT: What Will the Judiciary’s Future Look Like?” Florida Bar Journal 97 (3): 27–8.Search in Google Scholar

Korniienko, P. S., V. Plakhotnik, H. O. Blinova, Z. O. Dzeiko, and G. O. Dubov. 2021. “Contemporary Challenges and the Rule of Law in the Digital Age.” Estudios de Economía Aplicada 39 (9): 25. https://doi.org/10.25115/eea.v39i9.5773.Search in Google Scholar

Lan, J. 2019. “Ontology of Social Existence in the Digital Age.” People’s Tribune · Frontiers (14): 32.Search in Google Scholar

Li, H., and Z. Zhao. 2023. “The Rise, Expansion, and Governance of Digital Power.” Jianghan Tribune (9): 113–9.Search in Google Scholar

Li, X. 2023. “Neurotechnology and Neuro-rights in the Era of the “Metaverse”.” Eastern Jurisprudence (6): 77–82.Search in Google Scholar

Li, J., X. Cai, and L. Cheng. 2023. “Legal Regulation of Generative AI: A Multidimensional Construction.” International Journal of Legal Discourse 8 (2): 365–88. https://doi.org/10.1515/ijld-2023-2017.Search in Google Scholar

Liang, M. 2001. “A Brief Discussion on the Technical Regulation of ‘Digital Government’ Operation.” China Public Administration 6: 20–1.Search in Google Scholar

Liang, M. 2002. “On the Legal Regulation of ‘Digital Government’ Operation.” China Public Administration 4: 31.Search in Google Scholar

Lin, J., and H. Luo. 2023. “The Position and Approach of Social Law in the Governance of Digital Power.” Southeast Academic Research (6): 228–36.Search in Google Scholar

Liu, D. 2020a. “Technical Due Process: The Dual Variation of Procedural Law and Algorithms in the Era of Artificial Intelligence.” Comparative Law Research 5: 64–79.Search in Google Scholar

Liu, G. 2007. “On the Path Choice for Developing China’s Technological Power.” Journal of Guangdong University of Technology (Social Science Edition) 4: 38.Search in Google Scholar

Liu, Q. 2020b. “The Public Nature of Online Platforms and its Realization: From the Perspective of Legal Regulation of E-Commerce Platforms.” Chinese Journal of Law 2: 43–5.Search in Google Scholar

Liu, Y. 2023. “Industry Self-Regulation of Decentralized Finance: A Path of Collaborative Regulation.” Modern Economic Research 9: 119.Search in Google Scholar

Liu, Z. 2021. “On ’Digital Human Rights’ Not Constituting the Fourth Generation of Human Rights.” Legal Studies 1: 21–33.Search in Google Scholar

Lu, Y., and Y. Zhao. 1999. “The Development of Digital Television in the United States: Between Power Structures and Commercial Interests.” Journalism and Communication Studies (3): 49–53.Search in Google Scholar

Luo, Y. 2023. “On Digital Rights: Theoretical Interpretation and System Construction.” E-Government (5): 50.Search in Google Scholar

Lynskey, O. 2019. “Grappling with ‘Data Power’: Normative Nudges from Data Protection and Privacy.” Theoretical Inquiries in Law 20 (1): 189–220. https://doi.org/10.1515/til-2019-0007.Search in Google Scholar

Ma, C. 2018. “The Social Risks of Artificial Intelligence and its Legal Regulation.” Science of Law (6): 50.Search in Google Scholar

Ma, X., Tula, and Y. Xu. 2019. “The Current State of Digital Development of Intangible Cultural Heritage.” Science China Information Sciences 2: 121.Search in Google Scholar

Martino, E. D. 2024. “Monetary Sovereignty in the Digital Era: The Law & Macroeconomics of Digital Private Money.” Computer Law & Security Review 52: 105909. https://doi.org/10.1016/j.clsr.2023.105909.Search in Google Scholar

Miao, Z., J. Chen, Z. Nie, and Y. Yu. 2023. “Artificial Intelligence, Digital Power, and Great Power Rivalry.” Information Security and Communication Confidentiality (8): 2–9.Search in Google Scholar

Moxley, L. 2016. “E-Rulemaking and Democracy.” Administrative Law Review 68 (4): 668–72.Search in Google Scholar

Muller-Torok, R., D. Bagnato, and A. Prosser. 2020. “Council of Europe Recommendation CM/Rec (2017) 5 and e-Voting Protocol Design.” Masaryk University Journal of Law and Technology 14 (2): 275–302. https://doi.org/10.5817/mujlt2020-2-6.Search in Google Scholar

Ouyang, E. 2023. “The Negation of Algorithmic Power.” Journal of Southwest University of Political Science and Law (4): 138–140.Search in Google Scholar

Perez, O. 2020. “Collaborative e-Rulemaking, Democratic Bots, and the Future of Digital Democracy.” Digital Government: Research and Practice 1 (1): 1–13. https://doi.org/10.1145/3352463.Search in Google Scholar

Qi, Y. 2018. “On the Change of Legal Scenarios in the Age of Artificial Intelligence.” Science of Law 4: 39.Search in Google Scholar

Scripa, A. 2017. “Artificial Intelligence as a Digital Privacy Protector.” Harvard Journal of Law and Technology 31 (1): 217–36.Search in Google Scholar

Stucke, M. E., and A. Ezrachi. 2017. “How Digital Assistants Can Harm Our Economy, Privacy, and Democracy.” Berkeley Technology Law Journal 32 (3): 1239.Search in Google Scholar

Suzor, N. 2018. “Digital Constitutionalism: Using the Rule of Law to Evaluate the Legitimacy of Governance by Platforms.” Social Media + Society 4 (3): 2056305118787812. https://doi.org/10.1177/2056305118787812.Search in Google Scholar

United Nations Human Rights Council. 2016. Resolution on the Promotion, Protection and Exercise of Rights on the Internet. Geneva: United Nations Human Rights Council.Search in Google Scholar

Verstein, A. 2019. “Privatizing Personalized Law.” The University of Chicago Law Review 86 (2): 551–80.Search in Google Scholar

Wachter, S., B. Mittelstadt, and C. Russell. 2018. “Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR.” Harvard Journal of Law and Technology 31 (2): 841–88.10.2139/ssrn.3063289Search in Google Scholar

Wang, X. 2023. “Digital Transformation and High-Quality Development in Higher Education: Coupling Logic and Implementation Path.” Social Science Front 1: 240.Search in Google Scholar

Wang, J., and A. Dong. 2024. “The Legal Status of the 12345 Platform: An Organizational Law Perspective on Digital Rule-of-Law Government.” Journal of Xinjiang Normal University (Philosophy and Social Sciences Edition) 3: 25–35.Search in Google Scholar

Wang, P., and X. Peng. 2023. “Content Governance of Western Digital Platforms: Understanding Platform Power in International Internet Communication.” Modern Communication (Journal of Communication University of China) 4: 73.Search in Google Scholar

Wu, F. 2023. Introduction to Artificial Intelligence, 162–93. Beijing: Higher Education Press.Search in Google Scholar

Xiang, D. 2023. “The Operational Logic of Digital Capital Power—From the Perspective of Marx’s Critique of Capital Power.” Contemporary World and Socialism 2: 115.Search in Google Scholar

Xing, C. 2021. “Ethical Reflections on the Weakening of Government’s Micro Power in the Context of the Digital Economy.” Theoretical Horizon (4): 90.Search in Google Scholar

Yang, X. 2021. “Digital Private Power: Constitutional Connotations, Constitutional Challenges, and Constitutional Coping Strategies.” Huxiang Forum (2): 86.Search in Google Scholar

Yao, S. 2022. “Power Control and Rights Emergence in Digital Governance.” Theory and Reform 3: 132–3.Search in Google Scholar

Ye, J., and Q. Xu. 2019. “Decentralization and Centralization: The Power Paradox in the Age of Artificial Intelligence.” Journal of Shanghai University (Social Sciences Edition) 6: 1.Search in Google Scholar

Yuan, Z. 2023. “Research on Legal Issues of Regulatory Responsibility for Generative Artificial Intelligence.” Law Journal (4): 119–30.Search in Google Scholar

Zalnieriute, M., L. B. Moses, and G. Williams. 2019. “The Rule of Law and Automation of Government Decision-Making.” The Modern Law Review 82 (3): 425–55. https://doi.org/10.1111/1468-2230.12412.Search in Google Scholar

Zhang, L. 2019. “The Rise, Alienation, and Legal Regulation of Algorithmic Power.” Studies in Law and Business 4: 63–75.Search in Google Scholar

Zhang, S. 2018a. “Breaking the Black Box: Regulation of Algorithmic Power and Mechanisms for Achieving Transparency in the Smart Media Era.” China Publishing (7): 49.Search in Google Scholar

Zhang, S. 2018b. “Who Owns the Data Generated by Your Smart Car.” Harvard Journal of Law and Technology 32 (1): 299–320.Search in Google Scholar

Zhang, S. 2023. “Economic Regulation of Data Behavior.” China Law Review 6: 111.10.1163/25427466-06020001Search in Google Scholar

Zhang, W. 2020. “No Numbers, No Human Rights.” Journal of Cyber Information Law Studies (1): 3.Search in Google Scholar

Zhang, Y. 2021. “Data Capital Power: An Important Dimension of Critique of Digital Modernity.” Journal of Southwest University (Social Science Edition) 1: 44–7.Search in Google Scholar

Zhang, J., and L. Pan. 2019. “The ‘Thucydides Trap’ in the Legal Governance of Artificial Intelligence and its Solution.” Science and Law 5: 43–52.Search in Google Scholar

Zhang, X., G. Wan, J. Zhang, and Z. He. 2019. “Digital Economy, Inclusive Finance, and Inclusive Growth.” Economic Research Journal 8: 71–86.Search in Google Scholar

Zhao, Y., and L. Cheng. 2024. “A Bibliometric Study of Research Trends in Cross-Border Cybercrime.” International Journal of Legal Discourse 9 (1): 1–31. https://doi.org/10.1515/ijld-2024-2001.Search in Google Scholar

Zhao, T., Z. Zhang, and S. Liang. 2020. “Digital Economy, Entrepreneurial Activity, and High-Quality Development: Empirical Evidence from Chinese Cities.” Management World 10: 65–76.Search in Google Scholar

Zheng, G. 2018. “The Law of Algorithms and the Algorithm of Law.” China Law Review (2): 66.Search in Google Scholar

Zheng, G. 2022. “The Legal Configuration of the Digital Society.” Zhejiang Social Sciences 1: 151–2.Search in Google Scholar

Zheng, X. 2023. Criminal Procedure Reform in the Digital Age, 128–89. China: Law Press.Search in Google Scholar

Zhou, H. 2019. “Algorithmic Power and its Regulation.” Law and Social Development 6: 113–26.Search in Google Scholar

Received: 2024-10-06
Accepted: 2024-10-07
Published Online: 2024-10-28
Published in Print: 2024-10-28

© 2024 the author(s), published by De Gruyter on behalf of Zhejiang University

This work is licensed under the Creative Commons Attribution 4.0 International License.
