
Integration of Soft and Hard Laws: Profiling Legal Protection for “AI for All”

  • Yang Xiao

    Yang Xiao is Research Fellow at Zhejiang University. His research interests lie in digital law and international law.

    and Xiaoxia Sun

    Xiaoxia Sun is the Dean of the Digital Law Research Institute of Zhejiang University. He has served in various leadership positions, including as the Dean of Guanghua Law School at Zhejiang University and later as Dean of the Law School at Fudan University. Sun is a Changjiang Scholar and enjoys a special allowance from the State Council. His research focuses on legal theory, philosophy of law, public law principles, and procedural theory. He is also an expert in digital law, leading research efforts in the field of digital governance and smart justice. As the founding Director of the Digital Law Research Institute, Xiaoxia Sun has spearheaded collaborative projects between academia and the judiciary, working towards digital transformation in China’s legal system. His numerous books and over 100 scholarly articles contribute significantly to the fields of legal theory and digital law.

Published/Copyright: April 25, 2025

Abstract

This paper analyzes the concept of “AI for All” (AIFA), exploring its theoretical foundations, challenges, and pathways for legal implementation. AIFA encompasses three key dimensions: “AI Technology for All”, “AI Justice for All”, and “AI for All Countries”. However, the rise of generative AI – with costly subscription models – threatens these ideals, exacerbating economic and cognitive disparities. Currently, AIFA governance relies on non-binding soft laws, which lack enforceability, while hard laws remain underdeveloped. To prevent AIFA from descending from a utopian vision into a dystopian reality, this paper argues for a dual approach: (1) expanding soft laws to explicitly address affordability, literacy, and international cooperation, and (2) tailoring hard laws based on national AI capacity – ranging from pricing regulations in core AI states to global cooperation for peripheral nations. By integrating soft law’s ethical guidance with hard law’s enforcement, AI governance can balance innovation with inclusivity, ensuring AI benefits all rather than deepening inequalities.

1 Introduction

“AI for All” broadly refers to the vision that everyone – regardless of geography, socioeconomic status, age, or gender – should share in AI’s benefits. The Indian government framed “AI for All” as the benchmark for future AI development, focusing on inclusive growth and access to AI in key sectors (NITI Aayog 2018a). Internationally, UNESCO echoes this vision: “The vision of ‘AI for all’ must ensure that everyone can partake in the ongoing technological revolution and reap its benefits” (UNESCO 2024). Beyond the direct concept of “AI for All,” related concepts such as “inclusive AI,” “accessible AI,” and “justice AI” complement and overlap with it. Inclusive AI focuses on fairness, representation, and non-discrimination in AI systems.[1] Accessible AI emphasizes usability specifically for people with disabilities (Morris 2020). However, distinctions between these terms are often relative, and they frequently overlap or are used interchangeably (Avellan, Sharma, and Turunen 2020). Clearly delineating “AI for All” remains a research gap.

While some consensus has been achieved regarding “AI for All”, little legislation exists to protect the idea, leaving the status quo in a state of contradiction. Documents such as the Ethical Guidelines for Artificial Intelligence, the Global Digital Compact, and China’s Ethical Norms for a New Generation of Artificial Intelligence have all included the notion of “AI for All” as a fundamental principle. However, these remain at the level of declarative soft law, lacking legal protections. Meanwhile, the digital divide in the AI era continues to widen, exacerbating wealth inequality and the Matthew effect. A case in point is OpenAI’s membership strategy: its most advanced models require a $200/month subscription; the Deep Research feature – capable of functioning at a PhD-assistant level – is available only twice per month to free users but up to 100 times per month to subscribers at the $200/month tier. OpenAI is further exploring a “Pro Max” tier at $2,000/month. With advanced AI models now surpassing even skilled engineers in capability, the gap between free and cutting-edge models is becoming increasingly significant. Consequently, the ability to utilize these advanced AI tools will profoundly influence social mobility.

Against this backdrop of rapid generative AI and AGI advancements, coupled with emerging paid subscription models and signs of a widening digital divide in the AI era, this paper addresses three questions:

  1. What is the meaning of “AI for All,” and what is its theoretical foundation?

  2. Is the current situation – characterized mainly by soft-law declarations advocating “AI for All” – sufficient?

  3. How can we establish legal protections for the principle of “AI for All,” effectively integrating soft and hard laws to bridge the gap between innovation and development?

2 Literature Review

This section will provide a brief introduction to current research on “AI for All,” as well as soft and hard laws associated with it. Specifically, it will first elaborate on the definition and necessity of “AI for All”; second, it will clarify the relationship between soft laws and hard laws.

2.1 The Definition of “AI for all”

Shawn Schuster described “AI for All” as “everyday people can benefit from Artificial Intelligence” (Schuster 2023). However, while most scholars mention the term “AI for All” in their papers, they do not focus on its meaning but rather discuss related derivative questions. Many scholars conflate or equate “AI for All” with diversity and inclusion in AI (D&I in AI), arguing that AI should be free from bias, discrimination, and perceived untrustworthiness (Avellan, Sharma, and Turunen 2020; Shams, Zowghi, and Bano 2023a; Shams, Zowghi, and Bano 2023b; Shams, Zowghi, and Bano 2023c; Zowghi and Bano 2024). Zowghi and Bano (2024) define “inclusive AI” as the “inclusion of humans with diverse attributes and perspectives in the data, process, system, and governance of the AI ecosystem,” and diversity as the “representation of the differences in attributes of humans in a group or society.” In sum, compared to concepts such as inclusive AI (Fosch-Villaronga and Poulsen 2022), diversity AI (Shams, Zowghi, and Bano 2025), or accessible AI, few scholars have systematically explored the idea of “AI for All.”

2.2 The Necessity of “AI for all”

“AI for All” is a necessary condition for social justice. One scholar, using the theory of social justice known as the capability approach to conceptualize AI’s transformative role and its socio-political implications, concludes that AI itself should be considered among the conditions of possession and realization of the capabilities it transforms. In other words, access to AI – in the many forms this access can take – is necessary for social justice (Buccella 2023). Another scholar, using the veil of ignorance to determine principles for aligning AI systems, found that participants expressed a preference for the concept of “AI for all” – a principle that prioritizes the most vulnerable groups when designing AI assistants (Weidinger et al. 2023). A third scholar analyzes the applicability of Rawls’s principles of fair equality of opportunity and the difference principle to AI. Regarding fair equality of opportunity, AI should not merely pursue formal fairness; rather, it should substantively reduce social inequalities, such as by enhancing social mobility. Concerning the difference principle, AI systems should prioritize improving the well-being of society’s most disadvantaged groups, rather than solely optimizing overall efficiency (Gabriel 2022).

2.3 The Relationship Between Soft and Hard Laws

Disagreements prevail in the existing scholarship on the relationship between soft and hard laws, including the ontology of soft laws and hard laws, as well as their connections and distinctions. In terms of ontology, positivist legal scholars tend to deny the concept of “soft law,” since law by definition is “binding” (Weil 1983). Rational institutionalist scholars respond that “the term ‘binding agreement’ in international affairs is a misleading hyperbole” (Lipson 1991). Constructivist scholars, in contrast, focus less on the binding nature of law at the enactment stage, and more on the effectiveness of law at the implementation stage, addressing the gap between the law-in-the-books and the law-in-action; they note how even domestic law varies in terms of its impact on behavior, so that binary distinctions between binding “hard law” and nonbinding “soft law” are illusory (Trubek, Cottrell, and Nance 2006).

The connections and distinctions between soft law and hard law can be examined at the domestic and international levels. At the domestic level, soft law is often applied in specific situations, such as emergencies, when there is insufficient time or when it is inconvenient to enact hard law (Daly 2021). Furthermore, soft law influences behavior by informing the public and political institutions about intentions and policy preferences (Posner and Gersen 2008). At the international level, soft laws have become increasingly prevalent and important, significantly affecting international relations (Guzman and Meyer 2010). Functionalist scholars argue that hard law increases the costs for states violating legal commitments (Raustiala and Victor 1998; Trubek, Cottrell, and Nance 2006), whereas soft law provides greater flexibility, lower costs, and reduced sovereignty costs in sensitive areas (Abbott and Snidal 2000; Lipson 1991; Sindico 2006).

3 Systemizing “AI for All”

3.1 The Content of “AI for All”

In contrast to concepts such as “inclusive AI” or “accessible AI,” there has not yet been a comprehensive or systematic definition of “AI for All,” and at times, the term is even conflated with “inclusive AI,” “accessible AI,” or “human-centered AI.” Therefore, it is necessary to formulate a more systemic framework to clarify what “AI for All” means. Building on numerous papers, protocols, and contracts – across international, domestic, authoritative, and private sectors – that directly or indirectly express the idea of “AI for All”, this part aims to systematize the idea by dividing it into three dimensions: AI technology for all, AI justice for all, and AI for all countries.

3.1.1 AI Technology for All

“AI technology for all” implies that AI should be both open and affordable to everyone. Openness suggests that AI should be made as open-source as possible and not monopolized by a small number of technology giants; at the same time, it necessitates improving the public’s understanding of AI. Affordability means that the cost of deploying and using AI should not be set so high that only a small elite can bear it. This approach is reflected in various initiatives. For example, the nonprofit organization AI4ALL focuses on educating and empowering underrepresented groups to ensure broad benefits of AI (AI4ALL 2025); Greece launched a similar “AI for All” initiative to train civil servants in both basic and advanced AI skills (Panagopoulos 2024); and the Bill & Melinda Gates Foundation has emphasized “ensuring access to AI for all,” particularly in low- and middle-income countries (Bill and Melinda Gates Foundation 2025). Furthermore, many nations’ policies and legal frameworks stress making AI technology open and affordable, reflecting the principle of “AI technology for all.” For example, the European Union has funded initiatives such as “AI4EU” to encourage the development of inclusive AI applications and, through its digital policies, has encouraged member states to strengthen AI-related education and training in order to narrow digital divides (European Commission 2019). Beyond EU-wide legislation, individual member states have referred to ensuring that “no one is left behind” in their national AI strategies, as exemplified by Finland’s free AI education program for its entire population (European Commission 2019). The Global Digital Compact likewise emphasizes open data, open AI models, open standards, and open content, ensuring that societies and individuals can use digital technologies for their developmental needs (United Nations 2024). The Asilomar AI Principles also underscore that AI technologies should benefit and empower as many people as possible (Future of Life Institute 2017).

3.1.2 AI Justice for All

“AI justice for all” underscores both formal justice and substantive justice. Formal justice requires that AI not discriminate against specific populations, demanding the prevention of bias and the fair treatment of all users, while substantive justice calls for AI not to exacerbate social inequality but instead to provide equitable opportunities for people of all social classes, races, genders, and economic backgrounds. This broader notion of substantive justice aligns with ideas from thinkers such as John Rawls, who emphasizes fairness in opportunity distribution (Rawls 1999). India’s 2018 National AI Strategy, spearheaded by the NITI Aayog, exemplifies this emphasis on substantive justice: its slogan “AI for All” aims to use AI in driving sustainable and inclusive growth that transcends class, geography, and economic constraints (NITI Aayog 2018b). Many other documents similarly reflect the notion of “AI justice for all”. The 2019 “Beijing Consensus on AI and Education” encourages countries to ensure that the design of AI adheres to ethical norms, avoids discrimination, and remains fair, transparent, and auditable, thereby reflecting the significance of justice in AI for all (UNESCO 2019). In 2019, the OECD adopted AI governance principles, which were subsequently recognized and adopted by the G20. The very first principle is “inclusive growth, sustainable development, and well-being,” urging stakeholders to promote AI in a responsible manner that benefits all and highlights substantive justice (OECD 2019). IEEE’s “Global Initiative on Ethics of Autonomous and Intelligent Systems,” launched in 2016, published the first version of “Ethically Aligned Design” in 2019. This guide promotes human-centered, welfare-oriented AI design concepts and emphasizes fairness, transparency, and accountability, thus encompassing both the formal and substantive dimensions of justice within “AI for All” (Institute of Electrical and Electronics Engineers 2019). In the same year, the EU High-Level Expert Group on AI proposed a “human-centric AI” approach that respects European values and human rights; its guidelines for “Trustworthy AI” outline seven key requirements, among which diversity and non-discrimination, as well as societal well-being, echo “AI justice for all” (High-Level Expert Group on Artificial Intelligence 2019).

3.1.3 AI for All Countries

“AI for All countries,” especially developing countries, is equally critical. Global justice theorists like Thomas Pogge and Charles Beitz have argued for the equitable distribution of resources and technology across national boundaries to prevent new forms of deprivation in less developed regions (Pogge 2002). UNESCO’s vision of “AI for All” similarly includes preventing the exacerbation of technological disparities within and among nations (UNESCO 2024). One global organization has sought to ensure that no country is left behind by establishing the AI for Developing Countries Forum, illustrating that such initiatives aim to make AI accessible and beneficial on a worldwide scale (AI for Developing Countries Forum 2025). The notion of “AI for all countries” is also reflected in several areas. Emerging multilateral collaborations such as the Global Partnership on AI (GPAI), established by dozens of countries in 2020, aim to ensure that AI benefits different cultures and economies through cross-border cooperation (Global Partnership on Artificial Intelligence n.d.). The African Union has formulated a regional AI strategy framework to harness AI’s potential in education, agriculture, and other sectors for promoting inclusive development (African Union 2024). The Global Digital Compact likewise highlights the need to “build capacity, especially in developing countries, and facilitate access to, development of, and management of AI systems for sustainable development” (United Nations 2024).

3.2 Theoretical Foundations for “AI for All”

The concept of “AI for All” hinges not only on policy initiatives but also on robust theoretical underpinnings. This part illuminates why AI should be made economically accessible (law and economics), how fairness can be upheld (justice theory), and what the broader global implications of AI deployment are (world-systems theory).

3.2.1 Law and Economics

Originating with Ronald Coase, Law and Economics also provides legitimacy for “AI for All.” In the view of Judge Richard Posner, one of the leading scholars in this field, Law and Economics is a discipline that uses economic principles to explain legal issues (Posner 2014). In fact, the economic analysis of law primarily aims to improve market efficiency and reduce transaction costs. Yet greater efficiency does not necessarily mean a better society (Sen 1992). Consequently, while the perspective of economic analysis is neither a sufficient nor a necessary condition for “AI for All,” this section argues that, even from within its own framework, “AI for All” has economic legitimacy because it can reduce market externalities and enhance market efficiency.

From the standpoint of Law and Economics, whether a particular service should be supplied by the market or regulated by the government depends on whether it constitutes a public or quasi-public good, and whether there is a market failure necessitating government intervention. For centuries, the market economy has worked wonders. Under Coase’s theorem, in a world with zero transaction costs, the market naturally arrives at an optimal allocation, and the law itself has little work to do (Coase 1960). Therefore, absent compelling reasons, a service should generally be left to the market – unless it constitutes a public or quasi-public good or unless there is a market failure. The former rationale derives from Paul Samuelson’s paper, “The Pure Theory of Public Expenditure,” while the latter follows Pigou’s explanation in The Economics of Welfare, where government intervention may enhance overall social welfare in the presence of negative externalities that cause market failure (Medema 2020). Thus, one should first determine whether AI belongs to the category of public or quasi-public goods. If it does, then government intervention is warranted. If it does not, generative AI services should in principle be left to the market, unless there are market defects that hamper efficient supply. When the market cannot correct such supply inefficiencies and negative externalities on its own, the government should intervene to provide or regulate these services or products.

First, it is acknowledged that, at present, generative AI services do not possess the non-rivalrous and non-excludable attributes of public or quasi-public goods. Unlike typical public goods such as national defense or public healthcare systems, generative AI services are often provided by private entities and can be made exclusive through access controls, payment mechanisms, and similar measures. They may also be subject to congestion or degradation in quality if overused, reflecting their rivalrous and excludable nature. For example, certain OpenAI models require user registration and membership subscriptions, indicating that they are not non-excludable; server capacity is limited, so an overload of users could result in outages, showing that they are not non-rivalrous. Therefore, generative AI services do not fall under the category of public or quasi-public goods. In principle, these services tend to be supplied by the market. However, if the market’s pure pursuit of profit ignores the broader public interest, it may lead to inefficient supply or harm to social welfare.

Second, in the long run, failing to implement AI for All will exacerbate existing societal divisions and create negative economic externalities, ultimately undermining the environment for efficiency and innovation. From an externality theory perspective, externalities arise when the production or consumption of a good or service generates benefits or costs for others that are not fully reflected in market transactions (Medema 2020). If generative AI services depend entirely on market supply based purely on ability to pay, certain social groups may be excluded or underserved. This exclusion, while potentially efficient from a narrow, short-term private market perspective, can contribute to negative externalities in the broader economy. Examples include widening the digital divide or marginalizing underdeveloped regions and vulnerable populations, which can lead to deeper social inequality (Ziewitz, Fourcade, and Boyd 2021). As structural inequalities intensify, this may compromise social stability. Such instability can ultimately increase business operating costs, deter long-term investment, hinder overall productivity, and thus deteriorate the very market environment necessary for sustained efficiency and innovation. Market mechanisms alone often lack sufficient incentives to fully address these externalities, suggesting a potential need for government intervention, regulation, or other collective actions to mitigate these risks.

Finally, as Figure 1 illustrates, when initiatives promoting broader access and utilization of AI, aligning with the principle of “AI for All”, are implemented, the marginal social benefit (MSB) of AI usage likely exceeds the private benefit captured by market transactions alone. Consequently, the socially optimal level of AI adoption (where MSB intersects supply) is greater than the market equilibrium – reflecting uninternalized positive externalities. If service providers proactively work towards the goals implicit in “AI for All,” whether motivated by policy guidance or their own initiatives (for example, by making foundational aspects of the technology more open, keeping prices for basic access affordable, disseminating knowledge widely, or providing training), these actions can help realize and internalize positive externalities, generating substantial societal benefits beyond the direct users. When more people gain meaningful access to generative AI services, they can drive innovation in healthcare, education, agriculture, environmental protection, and other fields, which in turn yields widespread economic and societal gains.

Figure 1: Positive externalities on AI for all.
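To make the argument of Figure 1 concrete, the following is a minimal numerical sketch, assuming illustrative linear marginal-benefit and marginal-cost curves and a constant per-unit external benefit. All values are invented for illustration; none come from the paper’s sources.

```python
# Minimal sketch: a positive externality means the marginal social benefit
# (MSB) curve lies above the marginal private benefit (MPB) curve, so the
# socially optimal quantity exceeds the market equilibrium quantity.

def private_equilibrium(a, b, c, d):
    """Quantity where MPB(q) = a - b*q meets supply MC(q) = c + d*q."""
    return (a - c) / (b + d)

def social_optimum(a, b, c, d, ext):
    """Quantity where MSB(q) = MPB(q) + ext meets the same supply curve."""
    return (a + ext - c) / (b + d)

a, b = 100.0, 2.0   # MPB(q) = 100 - 2q  (illustrative)
c, d = 10.0, 1.0    # MC(q)  = 10 + q    (illustrative)
ext = 15.0          # uninternalized external benefit per unit of AI use

print(private_equilibrium(a, b, c, d))   # 30.0 -> market equilibrium
print(social_optimum(a, b, c, d, ext))   # 35.0 -> socially optimal level
```

Policies that internalize the externality (subsidized access, open models, training) shift adoption toward the higher, socially optimal level.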

3.2.2 Justice Theory

Justice theories provide a moral foundation for “AI for All,” insisting that AI development and use be fair and inclusive. Both formal justice (impartial and consistent rule application) and substantive justice (fair outcomes regarding distribution of benefits and burdens) are relevant. Philosophers like John Rawls and Amartya Sen offer principles to evaluate AI ethics and policy.

Rawls’ theory of justice as fairness emphasizes equitable distribution of social goods and opportunities. Two key Rawlsian ideas apply to AI ethics: fair equality of opportunity and the difference principle. Fair opportunity means positions and services (like AI-driven benefits) should be open to all under conditions of equality, which in AI translates to non-discrimination by algorithms and equal access to AI tools. The difference principle holds that inequalities are acceptable only if they benefit the least advantaged members of society (Westerstrand 2024). In an AI context, this implies AI systems and policies should be designed to improve outcomes for marginalized groups rather than worsen gaps. So, if AI increases productivity or wealth, Rawlsian justice would argue for mechanisms that ensure disadvantaged communities share in those gains. For example, AI decision systems should not only avoid bias but proactively help those worse off – echoing Rawls’ focus on uplifting the worst-off (Gabriel 2022).
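To see how the difference principle can operate as a concrete decision rule, consider the toy sketch below, which encodes it as a maximin choice over hypothetical group-welfare outcomes. The policies and numbers are invented for illustration and are not drawn from Rawls, Westerstrand, or Gabriel.

```python
# Toy formalization of the difference principle as a maximin rule:
# prefer the AI deployment policy under which the worst-off group fares best.

policies = {
    # hypothetical welfare scores per group under each deployment policy
    "market_only":       {"affluent": 95, "middle": 60, "disadvantaged": 20},
    "subsidized_access": {"affluent": 90, "middle": 65, "disadvantaged": 45},
}

def maximin_choice(options):
    """Return the policy whose least-advantaged group has the highest welfare."""
    return max(options, key=lambda p: min(options[p].values()))

print(maximin_choice(policies))  # -> "subsidized_access"
```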

Beyond Rawls, Amartya Sen and Martha Nussbaum’s capabilities approach enriches the conversation by stressing that real freedoms – capabilities – should be expanded across communities. The capabilities approach holds that true well-being lies not in what people have, but in what they are able to do and achieve (Robeyns 2017). This approach shifts the focus from abstract resources to people’s actual freedoms and abilities to achieve well-being. “AI for All” from Sen’s perspective means ensuring that AI technology expands people’s capabilities – e.g. the ability to get an education, communicate, or obtain healthcare – for everyone, not just the elite, to ensure people have the real capability to benefit from AI in their daily lives. Sen warns that pure economic growth or tech advancement can mask growing inequalities in capabilities. In the AI era, there is a risk of a large underclass dominated by a techno-elite if access to AI tools and digital skills is uneven (Yoon 2021). We see this in concerns that advanced AI could primarily empower wealthy corporations or countries, leaving others behind. The capabilities approach in AI ethics therefore advocates for policies that empower marginalized populations with AI – by investing in AI literacy, multilingual AI assistants, or affordable AI services that enhance the practical reasoning of marginalized groups, expanding their ability to perceive, imagine, and think. As one scholar notes, Sen’s framework can guide us to establish social conditions where all persons can flourish in the age of AI (Yoon 2021). In summary, justice demands that AI development be assessed by how well it expands human freedoms and reduces capability deprivation.

Taken together, justice theory validates the moral imperatives behind “AI for All”: not only should AI be non-discriminatory, it should actively work to redress inequalities across diverse social, economic, and cultural contexts.

3.2.3 World-Systems Theory

While legal economics and justice theory mainly shed light on domestic contexts, world-systems theory places “AI for All” within a global and historical framework. Originating from the work of Immanuel Wallerstein, world-systems theory divides the world into core, semi-periphery, and periphery nations (Wallerstein 2004). It provides a macro-level perspective on AI accessibility and technological divides between developed and developing countries. World-systems theory exposes how core nations unjustly maintain their advantages to extract benefits, highlighting the necessity of “AI for All Countries”. “AI for All Countries” emphasizes the responsibility of AI-leading core nations toward peripheral and semi-peripheral countries and serves as a crucial idea for breaking the cyclical inequalities embedded in the global AI system.

In world-systems terms, AI is increasingly becoming a new mechanism of unequal exchange between core and periphery, reinforcing dependency. Wallerstein’s analysis sees wealthy core countries dominating high-tech innovation and benefiting disproportionately, while poorer periphery countries provide raw materials or cheap labor and lag in development (Hopkins and Wallerstein 1982). This dynamic is evident in AI today. The US concentrates AI research labs, cutting-edge tech companies, and infrastructure, exporting AI products and services, whereas many developing nations mainly consume AI technologies or supply data and cheap data-annotation labor. A study by the World Economic Forum points out that the economic and social benefits of AI are geographically concentrated primarily in the Global North (Yu, Rosenfeld, and Gupta 2023). Another study provides empirical evidence of this disparity, noting that AI capital stock from 1995 to 2020 is heavily concentrated in developed regions, exacerbating wealth inequality (Smith and Lee 2024). Thus, the “AI divide” mirrors the digital divide: core countries “dominate in … technological advancement, while periphery countries are often exploited for their resources” (including data and cheap labor) (Couldry and Mejias 2019).

Giovanni Arrighi’s work on long-term economic cycles and hegemonic powers helps explain how the AI revolution could be seen as part of a new cycle of accumulation: today’s leading powers (the U.S.) compete for dominance in AI, while other nations risk falling behind (Arrighi 1990). However, Arrighi also noted opportunities for semi-periphery states to rise by leveraging new technology (Arrighi 2007). Nevertheless, world-systems theorists would caution that without deliberate intervention, AI’s benefits will concentrate in existing centers of power. The flow of AI talent and capital is mostly from periphery to core. Global income and knowledge gaps may widen as AI increases productivity primarily in tech-savvy economies. Arrighi might interpret initiatives like China’s massive AI investments as attempts to restructure the system, potentially creating a more multipolar tech world (Arrighi 2007). But for many low-income countries, the worry is that they will become even more peripheral in an AI-driven global economy.

4 Challenges Facing “AI for all”

This section discusses the current challenges to AI for All in the era of generative AI. First, it introduces the background – how generative AI challenges the concept of AI for All – and then argues that neither soft laws nor hard laws can currently resolve these issues effectively.

4.1 Background: How Generative AI Challenges “AI for All”

The emergence of generative AI technologies could lead to an emerging “AI divide” (Carter, Liu, and Cantrell 2020) and perpetuate structural inequalities among users with different sociodemographic backgrounds (Cotter 2022; Gran, Booth, and Bucher 2021). This part examines how, in the era of generative AI, these factors contribute to the “AI divide,” thereby intensifying the wealth gap, diminishing social mobility, and eroding “AI for all”.

4.1.1 Price Barriers Eroding “AI for all”

Over the past few years, Generative AI has advanced significantly, marking a paradigm shift in both capability and social impact. Unlike earlier AI systems, which were limited to specific tasks such as facial recognition or recommendation algorithms, Gen AI can perform generative tasks, such as writing novels and generating code. This leap in generality and autonomy is exemplified by models such as OpenAI’s GPT-4.5 and O3, Anthropic’s Claude 3.7, and Google’s Gemini 2.5 Pro, which demonstrate reasoning, creativity, and decision-making capabilities that rival or surpass human expertise. Moreover, state-of-the-art models exhibit remarkable proficiency in complex tasks; for example, OpenAI’s “deep research” function autonomously conducts comprehensive internet research, achieving a 26.6 % accuracy rate on the “Humanity’s Last Exam” benchmark, comparable to human performance (TechRadar 2025). As Gen AI capabilities continue to evolve rapidly, even more advanced systems are under development.

However, the price of access to the most advanced generative AI tools has also undergone a significant leap. Some advanced generative AI models remain open-source or freely accessible to the public – among which DeepSeek R1 stands out prominently; venture capitalist Marc Andreessen described its emergence as the “Sputnik moment” of the AI field. Nevertheless, DeepSeek R1 still trails behind the most cutting-edge large models (Mozur 2025). Those cutting-edge models are unaffordable for the majority of people, posing a substantial challenge to the vision of AI technology for all. The advanced capabilities of Gen AI models – such as OpenAI’s GPT-4.5 and O1 pro – have been matched by steeply tiered pricing that entrenches inequality. OpenAI’s $200/month “Pro” tier, for instance, grants 120 “deep research” queries, while free users are limited to two – a disparity that extends to API costs, where high-volume access to models like GPT-4.5 runs to $150 per million output tokens. Even mid-tier subscriptions (e.g., Grok at $30/month, ChatGPT Plus at $20/month) remain unaffordable for billions globally, particularly in regions where $20 represents a substantial share of income. And OpenAI is exploring a “Pro Max” subscription at the $2,000/month level. As shown in Table 1, even the mid-tier subscriptions consume about 2–15 % of monthly income in many countries. The “Pro” tier occupies roughly 20 % to more than 150 % of monthly income for people in developing countries, making it impossible for most of them to afford.

Table 1:

The subscription costs ($20 for Plus, $200 for Pro) were divided by monthly nominal GNI per capita and multiplied by 100 to obtain the percentage of income. GNI data are from the World Bank (2023).

| Country | Plus share (%) | Pro share (%) |
|---|---|---|
| USA | 0.3 | 3 |
| China | 1.8 | 18 |
| India | 9.4 | 94 |
| Japan | 0.6 | 6 |
| Germany | 0.43 | 4.3 |
| Brazil | 2.6 | 25.8 |
| South Africa | 3.7 | 37 |
| Nigeria | 12.7 | 127 |
| Bangladesh | 8.3 | 83 |
| Pakistan | 16.4 | 164 |
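The Table 1 figures follow from a simple calculation: subscription price divided by one-twelfth of annual GNI per capita, times 100. Below is a minimal sketch; the GNI values are illustrative placeholders chosen to roughly reproduce a few of the table’s rows, not the exact World Bank 2023 series used above.

```python
# Minimal sketch of the Table 1 computation: subscription price as a
# percentage of monthly GNI per capita. GNI values are illustrative.

PLUS_USD, PRO_USD = 20, 200  # monthly subscription prices

annual_gni_per_capita_usd = {  # illustrative placeholders
    "USA": 80_300,
    "India": 2_540,
    "Nigeria": 1_890,
}

for country, gni in annual_gni_per_capita_usd.items():
    monthly_income = gni / 12
    plus_share = PLUS_USD / monthly_income * 100
    pro_share = PRO_USD / monthly_income * 100
    print(f"{country}: Plus {plus_share:.1f} %, Pro {pro_share:.0f} %")
# USA: Plus 0.3 %, Pro 3 %
# India: Plus 9.4 %, Pro 94 %
# Nigeria: Plus 12.7 %, Pro 127 %
```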

As a result, the leap in capability of advanced Gen AI also produces a leap in capability for those who use it (Cheng and Gong 2024), significantly impacting the ideal of “AI for all”. When it comes to “AI justice for all,” this trend deviates from the principle of substantive justice by creating a capability caste system: wealthy users and corporations harness Gen AI for innovation, education, and market dominance, while marginalized groups – students, small businesses, and low-income nations – are relegated to slower, less intelligent, outdated, or restricted versions. Individuals with access to the most advanced Gen AI can transcend conventional skill boundaries and even evolve into “super-individuals.” For example, OpenAI’s Deep Research function can effectively replace a research assistant, enabling users to complete literature reviews or data analyses within minutes – far outpacing the productivity of human researchers (Reuters 2025). According to a now-outdated McKinsey study, earlier versions of generative AI could already boost information retrieval efficiency by 400 % (McKinsey Global Institute 2023). Research also indicates that Gen AI significantly enhances learning capacity and managerial skills, widening capability gaps exponentially rather than incrementally (Cornell University Center for Teaching Innovation 2024). This exponential advantage consolidates existing social hierarchies, reflecting the broader dynamics of digital private power concentration (Sun and Xiao 2024). A wealthy student paying $200 per month for AI-based assistance can produce work once requiring a professor’s guidance, while an entrepreneur uses AI to generate patent-level solutions – turning technical privilege into capital gains. These “super-individuals” are created not by innate talent or diligence but by the ability to pay, stifling social mobility and reinforcing a cycle in which “capital empowerment” fuels “technology empowerment,” leading to “capability enhancement” and ultimately more “capital accumulation.”

4.1.2 Literacy Divide and Infrastructure Condition Undermining “AI for all”

A critical yet underrecognized obstacle to “AI for all” lies in cognitive disparities – the gap between those who understand Gen AI’s potential and those who remain unaware of its existence or utility. A study by the St. Louis Fed shows that high-education workers use generative AI with 3.2 times the depth of low-education groups, leading to a 15 % increase in the productivity gap (Federal Reserve Bank of St. Louis 2025). Gen AI is forging a new digital divide concerning not only who can log on or possesses technical digital skills but also who can comprehend, create, or control AI technologies (Hendawy 2024). This knowledge gap is compounded by uneven AI literacy: while tech-literate populations leverage AI for tasks ranging from coding to career advancement, marginalized groups often lack even basic awareness of these tools. Research has shown that the most vulnerable groups, with the lowest levels of AI knowledge and AI skills, were mostly older and had lower levels of education and privacy protection skills than the average user (Wang, Boerman, Kroon, Möller, and de Vreese 2024). Without targeted literacy programs, this cognitive barrier entrenches a hierarchy of opportunity, where the technologically initiated reap AI’s benefits, leaving others further behind in education, employment, and civic participation.

Moreover, the promise of “AI for all” collapses where basic infrastructure is absent. Globally, 2.6 billion people remain offline (International Telecommunication Union 2023), lacking the connectivity that is a precondition for using Gen AI. Even when devices are available, computational demands pose hurdles: running models like GPT-4.5 requires stable high-speed connectivity, which remains inaccessible in areas with bandwidth limitations. Without universal infrastructure investment, Gen AI’s transformative potential remains confined to urban, affluent enclaves, deepening global inequities.

In short, traditional digital divides – concerning both AI literacy and digital infrastructure – are simultaneously creating and exacerbating a new “AI divide,” making previously disadvantaged groups even more vulnerable. This significantly undermines the three core ideals behind “AI for all”: Firstly, individuals remain unaware of AI technology and have no means to access it, let alone benefit from “AI for all.” Secondly, marginalized communities become further disadvantaged, making substantive justice impossible. Thirdly, at present, it’s more accurate to describe the situation as “AI for certain countries” rather than “AI for all countries.”

4.2 Soft Laws Are Not Enough

Soft laws are the main source of the “AI for all” concept, with many of its guiding principles articulated through non-binding frameworks and ethical guidelines. However, relying on soft law alone is clearly insufficient.

As shown in Table 2, soft laws concerning “AI for all” can be divided into two levels: domestic and international. At the domestic level, various countries have published soft law documents related to “AI for all”, focusing on fairness, equality, and non-discrimination. At the international level, there is a general trend toward the sustainable development of AI-related goals. One key difference between domestic-level AI soft law and international-level discussions is that the North-South AI divide is a major concern in international papers, especially in United Nations documents. In fact, most “AI for all” issues can only be resolved at the international level. For instance, regarding “AI technology for all,” the most advanced gen AI models are concentrated in the United States. Some American AI companies provide free access to both cutting-edge and slightly less advanced models under certain conditions. As a result, in most countries, domestic AI companies struggle to establish technological barriers and lack the ability to charge users for their gen AI models. Consequently, citizens of other countries are left with two choices: either pay American AI companies or use less advanced models. Regarding “AI justice for all,” achieving both procedural and substantive justice requires international cooperation. The standard for procedural justice must be unified to prevent a global race to the bottom in regulation. Likewise, substantive justice cannot be realized without support from developed countries for developing nations. Finally, “AI for all countries” is inherently an international issue. Developing countries already face disadvantages in infrastructure, funding, talent development, and technological reserves. Without international support, AI technology will remain confined to certain nations, further widening the digital divide.

Table 2:

Main soft laws concerning AI for all.

| Level | Country/organization | Document name (year) | Main content concerning AI for all |
|---|---|---|---|
| Domestic | United States | Blueprint for an AI Bill of Rights (2022) | Emphasizes non-discrimination, privacy, and equal opportunity. |
| Domestic | United States | National AI R&D Strategic Plan (2023) | Ensures AI advances equity and prevents inequality. |
| Domestic | European Union | Ethics Guidelines for Trustworthy AI (2019) | Fairness is a key ethical principle, referring to the equal and just distribution of benefits and costs, as well as freedom from unfair bias, discrimination, and stigmatization. |
| Domestic | China | Ethical Norms for New Generation AI (2021) | Promotes fairness and justice, the fair sharing of AI benefits by all of society, and full respect for and assistance to vulnerable groups. |
| Domestic | Japan | Japanese Society for AI Ethical Guidelines (2017) | Acknowledges that AI may bring about additional inequality and discrimination; ensures AI can be used by humanity in a fair and equal manner. |
| Domestic | South Korea | National Guidelines for AI Ethics (2021) | Inclusive AI, minimizing bias, equal benefits for all. |
| International | OECD | OECD AI Principles (2019, updated 2024) | Emphasizes that AI should benefit humanity and the planet by promoting inclusive growth and sustainable development. |
| International | United Nations General Assembly | A/RES/78/168 (2024) | Urges support for developing countries to ensure inclusive and equitable access to AI, bridge the digital divide, and align with sustainable development goals. |
| International | G20 | G20 AI Principles (2019) | Focuses on the responsible development of AI, emphasizes inclusivity, ensures AI benefits everyone, and reduces inequalities. |
| International | World Economic Forum | A Blueprint for Equity and Inclusion in AI (2022) | Provides an inclusive strategy for the entire AI lifecycle, ensuring fair access to AI and skill development, benefiting vulnerable groups. |
| International | United Nations | Global Digital Compact (2024) | Achieving inclusive and equitable access to artificial intelligence and digital technologies. |

In the jurisprudential context, soft laws governing “AI for All” – whether domestic or international – encounter significant challenges, including inadequate substantive fairness (in certain countries), non-binding enforcement mechanisms, and excessive abstraction. Firstly, fairness emerges as a shared attribute across documents from different countries, albeit with distinct interpretations. The United States adopts a procedural approach emphasizing equality of opportunity and prevention of explicit discrimination, yet demonstrates minimal engagement with structural inequalities inherent in AI systems. This paradigm proves particularly problematic given the socioeconomic realities of advanced AI models. The high costs associated with accessing advanced AI models – a critical barrier unaddressed by U.S. guidelines – risk exacerbating wealth disparities under such opportunity-focused frameworks. Current U.S. soft law provisions permit this outcome by prioritizing formal justice over distributive equity, effectively legitimizing market-driven inequalities through regulatory inaction. In contrast, the European Union and China not only stress formal equality but also emphasize substantive equity, advocating for fairness in outcomes. Secondly, these documents are characterized by their non-legally binding nature. One study compared 22 mainstream AI ethics guidelines and found that most of them are merely symbolic, lacking binding force, and argued that they need to be reinforced by legal regulations and independent oversight (Hagendorff 2020). As a result, none of the guidelines under review impose mandatory legal obligations. Certain documents explicitly affirm their non-binding status, as evidenced by disclaimers such as, “Adoption of these principles may not meet the requirements of existing statutes, regulations, policies, or international instruments, or the requirements of the Federal agencies that enforce them” (White House Office of Science and Technology Policy 2022). This lack of enforceability means that the articulated visions and norms can be disregarded, rendering the lofty declarations contained within them largely utopian. Such a situation precipitates a structural collective action dilemma: the absence of binding constraints enables enterprises that flout principles to secure greater profits and competitive advantages. To remain viable in the market, other enterprises are pressured to follow suit, culminating in a “race to the bottom” wherein adherence to ethical standards is universally forsaken. This predicament cannot be ameliorated through moral persuasion alone; it demands the establishment of legally binding regulations endowed with coercive authority. Thirdly, the content of these AI ethical guidelines tends to remain abstract and theoretical (Cheng, Han, and Nasirov 2024; Cheng and Liu 2023). They frequently adopt a philosophical narrative style, describing idealistic visions of a utopian future in generalized and somewhat vague terms, while lacking concrete, actionable measures for bridging the gap between aspirational discourse and practical implementation. Although connections between the high cost of advanced models and the “AI for All” concept can be inferred at an abstract level, this requires interpretative effort. These soft law norms do not directly address specific challenges, such as the concrete impacts of the high cost of advanced models, nor do they provide specific guidance or policies to mitigate these effects. This deficiency extends to other barriers, including cognitive and usability barriers as well as infrastructure barriers.

4.3 Hard Laws: Remaining Scarcity

Some believe that ethical AI principles alone are insufficient to ensure that companies use AI responsibly. Instead, a strict governance framework should be implemented to reduce AI-related risks (Eitel-Porter 2021). However, enforceable legal regulations for “AI for all” are largely absent. Even when focusing on domestic AI legislation across various countries, truly hard legal frameworks are rare, let alone regulations specifically promoting “AI for all”.

Firstly, the absence of hard legal frameworks for “AI for all” is primarily evident in the lack of comprehensive AI-specific legal regulations. Most countries are still exploring AI governance and have not yet formed systematic hard-law frameworks for AI regulation (Fang and Perkins 2024), much less specific provisions for AI for all.

Secondly, even in countries that emphasize “AI for all” through soft-law guidelines and have introduced AI-related hard laws, those hard laws respond only sparsely to “AI for all”. For example, the EU’s Ethics Guidelines for Trustworthy AI (2019) stress fairness as one of four key ethical principles for Trustworthy AI, stating that “substantial fairness relates to the equal and just distribution of benefits and costs, and ensuring freedom from unfair bias, discrimination, and stigmatization.” Yet the provisions explicitly related to “AI for all” within the EU AI Act remain relatively limited. Provisions concerning “AI justice for all” in the EU AI Act primarily focus on formal justice rather than substantive justice. Regulations center on preventing AI discrimination and biases and safeguarding privacy. For instance, Article 5 prohibits fundamentally unjust or equality-harmful AI uses; Article 10 on Data and Data Governance requires datasets for training, validation, and testing to be representative and error-free to avoid discrimination. Although the Act itself lacks extensive regulations on privacy protection, related regulations like the GDPR potentially provide privacy safeguards. However, regarding substantive justice, regulations remain notably insufficient. Protections for disadvantaged and marginalized groups are minimal, limited to Article 16(l), which requires providers of high-risk AI systems to “ensure compliance with accessibility requirements,” and Article 4, mandating that AI providers and deployers enhance AI literacy among staff and users. There are no explicit measures detailing how to provide affordable AI solutions for economically disadvantaged groups. The Act also lacks measures to systematically address the more implicit and structural social inequalities surrounding AI. Furthermore, explicit provisions encouraging open-source AI models, algorithms, and data are notably insufficient, lacking substantial support for open-source communities and public resource sharing.

Several factors contribute to the disconnect in the transition from soft law to hard law regarding “AI for all”:

Firstly, slow legislative processes contrast with rapid technological advancements. One key reason for the soft law–hard law disconnect is the regulatory lag between fast-moving AI technology and the slow pace of legislation. Law often trails technological innovation, leaving a gap in governance as new risks emerge (Joskow and Noll 1981). In the AI context, scholars observe that “the pace of development of AI far exceeds the capability of any traditional regulatory system to keep up” (Marchant 2020). Formal lawmaking – from drafting bills to passing legislation or agency rules – takes years, while AI capabilities evolve in months. This pacing problem means “AI for all” may lack binding rules at the moment they are most needed. For example, the EU’s effort to enact a comprehensive AI law (the EU AI Act) began in 2021 but only reached approval in 2024. Regulatory lag suggests that without adaptive, faster mechanisms, hard law will chronically catch up “after the fact,” struggling to address AI’s latest challenges.

Secondly, the abstract and non-binding nature of soft law facilitates easier adoption. Because they lack legal force, soft law measures face fewer political hurdles – no need for lengthy legislative votes or international treaties. This makes them agile tools for emerging tech governance (Frontier Economics 2023). This abstract nature facilitates passage but also means these principles often stay on paper without concrete implementation mechanisms.

Thirdly, disagreements persist on pathways to achieve “AI for all”, particularly between market-driven and government-driven approaches. Some experts argue that premature or heavy regulation could impede innovation in AI, suggesting that “AI for all” might be achieved through competition and corporate responsibility without strict laws. The U.S. policy approach has largely leaned this way: it encourages multi-stakeholder collaboration on AI guidelines and involves industry leaders in drafting policies, rather than imposing top-down rules early (Walter 2024). On the other side, many academics and civil society voices contend that government intervention is necessary to guarantee inclusivity and public values in AI (Wilczek, Thäsler-Kordonouri, and Eder 2024). They point out that laissez-faire development has led to AI tools with bias and disparate impacts, and that without binding rules, firms might under-invest in costly fairness measures.

5 Mechanism to Integrate Soft and Hard Laws for “AI for All”

This mechanism requires a dual approach: leveraging the guiding role and ethical pressure of soft laws by expanding the connotation of “AI for All”, while simultaneously harnessing the mandatory force of hard laws to strike a balance between market dynamics and “AI for All” based on national realities.

5.1 Role of Soft Laws: Guiding Principles and Ethical Pressure

Soft laws alone are clearly not enough, yet they serve unique functions. Soft laws can exert moral pressure on relevant parties and influence the creation of hard laws (Abbott and Snidal 2000). Given the new challenges arising from the rapid advancement of generative AI, soft laws should continue to fulfill their roles by expanding the content of “AI for All,” thereby contributing to its practical implementation.

Firstly, the principle of “affordability” in “AI for All” should be explicitly included in soft law provisions. “AI technology for all” means more than merely universal access; it should incorporate the capability approach and Rawls’ difference principle, offering varied degrees of benefits tailored to the actual circumstances of different users to achieve substantive justice. Currently, major AI companies like OpenAI and Google provide free access only to second-tier models, while genuinely advanced services are either expensive or severely usage-limited. On one hand, soft laws should encourage developers and gen AI service providers to offer discounted or free access to AI services in low-income regions and for disadvantaged groups. For example, enterprises should consider adopting global pricing strategies adjusted according to purchasing power parity (PPP), as sketched below. Even state-of-the-art AI models should offer users certain levels of free access. On the other hand, businesses should establish AI-focused public welfare programs and collaborate with academic institutions, libraries, and non-profit organizations to provide low-cost or free advanced functionalities for educational and research purposes. Extending access to state-of-the-art AI models broadly across international and local communities can maximize AI’s positive externalities.
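As a hypothetical illustration of such PPP-adjusted pricing, the snippet below scales a baseline subscription price by each market’s relative purchasing power. The baseline price and conversion factors are invented placeholders, not actual vendor prices or official PPP statistics.

```python
# Hypothetical PPP-adjusted pricing: scale a baseline subscription price by
# each market's purchasing power relative to the reference market.

BASELINE_PRICE_USD = 20.0  # reference-market monthly price (assumed)

relative_purchasing_power = {  # invented placeholder factors
    "reference_market": 1.00,
    "middle_income_market": 0.45,
    "low_income_market": 0.18,
}

def ppp_adjusted_price(market: str) -> float:
    """Monthly price scaled to the market's local purchasing power."""
    return round(BASELINE_PRICE_USD * relative_purchasing_power[market], 2)

for market in relative_purchasing_power:
    print(f"{market}: ${ppp_adjusted_price(market):.2f}/month")
# reference_market: $20.00/month
# middle_income_market: $9.00/month
# low_income_market: $3.60/month
```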

Secondly, enhancing AI literacy and infrastructure should be identified clearly as one of the primary goals within soft law frameworks. Currently, soft law guidelines on AI ethics often address AI literacy and infrastructure superficially and abstractly. Soft laws should establish specific provisions dedicated to AI literacy, encouraging collaboration among enterprises, universities, and governments to implement comprehensive public training and awareness programs. Integrating “basic AI cognition” into mandatory or elective courses at all educational levels should be advocated. For instance, civil society organizations should be encouraged to disseminate recommendations for the correct use of large language models. A relevant example is the American Medical Association (AMA), which has already published guidelines on the responsible use of ChatGPT and similar models (American Medical Association 2023). Particular attention must be directed toward vulnerable populations, elderly individuals, and those with limited educational backgrounds. Community groups and non-profit organizations should be encouraged to host offline training sessions, providing introductory AI skills and demonstrations for residents in rural areas, thereby bridging the existing cognitive divide.

Thirdly, international cooperation must be reinforced through soft law guidelines concerning AI. AI should not remain an exclusive domain of wealthy nations; global AI soft law guidelines should strengthen advocacy by emphasizing capacity building and promoting multilateral and bilateral cooperation. Countries with advanced AI capabilities should be encouraged to provide comprehensive public resources to developing nations, including infrastructure, open-source models, datasets, governance frameworks, and safety tools. In infrastructure development, advanced nations should support developing countries in establishing data centers, high-performance computing facilities, and robust network infrastructure, particularly prioritizing increasing Africa’s share of global data center capacity. Regarding talent cultivation, soft law can advocate for establishing a global AI education network to nurture local AI expertise in developing nations, encompassing technical research, application development, and governance management. Concerning technology transfer, advanced countries should be encouraged to share applicable technologies with developing nations, carefully balancing intellectual property protection with technological inclusivity. Regarding institutional capabilities, assistance should be provided to developing nations in establishing AI governance frameworks tailored to their national contexts, thus enhancing their capacity to participate effectively in global AI governance. Concurrently, developing countries can leverage initiatives such as the Global Initiative on AI Governance and the United Nations resolution on “Strengthening International Cooperation in AI Capacity Building” to prioritize capacity building as a critical aspect of international cooperation.

5.2 Role of Hard Laws: Balancing Innovation, Market Efficiency, and Inclusivity

To advance “AI for All” through hard law, differentiated institutional designs must be developed based on the varying positions of countries within the global AI technology system, while preserving innovation vitality and market efficiency. Therefore, this section first explores a set of potential measures for the hardening of soft law, and then provides a categorized analysis based on the state of AI technological development in different countries.

5.2.1 Measures for Implementing Hard Law

Currently, generative AI services qualify as neither “public goods” nor “quasi-public goods” – markets remain the primary drivers of innovation and supply. However, “AI for All” carries significant public value, and overreliance on market forces risks exacerbating structural inequalities and other negative externalities. Consequently, the critical role of hard law lies in two dimensions: first, clarifying the boundaries of governmental public responsibility to avoid oversimplifying AI as a “free public good,” and second, deploying incentive-based legal mechanisms to correct market failures, social inequalities, and other negative externalities, thereby promoting broader and more affordable public access to AI services.

Regarding AI price barriers, multiple hard law measures could be considered. Firstly, AI services could be brought within government-guided pricing. Various countries have historically employed price regulation for essential public commodities and services. In the U.S., the Hepburn Act (1906) established maximum railroad freight rates. In China, the Pricing Law mandates governmental or government-guided pricing for critical commodities affecting national welfare, including medicines, energy, and transportation. Similarly, Japan’s Price Control Act permits governmental regulation of commodity and service pricing during emergencies. Other nations, such as Germany, France, and the UK, also regulate pricing in sectors like energy, telecommunications, railways, and healthcare to varying extents. If the societal and economic significance of AI services continues to grow, governments may consider subjecting them – in whole or in part – to guided pricing. Specific measures could include setting price ranges based on per capita income levels or on AI’s impact on public services and safety, thereby preventing exclusionary costs from barring public access to core AI applications. Secondly, “AI for All” could be promoted through government procurement. Most countries have government procurement laws regulating public purchasing behavior, and AI procurement agreements could help negotiate better pricing through economies of scale (McGee 2025). Procuring advanced large-model APIs and then providing them to education, healthcare, and small and medium-sized enterprises would exert less market distortion than government-mandated guided pricing.
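To illustrate how an income-indexed price range might operate, the following minimal sketch (in Python) computes a hypothetical monthly cap for an “essential tier” AI subscription as a fixed share of monthly per capita income. The 2 % affordability share, the tier label, and the income figures are assumptions introduced here for illustration; no statute or source cited in this paper prescribes them.

```python
# A minimal sketch, not a rule in force: it assumes a hypothetical
# guided-pricing formula that caps the monthly price of an "essential
# tier" AI subscription at a fixed share of monthly per capita income.
AFFORDABILITY_SHARE = 0.02  # hypothetical 2 % affordability share


def guided_price_cap(annual_per_capita_income: float) -> float:
    """Maximum permitted monthly price for the essential AI tier."""
    return AFFORDABILITY_SHARE * (annual_per_capita_income / 12)


# Hypothetical economies: the same statutory formula yields very
# different caps, which is the point of income-indexed pricing.
for label, income in [("high-income", 70_000),
                      ("middle-income", 12_000),
                      ("low-income", 2_500)]:
    print(f"{label}: monthly cap = ${guided_price_cap(income):,.2f}")
```

Under these assumptions, the formula yields caps of roughly $117, $20, and $4 per month across the three hypothetical economies, showing how a single statutory rule can scale affordability to local purchasing power.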

To enhance citizens’ AI literacy and bridge the AI divide, multiple hard law approaches exist. Firstly, in public service assurance law, free AI services could be provided through public cultural institutions. Various national legal frameworks regulate public service facilities such as museums and libraries, covering their construction, management, operations, funding, and public accessibility. For example, the U.S. Museum and Library Services Act of 1996 outlines funding and operational guidelines to ensure public access. Under the “AI for All” framework, countries could revise or reinterpret such laws to empower public institutions – using fiscal funds – to procure commercial AI services and offer free or low-cost access to the public. This approach would alleviate the financial burden on low-income or marginalized groups, enabling them to use advanced AI models through public facilities without expensive personal devices or subscriptions. Secondly, in education law, educational legislation should be revised or interpreted to formally integrate “AI literacy” and “digital skills” within the scope of the “right to education”. Educational laws should explicitly incorporate “understanding and applying generative AI” into defined educational objectives. For general adult education and lifelong learning, community colleges, vocational training centers, and online platforms should be encouraged to provide AI literacy programs, ensuring learning opportunities across individuals’ life stages. California’s recent law requiring schools to teach students about AI, for example, signals growing recognition of the importance of equipping young people with these essential skills (Jones 2024). Furthermore, legislation governing educational facilities should include broadband networks, smart terminal devices, and cloud computing resources as essential educational infrastructure, clarifying spending proportions or minimum standards in terms of network coverage rates and per capita device ownership.
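The “minimum standards” idea admits a similarly concrete reading. The sketch below, again purely illustrative, checks regional education infrastructure against two invented statutory floors – a network coverage rate and per-student device ownership – and reports any shortfall; the thresholds and the region data are hypothetical, not values drawn from any law cited here.

```python
# A minimal sketch under invented thresholds: checks whether a region's
# educational infrastructure meets hypothetical statutory minimums for
# network coverage and per-student device ownership.
from dataclasses import dataclass

MIN_COVERAGE_RATE = 0.95       # hypothetical: 95 % of schools with broadband
MIN_DEVICES_PER_STUDENT = 0.5  # hypothetical: one smart terminal per two students


@dataclass
class Region:
    name: str
    coverage_rate: float        # share of schools with broadband access
    devices_per_student: float  # smart terminals per enrolled student


def shortfalls(r: Region) -> list[str]:
    """Return the statutory minimums the region fails to meet."""
    gaps = []
    if r.coverage_rate < MIN_COVERAGE_RATE:
        gaps.append(f"coverage {r.coverage_rate:.0%} < {MIN_COVERAGE_RATE:.0%}")
    if r.devices_per_student < MIN_DEVICES_PER_STUDENT:
        gaps.append(f"devices {r.devices_per_student:.2f} < {MIN_DEVICES_PER_STUDENT:.2f}")
    return gaps


for region in [Region("Urban district", 0.98, 0.8),
               Region("Rural district", 0.72, 0.2)]:
    gaps = shortfalls(region)
    print(region.name, "compliant" if not gaps else "; ".join(gaps))
```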

5.2.2 Differentiated National Pathways: From Core to Peripheral AI Technology States

The effectiveness of hard law in realizing “AI for All” depends on a nation’s AI technological capacity. In other words, universal access to AI presupposes domestic capacity to provide “AI for all”. If advanced large language models reside outside a nation’s borders, and domestic markets and industries lack influence, efforts toward “AI for all” will falter. For this reason, while outlining potential hard-law institutional approaches, this section also categorizes nations into five tiers according to their distinct inclusivity challenges and corresponding response strategies. Drawing on world-systems theory and the current global division of labor in AI technology, nations can be categorized as Core, Near-Core, Semi-Core, Semi-Peripheral, and Peripheral AI technology states. Countries at different tiers face stark disparities in technological sovereignty, market scale, international competitiveness, and regulatory capacity when advancing “AI for All” through hard law, necessitating distinct legislative priorities (Table 3).

Table 3: Differentiated national tiers and hard-law strategies for “AI for All”.

| Tier | Key countries/regions | Hard-law strategies for “AI for All” |
|------|-----------------------|---------------------------------------|
| Core AI States | United States | Domestic hard law can effectively promote “AI for All”: covers AI Technology for All and AI Justice for All. |
| Near-Core AI States | China | Domestic hard law can advance “AI for All”: addresses AI Technology for All and AI Justice for All. |
| Semi-Core AI States | UK, Canada, Germany, France, Japan, South Korea | Limited domestic hard-law impact on “AI for All”: pricing and open-source policies for advanced models are dictated externally; long-arm jurisdiction may be ineffective. AI Justice measures can still influence fairness. |
| Semi-Peripheral States | India, Russia, Brazil, UAE (emerging economies) | Weak domestic hard-law impact: advanced model pricing and open-source policies are controlled externally; long-arm jurisdiction faces challenges. AI Justice frameworks may partially address equity. |
| Peripheral States | Most developing nations (Africa, Oceania, Latin America) | Domestic hard law fails to advance “AI for All”: no control over pricing or open-source policies for advanced models; long-arm jurisdiction is nearly impossible. Reliance on international cooperation only. |
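Read programmatically, Table 3 amounts to a lookup: a country’s tier, rather than a one-size-fits-all rule, selects the hard-law emphasis. The sketch below encodes that logic; the tier assignments follow the table, the one-line summaries are condensed paraphrases of its third column, and the default treatment of unlisted countries as Peripheral is an assumption for illustration only.

```python
# A minimal sketch encoding Table 3 as a lookup: the country's tier,
# not a uniform rule, selects the hard-law emphasis. Summaries are
# condensed paraphrases of the table; the Peripheral default for
# unlisted countries is an illustrative assumption.
HARD_LAW_EMPHASIS = {
    "Core": "full domestic toolkit: guided pricing, subsidies, antitrust, education mandates",
    "Near-Core": "domestic hard law shapes capacity and accessibility",
    "Semi-Core": "antitrust, data governance, labor law; advanced-model pricing set externally",
    "Semi-Peripheral": "domestic access rules plus regional and multilateral cooperation",
    "Peripheral": "international cooperation, aid, and treaty-based access",
}

TIER_OF = {  # examples drawn from Table 3
    "United States": "Core",
    "China": "Near-Core",
    "Japan": "Semi-Core",
    "India": "Semi-Peripheral",
}


def strategy_for(country: str) -> str:
    """Hard-law emphasis for a country's tier; unlisted countries default to Peripheral."""
    return HARD_LAW_EMPHASIS[TIER_OF.get(country, "Peripheral")]


print(strategy_for("India"))  # -> domestic access rules plus regional and multilateral cooperation
```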

Firstly, as the Core AI Technology State, the U.S. possesses the world’s most robust AI ecosystem, leading in large-scale model development, chip design and manufacturing, foundational research, and industrial applications, with full AI sovereignty. It can effectively implement “AI for All” through domestic hard law – for example, by adopting government-guided pricing, implementing targeted subsidies, strengthening antitrust and data governance, and mandating AI literacy in education laws. The legislative challenge lies in balancing two goals: maintaining incentives for innovation and global competitiveness in the tech sector, while preventing a few corporations or privileged groups from monopolizing technological dividends and thereby undermining societal equity.

Secondly, China, the Near-Core AI Technology State, ranks second globally in foundational model development, domestic AI chip R&D, academic output, and application scale, with partial AI sovereignty. However, it remains dependent on foreign suppliers for certain high-end chips and core algorithms. Domestic hard law can play a significant role in shaping national AI capacity and improving accessibility, particularly in balancing innovation incentives with inclusive access.

Thirdly, Semi-Core AI Technology States (e.g., South Korea, Japan, Germany) exhibit active R&D and industrial ecosystems and moderate capabilities in large-scale model design and chip manufacturing. While excelling in specific dimensions, none matches the U.S. or China in comprehensive strength across models, chips, computing power, research, applications, and policy investment. For “AI for All” legislation, reliance on foreign core technologies and models complicates efforts to enforce pricing controls or subsidies, as decisions by multinational corporations often dictate outcomes. Nonetheless, domestic antitrust laws, data governance frameworks, and labor regulations can still promote fairness and affordability in AI adoption, albeit with less impact than in Core or Near-Core states.

Fourthly, Semi-Peripheral AI Technology States (e.g., India, Brazil, Indonesia) lack advanced large-scale model design capabilities and are typically large emerging economies or developing countries actively deploying AI. While they possess some technical and talent infrastructure, their capacity for high-end chip production, cutting-edge research, or large-scale industrial AI applications remains limited. Hard law can regulate domestic AI pricing, public service access, and educational guarantees, but challenges persist owing to reliance on foreign technology; “long-arm jurisdiction”, for example, struggles to influence the pricing or platform strategies of foreign firms in local markets. International cooperation – via regional or multilateral platforms – is therefore critical to facilitate technology transfer, public AI infrastructure development, and global or regional coordination on AI education and training.

Finally, most developing nations fall into the Peripheral category, lacking advanced research institutions, AI industrial bases, or capabilities in foundational model development and chip manufacturing. Their AI adoption is limited to off-the-shelf solutions provided by foreign entities. Domestic hard law has minimal leverage against transnational monopolies or high-cost services, as these nations lack control over core technologies. The solution lies not in hard law but in international organizations, multilateral aid, and treaties that integrate “AI for All” into global development agendas – including preferential or free access to AI tools for low-income nations, coupled with infrastructure and educational support. At the same time, efforts should be made to expand electricity and internet access and to provide training in AI skills (Nugroho and Cammelli 2024).

6 Conclusions

This paper has systematized the multifaceted concept of “AI for All” into a coherent theoretical and practical framework, emphasizing three interrelated dimensions: AI Technology for All, AI Justice for All, and AI for All Countries. By defining these dimensions clearly, the research delineates “AI for All” from related concepts like inclusive, accessible, or human-centered AI, addressing ambiguities prevalent in contemporary discourse.

The theoretical exploration grounded in Law and Economics, Justice Theory, and World-Systems Theory provides robust legitimacy to the “AI for All” initiative. Law and Economics underscores the need for governmental intervention when market mechanisms alone fail to tackle inequalities and externalities inherent in AI deployment. Justice Theory, drawing notably from Rawlsian and capabilities frameworks, reinforces the moral imperative for equitable AI distribution, stressing substantive over purely formal fairness. World-Systems Theory places “AI for All” within a global context, highlighting how AI can either perpetuate or mitigate inequalities between developed and developing nations, emphasizing the need for deliberate international policy interventions.

However, realizing “AI for All” faces critical barriers: the prohibitive costs of advanced generative AI models, gaps in AI literacy, and inadequate infrastructure. Soft law initiatives, though abundant, lack enforceability, specificity, and responsiveness to rapidly evolving generative AI technologies. Meanwhile, existing hard laws remain scarce, fragmented, and insufficiently attuned to the substantive dimensions of justice necessary for genuinely inclusive outcomes.

To bridge this gap, the paper proposes a comprehensive integration of soft and hard laws. Soft laws should be explicitly updated to emphasize affordability, literacy, and international cooperation, providing ethical guidance and public pressure to complement hard laws. Concurrently, hard laws should adopt market-friendly but socially responsible regulations – such as guided pricing mechanisms, public service assurances through institutions like libraries, and legislative frameworks mandating AI literacy as part of education rights – to balance innovation with inclusive accessibility. Moreover, by dividing countries into core, near-core, semi-core, semi-peripheral, and peripheral categories according to their technological strength in the global AI system, the paper emphasizes that national legislative approaches must reflect each country’s position within the global AI technological hierarchy.

Ultimately, solving the challenges identified herein demands a strategic synthesis of ethical principles and enforceable regulations, ensuring that the transformative potential of AI technology genuinely benefits all sectors of society, both domestically and globally.


Corresponding author: Yang Xiao, Guanghua Law School, Zhejiang University, Hangzhou, China, E-mail:


  1. Research ethics: Not applicable.

  2. Author contributions: The authors have accepted responsibility for the entire content of this manuscript and approved its submission.

  3. Conflict of interest: The authors state no conflict of interest.

  4. Research funding: None declared.

  5. Data availability: Not applicable.

References

Abbott, K. W., and D. Snidal. 2000. “Hard and Soft Law in International Governance.” International Organization 54 (3): 421–56. https://doi.org/10.1162/002081800551280.

African Union. 2024. Continental Artificial Intelligence Strategy. African Union. https://au.int/en/documents/20240809/continental-artificial-intelligence-strategy.

AI for Developing Countries Forum. 2025. AI for Developing Countries Forum. https://aifod.org.

AI4ALL. 2025. AI4ALL: Opening Doors to Artificial Intelligence for Historically Excluded Talent. https://ai-4-all.org/.

American Medical Association. 2023. ChatGPT and Generative AI: What Physicians Should Consider [PDF]. https://www.ama-assn.org/system/files/chatgpt-what-physicians-should-consider.pdf.

Arrighi, G. 1990. “The Developmentalist Illusion: A Reconceptualization of the Semiperiphery.” In Semiperipheral States in the World-Economy, edited by W. G. Martin, 11–42. Westport, CT: Greenwood Press.

Arrighi, G. 2007. Adam Smith in Beijing: Lineages of the Twenty-First Century. London: Verso.

Avellan, T., S. Sharma, and M. Turunen. 2020. “AI for All: Defining the What, Why, and How of Inclusive AI.” In Proceedings of the 23rd International Conference on Academic Mindtrek, 142–4. https://doi.org/10.1145/3377290.3377317.

Bill & Melinda Gates Foundation. 2025. AI Equity: Ensuring Access to AI for All. https://www.gatesfoundation.org/ideas/science-innovation-technology/artificial-intelligence.

Buccella, A. 2023. “‘AI for All’ Is a Matter of Social Justice.” AI and Ethics 3 (4): 1143–52. https://doi.org/10.1007/s43681-022-00222-z.

Carter, L., D. Liu, and C. Cantrell. 2020. “Exploring the Intersection of the Digital Divide and Artificial Intelligence: A Hermeneutic Literature Review.” AIS Transactions on Human-Computer Interaction 12 (4): 253–75. https://doi.org/10.17705/1thci.00138.

Cheng, L., and X. Gong. 2024. “Appraising Regulatory Framework towards Artificial General Intelligence (AGI) under Digital Humanism.” International Journal of Digital Law and Governance 1 (2): 269–312. https://doi.org/10.1515/ijdlg-2024-0015.

Cheng, L., J. Han, and J. Nasirov. 2024. “Ethical Considerations Related to Personal Data Collection and Reuse: Trust and Transparency in Language and Speech Technologies.” International Journal of Legal Discourse 9 (2): 217–35. https://doi.org/10.1515/ijld-2024-2010.

Cheng, L., and X. Liu. 2023. “From Principles to Practices: The Intertextual Interaction between AI Ethical and Legal Discourses.” International Journal of Legal Discourse 8 (1): 31–52. https://doi.org/10.1515/ijld-2023-2001.

Coase, R. H. 1960. “The Problem of Social Cost.” Journal of Law and Economics 3: 1–44. https://doi.org/10.1086/466560.

Cornell University Center for Teaching Innovation. 2024. Generative Artificial Intelligence. https://teaching.cornell.edu/generative-artificial-intelligence (accessed March 19, 2025).

Cotter, K. 2022. “Practical Knowledge of Algorithms: The Case of BreadTube.” New Media & Society 26: 2131–50. https://doi.org/10.1177/14614448221081802.

Couldry, N., and U. A. Mejias. 2019. “Data Colonialism: Rethinking Big Data’s Relation to the Contemporary Subject.” Television & New Media 20 (4): 336–49. https://doi.org/10.1177/1527476418796632.

Daly, S. 2021. “The Rule of (Soft) Law.” King’s Law Journal 32 (1): 3–13. https://doi.org/10.1080/09615768.2021.1885326.

DeepSeek-AI. 2025. DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning [Preprint]. arXiv:2501.12948.

Eitel-Porter, R. 2021. “Beyond the Promise: Implementing Ethical AI.” AI and Ethics 1 (1): 73–80. https://doi.org/10.1007/s43681-020-00011-6.

Fang, A., and J. Perkins. 2024. “Large Language Models (LLMs): Risks and Policy Implications.” MIT Science Policy Review 5: 134–45. https://doi.org/10.38105/spr.3qrco9kp8x.

Federal Reserve Bank of St. Louis. 2025. The Impact of Generative AI on Work Productivity. https://www.stlouisfed.org/on-the-economy/2025/feb/impact-generative-ai-work-productivity (accessed March 19, 2025).

Fosch-Villaronga, E., and A. Poulsen. 2022. “Diversity and Inclusion in Artificial Intelligence.” In Law and Artificial Intelligence: Regulating AI and Applying AI in Legal Practice, edited by B. Custers, and E. Fosch-Villaronga, 109–34. The Hague: T.M.C. Asser Press. https://doi.org/10.1007/978-94-6265-523-2_6.

Frontier Economics. 2023. Innovative Technology Governance: Hard Rules for Soft Laws. https://www.frontier-economics.com/uk/en/news-and-insights/articles/article-i21147-innovative-technology-governance-hard-rules-for-soft-laws (accessed March 19, 2025).

Future of Life Institute. 2017. Asilomar AI Principles. https://futureoflife.org/open-letter/ai-principles/.

Gabriel, I. 2022. “Toward a Theory of Justice for Artificial Intelligence.” Daedalus 151 (2): 218–31. https://doi.org/10.1162/daed_a_01911.

Global Partnership on Artificial Intelligence. n.d. About GPAI. https://gpai.ai/about/.

Gran, A.-B., P. Booth, and T. Bucher. 2021. “To Be or Not to Be Algorithm Aware: A Question of a New Digital Divide?” Information, Communication & Society 24 (12): 1779–96. https://doi.org/10.1080/1369118x.2020.1736124.

Guzman, A. T., and T. L. Meyer. 2010. “International Soft Law.” Journal of Legal Analysis 2 (1): 171–225. https://doi.org/10.1093/jla/2.1.171.

Hagendorff, T. 2020. “The Ethics of AI Ethics: An Evaluation of Guidelines.” Minds and Machines 30 (1): 99–120. https://doi.org/10.1007/s11023-020-09517-8.

Hendawy, M. 2024. “The Intensified Digital Divide: Comprehending GenAI.” Internet Policy Review. https://policyreview.info/articles/news/intensified-digital-divide-comprehending-genai/1772.

High-Level Expert Group on Artificial Intelligence. 2019. Ethics Guidelines for Trustworthy AI. Brussels: European Commission.

Hopkins, T. K., and I. Wallerstein, eds. 1982. World-Systems Analysis: Theory and Methodology. Thousand Oaks: SAGE Publications.

Institute of Electrical and Electronics Engineers. 2019. Ethically Aligned Design: A Vision for Prioritizing Human Well-Being with Autonomous and Intelligent Systems, 1st ed. New York: IEEE.

International Telecommunication Union. 2023. “Population of Global Offline Continues Steady Decline to 2.6 Billion People in 2023: Accelerating Progress Is Key in Race toward Universal and Meaningful Connectivity.” https://www.itu.int/en/mediacentre/Pages/PR-2023-09-12-universal-and-meaningful-connectivity-by-2030.aspx (accessed March 19, 2025).

Jones, D. 2024. California Law Requires Schools to Teach Students about AI. Government Technology. https://www.govtech.com/education/k-12/california-law-requires-schools-to-teach-students-about-ai.

Joskow, P. L., and R. G. Noll. 1981. “Regulation in Theory and Practice: An Overview.” In Studies in Public Regulation, edited by G. Fromm, 1–78. The MIT Press.

Lipson, C. 1991. “Why Are Some International Agreements Informal?” International Organization 45 (4): 495–538. https://doi.org/10.1017/s0020818300033191.

Marchant, G. 2020. Soft Law as a Complement to AI Regulation. Brookings Institution. https://www.brookings.edu/articles/soft-law-as-a-complement-to-ai-regulation/ (accessed March 19, 2025).

McGee, M. 2025. What the Rising Costs of AI Means for Government. StateTech. https://statetechmagazine.com/article/2025/01/what-rising-costs-ai-means-government.

McKinsey Global Institute. 2023. The Economic Potential of Generative AI: The Next Productivity Frontier. McKinsey & Company.

Medema, S. G. 2020. Markets, Morals, and Policy-Making: A New Defense of Economics. Oxford: Oxford University Press.

Morris, M. R. 2020. “AI and Accessibility.” Communications of the ACM 63 (6): 35–7. https://doi.org/10.1145/3356727.

Mozur, P. 2025. “Chinese AI Firm DeepSeek Shows Promise but Trails U.S. Leaders.” The Wall Street Journal. https://www.wsj.com/tech/ai/china-ai-deepseek-chatbot-6ac4ad33.

NITI Aayog. 2018a. National Strategy for Artificial Intelligence. Government of India. https://www.niti.gov.in/sites/default/files/2023-03/National-Strategy-for-Artificial-Intelligence.pdf.

NITI Aayog. 2018b. National Strategy for Artificial Intelligence: AI for All. Government of India. https://www.niti.gov.in/national-strategy-artificial-intelligence.

Nugroho, L., and G. Cammelli. 2024. Tipping the Scales: AI’s Dual Impact on Developing Nations. World Bank Blogs. https://blogs.worldbank.org/en/digital-development/tipping-the-scales--ai-s-dual-impact-on-developing-nations.

Organisation for Economic Co-operation and Development. 2019. OECD AI Principles. https://www.oecd.org/going-digital/ai/principles/.

Panagopoulos, A. 2024. AI for All: An Initiative of the Ministry of Interior in Cooperation with Google and EKDDA. Digital Skills and Jobs Platform. https://digital-skills-jobs.europa.eu/en/inspiration/good-practices/ai-all-initiative-ministry-interior-cooperation-google-and-ekdda.

Pogge, T. 2002. World Poverty and Human Rights: Cosmopolitan Responsibilities and Reforms. Cambridge: Polity Press.

Posner, E., and J. Gersen. 2008. Soft Law [Working Paper No. 213]. University of Chicago Public Law & Legal Theory Working Paper.

Posner, R. A. 2014. Economic Analysis of Law, 9th ed. New York: Wolters Kluwer Law & Business.

Raustiala, K., and D. G. Victor. 1998. “Conclusions.” In The Implementation and Effectiveness of International Environmental Commitments: Theory and Practice, edited by D. G. Victor, K. Raustiala, and E. B. Skolnikoff, 659–707. Cambridge: MIT Press.

Rawls, J. 1999. A Theory of Justice, Rev. ed. Harvard University Press. https://doi.org/10.4159/9780674042582.

Reuters. 2025. OpenAI Launches New AI Tool to Facilitate Research Tasks. https://www.reuters.com/technology/openai-launches-new-ai-tool-facilitate-research-tasks-2025-02-03/.

Robeyns, I. 2017. Wellbeing, Freedom and Social Justice: The Capability Approach Re-Examined. Nottingham: Open Book Publishers. https://doi.org/10.11647/OBP.0130.

Schuster, S. 2023. AI for All: How Everyday People Can Benefit from Artificial Intelligence. Shawn M Schuster.

Sen, A. 1992. “Reflections on the Meaning of Efficiency.” The Economic Journal 102 (410): 1–18.

Shams, R. A., D. Zowghi, and M. Bano. 2023a. Challenges and Solutions in AI for All [Preprint]. arXiv:2307.10600.

Shams, R. A., D. Zowghi, and M. Bano. 2023b. AI for All: Operationalising Diversity and Inclusion Requirements for AI Systems [Preprint]. arXiv:2311.14695.

Shams, R. A., D. Zowghi, and M. Bano. 2023c. AI for All: Identifying AI Incidents Related to Diversity and Inclusion [Preprint]. arXiv:2311.14696.

Shams, R. A., D. Zowghi, and M. Bano. 2025. “AI and the Quest for Diversity and Inclusion: A Systematic Literature Review.” AI and Ethics 5 (1): 411–38. https://doi.org/10.1007/s43681-023-00362-w.

Sindico, F. 2006. “Soft Law and the Elusive Quest for Sustainable Global Governance.” Leiden Journal of International Law 19 (3): 829–46. https://doi.org/10.1017/s0922156506003608.

Smith, J., and K. Lee. 2024. “Artificial Intelligence and Wealth Inequality: A Comprehensive Analysis.” Technological Forecasting and Social Change 196: 121234.

Sun, X., and Y. Xiao. 2024. “How Digital Power Shapes the Rule of Law: The Logic and Mission of Digital Rule of Law.” International Journal of Digital Law and Governance 1 (2): 207–43. https://doi.org/10.1515/ijdlg-2024-0017.

TechRadar. 2025. “OpenAI’s Deep Research Smashes Records for the World’s Hardest AI Exam.” https://www.techradar.com/computing/artificial-intelligence/openais-deep-research-smashes-records.

Toronto Declaration. n.d. Inclusive AI Must Be Shaped by a Diverse Range of Voices in All Areas from Strategy Development, Data Gathering, Algorithmic Design, Implementation, and User Impact, with the Goal of Respecting the Right to Live Free from Discrimination. https://www.torontodeclaration.org/declaration-text/english.

Trubek, D. M., P. Cottrell, and M. Nance. 2006. “‘Soft Law,’ ‘Hard Law,’ and European Integration: Toward a Theory of Hybridity.” In Law and New Governance in the EU and the US, edited by G. de Búrca, and J. Scott, 65–94. Oxford: Hart Publishing. https://doi.org/10.2139/ssrn.855447.

United Nations. 2024. Global Digital Compact: Open Data, AI Models, Standards, and Content for Development. United Nations. https://www.un.org/global-digital-compact/.

United Nations Educational, Scientific and Cultural Organization (UNESCO). 2019. Beijing Consensus on Artificial Intelligence and Education. https://unesdoc.unesco.org/ark:/48223/pf0000368303.

United Nations Educational, Scientific and Cultural Organization (UNESCO). 2024. Third International UNESCO Model on AI Has Been Launched for Associations and Clubs of the UNESCO Movement. https://www.unesco.org/en/articles/third-international-unesco-model-ai-has-been-launched-associations-and-clubs-unesco-movement.

Wallerstein, I. 2004. World-Systems Analysis: An Introduction. Durham: Duke University Press. https://doi.org/10.1215/9780822399018.

Walter, Y. 2024. “Managing the Race to the Moon: Global Policy and Governance in Artificial Intelligence Regulation—A Contemporary Overview and an Analysis of Socioeconomic Consequences.” Discover Artificial Intelligence 4: 14. https://doi.org/10.1007/s44163-024-00109-4.

Wang, C., S. C. Boerman, A. C. Kroon, J. Möller, and C. H. de Vreese. 2024. “The Artificial Intelligence Divide: Who Is the Most Vulnerable?” New Media & Society 0 (0): 1–3. https://doi.org/10.1177/14614448241232345.

Weidinger, L., K. R. McKee, R. Everett, S. Huang, T. O. Zhu, M. J. Chadwick, C. Summerfield, and I. Gabriel. 2023. “Using the Veil of Ignorance to Align AI Systems with Principles of Justice.” Proceedings of the National Academy of Sciences 120 (18): e2213709120. https://doi.org/10.1073/pnas.2213709120.

Weil, P. 1983. “Towards Relative Normativity in International Law?” American Journal of International Law 77 (3): 413–23. https://doi.org/10.2307/2201073.

Westerstrand, S. 2024. “Reconstructing AI Ethics Principles: Rawlsian Ethics of Artificial Intelligence.” Science and Engineering Ethics 30 (5): 1–21. https://doi.org/10.1007/s11948-024-00507-y.

White House Office of Science and Technology Policy. 2022. Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People. https://bidenwhitehouse.archives.gov/ostp/ai-bill-of-rights/about-this-document.

Wilczek, B., S. Thäsler-Kordonouri, and M. Eder. 2024. “Government Regulation or Industry Self-Regulation of AI? Investigating the Relationships between Uncertainty Avoidance, People’s AI Risk Perceptions, and Their Regulatory Preferences in Europe.” AI & Society. https://doi.org/10.1007/s00146-024-02138-0.

Yoon, I. S. 2021. “Amartya Sen’s Capabilities Approach: Resistance and Transformative Power in the Age of Transhumanism.” Zygon 56 (4): 874–97. https://doi.org/10.1111/zygo.12740.

Yu, D., H. Rosenfeld, and A. Gupta. 2023. The ‘AI Divide’ between the Global North and the Global South. World Economic Forum. https://www.weforum.org/stories/2023/01/davos23-ai-divide-global-north-global-south/.

Ziewitz, M., M. Fourcade, and D. Boyd. 2021. “Artificial Intelligence, Algorithms, and Social Inequality: Sociological Contributions to Contemporary Debates.” Critical Sociology 47 (7–8): 1137–48.

Zowghi, D., and M. Bano. 2024. “AI for All: Diversity and Inclusion in AI.” AI and Ethics 4 (4): 873–6. https://doi.org/10.1007/s43681-024-00485-8.

Received: 2025-04-03
Accepted: 2025-04-03
Published Online: 2025-04-25

© 2025 the author(s), published by De Gruyter on behalf of Zhejiang University

This work is licensed under the Creative Commons Attribution 4.0 International License.
