
The Global Institutional Governance of AI: A Four-Dimensional Perspective

  • Rostam J. Neuwirth

    Rostam J. Neuwirth is Professor of Law and Head of the Department of Global Legal Studies at the University of Macau. Previously, he taught at the West Bengal National University of Juridical Sciences (NUJS) in Kolkata and the Hidayatullah National Law University (HNLU) in Raipur (India) and worked as a legal adviser in the Department of European Law of the International Law Bureau of the Austrian Federal Ministry for Foreign Affairs. He received his PhD degree from the European University Institute (EUI) in Florence (Italy) and also holds a master’s degree in law (LLM) from the Faculty of Law of McGill University in Montreal (Canada). As an undergraduate he studied at the University of Graz (Austria) and the Université d’Auvergne (France). He is the author of the books ‘The EU Artificial Intelligence Act: Regulating Subliminal AI Systems’ (Routledge 2023) and ‘Law in the Time of Oxymora: A Synaesthesia of Language, Logic and Law’ (Routledge 2018) as well as numerous other publications that focus on contemporary global legal problems by exploring the intrinsic linkages between law, on the one hand, and language, cognition, art, culture, society, and technology, on the other.

Published/Copyright: March 8, 2024

Abstract

The present debate about the governance of artificial intelligence (AI) is dominated by a narrative of a “global race toward the regulation of AI.” Such a narrative bears serious dangers and should be rephrased as the “race toward the global regulation of AI” to adequately address the cross-cutting, cross-boundary, and cross-cultural nature of these technologies. If the debate about the future regulation of AI is to efficiently address the serious dangers and potentially existential risks related to AI, then it should be tied to other global governance issues, such as those summarized by the United Nations Sustainable Development Goals (SDGs). For this endeavor to be successful, the substantive questions of regulation must be combined with efforts to reform the present international system with a view to establishing a more efficient and coherent global institutional framework. It is important to be mindful of past obstacles in the reform of existing international organizations and to avoid the need for another global cataclysm to trigger institutional reform; thus, the article follows the idea that cognitive change leads to the transformation of international organizations. As both a technology aimed at replicating the human mind and an example of an important linguistic trend of a rise in essentially oxymoronic concepts, AI is deemed to provide the right point of departure to ponder future modes of human cognition – modes that reflect Einstein’s description of the world as a “four-dimensional space-time continuum” – which may help to imagine the contours of a future global institutional framework.

1 Introduction

The path which opens immediately before us in the future is that of applying the conception of four-dimensional space to the phenomena of nature, and of investigating what can be found out by this new means of apprehension. [1]

The past decade has witnessed a global race to the development of artificial intelligence (AI). The narrative of this global AI development race has primarily been dominated by a fierce competition between the United States (US) and the People’s Republic of China (PRC).[2] However, the narrative of a new global arms race in the field of AI has been increasingly criticized, as it could well turn into a race swerving to the edge of a precipice because of the inherent dangers as well as potential existential risks related to AI.[3] As reflected in the adoption of the United Nations Educational, Scientific and Cultural Organization (UNESCO) Recommendation on the Ethics of AI in November 2021, the initial enthusiasm about the benefits of AI is complemented by a realistic sense of the ethical concerns and actual risks of AI.[4] At the same time, awareness has grown that nonbinding ethical recommendations or principles alone will not guarantee a safe development of AI.[5] Based on these ethical concerns, the regulation of AI has gained greater significance. In addition, the global AI race narrative is seen as posing serious dangers to the safe development of such technologies.[6]

As a result, the global race for the development of AI has entered the legal domain and turned into a parallel global race for the regulation of AI.[7] This race formally started with the European Union’s launch of a proposal for an Artificial Intelligence Act (AIA) in April 2021, which constituted a comprehensive and horizontal approach to the regulation of AI.[8] The AIA even proposes to ban certain AI practices because they pose unacceptable risks.[9] The PRC then followed a sectoral approach by adopting three laws related to AI, with a specific focus on algorithm-generated recommendations, deep synthesis technology, and generative AI.[10] Last, the global race to the regulation of AI heated up with the adoption of an executive order by US President Biden.[11]

At the same time, serious problems were identified with the current narrative of a so-called “global race toward the regulation of AI,” which was even framed as a “battle of digital empires” to regulate technology.[12] This rhetoric seems wrong and counterproductive for several reasons and should, therefore, be replaced by one calling for a “race toward the global regulation of AI” instead. A first strong reason is that digital sovereignty marks an oxymoron as it binds together the seemingly incompatible concepts of “sovereignty,” which is based on territorial jurisdiction, and of “digital,” which refers to a boundless cyberspace.[13] In short, it means that, at best, the digital space constitutes a single empire that cannot be coherently regulated without sufficient levels of mutual coordination and cooperation between the different countries or jurisdictions. Second, the regulatory lacunae are growing as time is short and the development of AI is proceeding at an accelerating pace. Third, the current narrative will only contribute to greater degrees of fragmentation between national and regional AI laws, which will eventually increase the risk of conflicts between countries as well as their relevant laws.[14] Fourth, the rhetoric also contradicts the cross-cutting, cross-border, and cross-cultural nature of AI and related technologies.[15] Unlike past technological development races, the present AI race is closely related to other technologies, such as the fifth generation of cellular wireless (5G), the Internet of Things (IoT), neurotechnologies, or augmented and virtual reality technologies, which can no longer “be fully cultivated in the same local environment.”[16] Last and most importantly, emphasizing the global aspects in the future of AI regulation is more conducive to an important aspect of any future debate on global governance, namely global institutional governance.

In other words, the quest for optimal solutions for the global governance and regulation of AI should be closely tied to questions of institutional aspects of AI governance. Now that more laws are being adopted and the different international organizations are more active in the field of AI,[17] it is necessary to give greater consideration to institutional aspects. An urgent need to establish some kind of global AI organization to avoid differing domestic regulatory approaches in the field of AI has been recognized for some time, but the need for regulation should not be ignored until such an organization can be established.[18] Ideally, the process of the regulation of AI should be accompanied by a debate on the optimal institutional support needed to secure the proper implementation and enforcement of the laws adopted. This step appears only logical, as laws, once adopted, need institutions to monitor and, if necessary, enforce compliance with them. Additionally, institutional aspects related to AI are important given AI’s all-pervasive and cross-cutting nature. This feature of AI also means that the debate about the governance of AI needs to be tied to other policy debates, such as those summarized by the United Nations Sustainable Development Goals (SDGs). The debates about the global governance of AI and the future global goals succeeding the SDGs, along with a debate about the reform of the international institutional system, need to be combined. Such an opportunity will be provided by the United Nations (UN) Summit of the Future, proposed in the UN Secretary-General’s Common Agenda and to be held in 2024, which aims “to forge a new global consensus on what our future should look like, and what we can do today to secure it.”[19]

With an emphasis on the necessity to consider the institutional aspects of the global governance of AI, Section 2 of this article first aims to critically assess whether AI constitutes a problem of a global nature that warrants regulatory action at the global level and in connection with other global governance issues. In Section 3, the article takes a brief look at the international institutional system that is currently in place. This system’s architecture originated from the post-World War II plans to establish the United Nations Organization (UNO) under the quasi-constitutional umbrella of the United Nations Charter. Despite revolutionary changes in the world, this system’s architecture has remained largely unchanged until the present day, which is the reason Section 4 is dedicated to seeking answers to the question of the possible causes for the inertia of international institutional change. This quest meets with two seemingly competing theories, one of which finds the major drive in cataclysms and the other in cognitive change. Section 5 recognizes the difficulty in using cognitive change to help overcome this inertia of international reform and seeks to identify some institutional aspects that an efficient future institutional framework for the governance of AI should display. To further concretize these aspects, Section 6 compares different developments in the fields of technology, language, and law to extract some common challenges that may pave the way for an improved understanding of the workings of the human brain and lay the foundations for future institutional reforms. Inspired by Albert Einstein’s description of the world as a “four-dimensional space-time continuum,” Section 7 attempts to imagine and describe a four-dimensional mode of human thinking to transcend the present perception of three-dimensional space based on a greater unity of the senses (synesthesia) and a more flexible logic.

2 AI as a Global Issue

Before examining the creation of an adequate global institutional framework for AI governance, it must be established whether AI can be regarded as a global issue itself. Realistically, it can be expected that the next years will continue to be dominated by AI regulation at the national level to tackle the challenges arising from the rapid development of AI and related technologies. Yet, there is strong evidence that AI constitutes an issue of global concern and therefore should be included in global regulatory efforts or subject to some degree of global harmonization of laws under the aegis of an adequate global institutional framework.

A first indicator of the global nature of AI is found in the 2019 Organisation for Economic Co-operation and Development (OECD) Recommendation on AI, which recognizes that “AI has pervasive, far-reaching and global implications that are transforming societies, economic sectors and the world of work, and are likely to increasingly do so in the future [emphasis added].”[20] The same quality of AI also surfaces in the repeated qualification of AI (and other disruptive technologies)[21] as an oxymoron,[22] that is, a “figure of speech in which apparently contradictory terms appear in conjunction.”[23] The Recommendation also highlights that AI can contribute to “positive sustainable global economic activity, to increase innovation and productivity, and to help respond to key global challenges.”[24]

The fact that the 193 members of UNESCO adopted the Recommendation on the Ethics of AI is itself a strong argument for the global nature of AI, as it reflects a growing global consensus on the need to regulate AI and to eventually create a multilateral standard-setting instrument for AI. At the same time, the recommendation explicitly expresses the goal “to bring a globally accepted normative instrument.”[25] AI can indeed be considered a “global technology” because “many of the challenges and opportunities of AI will be global in nature.”[26] From a combined ethical and technological perspective, AI and digital technologies are also deemed to be “global” because they not only “make us more connected and smart but also more homogeneous, predictable and ultimately controllable.”[27] Seen from an economic perspective, the digital economy has equally been considered to be a global phenomenon, which is why it was argued that “an international agenda is needed to harness the full benefits of expanded competition.”[28] Similarly, the regulation of AI and responsibility for ensuring that law keeps up with AI have been called a “global concern.”[29] Based on the adage “ubi societas, ibi ius” (where there is a society, there is law), it could also be argued that law, by definition, marks a global concern.[30]

Various other recent national initiatives in the field of AI have called for greater global cooperation, such as the AI Safety Summit hosted by the UK Government at Bletchley Park in November 2023 or the PRC President’s call for a “Global AI Governance Initiative,” which was launched at the Belt and Road Forum marking the 10-year anniversary of the Belt and Road Initiative.[31] Hence, even legislators at the national level show awareness of the global or cross-boundary nature of cyberspace. With the emergence of cyberspace, the application of sovereignty to the regulation of digital technologies was aptly qualified as an oxymoron[32] because of its contradiction with a state-centered and territorial understanding of sovereignty and regulation.[33]

As is well known from concerns about cybersecurity, it was predicted that AI systems capable of causing harm would not be confined to one jurisdiction but would actually be impossible to link to a specific jurisdiction at all, which is why the creation of a hypothetical International Artificial Intelligence Agency modeled after the International Atomic Energy Agency was proposed.[34] It was also proposed that a future AI organization should be established as “a UN specialised agency (such as the World Health Organisation), a related organization to the UN (such as the World Trade Organisation), or a subsidiary body to the General Assembly (such as the UN Environment Programme).”[35]

Since the global legal system has moved toward the blurring of the lines between public and private international law, it would be better to apply the term “transnational” or even “global” to any such future agency. Based on the finding that “global problems require global solutions,” an international (and not global) AI regulatory agency was proposed to “create a unified framework for the regulation of AI technologies and inform the development of AI policies around the world.”[36]

David Held identified as a paradox of our times that the most urgent contemporary problems are of a global and collective nature, such as climate change, the pandemic, or even AI, but unfortunately, “the means for addressing these are national and local, weak and incomplete.”[37] From another perspective, it is possible to conceive of AI as not only a global issue but as a human-made artefact for the precise purpose of making humans realize the global dimension of humanity. Either way, there is strong evidence that AI and related policy areas will require a more coherent global coordination based on a more efficient global institutional framework of governance in the future. Pondering the sketches of a future institutional framework, it is deemed useful to assess the adequacy of the present international system of governance to tackle the challenges and problems related to AI.

3 The Present Institutional “System” of International Governance

If any international treaty related to AI were to be adopted right now without additional plans to create an adequate global institutional framework of AI governance, it would have to contend with the system of international law and international organizations currently in place. The same applies to plans to create a specialized international agency for AI, which would have to find its place in the present international system amidst a number of fragmented international organizations. In the description of the present system of international governance, the term global has been deliberately avoided because of its widely fragmented structure. The fragmentation of international law has been explained by the notion of “‘functional differentiation’, the increasing specialization of parts of society and the related autonomization of those parts.”[38] Historically, this fragmentation has developed from the conceptual separation between public and private law and between national and international law.[39] Most importantly, the present international legal system of governance was fragmented at its birth due to an unplanned split into the governance of economic affairs by the General Agreement on Tariffs and Trade (GATT) and the governance of the remaining public policy areas by the UNO. This split was caused by the failure of the plan to create an International Trade Organization as a UN specialized agency as laid down in the Havana Charter.[40]

The failure to establish a coherent framework for both the UN and the GATT still persists, given that the World Trade Organization (WTO) was established in 1995 as a sui generis organization outside the UN system.[41] This split between the WTO and the UN continues to have harmful effects on the overall coherence and consistency between economic and all other areas of public policy, as is reflected in the so-called “trade linkage debate.”[42] This debate is made up of an infinite number of “trade and …” problems, such as “trade and public health,” “trade and environment,” or “trade and culture,” which seek to reconcile so-called “trade” with “non-trade issues,” even though this terminological distinction is artificial as trade is factually related to most other aspects of life and policymaking.[43] Furthermore, the lack of institutional coherence and the ensuing inability to reconcile trade with other public policy objectives may be a cause of various anti-globalization sentiments and should be further investigated.[44]

The substantive linkages between trade and other policy areas are also mirrored in institutional questions in which issues of linkage play an important role despite the fragmented responses. Relying on the current system of international organizations, this means that “the scope of our institutional choice – of our available responses to international problems – will be constrained,” which is why it is useful to exercise “institutional imagination” with a view to exploring the creation of other “institutional devices.”[45]

On a deeper level, the current fragmentation and lack of coherence in the international governance system is perhaps the result of a common or possibly universal cognitive trait of human thinking. This universal trait is dualism, which is understood as “a philosophical system or set of beliefs in which existence is believed to consist of two equally real and essential substances (such as mind and matter) and/or categories (such as being and nonbeing, good and bad, subject and object).”[46] An opposite mode of thinking was presented by Heraclitus in the form of the identity of opposites, according to which “the most beautiful harmony is born out of opposites.”[47] Whether universal or not, dualistic thinking has gained wide acceptance and traditionally holds a strong place in legal thinking, as is notably proven by the frequent usage of dichotomies.[48] Dichotomies rest on an erroneous presumption that there are only two possibilities, which creates a false dichotomy in which “we forget the middle and think in extremes, missing important alternatives in the process.”[49] Similarly, dualistic thinking “trades accuracy for simplicity.”[50] To exemplify the problem of dualistic thinking, the governance of trade issues is distinguished from the so-called “non-trade issues,” even though most policy areas are at least related to trade questions. Underlying these misperceptions is the understanding that issues related to international trade liberalization are incompatible with non-trade goals, such as the protection of cultural diversity. However, a closer look at the history of trade as well as the economic foundations of trade liberalization in the theory of comparative advantage reveals that – paradoxically – the variety of living conditions is not only a given fact but also serves as the “spring of commerce.”[51] In law, too, diversity poses no obstacle and is instead not only “compatible with all major legal traditions” but also provides an important means to guarantee the efficiency, legitimacy, and sustainability of law itself.[52] This is the reason any future law and, for that matter, global institutional framework must “develop instruments that adapt to a concrete and current plurality.”[53]

Already, the use of the term “system” to describe the present conditions at the international level can be contested. For instance, the present status quo has instead been called one of “international disorder.”[54] It is also appropriate to refer to the present conditions, which have been largely unaltered since the establishment of the UN system in 1945, as “systemic chaos,” to use another oxymoron. The term systemic chaos has been defined as “a situation of total and apparently irremediable lack of organization.”[55] Such lack of organization can be seen at all levels of global governance. Hence, it was also found to characterize the UN system, the management of which was also called an oxymoron, given that the UN system was described as being “highly politicized, led indifferently, and managed poorly” and that it has since its creation primarily “expanded but not adapted.”[56] However, the WTO, or the system of global trade governance, does not fare any better: it now faces a serious paralysis of its dispute settlement system and even the prospect of its extinction in a post-WTO world order.[57]

At the beginning of the 21st century, both the WTO and the UN system saw urgently needed proposals for institutional reform.[58] Sadly, these reforms were not only proposed separately but also without mutual consideration of each other. As a result, they shared only a common outcome: their failure and a missed opportunity.[59] The failure of the present international system of governance to deliver is evident on all fronts and in all policy areas.[60] It fails to reconcile trade liberalization with the goals of public health, cultural diversity, social standards, and human rights. It also fails to provide international peace and security, as notably shown by the recent outbreak or intensification of conflicts. Regarded jointly, the failure also manifests itself in the inability to realize the goals formulated in the SDGs, which has also been linked to institutional problems.[61] The cause of the failure is also a conceptual flaw, as there is really only one sustainable development goal, namely to achieve all 17 goals without leaving even one behind. A similar challenge exists with regard to the realization of all international human rights based on the idea of their indivisibility.[62] Ultimately, a similar paradox underlies the current need to regulate AI, which is based on the observation that all humans will be equally affected by AI, albeit in different ways.

Most drastically perhaps, the current global governance problem is visible in the paradox of the Anthropocene. This paradox consists in characterizing the current era by the fact that the human impact on the Earth system has become a recognizable force, possibly even overwhelming the great forces of nature.[63] The paradox lies in the contradiction that the Anthropocene puts humans in control of the planet, while humans still seem largely unable to control the unintended consequences of their actions. As a result of the paradox remaining unsolved, the late modern lifeworld is “becoming increasingly uncontrollable, unpredictable, and uncertain.”[64] In other words, it is a control paradox, whereby humans – to express it oxymoronically – seem both “in control” and “not in control” at the same time.[65] It can also be rephrased as a question: if the most urgent global problems are created by humans, why can humans not solve them?

Perhaps an answer to this paradoxical question lies in another paradox, namely that human progress in physical knowledge is voided by a deficit in social knowledge or a lack of understanding of human relations. This deficit also translates into inadequate institutions governing human relations, which has the consequence that the surplus created in wealth is “virtually canceled by the costs of armaments and war.”[66] In other words, the major problem in both the solution of the principal global problems and the reform of the global institutional framework is that humans, through their respective governments, seem unable to agree. In times of societal polarization (perceived as a major global threat)[67] and deglobalization as well as decoupling,[68] reaching a global consensus appears to have become even harder. The lack of global consensus leads to another sad paradox: the present international institutional system, born from the devastating shambles left by the scourges of two world wars, requires a major reform in order to prevent future crises and to keep history from repeating itself; however, such reform may only become possible following another major cataclysm, as “only a World War III might provide enough shock, awe, and vision to equip the UN for the future.”[69] For the question about the future institutional governance of AI, it is therefore also important to ask why the UN and the current institutional system have proved unable to reform themselves and adapt to current global conditions.

4 The Inertia of Institutional Change at the Global Level

The international institutional framework established in 1945 is essentially unchanged. The creation of the WTO widened its scope compared to the GATT 1947 but without changing the institutional rift between the UN and the WTO. It is true that dynamic changes have taken place at the regional level, contributing to global governance debates, such as the creation of the OECD, the G20, the BRICS (Brazil, Russia, India, China, and South Africa), or the Shanghai Cooperation Organization (SCO) as well as a large number of regional or mega-regional trade agreements.[70] The processes of the proliferation of international organizations and increasing codification of international law have also been observed.[71] Often, these changes are merely quantitative in nature and have done little to strengthen the multilateral system and global institutional framework needed to ensure the greater legal certainty and predictability warranted by a global rule of law. These quantitative changes have instead undermined the already fragile unity of the present international legal system by further fragmenting it in terms of jurisdiction, interpretation, regulation, and normativity.[72]

Additionally, a growing number of actors in the international legal arena operate without due coordination by a multilateral institutional framework, which also inevitably increases the probability “of the occurrence of dilemmatic normative conflicts” between laws or treaties as well as regime collisions.[73] At the same time, the absence of such a coherent institutional framework or global constitutional framework does little for the qualitative aspects of the enforcement of international laws.[74] The enforcement of international law remains weak or has even been further weakened given the decay of the existing system and the growing disregard for international law.[75] The latter trend is exemplified by the continuing paralysis of the WTO’s dispute settlement system. This example also shows that the reform of global institutions takes more than merely discontent and criticism; a viable solution to the problem must also be sought that is based on a wider consensus of all interested parties.

Overall, the problem with a proliferation of plurilateral agreements and a simultaneous fragmentation of regional as well as international instruments and organizations is that they do not allow for the crystallization of the global consensus needed for a coherent pursuit of the goals agreed upon as well as their multilateral enforcement. To cut a long story short, the present era is one in which the vision of global justice remains an expression of wishful thinking and an oxymoron.[76] For AI governance, this means that unless the present international legal system is reformed, every proposal for the future governance of AI is likely to be doomed to fail. It means that no matter whether a specialized agency for AI is created or AI is integrated into all the existing international organizations, the outcomes will be suboptimal. Such a bleak and dreary prospect warrants a closer inquiry into the theories on the causes of institutional change.

On a general level, it has been argued that “institutions change when actors act as if they have the right, power, conditions, opportunities, and resources to change those institutions.”[77] For this scenario, another paradox must be overcome, namely the absence of a global governance platform. This paradox is like the chicken-and-egg paradox in that it seems impossible to establish a global governance platform without already having one to successfully deliberate on the features of such a platform. This paradox provides another example of the need to imagine new modes of cognition, modes which also relate to the perception of time in four-dimensional thinking (4D thinking), which will be explained in Section 7.

There exist numerous other exogenous explanations for institutional change, ranging from slow to fast or continuous to abrupt changes brought about by war or by peaceful means.[78] Unfortunately, most of these explanations seem unable to explain or overcome the current global inertia vis-à-vis a fundamental reform of the present international institutional framework. Opposite explanations also exist, which seek to explain institutional change based on endogenous causes, such as cognitive change as the primary precondition for institutional reform.[79] This was also aptly shown in the context of attempts to theorize a global legal order of the future in which cognitive elements in the form of a “common language” are presented as a way to overcome the present international discord that characterizes both “the different legal provisions and perspectives found within the societies of the world.”[80]

Most likely, the best way to overcome the inertia of institutional change is through a combination of both exogenous and endogenous factors of change. This means regarding them not as two different and opposite concepts, but rather as two sides of the same scale. This, in turn, requires the adoption of novel conceptual models different from those exclusively based on the concepts of a linear relationship, classical logic, and dualistic thinking. Such a conception also requires a different cognitive framework, one that allows for a wider range of considerations and of possible dependencies between different factors. More concretely, it means seeking a possible complementarity between antagonistic concepts and seeing them in a holistic way rather than as isolated phenomena. For institutional change, it means taking endogenous factors seriously enough to act before exogenous factors create fatal conditions that force us to reform an institutional framework that has proved no longer capable of providing the stability needed for a continuous and sustainable governance of global affairs.

However, reading the endogenous factors correctly proves difficult. The reason is that it is always complicated to conceptualize a future phenomenon before it has materialized. For example, it is hard to conceive of a process to explain a new technology like an Internet browser to a person who has never seen a computer. Yet, forecasting is supposed to be easier than is generally anticipated.[81] In this regard, imagination expressed through, for example, science fiction and other oxymora or paradoxes, can provide useful insights for law and how to actively design the future by regulating it.[82] In sum, oxymora and paradoxes have been termed the “language of the future” because they can help to conceive of a reality that has not yet been perceived.[83]

This process of creative imagination of the future in line with certain policy goals usually starts from the same level from which the existing obstacles to their realization are deemed to derive, but later adds a new dimension to it. In short, it paradoxically combines analogy with novelty, or a theoretical step with an empirical step. Such creative imaginative effort has been missing in the past decades and centuries, as global governance has amounted only to the projection of local or national models onto the regional and global plane, just as modern international law is a mere geographic extension of the model of the Italian city-states. Future reform efforts must leave behind old traditional models of both local and global governance and, notably, an exclusively state-centric conception of international law.[84]

Instead of embracing novel ideas and concepts, for instance, in the form of oxymoronic concepts such as glocalization,[85] the old dominant ideological divisions between competing concepts that are usually framed as false dichotomies still largely persist. From a different perspective, it can be argued that long obsolete ideological divisions are maintained without considering – let alone discussing – entirely new forms of governance. It must be admitted that even though emperors and empresses or kings and queens have occasionally been replaced by presidents or prime ministers, very little in terms of institutional governance has principally, substantially, and structurally changed in the way societies have been administered and organized in political and legal terms.

The disruptive technologies rapidly evolving around the oxymoronic notion of AI provide a unique opportunity to search for novel ways to best reform the present international institutional system (or systemic chaos) with a view to not merely theorizing but actually establishing a proper global legal order supported by an efficient and consistent institutional framework.

5 Institutional Aspects of AI Regulation

The problem with the proposal to add a new agency for AI to the present international system is that it will not change much, particularly in terms of existing levels of fragmentation and lack of coherence or the unnecessary duplication of activities or existing organizations.[86] On the contrary, it risks adding to the existing problems of fragmentation because the system that was already in place was unable to successfully tackle the problem prior to the emergence of AI. Therefore, the proposal for the creation of an International Artificial Intelligence Agency modeled after the International Atomic Energy Agency lacks creativity and novelty because it would be designed as a public international organization that would largely exclude the direct involvement of private actors. The Internet Corporation for Assigned Names and Numbers (ICANN) appears to at least add some novel elements to the search for optimal institutional models.[87] This is because ICANN’s mode of policymaking was described as a decentralized “multistakeholder model” that “places individuals, industry, non-commercial interests and government on an equal level.”[88] The need to include a more diverse group of stakeholders from the public sector, industry, and academic organizations was precisely the idea of an earlier proposal for the creation of an International Artificial Intelligence Organization in order to “support policymakers in the overwhelming and crucially important task of regulating this novel, immensely complex, and largely uncharted area.”[89]

Another important aspect relates to the question of the architecture of a future global institutional framework. It concerns the choice of whether AI laws and policies will be placed under the administration of a single (national or global) institution in a centralized or in a decentralized way. Both models seem to offer advantages and disadvantages, such as more or less fragmentation, lower or higher costs, or faster or slower speed of creation and implementation of laws.[90] Ultimately, for each option to succeed, the outcome will also depend on the particular design of either a decentralized or centralized institutional system of AI governance. Consequently, it is possible to design and run an efficient centralized and inefficient decentralized institutional system and vice versa.

As previously mentioned, what often decides between the success and failure of any such choice is a strictly dualistic conception that creates false dichotomies, which simplify at the cost of accuracy. In concrete terms, this means that there is no simple binary choice between a centralized and a decentralized system. Instead, the outcome depends on a wider range of factors, such as what goals are pursued, what resources are made available, and what powers are granted. Following the idea expressed by the metaphor of “Laplace’s demon,” the more reliable the information used for the design of an institution, the better and more efficient it will be.[91] The same metaphor can be used for decision-making processes in institutions after they have been established.

The difficulty inherent in dualistic judgments can be seen in the evaluation of the institutional setting of the PRC in relation to AI governance. While some commentators see its unique, largely centralized, and integrated politico-legal system whereby the “government simultaneously assumes multiple roles in the AI ecosystem as a policymaker, an investor, a supplier, a customer, and a regulator” as an advantage, others disagree.[92] It thus often depends on the eye of the beholder as expressed in the glass “half-full or half-empty” metaphor and its underlying paradoxes.[93]

Distinct from that view would be a non-binary perspective based on fuzzy (or polyvalent) logic, which differs from traditional logic in which each fact or proposition must be either true or false. Fuzzy logic has been defined as “a concept evolving from computer science that attempts to deal with ‘degrees of truth’ rather than a binary ‘true or false’ logic.”[94] The same concept was thought to be mostly alien to “Western conceptions of legal jurisprudence” but rather to form the underlying idea of the PRC’s approach to the governance of cyber security and data laws allowing “regulators to subsequently interpret key terms regarding data in that law in a fluid and flexible fashion to benefit Chinese innovation.”[95]
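
To make the contrast between binary and polyvalent evaluation more tangible, the following minimal Python sketch assigns a statement a degree of truth between 0 and 1, combines such degrees with standard fuzzy operators, and then collapses the result into a single binary verdict. The example claims, scores, and the 0.5 threshold are purely hypothetical illustrations and are not drawn from any of the laws or rulings discussed in this article.

```python
# A minimal sketch of "degrees of truth" (fuzzy or polyvalent logic) versus
# classical binary logic. All claims, scores, and thresholds below are
# hypothetical illustrations, not taken from any law or ruling cited here.

def binary(value: float) -> bool:
    """Classical two-valued logic: collapse every degree into true or false."""
    return value >= 0.5

def fuzzy_not(a: float) -> float:
    """Standard fuzzy negation: NOT a = 1 - a."""
    return 1.0 - a

def fuzzy_and(a: float, b: float) -> float:
    """Standard fuzzy conjunction: the minimum of the two degrees."""
    return min(a, b)

if __name__ == "__main__":
    # Hypothetical degrees of truth for two component claims of a statement.
    claim_accurate = 0.7   # "the statement is factually accurate"
    claim_complete = 0.4   # "the statement tells the whole story"

    truthful = fuzzy_and(claim_accurate, claim_complete)
    print(f"degree of truthfulness: {truthful:.1f}")             # 0.4
    print(f"degree of falsity:      {fuzzy_not(truthful):.1f}")  # 0.6
    print(f"binary verdict:         {binary(truthful)}")         # False
```

Seen this way, a finding of “degrees of falsity” corresponds to reading off the intermediate value rather than forcing a binary verdict, whereas the classical approach discards precisely that nuance.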

The United States initially took a different approach to AI regulation. At first, the US left this rapidly evolving area unregulated. Emphasis was put on the development of AI instead of its regulation, as is visible from the 2019 Presidential Executive Order on “Maintaining American Leadership in Artificial Intelligence.”[96] Gradually, concerns also grew in the US that, if left unregulated, AI had the potential “to be dangerous to public safety and equality.”[97] The need to regulate AI was thus slowly recognized: “[that] AI regulation is done correctly is incredibly important, and the first step toward doing regulation right is doing regulation at all.”[98] As the United States’ most recent step, President Biden adopted the Executive Order on “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.”[99] This latest document recognizes that for AI to be safe and secure, it requires “robust, reliable, repeatable, and standardized evaluations of AI systems, as well as policies, institutions, and, as appropriate, other mechanisms [emphasis added].”[100] In terms of institutional involvement, the Executive Order contains no plan to establish a single competent authority: it first names the Secretary of Commerce, who nonetheless acts through the Director of the National Institute of Standards and Technology (NIST) and in coordination with the Secretary of Energy, the Secretary of Homeland Security, and the heads of other relevant agencies as the Secretary of Commerce may deem appropriate.[101] The order thus leaves the exact institutional involvement to the discretion of the Secretary of Commerce. Underscoring the cross-cutting nature of AI, the Secretary of the Treasury is also involved for AI-specific cybersecurity risks affecting financial institutions.[102] The order also highlights the role of research institutions and other nongovernmental stakeholders in the process of AI governance. Overall, it relies on the existing politico-legal system to tackle the challenges posed by AI governance.

In turn, the European Union (EU) has opted for a more comprehensive regulatory approach via the AI Act as well as a large number of related instruments, such as the Digital Services Act, the Digital Markets Act, the Data Governance Act, or the Civil Liability Directive, many of which are still in the making. From a governance perspective, the initial proposal for an AI Act only foresaw the creation of a European AI Board to assist the Commission in a variety of tasks, such as the contribution “to the effective cooperation of the national supervisory authorities.”[103] The more concrete elaboration of a single governance or institutional support system was left to a recent decision by which an EU AI Office was to be established “within the Commission as part of the administrative structure of the Directorate-General for Communication Networks, Content and Technology.”[104]

This means that the Commission opted for the use of existing structures to govern AI issues in the future, without affecting “the powers and competences of national competent authorities, and bodies, offices and agencies of the Union in the supervision of AI systems.”[105] Its central tasks are further clarified to include a contribution to “the strategic, coherent and effective Union approach to international initiatives on AI,” to “fostering actions and policies in the Commission that reap the societal and economic benefits of AI technologies,” to “support the accelerated development, roll-out and use of trustworthy AI systems and applications that bring societal and economic benefits and that contribute to the competitiveness and the economic growth of the Union,” and to “monitor the evolution of AI markets and technologies.”[106] Overall, the EU pursues a bifurcated approach that aims to pursue “the twin objective of promoting the uptake of AI and of addressing the risks associated with certain uses of such technology.”[107]

In accordance with dualistic thinking, this kind of approach assumes that AI can be framed as either good or bad. But there are doubts as to whether technology can generally be framed in such a way. Under the notion of value neutrality, technology is posited as “morally and politically neutral, neither good nor bad.”[108] According to Kranzberg’s first law of technology, however, technology is considered to be “neither good nor bad; nor is it neutral.”[109] This distinction provides a good example of the underlying logic used to interpret language and legal terms, especially those that are qualified as oxymora and paradoxes. In light of the complexity and oxymoronic nature of AI and related technologies, it might become necessary to apply a new (e.g., polyvalent) logic.[110] This need can be exemplified by the case of “fake news,” another alleged oxymoron.[111] In a ruling on an alleged case of “fake news,” the court held that “[I]n some cases whether a statement is true or false may be simply a matter of binary choice: it is either one or the other. In other cases, there may be degrees of falsity.”[112]

This finding indicates the occasional need to expand the rules of logic or – for the sake of greater legal certainty – to formulate rules, similar to conflict of law rules, that clarify what kind of logic shall be applied to the interpretation and implementation of substantive laws governing AI.[113] Most of all, it will be necessary to broaden the horizon by way of a cognitive change, that is, a change brought about by an increasing realization of the oxymoronic or “paradoxical nature of nature” (possibly caused by the paradoxical quality of the human brain).[114]

At this point, it is hard to conceive of a new logic or new paradigm, just as it is hard to conceive of any new phenomenon before it has materialized (or has otherwise been accepted by a critical mass).[115] Regarding the future of dualistic thinking and the plethora of dichotomies, it is equally hard to imagine a perfect equilibrium between apparent opposites, such as those of machines and humans as reflected in the oxymoron of the cyborg or AI.[116] As a matter of fact, it has been said that in biology “only dead organisms are in perfect equilibrium,” whereas healthy living organisms are in “‘dynamic equilibrium’ – a condition of balance that requires constant work and maintenance to remain sufficiently stable and to adapt to the ever-changing environment.”[117] The 21st century has been called the “age of paradox,”[118] characterized by many “perplexing paradoxes”[119] and “essentially oxymoronic concepts.”[120] The term dynamic equilibrium can thus be taken as an adequate “design metaphor” with which to begin the quest for the optimal design of a future dynamic global governance model for AI in connection with the SDGs and other related policies.

6 Technology, Language, and Law: Legal Synesthesia

AI can now be taken to constitute a global issue that requires an adequate global institutional framework for it to be governed efficiently, coherently, and in a future-proof way. It is hard to conceive of such a framework based on a dynamic equilibrium without fully grasping the potential of AI and, notably, the direction of its future innovations. To meet this difficulty, it can help to try to get to the root of the problem, gather what is known so far, and then creatively draw on analogies from past evolutionary cognitive steps. The obvious starting point is the notion of AI itself, which – albeit qualified as both a contested and oxymoronic concept[121] – still offers useful insight on its principal purpose. More concretely, research on AI has been said to be driven by the desire to reproduce the human mind artificially to better understand its foundations and running mechanisms.[122] This is how technology can serve to gain insights into the cognitive processes taking place inside the human brain. The same holds true for language, which is also considered to provide a “window to the mind.”[123] Moreover, technology and language are closely intertwined. Language has been described as a “natural collective technology that evolved primarily to facilitate efficient communication in populations whose social structures were becoming increasingly more complex”;[124] and technology has been compared to “a language, a complex interaction of pragmatic, syntactic and semantic rules of activity.”[125] In particular, digital technology can process language by materializing thoughts as engineering metaphors that determine the development of future technologies.[126] Thus, language can be regarded as a technology and technology as language, and their close link is presently well reflected in the disruptive technology of large language models (LLMs).

In the link between technology and language, an important role is played by the human senses, which bridge the outer world of perception with the inner world of the reception of information. The senses help to make sense and, internally, the human mind must reach a “consensus” from among potentially conflicting information gathered by the different individual senses, as exemplified by the McGurk effect, which describes the resolution of conflicting information received from the eyes and ears.[127] Such conflicts among the different senses, both known and still undiscovered, occur among all of them and have a very practical relevance for the future enforcement of the AI Act’s prohibition of subliminal AI systems.[128] It is also worth considering the current trend for AI to converge with technologies of augmented and virtual reality.[129] This trend can already be observed in the recent shift from LLMs to multimodal LLMs, which can process equally well not only text but also multiple other types of data, such as “images, text, language, audio, and other heterogeneity,”[130] that correspond to the multisensory nature of human perception. These multimodal LLMs indirectly confirm the assumption made by Laplace’s demon by showing “superior performance in common-sense reasoning compared to single-modality models, highlighting the benefits of cross-modal transfer for knowledge acquisition.”[131] In other words, the artificial reproduction of the human mind also proceeds in parallel with the artificial replication of the human sense organs in the form of e-skins, e-noses, or e-tongues.[132]

With regard to the interplay among the senses, it is worth noting that the fragmentation between the individual senses largely results from their historical study in isolation.[133] This itself may be caused by the dominance of dualistic thinking but stands in stark contrast to the fact that the human perception of the world is a multisensory one, as the different sensations (through the individual senses) are somehow brought together into a unified experience.[134] This fact is scientifically best exemplified by the condition of synesthesia, which literally means “to sense together” but stands for a condition “in which stimulation of one sense generates a simultaneous sensation in another.”[135] However, synesthesia not only describes a physical condition but, as a metaphor, also connects law and language through the concepts of legal semiotics and legal synesthesia.[136] The relevance of synesthesia for law has already been mentioned with regard to subliminal AI systems. It may also have a wider relevance for global law and legal order in the future. A lack of coordination among the individual senses may also explain a lack of policy coherence and the present fragmentation of international law and its specialized organizations. By analogy, legal synesthesia – in attempting to replicate a greater union between different channels of information – can also provide a technique for the defragmentation of international law.[137] It simply assumes that a change in perception will also trigger changes in the interpretation of not only legal texts but also all information received through a combination of different channels.[138]

Thus, this analogy simply follows the assumption made by Laplace by virtue of his hypothetical “demon” that the more complete the information, the better the interpretation of the data received and the more adequate will be the ensuing decision(s). It is no coincidence then that the discourse about AI frequently uses different metaphors of magic.[139] It often addresses the desire for AI to be magical and omniscient in phrases like “digital voodoo,”[140] “digital crystal ball,” or even “Google as God.”[141]

If language, through metaphors or other rhetorical figures of speech such as paradox and oxymoron, influences and eventually determines the actual shape or features of various technologies, the same should be true for the discourse about the reform of international institutions. Put simply, “the way we talk about things is the way they will take shape.” By calling it “AI” and “machine learning,” we have given digital technologies human-like attributes, given that “intelligence” was and continues to be considered an exclusive human privilege in spite of the legitimate questions raised by the Fermi paradox.[142] In that sense, AI is a misnomer because it is neither equal nor superior to humans.[143] What remains to be asked now is what it means, generally and more specifically, for the debate about a future global institutional framework that we have qualified AI (and a large number of other phenomena)[144] as “oxymora” or contradictions in terms. It should mean that it will be necessary to embrace contradictions and perhaps adopt a fuzzy or polyvalent logic based on a synesthetic mode of perception. Based on this linguistic trend, it is useful to attempt to conceive of how such a future global institutional framework could or would have to look.

7 Toward a Synesthetic Four-Dimensional Perspective on Global Governance

Before trying to conceive of a future global institutional framework, it may be helpful to briefly recapitulate what is known so far: First, AI marks a complex, all-pervasive or cross-cutting, cross-boundary, cross-cultural, and dynamically evolving phenomenon, which has repeatedly been interpreted as an oxymoron. In the same order, these features would require a non-binary or fuzzy, coherent, consistent, pluralist, and interdisciplinary, as well as global or multilateral framework that allows for a continuous but stable monitoring process based on a global rule of law. Most of all, it must not replicate current levels of fragmentation and must avoid the unnecessary duplication of policies to ensure the realization of the goals that have been formulated. This latter point, of course, presupposes that the UN Common Agenda and the successor of the SDGs yield such clear objectives.

Although these challenges were known long ago, for instance, in the context of the “trade linkage debate,” the reform of international institutions proved to be very difficult or nearly impossible. Two apparently competing options were offered in order to overcome the inertia in the design of a future global institutional framework, namely to either wait for the advent of another major global cataclysm or to prevent the former by accelerating the cognitive change that triggers institutional change.[145] In preference for the latter, an attempt has been made to visualize the cognitive changes, notably in perception, that may assist in the emergence of a more coherent global institutional framework.

To begin to imagine such an order, synesthetic perception as well as paradoxes and oxymora were suggested above as means to facilitate a new cognition and understanding of reality. Such new cognition can be exemplified by a simple experiment mentioned in the book “The Paradox Process,” in which the following task is given:

Put six wooden matches on the table or desk before you, as shown on the next page. Make an equilateral triangle from three of them. Use the remaining three matches to complete three more triangles, for a total of four. Each side of each triangle is to be the full length of a match.[146]

Basically, the task consists in creating four equilateral triangles from just six matches, as depicted in Figure 1. While the solution of the experiment is provided in the annex, the point that is made here is that the solution of any problem involves at least two steps: First, to start from the problem itself and, second, to add a transcendent element or “new dimension” to the problem. For an example from technology, it is noteworthy that “the same technology used to ‘poison’ or attack a system (e.g., adversarial attacks) can then be adapted for use as a protection against the threat (e.g., adversarial machine learning) and so on and so forth.”[147] Generally, the new dimension consists of a hitherto unknown or unrecognized aspect, which can also be interpreted to mean more or better information. In this regard, the term is chosen for a reason, namely because Albert Einstein had used it for his scientific explanation of reality as “a four-dimensional space-time continuum.”[148] This also matches the explanation of human evolution, which is also said to proceed along four dimensions and not merely to follow the nature versus nurture dichotomy.[149] Similarly, the progressive shift from two dimensions, or 2D, to 3D, 4D, and even 5D perspectives in cinema, converging in a future immersive technology, is well documented.[150] It is difficult to define in general terms what exactly is meant by invoking four dimensions. Going back in time, one can find a reference to the fourth dimension that appeals to human perception as follows:

Figure 1: Six matches.

Is it not more than possible, is it not more than probable, that there is a Fourth Dimension to which our eyes have not been opened, and that our so-called dead are living in this world, and that through our own development communication with them will come; that this new world is all around and about us, and is a world of an infinite variety of color and sound; that it is nature’s great vacation ground; that we enter it at so-called death; that in reality there is neither birth nor death, but that dying is but the passing into a larger life, and birth and effort to express externally some of the wonder and glory of that which we now call the Undiscovered Country; that in this Fourth Dimension all other dimensions exist but varying in degree and not in kind; that length, breadth, and thickness exist as much as they do in our three dimensional worlds, but that we are not able to see them or know them because the rate of vibration is so high that the physical eye and ear are incapable of seeing and hearing; that it is only when both are disclosed to the outer eye that we apprehend what may be termed an inner vision, and an inner hearing?[151]

Undoubtedly, this paragraph carries many – and fundamental – questions. It refers to the fourth dimension as a metaphor for future developments and paradigm shifts in science and human evolution that, paradoxically, will both drastically alter our perception so as to broaden our understanding of reality and alter the cognitive means of perceiving reality. It also implies a transcendence of dualistic thinking expressed in the form of dichotomies and often prompted by paradoxes offering “powerful opportunities to test models and conceptual frameworks, and to enable true ‘paradigm shifts’ in certain areas of scientific inquiry.”[152] In this sense, the notion of the “fourth dimension” also relates to oxymora and paradoxes as the language of the future, which allows a future reality to be debated before that reality has materialized and been experienced. It therefore resembles expressions such as “intuitive thinking,” “oxymoronic thinking,” or “thinking outside the box,” concepts that describe cognitive efforts to bring about an enhanced understanding of reality.[153] In line with magical metaphors of technology, it also relates to “magical thinking” as a way to think about problems for which “the more traditional institutions often cannot provide satisfactory solutions.”[154]

The etymology of “dimension” refers to “measuring,” and a dimension can also be regarded as a unit by which to measure the (scientific) understanding of any phenomenon, including that of reality. Every subsequent dimension adds a higher level (or greater dataset) of information. This can be well visualized by the depiction of the geometric figure of a cube in different dimensions (Figure 2). Thus, a cube is merely a point in 0D space, a line in 1D space, a plane in 2D space, and only a “true” cube in 3D space. A cube seen from a four-dimensional perspective was called a “tesseract” by Charles Hinton and is depicted in Figure 3.[155]

Figure 2: Hypercube, [CC BY-SA 3.0] https://en.wikipedia.org/wiki/Hypercube.

Figure 3: Tesseract.
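As a purely illustrative aside, not drawn from the sources cited here, the intuition that each added dimension carries more information can be made concrete by counting the constituent parts of the n-dimensional cube, using a standard result of combinatorial geometry:

\[
  E(n,k) = 2^{\,n-k}\binom{n}{k}, \qquad
  \text{so that the tesseract } (n=4) \text{ has } E(4,0)=16 \text{ vertices},\;
  E(4,1)=32 \text{ edges},\; E(4,2)=24 \text{ squares},\; E(4,3)=8 \text{ cubes}.
\]

In this purely geometric sense, each step up in dimension both multiplies the number of vertices and introduces a new kind of constituent part, which is one way of picturing the claim that a higher dimension contains more information about the same object.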

It can be derived from the different dimensions that a lower dimension’s level of information does not allow for an accurate judgment. Inversely, a higher dimension usually contains more accurate information from which to judge a phenomenon. These principles can be visualized by drawing five circles on a two-dimensional plane (see Figure 4). For an entity living only in a two-dimensional reality, it would be very difficult or even impossible to state exactly what they are. From the perspective of a 2D plane, they could be five glasses, five chandeliers, five fingertips, or anything else with a round base. To judge them more accurately, the information provided by a 3D perspective of length, width, and height would be required. The same gain in accuracy can be assumed for judging any phenomenon from a 4D perspective. To this end, however, it would be necessary to know what constitutes the fourth dimension beyond the three spatial criteria of length, width, and height.

Figure 4: Five circles.

Based on Einstein’s description of the world, the fourth dimension would be time. This also matches the conclusions drawn about time as a way to resolve contradictions expressed by way of paradoxes or oxymora.[156] It also corresponds to Oscar Wilde’s remark that “paradox is the way of truth,”[157] or perhaps of at least a 4D truth. A similar hidden hint may be found in Shakespeare’s Hamlet, which states: “This was some time a paradox, but now the time gives it proof.”[158] The same point has been illustrated by the lawyer paradox, in which the judges resolved an apparent logical dilemma in that they “left the question undecided, and deferred the cause to a very distant day.”[159] Last, the theoretical physicist Carlo Rovelli stated that our modern languages, by dividing reality into the dichotomy of past and future, do not allow us to fully grasp the complexity of time.[160] This statement could allow for the opposite conclusion that paradoxes and oxymora – in comparison with dichotomies – could provide a more accurate account of the phenomenon of time. Similarly, Albert Einstein called the distinction between past, present, and future an illusion or, more precisely, “a stubborn one.”[161]

Translated into the legal realm, this could well mean that, in dealing with paradoxical or oxymoronic phenomena, regulatory paradoxes provide the best response in terms of both reconciling the apparent contradictions and future-proofing the laws. For the regulation of technologies and AI, the crucial role of time in finding the best moment of intervention also forms the subject of two paradoxes, namely the Collingridge dilemma and the Amara paradox: the former holds that a technology’s impacts are hard to predict before it is widely deployed yet hard to control once it has become entrenched, while the latter holds that the effects of a technology tend to be overestimated in the short run and underestimated in the long run. Basically, both paradoxes are a reminder that the right temporal moment to intervene is crucial and that intervention should happen neither too early nor too late. Nor should the law merely become relevant ex post (e.g., through tort or fines) or ex ante (e.g., through licensing).[162] Instead, the law should achieve a maximum level of protection at all stages of the life cycle of AI, even before the initiation of the process of developing a technology. This again requires a sound scientific understanding of the phenomenon of the perception of time.

Unfortunately, such an understanding is still not within reach, but it is again an oxymoron that points the way. More precisely, the notion of “space-time” was qualified as an oxymoron “in the sense that it is unusual for a geometrical coordinate system to mix units” given that the “first three units are in units of distance, while the fourth looks as though it is a unit of time.”[163] In this regard, it may be useful to consider time and space as related rather than opposed, as Jean Piaget observed based on children’s perception of time:

Space is a still of time, while time is space in motion – the two taken together constitute the totality of the ordered relationships characterizing objects and their displacements.[164]

The paradox between time and space could thus be dissolved by regarding time as another spatial (but dynamic) extension of 3D space. Time is also crucial to organizing, and the paradoxes of time play an important part in organization.[165] In this regard, it is interesting that, from a technological standpoint, the work on the artificial replication of the human senses via AR and VR proceeds from the simple assumption that time is the fourth dimension.[166] Linguistic experiments seem to confirm this as well, based on the metaphorical relationship between space and time, which “revealed that people are unable to ignore irrelevant spatial information when making judgments about duration, but not the converse.”[167] Accordingly, time would be an extension of space, since people usually talk and think about time in spatial terms (e.g., a short vacation, a long day, or time flying) but not the other way around. Another link between space and time is that the faster we seem to live (e.g., because of enhanced transportation), the less time we seem to have.[168]

To cut a long story short, applying a 4D view to the global governance of AI and other urgent global matters means that the design of a future institutional framework must not only cover the three dimensions of space but also include the fourth dimension of time. In cognitive terms, four-dimensional or 4D thinking means gaining knowledge about the present state of an object in terms of its three-dimensional shape and derived functions, and seeing its shape not only as a whole but also as the sum of all its constituent parts. Such a sense would, like X-ray vision, allow the entire life cycle of the object to be traced back and forth in time, that is, to see its origin and its possible end.

For law, 4D thinking would also bring radical changes. For the present purpose, it means considering not only the establishment of an institution at a given place and time but also its future tasks and its adaptation to future challenges, as captured by the notion of future-proofing in law. This task also requires a different approach to four-dimensional time through increased efforts of foresight, forecasting, and upstream thinking.[169] It is a kind of thinking that will also be based on an extended logic of fuzzy or polyvalent reasoning as well as on novel cognitive functions developed from a greater unity of the senses (or of the respective channels of information).

In institutional terms, applying four-dimensional thinking will require novel and more inclusive structures compared to the usual hierarchical models found at today’s local, national, regional, and global levels. A comparative look at the organizational charts or organigrams of most institutions, for example, the UN system, including its different specialized UN agencies,[170] the WTO,[171] or other governmental organizations, reveals that they are structured in a similarly hierarchical way.[172] Moreover, a comparison of different national or supranational systems, such as those of the US, the EU, or the PRC, reveals that regulation alone is “not capable of supporting sufficient change in the institutional environment” but instead requires a holistic combination of all institutional pillars.[173] Even a political system that has undergone drastic changes over the past centuries will still be based on a similar kind of hierarchical structure, except that the terminology may have changed in line with the transition from monarchies to republics.

It is also interesting that these institutional organigrams have often scarcely changed from their establishment until today, as can be seen from UNESCO’s charts of 1946 and 2018 (Figure 5).[174] If anything, the original organigram of 1946 appears slightly more flexible and dynamic than the latest one of 2018.

Figure 5: UNESCO organigrams 1946 and 2018.

Even if no concrete models of 4D institutional structures exist at this stage, there are multiple ideas that transcend the former hierarchical models. For instance, the concept of “collective (or crowd) intelligence” provides some guidance for novel organizational models. Its history in human societies suggests that it is not only an oxymoron but also one “riven with paradoxes.”[175] Due to this quality, collective intelligence can be applied to all areas of human interaction, whether in business, politics, or law.[176] It was also proposed as a model for the UN.[177] Not surprisingly, but paradoxically, collective intelligence also has a role to play in AI, just as AI has a role to play in the use of collective intelligence.[178]

Overall, the principal challenge is how to best organize decision making based upon an optimal balance between individual and collective interests. This can certainly be better achieved by a less hierarchical and static organizational structure. Inspired by H.G. Wells’ notion of a “world brain,”[179] it is high time to make use of AI for the governance of AI or, better, the governance of global affairs in the age of AI. This involves, first, creative ideas about future organizational structures other than the plethora of hierarchical models in place.

In this respect, the terms “overarchy” and “holarchy” provide useful points of departure for a scale that ranges from the total separation of different agencies or entities, via their interaction and overlap, to their optimal connection through complete integration.[180] So-called “Borromean rings,” described as an arrangement of three circles in which “no two elements interlock, but all three do interlock,” can provide symbolic models that help to visualize a dynamic change from the total isolation of two or more entities (or organizations) toward their complete integration, in line with the different states from zero- to four-dimensional perspectives (Figure 6).[181]

Figure 6: Borromean rings in 4D.

The concept of holarchy was derived by Arthur Koestler from “holon,” an oxymoron deliberately coined by blending the Greek holos – whole – with the suffix -on (as in neutron or proton), and was defined as follows:

The concept of the holon is meant to supply the missing link between atomism and holism, and to supplant the dualistic way of thinking in terms of “parts” and “wholes,” which is so deeply engrained in our mental habits, by a multi-levelled, stratified approach.[182]

As an oxymoron bridging the biggest and smallest entities in the universe, it is also a fitting theoretical concept for organizing the complex and dynamic relationships between individuals and their sum as a collective whole. The structure of a holarchy as opposed to a hierarchy can also be illustrated by the mutual relation between the human senses. Historically, the individual senses were largely studied and portrayed in isolation from each other. From the perspective of synesthesia as a multisensory mode of the perception of reality, the senses ought to be structured more like a holarchy than a hierarchy, as shown in Figure 7.

Figure 7: Hierarchy versus holarchy of the senses.

Compared to a hierarchy, a holarchy appears better able to coordinate different channels of information and thereby to contribute to greater coherence in decision making. Additionally, holarchic structures should be combined with decentralized polycentric models to better reflect the interaction between the world’s diverse legal systems or jurisdictions, which are closely intertwined. Such holarchic polycentric structures also appear to correspond to the structures characterizing the human brain, as beautifully illustrated by Santiago Ramón y Cajal (Figure 8).[183] Therefore, as a first and realistic step, the existing system of international governance must make use of the various possibilities offered by electronic governance in combination with networks and the Internet. This can take the form of a virtual Global Secretariat interlinking different jurisdictions and consisting of a ubiquitously accessible database of laws and policies, as has been described in a dystopian scenario of future global governance.[184] It can and should be combined with the different electronic file systems used by governments, which – unlike the traditional paper file – have the advantage that virtual branches can be created ex ante that simultaneously connect different administrative units, departments, ministries, or international organizations, allowing them to enhance the coherence of their decisions.[185] As a matter of fact, AI is increasingly being used by national administrations, and its adequate usage at the global level should also be discussed, though not without due consideration of the related ethical concerns.[186]

Figure 8: Sternenmodel.

Two principles are important in this regard. First, information flows are multidirectional, meaning that they can flow in every direction. Second, it is important to be aware that the decision-making process takes place within a hermetic, that is, a closed system, just as all data received through the individual senses are received and channeled within the human mind. For a multidirectional flow of information and, generally, for paradoxical modes of thinking, it is also crucial that the dominant modes of thinking not be restricted to dualism and binary logic. They additionally require a more fuzzy or polyvalent logic, like that of dialetheism, which holds that some contradictions are true while others are false.[187] The ultimate judgment of any situation, whether apparently contradictory or not, will depend on the moment that is singled out in both time and space. In this regard, 4D thinking requires that any decision taken remain mindful of the paradoxical connection between time and space as expressed by the notion of spacetime. What may sound vague and bizarre now will become clearer once human cognition has evolved in its ability to perceive reality from a 4D perspective. The current shrinking or acceleration of the perception of time appears to testify to such a trend.[188]

8 Conclusions

Words and magic were in the beginning one and the same thing, and even today words retain much of their magical power. [189]

As this article has argued, the present heated debate about the governance of AI should, first and foremost, be framed not by a narrative of a “global race toward the regulation of AI” but instead by one of “a race toward the global regulation of AI.” The reason is that the former narrative is dangerous because language – due to its inherent magical power – greatly matters, as it influences not only the world of thoughts but also that of actions. The narrative of a global regulation of AI offers a more inclusive and sustainable alternative, one which corresponds more completely to the “all-pervasive” or cross-cutting, cross-boundary, and cross-cultural nature of AI. This quality of AI, in combination with other relevant technologies, such as the Internet of Things, robots or smart cobots,[190] neurotechnologies, and augmented and virtual reality, equally suggests that the successful regulation of AI requires that it be linked not only to the goals enshrined in the SDGs but also to their possible successor to be formulated at the “Summit of the Future” scheduled for fall 2024. This wider goal, namely to use AI to achieve a future set of fundamental goals formulated by the global community for humanity, requires not only a strong consensus on the substantive aspects of AI regulation but also parallel efforts to set up a global institutional framework adequate to address present and future global governance issues.

To establish an adequate future institutional framework based on a global legal order, it is necessary to overcome the present levels of fragmentation of international law and the international legal system. Therefore, the International Law Commission should follow up on its 2006 report on the fragmentation of international law with a future report on the institutional aspects of this problem.[191] In this respect, it is highly regrettable to observe the continuing inertia of, and perceived obstacles to, the reform of the present international system – the so-called international “systemic chaos” – which has persisted since the establishment of that system under the aegis of the UN. In this context, it is also frustrating to note the reigning resignation in global affairs, which presupposes that only a major cataclysm, like a World War III, could provide a realistic impetus for global reform efforts. By contrast, the present article follows theories of institutional change that locate the main cause of institutional transformation and reform not exclusively in external causes but in their combination with internal cognitive causes. In this regard, the current growing global consensus about the ethical concerns related to AI, as a technology aiming to replicate the human mind, offers a unique opportunity for upstream thinking or, in other words, for addressing urgent global problems not only before they grow (further) out of control but possibly also before they even arise. To this end, it is necessary to tap into the imaginative power of the human mind to predict the future by creating it, akin to a self-fulfilling prophecy.

Thus, the choice for all stakeholders, that is, for humanity as a whole, is clear: We must either actively change our minds to change the world or let the changes in the world force future changes upon our minds. In this regard, the article aimed to highlight the need for an extension of traditional modes of dualistic thinking toward more flexible or fuzzy modes of oxymoronic, paradoxical, or “four-dimensional thinking.” Given the human cognitive difficulties of expressing in language phenomena that have not yet materialized, the article simply aimed to connect the different bits and pieces of human perception with language, technology, and law in order to draw the contours of a future framework of human cognition by describing and visualizing some aspects of a “four-dimensional space-time continuum” in the way that Albert Einstein tried to more accurately describe the reality of our world.


Corresponding author: Rostam J. Neuwirth, Professor of Law, Head of Department of Global Legal Studies, Faculty of Law, University of Macau, Taipa, Macao, China, E-mail:

Award Identifier / Grant number: MYRG2022-00075-FLL


Acknowledgments

The author would like to thank Yeliz Doker for her useful comments on an earlier draft of this article.

  1. Research funding: The author acknowledges the support from the Research Services and Knowledge Transfer Office, University of Macau Multi-Year Research Grant (MYRG2022-00075-FLL).

Received: 2024-01-15
Accepted: 2024-02-16
Published Online: 2024-03-08
Published in Print: 2024-04-25

© 2024 the author(s), published by De Gruyter on behalf of Zhejiang University

This work is licensed under the Creative Commons Attribution 4.0 International License.
