Article Open Access

Social anthropology 4.0

  • Mandy Balthasar

    Mandy Balthasar researches and teaches at the University of the Bundeswehr Munich at the Institute for Software Technology and in the Usable Security and Privacy Group of the Research Institute Cyber Defense (CODE). Her research interests focus on the complex processes of joint decision-making in technical and socio-technical systems, especially in critical environments.

Published/Copyright: July 22, 2024

Abstract

Human-computer interaction as a coordinating element between human and machine is used in many different ways. Due to their digital processes, countless industries are dependent on an effective intermeshing of humans and machines. This often involves preparatory work or sub-processes being carried out by machines, which humans initiate, take up, continue, finalise or check. Tasks are broken down into sub-steps and completed by humans or machines. Such aggregated cooperation conceals the numerous challenges of hybrid cooperation, in which communication and coordination must be mastered in favour of joint decision-making. However, research into human-computer interaction can also be thought of differently than as a mere aggregation of humans and machines. We want to propose a nature-inspired possibility that has been successfully mastering the complex challenges of joint decision-making, as proof of successful communication and coordination, for millions of years. Collective intelligence and the processes of self-organisation offer biomimetic concepts that can be used to rethink socio-technical systems as a symbiosis in the form of a human-computer organism. For example, the effects of self-organisation such as emergence could be used to exceed the result of a mere aggregation of humans and machines many times over, as a future social anthropology 4.0.

1 Acting together – an introduction

This contribution proposes a new approach to human-computer interaction (HCI) based on the scientific findings of self-organisation, and in particular on the social example of swarms. The aim is to initiate a discussion about the potential of self-organised collective intelligence in the connection between humans and machines on a 50-year horizon. To this end, various topics relating to collective decision-making are discussed from a deliberately retrospective vantage point.

Since the early 1950s, science has been preoccupied with the phenomenon of complexity and has attempted to fathom it with the help of artificial intelligence, cybernetics, mathematics and systems theory. The aim of cybernetics was to understand human and machine as elements of a self-controlling system. A further aim was to combine intelligences, an approach that suits humans well, since collective intelligence is precisely what has driven humanity forward. If technological progress is also taken into account, the idea of a hybrid collective intelligence, as an amalgamation of human intelligence and artificial intelligence made of silicon chips and software, is inevitable. 1 This collective intelligence in the form of socio-technical human-machine systems has long been the focus of scientific interest as collective intelligence systems (CIS), with the aim of optimising their design. 2 , 3

Today, in 2074, we look back and cannot understand why a principle already given by nature, such as self-organisation, was not applied in order to give the management of complexity a regulated process through this desired amalgamation and thus make the bundled intelligence usable for us. We live in constant dialogue with artificial agents. They enrich both our professional and private lives. In all respects, from the wake-up call in the morning – set specifically between deep sleep phases – to the coordinated departure of one’s own avatar from the virtual world after death in the analogue world. A created symbiosis of human and machine, without which this complex world would no longer be conceivable.

A flock of birds glides across the grey winter sky. A sight that makes me smile and signals to my artificial colleague, who makes this view possible, that it was a good idea to open the roof hatch for this special moment. Such a dialogue between human and machine increasingly became the focus of interest in the 1980s. 4 However, the term “human-computer symbiosis” 5 had already been coined in the early 1960s. The concept behind it was seen as a development to be expected for future cooperation between human and machine. The background to this was the analyses already available at the time, which indicated that a symbiosis of human and machine would be more effective than anything humans could ever achieve alone. 5 The discovery of parallels between biological swarms and human societies, 6 as well as between natural swarm intelligence and logical computer science, also occurred in the second half of the 20th century. 7 Thus, numerous scientists were working on the overlapping topics of complexity science and the human-machine connection. And there were probably some among them who paved the way for our present-day life in this symbiosis with artificial agents.

1.1 Joint decision-making in humans

This work begins with the challenges of joint decision-making. To this end, the influencing parameters from psychology and sociology in relation to humans and natural systems are considered. The collective intelligence resulting from successful communication, cooperation and coordination as well as their challenges are also examined. In particular, the possibilities and impossibilities of utilising human reactions and their subjective factors for optimal communication and cooperation are discussed.

Flocks of birds, schools of fish or herds of buffalo – the animal kingdom seems to have developed excellent collective decision-making skills from the very beginning. Humans, on the other hand, were long denied the ability to make optimal decisions, with reference to their bounded rationality. 8 Behind this attribution was the assumption of a systematic susceptibility to choosing arational and thus often unfavourable options. 9 , 10 , 11 It still had to be proven that a stroke of genius such as collective intelligence could succeed in joint decision-making: the crowd, initially declared stupid, was underestimated until the crowd itself refuted this verdict. 12 Thus, despite limited rationality at the micro level, collective intelligence is made possible by a common basis for decision-making at the macro level. The degree of intelligence increases with the number of actors. This increase applies to both natural and artificial systems, such as robot swarms. 13 In order to achieve the highest possible degree of intelligence, challenges such as successful communication, cooperation and coordination between the actors must be overcome. For this reason, a single-digit group size has traditionally been recommended for effective action. 14 , 15 Initial approaches to overcoming this limit on group size, and thus on collective intelligence, were advanced with the help of large language models (LLMs) as so-called conversational swarm intelligence (CSI). 16 This was supported by the fact that humans have been working together in communities for thousands of years in order to survive. Thinking, sharing and acting together is therefore already inherent in humans. 17 And decisions are rarely made alone. 18

Yes, actually … my gaze continues to follow the flock of birds in the sky until it finally disappears from my field of vision. How could the potential of such perfect harmony in a community remain untapped for so long?

The neuronal networks of a brain in the prefrontal cortex already form couplings that act as interfaces. This enables people to trigger emotional reactions in their counterparts, for example. It is also possible that in the context of joint decision-making processes, errors in thinking or biases of an individual can also have an effect on co-decision-makers. Who knows, maybe even in this swarm.

At the same time, other challenging characteristics occur in human communities, which can be triggered by group dynamics or inadequate communication. 10 However, the possibility of joint decision-making and acting in harmony through swarms, herds, schools or even groups has always been visible in the world. These seemingly perfectly functioning communities are strengthened by reciprocal links, such as the transmission of emotional states. Trust grows and people feel connected. This connectedness can then be utilised in turn. 19

The behaviour patterns of swarms and networks of flora and fauna were therefore always available as a template. For example, metaheuristics for solving combinatorial optimisation problems are based on the emergent abilities of ant colonies. 20 In swarm intelligence, ant colony optimisation (ACO) is the epitome of a metaheuristic based on the foraging abilities of ant colonies. 7 , 20 , 21 , 22 , 23 An equally well-known example is an optimisation inspired by biological swarms: Particle Swarm Optimisation (PSO), which is based on the flocking behaviour of bird flocks like the one just observed. 24 , 25 These metaheuristics had already emerged in the 1990s and 2000s by simulating the behaviour of natural swarms in order to solve central optimisation problems. Why was this natural source of inspiration – swarms – not also used for joint decision-making between people or, as is common today, between human and machine?
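The PSO principle just mentioned can be sketched in a few lines. The following is a minimal illustration, not a full reproduction of the original algorithm; the sphere objective, parameter values and function names are chosen for demonstration only. Each particle is attracted both to its own best-known position and to the best position found by the swarm as a whole:

```python
import random

def pso(f, dim=2, n_particles=20, iters=200, seed=42):
    """Minimal particle swarm optimisation: each particle is pulled
    toward its personal best and toward the swarm's global best."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive and social weights
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]          # personal best positions
    pbest_val = [f(p) for p in pos]      # personal best values
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

sphere = lambda x: sum(v * v for v in x)  # toy objective with minimum at 0
best, best_val = pso(sphere)
```

Despite each particle following only these two local attraction rules, the swarm as a whole reliably locates the optimum, which is precisely the emergent effect the metaheuristic borrows from flocking behaviour.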

Enriching phenomena such as the feeling of belonging, but also the challenge of being influenced by group dynamics, remain hidden from my artificial colleague. And at the same time, his gesture of opening the roof hatch at the right moment shows me his correct assessment of my emotional world. To be more precise, my efferent reactions can be determined on the basis of behaviour or movement sequences, such as my smile, and collected, analysed and evaluated as data by my artificial colleague. However, this is not yet possible for subjective factors such as phenomenal values. It would have to be possible to specify the value of a mental state, expressing which feelings occur in connection with a certain situation. But these values can only be determined once the situation has already occurred, such as my reaction when observing the flock of birds. Nevertheless, if this experience is made for the first time, it is epistemically transformative and possibly also personally transformative. Thus, an experience can permanently change an individual’s phenomenology, replacing previously established core preferences. 26 With the weighted data of my artificial colleague, however, the indeterminate change in all previously collected data caused by a single event is neither comprehensible nor predictable. This means that the integration of subjective human factors into the development of optimised human-machine systems remains out of reach.

Moravec’s paradox has already shown that the most challenging human abilities to replicate are those that occur unconsciously. 6 , 27 This paradox also remains valid in artificial neural networks: processes that are trivial for humans usually do not work, or work only inadequately, while complicated processes are executed without errors. 28

At the same time, it could be concluded from this insight into subjective factors that human decisions can only be assumed to be comprehensible if no subjective values are involved. For my artificial colleague, this would exclude decisions as calculable, and thus as comprehensible, as soon as the human decision-maker is or will be affected by the decision outcome themselves. However, this was made possible by the generation of a “veil of ignorance” 29 by my artificial agent in joint decision-making processes. The information basis of the initial situation is prepared by artificial systems in such a way that the human decision-maker does not realise that they could be personally affected. For example, information about age, gender or origin is omitted in order to enable an almost uninfluenced decision.
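The “veil of ignorance” preparation described above can be illustrated with a deliberately simple sketch. The attribute names, example record and helper function below are hypothetical, chosen only to show the idea of removing potentially self-identifying fields from the decision basis before a human sees it:

```python
# Hypothetical set of attributes that could reveal to the decision-maker
# that they are personally affected by the outcome.
SENSITIVE = {'age', 'gender', 'origin'}

def veil_of_ignorance(case: dict) -> dict:
    """Return the decision basis with personally identifying
    attributes removed, approximating a 'veil of ignorance'."""
    return {k: v for k, v in case.items() if k not in SENSITIVE}

# Invented example case: only the substantive matter survives the veil.
case = {'age': 54, 'gender': 'f', 'origin': 'DE', 'claim': 'pension_adjustment'}
veiled = veil_of_ignorance(case)
```

In a real system, the artificial agent would of course also have to guard against indirect identification through combinations of remaining attributes; the sketch only shows the basic filtering step.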

The fact that a decision in favour of a potentially transformative action can trigger feelings of uncertainty or even fear is already taken into account. 30 Since the occurrence of uncertainty can feed back into the evaluation of an option, 31 the evaluation is recorded as a variable in the data. Dislikes and preferences, as well as abilities and talents, are further reasons why the human condition cannot be analysed from the outside as if it were a glass box.

Afferent signals, which are detected by the body and transmitted to the brain, provide another source of data. In risky situations, these can already be intercepted by the posterior horn of the spinal cord and converted into efferent signals. This leads to a bodily reaction without prior processing by the brain. Such endogenous reflexes are innate and can also be registered by an artificial agent using sensor technology. As a result, these reflex actions offer a barely distorted reaction to a stimulus, which can provide new insights. This is one reason why the tea in front of me is no longer served scalding hot by my artificial colleague, but tolerably warm.

So both deciding agents, human and artificial, are capable of learning. In order to enable an artificial agent to do this, knowledge of learning processes in human brains was utilised. For example, the human brain constantly adapts the connections between neurons during learning. This procedure should also be applied to the learning algorithms of artificial agents in order to optimise them in terms of speed and robustness. To date, the human brain has a head start over machine learning systems. The difference between natural and artificial learning becomes clear, for example, when it comes to absorbing new information. While it may be sufficient for the human brain to see something new just once in order to learn, artificial agents still require hundreds of attempts. In addition, newly learnt information is added to existing information in the brain. In artificial neural networks (ANNs), however, until recently newly learnt information often collided with existing information, degrading it in the process – a phenomenon known as catastrophic forgetting. 32 Thus, at least when it comes to forgetting while learning, my artificial colleague seems to have become more similar to my own way of learning over the last 50 years.

Which brings us to another special effect of our species: reactions to external influences. Even daily food intake, or its macronutrient composition, influences the sensitivity and tolerance of the human decision-maker and thus the inclination for or against cooperation. 33 If, for example, a previously supportive cooperative strategy is interrupted due to a nutrient-related drop in tyrosine levels, an interplay of accommodation and rejection begins between the decision-making parties. In the context of the prisoner’s dilemma, a withdrawal from mutual cooperation may be more lucrative in the short term, 34 but in the long term a cooperative strategy that is aligned with the behaviour of the other party is more successful. 35 This distortion of human thinking can also be registered by an artificial agent using sensors and communicated by means of a warning.
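The game-theoretic point above – that defection pays once but a cooperative strategy aligned with the other party pays in the long run – can be checked with a small simulation of the iterated prisoner’s dilemma. The payoff values and strategy names below follow textbook convention and are not figures from this article:

```python
# Payoffs for (my_move, their_move); 'C' = cooperate, 'D' = defect.
# Standard ordering: temptation 5 > reward 3 > punishment 1 > sucker 0.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def play(strat_a, strat_b, rounds=100):
    """Iterate the prisoner's dilemma; each strategy sees only the
    opponent's previous move (None in the first round)."""
    score_a = score_b = 0
    last_a = last_b = None
    for _ in range(rounds):
        a, b = strat_a(last_b), strat_b(last_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        last_a, last_b = a, b
    return score_a, score_b

# Tit-for-tat mirrors the opponent's last move; it opens cooperatively.
tit_for_tat = lambda opp_last: 'C' if opp_last in (None, 'C') else 'D'
always_defect = lambda opp_last: 'D'

mutual, _ = play(tit_for_tat, tit_for_tat)     # sustained cooperation: 300
exploit, _ = play(always_defect, tit_for_tat)  # one win, then punishment: 104
```

The defector gains the temptation payoff exactly once and is punished for the remaining 99 rounds, so mutual cooperation (300 points) far outscores exploitation (104 points) over the long run.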

However, these special characteristics of humans and their imponderables do not mean that artificial agents are always the better decision-makers compared to humans. On the one hand, natural thinking or natural systems have the ability to develop a shared intelligence that exceeds the intelligence of an individual. At the same time, natural systems are challenging due to their size, as they are dependent on communication and cooperation. If, on the other hand, cooperation is not possible, coordination must be used in order to utilise collective intelligence. At the same time, coordination itself is a complex process, as numerous psychological and sociological parameters, which are both consciously and unconsciously incorporated into communities, must be taken into account. Research into HCI is and will therefore remain a science in which cooperation between computer science, psychology and sociology is absolutely essential.

1.2 Special effects of decision-making with artificial agents

It is not only humans or entire natural systems that present special challenges that need to be overcome in the context of HCI. Artificial agents also present numerous hurdles that need to be overcome. In the following, we will look at these in the context of HCI and joint decision-making.

My artificial colleague, consisting of software and additional hardware such as effectors, sensors and processors, can thus act perfectly in a clear, sterile test environment such as a simulation. In order to enable decision-making, and thus a certain degree of autonomy, artificial agents are equipped with decision-making methods such as decision trees, Markov decision processes or reinforcement learning. If this ability to make decisions within an artificial agent did not exist, the desired autonomous state could not be achieved either. I would have missed the sight of the flock of birds passing overhead.
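As a minimal illustration of one of the decision-making methods just named, the following sketches value iteration for a Markov decision process. The two-state example, its transition probabilities and rewards are invented purely for demonstration:

```python
def value_iteration(states, actions, P, R, gamma=0.9, eps=1e-6):
    """Classic value iteration:
    V(s) = max_a sum_s' P[s][a][s'] * (R[s][a] + gamma * V(s'))."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(
                sum(p * (R[s][a] + gamma * V[s2]) for s2, p in P[s][a].items())
                for a in actions
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            return V

# Invented two-state MDP: in 'low' the agent can pay a cost (-1) to
# probably reach 'high', where waiting yields a steady reward of 2.
states = ['low', 'high']
actions = ['wait', 'act']
P = {
    'low':  {'wait': {'low': 1.0},  'act': {'high': 0.8, 'low': 0.2}},
    'high': {'wait': {'high': 1.0}, 'act': {'low': 1.0}},
}
R = {
    'low':  {'wait': 0.0, 'act': -1.0},
    'high': {'wait': 2.0, 'act': 0.0},
}
V = value_iteration(states, actions, P, R)
```

The computed values show that even the short-term cost of acting in the 'low' state is worthwhile, because the discounted long-term reward of reaching 'high' dominates; this is exactly the kind of look-ahead that gives an agent a degree of decision-making autonomy.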

At the same time, my artificial colleague acts by means of algorithms, which is why decision-making situations must be expressed using the formal rules of mathematics or physics in order to be calculable. This requirement is met above all in complicated decision-making situations. Complicated challenges can be defined in machine-readable form; they have a clear goal, static processes, stable framework conditions and an inherent logic, so that decisions can be made quickly and optimally by artificial agents. If, on the other hand, a dynamic environment, arational behaviour or an unclear objective is given, no statistical probabilities can be calculated in these complex scenarios. 36 And it is precisely in these situations that my artificial colleague and I are a little slower to reach a consensus.

In the old paradigm of symbolic artificial intelligence, logic and reasoning provided the way to make decisions. The later approach of machine learning found its way to decision-making through the use of data. With the advent of deep learning systems and neural networks, artificial intelligence then set things in motion with numerous findings in fields such as computer vision. Today, human and machine can act as a whole, similar to this seemingly harmonious flock of birds, to make optimal decisions.

However, all approaches to overcoming existing challenges using artificial agents are still characterised by the fact that they draw on enormous amounts of data that describe similar situations and provide orientation on decisions that have already been made. At the end of the decision-making process, the data used is pushed into the feedback loop of the learning artificial decision-maker in order to measure the effectiveness of the decision made and, at the same time, to generate further data material. This procedure illustrates both the strength of artificial agents to date and their greatest weaknesses: the dependence on data quantity and quality, the restriction to complicated and therefore predictable issues, and the burden of training a new artificial colleague on one’s own thought and behaviour patterns.

Nevertheless, the risk of false evidence due to inappropriate data input is lower today, as the context is partly generated by the human-machine team itself. This has also minimised other problem areas, such as the relevance of the data and the selected framework to the use case. The same applies to the challenge that algorithms are already subjective due to their model-like construction. 37 Even so, the processing of data by algorithms is normative, which is why a normative bias can still be assumed. 38

We could have learnt back in 2008 just how much extrapolations from the past can lead to wrong decisions. In the largest insolvency case in U.S. history to date, the investment bank Lehman Brothers and its subsidiary were given an A+ rating by the rating agency Standard & Poor’s (S&P) three days before their demise – with the weekend in between. 39 The technical background to this was probably that, despite a wide range of options for controlling the training process of a machine learning model, for example via free parameters such as weights, an optimal decision or decision proposal could not be ensured even with careful preparation. The future, after all, does not repeat itself, however much people believe they can read patterns from the past. We know from experiments that even minimal changes can produce fundamentally different results, and these minimal changes cannot be anticipated. Yet it is precisely these machine-unfriendly environments, which constantly change the status quo, that are the dynamic systems in which complex decision-making situations arise. 40 At this point in history, there could already have been a rethink towards a human-machine organism – the way we live now, almost half a century later, balancing our opposing strengths and weaknesses.

To summarise, we can now assume that the artificial decision-maker also has its strengths and weaknesses. On the one hand, for example, decision-making methods can be implemented to create a certain autonomy through decision-making ability. On the other hand, artificial decision-makers are dependent on data and the mathematical or physical describability of the decision situation. At the same time, processing is limited to complicated and predictable situations in favour of optimal results. In this environment, however, machines can produce excellent results that humans are unable to achieve. Nevertheless, in environments that are unfriendly to artificial systems because they are dynamic, humans or entire natural systems can create added value based on their potential for collective intelligence. For the HCI community, this means creating optimal links between two different systems: natural and artificial, with all their strengths and weaknesses, in favour of the best possible collaboration in the form of human-computer teaming (HCT). One example of this is the HCT concept of dual-mode cognitive automation, 41 which transfers cognitive tasks to both humans and artificial cognitive units (ACUs). The focus here is on the human actor in order to give them more awareness of the situation and at the same time minimize their workload. The actual collaboration between humans and ACUs can be realized in two ways (dual mode): By means of hierarchical delegation from the human to the ACU or in the context of cooperative teaming, such as between a human and an assistance system. 41 However, there is always a center within the team that coordinates and/or makes decisions.

2 Necessity of cooperation

Based on the assumption that the joint accomplishment of set tasks is the most effective approach, cooperation is considered below as a structuring component.

Game theory has already shown that psychological influences and social factors, in particular norms or moral concepts, play an important role for humans. 42 Since people often behave cooperatively for strategic reasons, institutionalised processes can create incentives in favour of cooperation through artificial agents and their feedback or reputation systems. 43 Even the regulation of artificial decision-makers, as envisaged in the Artificial Intelligence Act (AI Act) 44 of the European Union (EU), is ultimately a regulation of humans. For example, those involved in the development of artificial agents are to be encouraged to act within the set guidelines by means of targeted incentives or penalties. 45 , 46

However, as the decentralised and hybrid structures in HCT meant that cooperation was no longer the basis for a joint decision-making process, it had to be replaced by coordination. This was already predicted by consensus theory, which is based on constitutive principles to which the actors in a system are subject and which drive them to make a joint decision. Thus, the structure of communication within a system already creates a pull that drives the actors to make decisions and is necessary for a community. 47 For our current joint decision-making process in a system of human and artificial actors, this communication structure had to be created specifically to generate this pull. 48

If we look at the decision-making processes of natural persons or communities and compare them with the approach of artificial agents, such as machine learning (ML) algorithms, the fundamental differences become apparent. An artificial agent calculates a decision based on mathematical rules. A set of variables is used as input, which is compared with a target as a calculated prediction. Natural agents, on the other hand, rely on a mix of variants. Various heuristics are combined with static procedures and implicit knowledge. At the same time, the diversity and abundance of information has increased exorbitantly in recent decades. This is why an artificial colleague has been added to the human mix of variants as a data collector, processor and visualiser and has become indispensable for successful decision-making.

However, the artificial agent still has difficulties in assessing situations that the human decision-maker has not yet experienced. Although it is possible to exchange experience reports as a kind of verbal simulation within a joint decision-making process, such reports represent the phenomenal values of the communicating person and do not allow any conclusions to be drawn about the epistemic experience of others. 30 As a result, visualisation and simulation still have their limits as decision support for both human and artificial decision-makers.

On the other hand, an artificial agent offers support by means of simulations in the cooperation between human actors. It does so through sociometric representations such as diagrams, which visualise the course of the decision-making process and its dimensions, or through sociomatrices, 27 , 49 which illustrate the relationship structure within a group. This helps to deal with differences in interpersonal relationships in order to develop an awareness of opinion patterns and group-specific trends.
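A sociomatrix of the kind referred to above is, at its core, an adjacency matrix over group members. The following toy sketch (member names and ties invented for illustration) shows how such a matrix, and a simple indicator like out-degree, can be derived from recorded cooperation choices:

```python
def sociomatrix(members, ties):
    """Build a binary sociomatrix: row i, column j is 1 if member i
    named member j as a cooperation partner."""
    idx = {m: i for i, m in enumerate(members)}
    M = [[0] * len(members) for _ in members]
    for a, b in ties:
        M[idx[a]][idx[b]] = 1
    return M

# Invented group: A and B name each other, B also names C, C names nobody.
members = ['A', 'B', 'C']
ties = [('A', 'B'), ('B', 'A'), ('B', 'C')]
M = sociomatrix(members, ties)

# Out-degree (choices made) per member, a first hint at opinion patterns:
out_degree = {m: sum(M[i]) for i, m in enumerate(members)}
```

Even this trivial representation already makes asymmetries visible (B is the most active chooser, C is isolated as a chooser), which is exactly the kind of relationship structure an artificial co-decision-maker can surface for the group.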

Artificial co-decision-makers are predestined for the creation of visualisations from a meta-perspective despite their involvement in the process. The background to this is their internal and, in some cases, external autonomy and the associated ability to be objective, provided they draw on data that is not evaluated by humans, such as pure sensor data. This makes artificial agents social due to their cooperation-promoting behaviour.

Reasons for assuming that an HCT could deliver viable results became apparent early on, namely the different decision-making tactics, which in turn result in different strengths and weaknesses. 50 For example, human decision-making has a weakness in the assessment of risks, 51 while algorithms in turn have a weakness in terms of robustness, which becomes apparent in particular with increasing dynamics in the initial or data situation. At the same time, artificial agents benefit from the feedback of natural experts, as demonstrated by decision-making in a clinical context using reinforcement learning as early as 2022. 52 The interactive machine learning (IML) approach already started with the integration of feedback during the modification of an ML model. 53 , 54

The stringent approach of an artificial agent and the emergence and creativity potential of natural systems are another reason for the human-machine connection that exists today. This allows a decision-making process to be optimised even under complex conditions. Of course, this is only possible if there is a symbiosis of human groups and artificial agents using intelligent tactics. For example, the human ability to recognise and understand simple causalities must be revealed to the artificial decision-maker. However, if the correlation used as a basis does not correctly capture the causal relationship, this inevitably leads to incorrect decisions. This insight was gained through the Generative Pre-trained Transformer 3 (GPT-3) language model, 55 which reacted as a trained neural network to speech input from human users. GPT-3 proved almost as good as humans at making rational decisions. However, there were glaring deficiencies in abilities such as causal reasoning. These were due, for example, to the way in which the training was conducted, in which information was passively extracted from data without actively interacting with the environment or its context, which would have been necessary for the development of fully complex human cognition. 56

If, on the other hand, the interaction increases in several dimensions due to the complexity of the situation to be decided, the human decision-maker quickly reaches their limits. Due to the ever-closer interlocking of humans and machines and the associated joint decision-making, the optimal cooperation between the two actors came to be used as a kind of social anthropology 4.0. The aim was to interweave causal awareness and creativity at the human micro level with collective intelligence at the macro level, while at the same time efficiently processing huge amounts of data from a linear process using an effective structure of evolutionarily proven and self-organised principles. This approach came a big step closer to the flock of birds permeated by cooperation – a prime example of decision-making as one organism – than a mere consideration of the diverse forms of HCT.

Although natural and artificial actors differ in the way they make decisions at the micro level, they do not differ so much in the way they interrelate in favour of a joint decision at the macro level. Thus, communication and cooperation are essential for both actors to negotiate goals and develop the associated process. A hybrid collaboration is conducted through a sensory input and output of data in favour of communication and cooperation via behaviour as a dialogue. 57

By drawing on nature’s strategies as a kind of bionic concept, a principled joint hybrid decision-making process could be generated. In addition to mere coordination, this process also offered protection against the uncritical adoption of calculated opinions from artificial colleagues, as happened with ChatGPT or in clinical decision support systems (CDSS), for example. 58 , 59 In addition, the necessary pull to reach a consensus could be generated by the adapted processes, 60 thus creating a system that could be described as a human-computer organism (HCO). Moreover, this concept of joint decision-making had already been proven over millions of years of evolution: so-called self-organisation.

A self-organised socio-technical culture was already being promoted in 1994. At that time, the model was already living beings that act autonomously as a whole without a centre and also use the phenomenon of emergence for themselves. 61 This idea was already modelled on a swarm. Not a flock of birds, however, like the one that has just inspired me, but the superorganism of a swarm of bees.

Such a swarm of bees is an excellent analogy for distributed systems in which both the potential of the individual at the micro level and that of the community at the macro level can be optimally utilized. 62

If we summarise all the aspects mentioned in this chapter, the tasks of pioneers become apparent for HCI science. Effective HCT is only possible if cooperation is practised between the entities. In turn, cooperation is only feasible if there is communication between the entities. The essential task is therefore to create structures in which communication can be cultivated and from which cooperation can simultaneously emerge. Possible tools for accomplishing these tasks are usually located at the macro level, as this is where most of the overlaps between the decision-makers occur. This is the way to reach a joint decision: using the tactics of communication and cooperation. The HCI community is supported by the bionic concept of self-organisation, which can be analysed in practice using the example of swarms. If this task of building structures for communication and cooperation in favour of HCT succeeds, in an optimal case a swarm-inspired HCO can emerge from an HCT and social anthropology 4.0 can emerge from the scientific field of HCI, which goes beyond the consideration of interaction.

3 The paradigm of self-organisation

The concept of self-organisation introduced in the previous chapter will now be examined in more detail.

Research into collective intelligence was successful in computer science at an early stage, for example in the leader-follower problem, in packet forwarding and in variants of Arthur’s El Farol bar problem. Since numerous other sciences also make use of collective intelligence, such as sociology, 63 or research areas such as behavioural economics, advancing these research findings was and remains essential. 64 It is challenging, however, that the pooling of shared experiences, intuitions and knowledge does not correspond to a simple addition of the contributed intelligence, but can exceed that sum many times over due to emergence. 65 , 66 In social systems, this emergence becomes apparent in the reductive description of a social change brought about by the decision-making action of an individual who never wilfully intended that change. 67 , 68 This process is also described as a single “invisible hand” 48 that achieves a result in a social system entirely unintentionally. In this way, something like objective reason can assert itself in the secrecy of a joint decision-making process, even though the actors themselves were not striving for it at the time. The phenomenon of emergence in turn stems from the concept of self-organisation.
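To make the El Farol bar problem concrete: each agent privately predicts next week’s attendance from recent history and goes only if the bar is predicted to be uncrowded. The following is a deliberately minimal sketch; the capacity threshold, memory length and random weighting scheme are illustrative assumptions, and Arthur’s original strategy-switching mechanism is omitted.

```python
import random

# Minimal El Farol sketch: illustrative parameters, no strategy-switching.
N, CAPACITY, ROUNDS, MEMORY = 100, 60, 52, 3

# Seed history with arbitrary past attendance figures.
history = [random.randint(0, N) for _ in range(MEMORY)]
# Each agent owns a fixed random weighting over recent attendance figures.
agents = [[random.random() for _ in range(MEMORY)] for _ in range(N)]

for week in range(ROUNDS):
    recent = history[-MEMORY:]
    attendance = 0
    for weights in agents:
        # Weighted-average forecast of next attendance.
        forecast = sum(w * h for w, h in zip(weights, recent)) / sum(weights)
        if forecast < CAPACITY:   # go only if the bar is predicted uncrowded
            attendance += 1
    history.append(attendance)

print(history[-5:])
```

Even this stripped-down variant shows the core point: every individual decision feeds back into the shared history on which all future individual decisions are based, so the aggregate outcome is a product of the collective rather than of any single agent.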

When it comes to researching the theory behind self-organisation, the classical models of physics, such as deterministic or stochastic methods, are not expedient. Even the terminology of physics does not seem appropriate, which is why, almost two hundred years after Friedrich Wilhelm Joseph Schelling (1775–1854) first ran counter to the mechanical world view of his time, suitable terminology was sought anew and his natural philosophy was drawn upon. Schelling’s theses thus provided the impetus for numerous further hypotheses, research questions and cognitive interests, including those relating to process-based self-organisation. 69

In the sky, I am now presented with the spectacle of a swarm that seems to dance as a whole. The swarm doesn’t seem mechanical to me, more like a perfectly choreographed sequence of changing directions, widening distances and then immediately narrowing them again. No one is left behind, no one falls off, no collisions, no runaways: everything seems perfectly harmonised.

The findings relating to such a concept of self-organisation can be traced back to Erwin Schrödinger (1887–1961). 70 The development of a corresponding theory of self-organisation, in turn, arose from Hermann Haken’s (*1927) so-called synergetics. 71 , 72 This theory was intended to make it possible to analyse the conditions and processes of self-organisation as well as the resulting states. 73 That this endeavour was not unproblematic is shown by the different existing concepts of self-organisation, each with its own definition. As a result, no universally accepted, comprehensive and generally valid theory of self-organising systems can be assumed. 74 , 75 , 76 , 77 , 78

That the phenomenon of collective intelligence has already found its way into numerous scientific disciplines has thus been sufficiently demonstrated, as has the absence of a definition of self-organisation to which the development of an HCO could refer. Both the conceptual culture and the process itself, as well as the possibilities for shaping this process, will therefore be presented in the following chapters in order to clarify the concept of self-organisation and the resulting tasks for the HCI community.

3.1 Conceptual culture of self-organisational processes

The transition from the mechanistic world view to what was later called modern physics was based on numerous discoveries and insights. The tunnelling effect, for example, showed that elementary particles can also be found beyond a potential barrier. At the same time, atomic systems as a whole are not mere sets of individual particles; rather, each individual electron already changes the overall wave function of the entire system. 79 A common denominator in the definition of complex systems was provided by the self-organising dynamics at work within them. 80

With the development of non-linear physics and non-equilibrium thermodynamics, self-organising phenomena can thus explain how complex structures form in living organisms despite thermodynamic constraints. Together with models of kinetics, this provides a way to analyse and explain cooperative processes in a physico-chemical or mathematically quantifiable manner. Both the dynamics and the genesis of such synergetic structures, as well as macromolecular biochemical evolutionary mechanisms, become comprehensible in their approaches. The background is formed by dissipative, fluctuation-induced instabilities and non-linear phase transitions: self-organisation. 81 The concept of synergetics gained ground because it defines cooperation within a system by mathematically modelling the transitions of non-equilibrium phases. 72 Numerous examples from biology, chemistry, physics and ecology follow this approach to cooperation, such as cooperation in markets, patterns in liquids, the spiral arms of galaxies, or consensus building in superorganisms and neural networks. 82 Cloud formations also follow this approach, though unfortunately not one I can reproduce in the grey winter sky. As proof, however, the flock of black starlings continues to dance against the grey backdrop.

Last summer I drove a superorganism out of the house that could also serve as proof: ants. The use of ant colony optimisation (ACO), for example, has helped transport robots find the shortest route and has met the challenges of designing supply chains in logistics. 83 Other swarms have also provided inspiration: the synchronisation principle of a school of fish is used to exploit the flow field of wind turbines in wind farms optimally, and autonomous NASA exploration swarms 84 , 85 are based on the behaviour of bee colonies as shown in Figure 1. Similarly, autonomous drone swarms serve as fire-fighting units in disaster control based on nature-analogous particle swarm optimisation (PSO), as do medical interventions using nanorobots, 86 which as a group can provide minimally invasive and precise treatment. 87 , 88
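As an illustration of the ACO principle mentioned above, the following is a minimal sketch of shortest-path search on a toy graph. The graph, the pheromone parameters and the evaporation rate are illustrative assumptions, not a faithful reproduction of any published ACO variant: ants walk probabilistically, favouring short edges with strong pheromone, and shorter completed tours deposit more pheromone.

```python
import random

# Toy weighted graph: the shortest route from A to D is A-B-C-D (cost 4).
graph = {
    ('A', 'B'): 2, ('A', 'C'): 5, ('B', 'C'): 1,
    ('B', 'D'): 4, ('C', 'D'): 1,
}
nodes = ['A', 'B', 'C', 'D']
pher = {e: 1.0 for e in graph}           # pheromone per (undirected) edge

def cost(a, b): return graph.get((a, b)) or graph[(b, a)]
def tau(a, b): return pher[(a, b)] if (a, b) in pher else pher[(b, a)]
def deposit(a, b, amount):
    e = (a, b) if (a, b) in pher else (b, a)
    pher[e] += amount

def walk(start='A', goal='D'):
    # One ant: choose the next node with probability proportional to
    # pheromone strength times inverse edge cost.
    path, current = [start], start
    while current != goal:
        choices = [n for n in nodes if n not in path and
                   ((current, n) in graph or (n, current) in graph)]
        if not choices:
            return None, float('inf')
        weights = [tau(current, n) / cost(current, n) for n in choices]
        current = random.choices(choices, weights)[0]
        path.append(current)
    return path, sum(cost(a, b) for a, b in zip(path, path[1:]))

best = (None, float('inf'))
for _ in range(200):
    path, c = walk()
    if path is None:
        continue
    if c < best[1]:
        best = (path, c)
    for a, b in zip(path, path[1:]):
        deposit(a, b, 1.0 / c)           # shorter tours deposit more pheromone
    for e in pher:
        pher[e] *= 0.9                   # evaporation keeps exploration alive

print(best)  # typically (['A', 'B', 'C', 'D'], 4)
```

Note that no ant knows the whole graph: the shortest route emerges at the colony level from local deposits and evaporation, which is exactly the micro-to-macro effect the text describes.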

Figure 1: 
Swarm of honey bees (Apis mellifera) during a joint decision-making process (1.1) and during the joint implementation of such a democratically reached decision (1.2).

In addition, upheavals occurring in nature, culture or human society, such as new orders or structures, can also be seen as the result of self-organising processes. 89 In physics, such equilibrium phase transitions were considered early on as a form of self-organisation. 14 That self-organisation is not a linear development of different entities side by side is shown by its property of forming cooperative links that generate a homogeneous structure and enable the harmonious integration of all system elements. 90 If this were not the case, the acrobatics presented to me in the sky would resemble an air show with starlings flying in parallel.

In research into self-organising processes, the focus has mostly been on understanding the emergence and maintenance of order. 70 , 75 , 91 This led to the question: how must a complex system be organised so that it is able to organise itself? 92 Without an answer to this question, it is tempting to want to coordinate systems by means of hierarchies or a centre. However, a dynamic nature with cooperating entities in complex structures, which organises itself and constantly reinvents itself via feedback and its synergy effects, is the complete opposite of a deterministic, fully predictable nature. 93 A complex system is thus not an isomorphic, static structure, which means that it does not reach thermal equilibrium. 94 On the contrary, phase transitions generate interconnections and thus create new structures in favour of self-organisation. The perception of a complex system can, for example, be understood as a learning process at the macroscopic level, which links input to existing structures, restructures them or forms entirely new patterns. 95 , 96 Just like the billowing black cloud in front of my eyes.

If atomic systems do not form a mere aggregation of particles, it can be assumed that other systems share this property. The section above demonstrated self-organising dynamics in complex systems, which can explain the formation of cooperative processes, for example as mathematical models of cooperation in the concept of synergetics. Numerous disciplines follow this approach and, guided by examples from nature, have been able to implement cooperation in their dynamic systems without control via a centre. From this it can be concluded that an HCO could also be realised.

3.2 The process of self-organisation

How exactly the process of self-organisation works will be explained below. Systems such as swarms merge their ongoing processes, information and stimuli from both outside and inside into a common database. On this basis, processing such as the evaluation of input and existing information is driven forward. The resulting output, in the form of a decision, is produced by the overall system at the macro level.

This process of combining information and stimuli at a distributed micro level, the interweaving of all opinions into a common consensus, and the resulting consensus behaviour as a single organism is referred to as self-organisation. Self-organisation contains forms of both chaos and order at the same time, which makes its dynamic processes almost impossible to predict. However, the various feedback loops between cause and effect can serve as an indicator. 97 Self-organisation takes place in several successive phases, which can be observed in natural systems, such as superorganisms, 60 as well as in human social systems. In a first phase, a system grows out of a state of equilibrium in which it exchanges with its environment. Depending on the type of system, this can be an exchange of energy, information or matter. The exchange between environment and system increases continuously until the system reaches its maximum capacity. A new phase of self-organisation then begins, in which the system starts to become unstable. To counteract this instability, the existing fluctuations in the system are dampened. So as not to miss the point at which a new phase is triggered, the stability of the system is continuously monitored. Fluctuations that trigger positive resonances in the system are further amplified by means of positive feedback. This continues until a reorganisation of the system’s existing structure becomes unavoidable by means of a bifurcation. The new structure a system will adopt can be neither influenced nor predicted. In the subsequent phase, in which the system has a new structure, a greater capacity for exchange between environment and system is created. However, this state, too, is only a phase, which towards its end slips back into imbalance as the capacity again becomes too small. At this point, the phases of self-organisation repeat. 98

The transitions between the individual phases thus represent a kind of symmetry breaking, through which a state of equilibrium is to be restored in the system. Decisive for the new structure at the macro level of the system is the consensus that has grown among all entities. 93 A frequently used example to illustrate phase transitions is the laser: a laser beam is created through the coordination of its individual parts, the photons, as soon as externally supplied energy has risen to a maximum in the system. 99
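The cycle of growing exchange, instability and bifurcation described above can be illustrated, in a deliberately abstract way, with the classic logistic map: as a control parameter grows (analogous to rising exchange with the environment), the long-run behaviour of the system passes through successive bifurcations into qualitatively new structures. The parameter values below are standard illustrative choices, not tied to any specific system from the text.

```python
# Logistic map x -> r*x*(1-x): raising the control parameter r drives
# the long-run behaviour through successive period-doubling bifurcations.
def attractor(r, x=0.5, burn_in=500, sample=8):
    for _ in range(burn_in):        # let transients die out
        x = r * x * (1 - x)
    states = set()
    for _ in range(sample):         # collect the recurring states
        x = r * x * (1 - x)
        states.add(round(x, 6))
    return sorted(states)

print(len(attractor(2.8)))  # 1 state: stable equilibrium
print(len(attractor(3.2)))  # 2 states: first bifurcation
print(len(attractor(3.5)))  # 4 states: second bifurcation
```

The point of the toy model is the one made in the text: the new structure is not imposed from outside but appears abruptly once the parameter crosses a critical value, and which branch the system settles on cannot be read off from the rule alone.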

Many successive bifurcations ensure continuous optimisation of the system’s organisation and increasing complexity through constantly emerging new structures. A continuous development thus takes place within a self-organised system, kept going by these driving phases. 98 Due to this constant change in structures, individual structural elements, so-called order parameters, are put to the test. If they prove conducive to the formation of a new, optimised structure, they continue to be used and are thus retained. If individual elements are no longer useful to the system, however, they are discarded, in a process similar to selection. At the same time, different order parameters can cooperate with each other and thus jointly optimise the system by achieving greater structural complexity. 100

The constant influx from outside creates within a system a process loop of reception, processing, transfer, cooperation and integration, which is run through continuously. The resulting recursiveness is inherent to all systems that behave in a self-organised manner, such as autopoietic systems. 101 This can be seen in self-organised superorganisms such as honey bee colonies, which reach the limit of their absorption capacity at a maximum energy input and therefore search together, as a swarm, for a new and larger home. This decision-making process (see Figure 1) starts all over again every time a limit is reached. In the same way, once a decision has been made, a new decision-making process begins again and again in humans. The implications, which occur both as consequences of and as triggers for new decisions, guide the decision-making process. As a result, a self-organising decision-making process never ends during the lifetime of a natural system. The interaction between the processes inside the system and the causes acting on it allows the exchange process to run in a continuous loop and at the same time drives the system forward. 97

It can be stated that natural systems are always self-organising systems, which therefore take care of their own inherent functions and structures. They thus offer a concept for systemic autonomy, which is desirable in both hybrid socio-technical and artificial systems. At the same time, self-organised natural systems obtain their necessary resources from outside and are thus in constant connection with their environment, which drives them forward and enables development, likewise a desired system property. 90 However, since the system structure appears to be neither influenceable nor predictable, the question arises as to whether and how a self-organised HCO can be influenced at all, and what social anthropology 4.0 can contribute to this. We will address these questions in the following chapters.

3.3 Interventions in a running system

Interventions in the running process of a self-organising system can, for example, inhibit the development of the entire system. The affected system switches to a kind of emergency mode in which only the most necessary things are done. Under inhibition, however, the system no longer achieves any further development; the processes of a system running in emergency mode then serve only to maintain what already exists. The inhibition of fluctuations in particular halts the development of a system. At the same time, the system then tries to reach an equilibrium of its own that can persist despite the inhibition. To prevent restructuring, it is necessary to intervene in the system with the aim of keeping it below the critical mass required for a bifurcation. This is done, for example, by establishing a substitute flow. Such a substitute flow is unavoidable if a system is to be forced to remain within a certain structure. This applies not only to the macro level of a social system, but also to the micro level and thus to each individual actor in a system. 98

If interventions result in a higher flow between the system and its environment, they are considered positive. If, on the other hand, the flow or exchange between the system and its environment, or the flow within the system itself, is lowered or inhibited, the interventions are labelled negative. Successful intervention in the system therefore supports development and thus self-organisation, thereby ensuring the autonomy of the system. Autonomy also includes the possibility of a system dissolving itself as soon as its purpose has been fulfilled. Interventions must therefore be moderate, so that a system can continue to organise itself despite them. 98 Successful intervention requires recognising which conditions and relationships exist at all system levels and in the exchange with the system environment, and how potential measures can take effect.

In principle, the behaviour of a self-organised system cannot be determined. It is, however, possible to steer the system in a desired direction. This is practised, for example, in so-called migratory beekeeping with bee colonies. Neither an individual bee nor the entire colony can be told which flower to collect nectar from. But it is possible to make the system offers that are lucrative because they promise a higher flow, i.e. a higher nectar yield. Even then there is no guarantee that the offer will be accepted by the system, i.e. the bee colony. If, for example, there is a more promising orchard in the area neighbouring the offered rapeseed field, the bee organism will reject the attempted influence. Self-organised systems thus always choose their future structure themselves within the framework of a bifurcation. In the same way, I will not be able to stop the flock of birds in the sky or induce them to perform other dance figures. However, it would be worth trying to distribute food in the meadow, making the flock an offer that might be more useful to it than spending energy dancing in the sky.

For the coordination of an HCO, this means that it can be influenced. Fluctuations in the system must not, however, be prevented without creating a substitute. The aim must always be to maintain a balance in the HCO. In addition, an HCO must not exist without a task or purpose, so as not to run the risk of it dissolving itself and thus not being available at a desired point in time. In the context of social anthropology 4.0, an awareness of the HCO must therefore be created. This requires an understanding of the organism, its structures and interrelationships, as well as its environment, in order to be able to make suitable offers in favour of a new structure and to assess possible effects.

4 Human-computer organism

However, in order to create an awareness of the research subject of the HCO, and thus also of the theory of social anthropology 4.0, the hurdles of self-organisation must also be known. We now want to address these and outline them using the natural example of swarms.

The validation of models of self-organisation was simplified, or in some cases made possible at all, by computer-aided simulation. However, this only became available at a time when the Brussels School 102 was already endeavouring to justify self-organisation on the basis of examples. 69 , 103 The machines and measuring devices necessary for analysing self-organising processes were either not yet available or not yet in common use. The first descriptive models for demonstrating emergent capabilities were offered by the precise concepts of non-linear non-equilibrium thermodynamics. These enabled a uniform description of organisational structures in systems across different disciplines. Although these mathematical models were initially used in physical chemistry and physics, which led to accusations of physicalism, they can be said to be universally valid in terms of their applicability. This assumes, however, that both their semantics and their syntax are projected onto the application at hand and can be considered appropriate from an empirical point of view. 101 , 104 , 105 , 106

So what stopped science from projecting self-organisation as a principle onto the human-machine system for so long? Why did it cling for decades to its view of HCI, which focuses on the human-computer pair and not the entire system?

Due to its multi-layered nature and the complexity involved, self-organisation only became a sought-after model at a late stage for making systems comprehensible and for analysing their dynamics and tendencies. 73 That even understanding the inherent processes of a self-organising system is a challenge becomes clear to me when I look at the flock of birds. I can hardly stop watching the harmonious structure, and yet neither I nor my artificial colleague can predict which turn the birds will take next. When it comes to analysing flocks or complex systems in order to make use of the knowledge gained, for example through predictions, modelling and simulation are helpful, but this is no trivial undertaking given the multidimensionality of the spatially and temporally dependent objects. 107 , 108 This enormous challenge is immediately obvious to me, and perhaps it is also part of the answer to why the interest of HCI research has focused on one detail of the human-machine system.

The simulation of an HCO should enable the coordination of dynamics by means of self-organising processes as well as the resulting development of emergent capabilities. To achieve this, however, a space must first be created within the model in which the variable micro-state of each actor is taken into account. For the subsequent calculation of the model in favour of a successful simulation, this means a permanent dynamic that must also be constantly correlated anew. Yet it is precisely this undertaking that enables the self-organising symbiosis of human and machine that I can maintain with my artificial colleague as a kind of HCO.

With regard to the challenges of modelling self-organising processes, however, a look at the modelling of swarm behaviour could have been informative. Swarms are not defined as independent objects but by their self-organised behaviour. They are always associations or systems that develop through communication, swarm behaviour and the resulting emergence. Examples of this possible emergence include joint decision-making processes, but also coordinated exploration, self-organisation and autopoiesis. 109 , 110 This definition of swarms via their behaviour, and the associated exploration of collaborative processes, forms the basis for understanding the communication flows and the resulting emergent capabilities of social self-organising systems. Colonies of honey bees (Apis mellifera) are social, organism-forming insects 111 which together form a superorganism. 112 Observing the behaviour of a single honeybee outside the hive while collecting pollen and nectar can give the impression of a rule-based process that is continuously repeated. On closer inspection, however, this seemingly deterministic system of thousands of foraging honeybees reveals numerous random deviations and fluctuations. Nevertheless, a honey bee colony exists as a whole with a complex organisational structure. The entire system becomes physically visible as a cluster of bees outside the hive (see Figure 1), just as a flock of birds appears in the sky as a large cloud. 113
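How little individual machinery such swarm behaviour requires can be shown with Reynolds’ classic boids rules (separation, alignment, cohesion), which reproduce flock-like motion from purely local interactions. The following 2-D sketch uses illustrative parameters and a simple speed cap; it is a toy model, not a validated simulation of starling flocks.

```python
import random

# Boids sketch: each agent reacts only to its flockmates via three
# local rules; flock-like order emerges without any central control.
N, STEPS, MAX_SPEED = 30, 100, 2.0
pos = [[random.uniform(0, 50), random.uniform(0, 50)] for _ in range(N)]
vel = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(N)]

def step():
    new_vel = []
    for i in range(N):
        sep = [0.0, 0.0]   # steer away from very close neighbours
        ali = [0.0, 0.0]   # match the average velocity of the flock
        coh = [0.0, 0.0]   # drift toward the flock's centre of mass
        for j in range(N):
            if i == j:
                continue
            dx, dy = pos[j][0] - pos[i][0], pos[j][1] - pos[i][1]
            if dx * dx + dy * dy < 25:       # closer than 5 units
                sep[0] -= dx; sep[1] -= dy
            ali[0] += vel[j][0]; ali[1] += vel[j][1]
            coh[0] += dx; coh[1] += dy
        n = N - 1
        vx = vel[i][0] + 0.05 * sep[0] \
            + 0.05 * (ali[0] / n - vel[i][0]) + 0.005 * coh[0] / n
        vy = vel[i][1] + 0.05 * sep[1] \
            + 0.05 * (ali[1] / n - vel[i][1]) + 0.005 * coh[1] / n
        speed = (vx * vx + vy * vy) ** 0.5
        if speed > MAX_SPEED:                # cap speed to keep motion stable
            vx, vy = vx * MAX_SPEED / speed, vy * MAX_SPEED / speed
        new_vel.append([vx, vy])
    for i in range(N):                       # synchronous update
        vel[i] = new_vel[i]
        pos[i][0] += vel[i][0]
        pos[i][1] += vel[i][1]

for _ in range(STEPS):
    step()
```

No bird in this model knows the shape of the flock; the coherent cloud is a macro-level pattern produced entirely by micro-level adjustments, which is the defining feature of swarm behaviour described in the text.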

The consensus building of a swarm was modelled only a few years ago, in the form of a self-organising network. This made it possible to highlight aspects and mechanisms of self-organisation that are essential in social systems for joint decision-making, and thus for autonomous action and the solution of complex problems. 114
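A minimal version of such a consensus process can be sketched as follows, loosely inspired by models of honeybee nest-site selection with recruitment and cross-inhibition. The site qualities, rate constants and fixed random seed are illustrative assumptions, not values from the cited model.

```python
import random

random.seed(1)                         # fixed seed for reproducibility
QUALITY = {'A': 0.6, 'B': 0.5}         # site B is slightly worse
N, STEPS = 200, 400
state = ['U'] * N                      # 'U' = uncommitted scout

for _ in range(STEPS):
    committed = {s: state.count(s) for s in QUALITY}
    for i, s in enumerate(state):
        if s == 'U':
            # Recruitment: a site attracts undecided scouts in proportion
            # to how many already back it and how good it is.
            for site in QUALITY:
                if random.random() < 0.01 * QUALITY[site] * committed[site]:
                    state[i] = site
                    break
            else:
                if random.random() < 0.01:   # independent discovery
                    state[i] = random.choice(list(QUALITY))
        else:
            # Cross-inhibition: dancers for the rival site can knock a
            # committed scout back to undecided.
            rival = 'B' if s == 'A' else 'A'
            if random.random() < 0.002 * committed[rival]:
                state[i] = 'U'

counts = {s: state.count(s) for s in ('A', 'B', 'U')}
print(counts)   # most scouts end up committed to one of the sites
```

The combination of positive feedback (recruitment) and negative feedback (cross-inhibition) is what breaks deadlocks between options of similar quality, letting the swarm converge on a single choice without any scout comparing the sites directly.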

Numerous other capabilities, such as cooling or the supply of energy and building materials, show that superorganisms need only minimal changes in the environment to create a new process at the microscopic level. At the same time, cooperative processes are set in motion at the macro level of the community: individual honey bees distribute themselves, take up position and fan their wings in the hive in such a way that a jointly organised flow is created, drawing fresh air from outside to the organism inside the hive. In this way, fluctuations at the micro level create solutions to complex problems, which become emergent skills, structures and tools at the macro level. The ability to find solutions through emergence arises from the tension between the complex system, thermal equilibrium and the mathematical non-linearity of the time-dependent evolution equation. 93 The condition of a centre, i.e. a deciding and thus controlling unit in the system, is therefore no longer necessary. This gives a self-organising system the property of autonomy.

I realise that I am gradually getting cold, and at the same time my artificial colleague starts to close the skylight. Thanks to the understanding of the principles of decision-making in both humans and artificial systems, as well as the findings relating to the natural phenomenon of self-organisation, the focus of research has shifted from HCI to HCO. This means that I can rely on a unique hybrid community of humans and machines that has emerged from technological developments, particularly in the field of artificial intelligence. It is no longer the concept of the human, as a placeholder for all people, that stands at the centre of interest, but the organism consisting of humans and machines as a whole, together with its bidirectionally influenced environment. The artificial actor can be customised to the respective individual, which corresponds to the perception of the individual human being with all their particularities. At the same time, the human actor adapts to the strengths and weaknesses of the artificial counterpart in a mutually enriching way. Autonomous action is thus optimised through self-organising processes in which strengths and weaknesses are balanced out in joint decision-making. This chapter has aimed to provide a brief description of the field of self-organisation, with the hurdles that need to be known in order to prepare a path along which self-organised HCOs can be formed by skilful manoeuvring.

5 Structure of a human-computer organism

Building on the previous chapters on the nature of self-organisation, its processes and influenceability, and the challenges arising from self-organising systems, the structure of an HCO will now also be addressed.

Based on the collective intelligence mentioned at the beginning of this paper, but also in view of the scientific disciplines involved and their respective intersections, 63 one can already sense the complexity that must be mastered in order to bring human and machine together as an intelligent, self-organised organism.

For an autonomously acting HCO, it is essential to be able to decide and act together. Complexity science, which has developed various models for decision-making processes, has itself grown since the 1940s out of several scientific strands: the mathematics of complexity, systems theory, the theory of complex systems, cybernetics and artificial intelligence. 115 These strands are already interwoven among themselves through numerous ramifications. The theoretical foundations from the logic of joint decision-making are the building blocks on which a sustainable joint decision can grow.

Philosophy and sociology bridge the gap between orientational and disposable knowledge in order to connect the various building blocks of a collective intelligence. At the same time, the integration of these disciplines expands the instrumental rationality 112 , 116 , 117 of the natural sciences alone, which is perceived as truncated. Although an ends-means relationship 118 remains assignable within a collective intelligence as a predetermined pattern of action at the micro level, the associated goal achievement at the macro level fails to materialise because of the manifold links within the HCO. The background to this is, on the one hand, the complexity of an HCO and, on the other, the rules and processes drawn from decision theory, psychology and sociology.

Anthropology itself poses the question of the nature of the human being, but as an overarching discipline it draws its results from the interlinked findings of the other sciences of collective intelligence already mentioned. The existing knowledge and experience of these various disciplines can provide a viable structure, a kind of “fabric” 119 of the common decision-making culture.

Of the scientific disciplines of collective intelligence, those relevant to anthropology are the ones closely related to humans: biology, philosophy, psychology and sociology, and thus four out of six sciences (see the grey paths in Figure 2). 63 None of these disciplines alone is able to define the human being or collective intelligence, although the material object of each is the human being. Anthropology, by contrast, draws on all the sciences that take the human being as their material object in order to pursue knowledge about humans on a broad scientific basis through a variety of formal objects, i.e. from multiple perspectives. Anthropology therefore includes the general perspective of philosophy, which asks what the human being is; the social cognitive interest of social science; the attempt of the natural sciences to understand humans through the structures of their bodies; and the desire of psychology to understand humans through their actions and thought processes. While the term anthropology refers to the sciences relating to humans in general, the sub-discipline of social anthropology focuses on the study of humans specifically as social beings and is therefore of particular interest for the HCO. Social anthropology 4.0 now combines all these scientific findings of anthropology with the more focused social anthropology of the human actor.

Figure 2: 
Roadmap of scientific disciplines in favor of social anthropology 4.0.

Figure 3: 
Development phases from user-centered to process-oriented human-computer connection.

However, in order to analyse an HCO in which the complementary strengths and weaknesses of natural and artificial actors come into play, knowledge of the artificial agent is also essential. This is provided by the two remaining scientific disciplines of collective intelligence: computer science and mathematics (see the black paths in Figure 2). This parallels the claim of cybernetics to create a system of self-organisation which, as a hybrid, balances the complementary strengths and weaknesses of human and machine in a joint process.

Here, too, a purely anthropocentric mode is not desired, but rather the embedding of equally entitled actors in a common hybrid system that interacts as a whole and is thus jointly subject to the principles of its environment. Based on digital anthropology, 120 a research discipline that emerged from social anthropology and analyses human-machine systems in digital space using a cybernetic approach, digital anthropology can be expanded to social anthropology 4.0 in the context of HCO. Anthropology 2.0 was already conceived at the beginning of the 21st century as a further development of the human body in the context of technological developments and as an upgrade of the human being through technical innovations that characterise the human environment. 121 , 122 Research into HCI would therefore represent a type of anthropology 1.0. Transhumanism, on the other hand, could be defined as a dualistic approach to anthropology 2.0 with its connection to the human body. The approach of anthropology 2.0 represents an optimisation of humans through artificial intelligence, whereas social anthropology 4.0 represents research into the joint action of human and artificial actors as a unit in the form of the HCO. Behind the term extension 4.0 lies the concept of web 3.0 with its focus on the aspect of decentralisation on the one hand and the processual logic of industry 4.0 on the other. Thus, the foundations of industry 4.0 are: networking of actors as well as intelligence in the form of communication and the resulting autonomous self-control. 123 An HCO thus combines all these concepts as a decentrally networked intelligent system that is self-organised and therefore autonomous. 
As a social anthropology 4.0, in which multi-optionality and transdisciplinarity characterise the environment and framework conditions of joint hybrid decision-making, the principles of swarm intelligence can combine the tactical rationality of artificial intelligence with the strategic manoeuvres of human groups: intuition, creativity and the recognition of causality. Together they can form self-organised, emergent systems that also act autonomously. For the research field of HCI, the turn towards the HCO, and thus towards social anthropology 4.0, means an expansion of disciplines towards transdisciplinarity and its multiple perspectives. In addition to the various scientific disciplines (connections in Figure 2), it is the factors (boxes in Figure 2) that influence or enable joint decision-making as the basis for an HCO. The flow from top right to bottom left, via cooperation, communication and coordination as well as the paradigm of self-organisation, also points to the essential factors that must be present for a self-organised HCO.

With this realisation, I turn my thoughts away from the observed flock of birds for good and am grateful for my own flock-like connection, which my artificial colleague and I maintain as HCO.

6 On the shoulders of the HCI giants – process-driven human-computer connection

Based on the premise that the future is not what will definitely come, but what we believe will come, this article was written about the future of the human-machine connection. This view of how such a future can be shaped rests on numerous transdisciplinary findings from scientific research and practice.

Based on numerous findings from HCI, in which the machine served as a tool, it was possible to develop an HCT that assigns cognitive tasks to humans and/or machines and thus moves from a user-tool connection to a human-machine hierarchy. From this hierarchy, the connection between human and machine can be developed further into a self-organized cooperation, and thus from a user-centered to a process-oriented approach.

For research into the human-machine cosmos, this means a change of paradigm: from the symbolic user as a shepherd over a herd of machines to a beekeeper of a self-organized human-machine swarm. The aim is to create a symbiosis of human and machine that can act self-organized, and thus autonomously, in the form of an HCO. The natural model for such a symbiosis is the swarm, which acts as a whole by means of its collective intelligence and its self-organized effects and processes. That the concept works is shown in practice by social insects, which have been making decisions and living together for millions of years. Mathematical models can likewise show that colonies of social insects reach statistically optimal decisions as a unit, 124 which are then implemented together (see Figure 1).
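The statistical advantage of collective over individual decisions can be illustrated with a minimal Condorcet-style simulation (a generic sketch, not the specific colony models cited above): if each agent independently picks the better of two options with probability just above chance, a simple majority of many such agents is right far more often than any single one.

```python
import random

def individual_vote(p_correct: float) -> int:
    """One agent votes for the better option with probability p_correct."""
    return 1 if random.random() < p_correct else 0

def group_decision(n_agents: int, p_correct: float) -> int:
    """Simple majority over independent votes decides for the group."""
    votes = sum(individual_vote(p_correct) for _ in range(n_agents))
    return 1 if votes > n_agents / 2 else 0

def accuracy(n_agents: int, p_correct: float, trials: int = 10_000) -> float:
    """Fraction of trials in which the group picks the better option."""
    random.seed(42)  # reproducible sketch
    return sum(group_decision(n_agents, p_correct) for _ in range(trials)) / trials

# A single agent is right about 60 % of the time;
# a swarm of 51 such agents is right far more often.
print(accuracy(1, 0.6))
print(accuracy(51, 0.6))
```

The swarm's edge grows with group size as long as the individual errors remain independent, which is exactly the condition that information cascades in real groups can undermine.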

Similar to synchronization in the Kuramoto model, the individual actors cooperate dynamically to form a coherent whole. Given successful communication and cooperation, a coordinating center is just as unnecessary as interventions from outside the system. As in the primate brain, feedback processes are responsible for creating a coherent state. 125 , 126
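A minimal numerical sketch of the Kuramoto model (with illustratively chosen parameters) makes this concrete: each oscillator is pulled toward the collective mean phase, and above a critical coupling strength the population locks into coherence without any central coordinator.

```python
import cmath
import math
import random

def kuramoto(n=100, coupling=2.0, dt=0.01, steps=2000, seed=0):
    """Integrate the Kuramoto model with Euler steps and return the final
    order parameter r (0 = incoherent, 1 = fully synchronised)."""
    rng = random.Random(seed)
    theta = [rng.uniform(0, 2 * math.pi) for _ in range(n)]  # phases
    omega = [rng.gauss(0.0, 0.5) for _ in range(n)]          # natural frequencies
    for _ in range(steps):
        # mean field: r * e^{i psi} = (1/n) * sum_j e^{i theta_j}
        mean_field = sum(cmath.exp(1j * t) for t in theta) / n
        r, psi = abs(mean_field), cmath.phase(mean_field)
        # each oscillator is attracted to the collective phase psi
        theta = [t + dt * (w + coupling * r * math.sin(psi - t))
                 for t, w in zip(theta, omega)]
    return abs(sum(cmath.exp(1j * t) for t in theta) / n)

print(kuramoto(coupling=0.1))  # weak coupling: r stays low (incoherent)
print(kuramoto(coupling=2.0))  # strong coupling: r approaches 1 (synchronised)
```

The coherent state emerges purely from the feedback between individual behaviour and the mean field, the same self-organising loop the text describes.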

In order to ensure that humans and machines cooperate optimally with each other as a swarm and balance their opposing strengths and weaknesses, numerous transdisciplinary findings are required, which must be discovered, collected, brought together and made available in the research field of HCI (Figure 3). The focus must be on the processes between the individual actors. To this end, humans as social beings in a social context with machines must be researched further in order to utilize the emerging processes and structures for the benefit of HCOs. The underlying theory of social anthropology 4.0 can thus be built up and supplemented piece by piece, providing a viable framework of knowledge for the further development of HCOs. For the scientific community in the research field of the human-machine connection, this means acting both as producer of the theory of social anthropology 4.0 and as keeper of HCOs: of the functions and processes between humans and machines at the micro level, and between HCOs and their environment at the macro level. It is essential to analyze and understand the individual actor of an HCO as well as the HCO itself as an actor (see Figure 4).

Figure 4: 
HCO as a whole system at macro level as well as the individual players at micro level.

For academics, dealing with an HCO involves both empirical and theoretical research. The interest in knowledge thus revolves around the perception and behavior of the individual actors as well as their development over time. At the same time, the social behavior of an HCO as a whole needs to be researched. For both research interests, internal and external conditions and factors, the sequences and consequences of processes, and their changes must be recorded and investigated. To this end, the processes of HCOs can be captured as hybrid human-machine systems using the means and methods of systems engineering. Both actors, natural and artificial, must be consistently integrated with all their strengths and weaknesses. In this way, models can be created that truly reflect the necessary aspects of communication and cooperation, any necessary coordination, and the processes of self-organization (see the factors along the flow in Figure 2). By means of such system designs, dependencies, relationships, possibilities of influence and connections can be explored. In favor of self-organization, the focus should not be on the actors but on the effects between them. At the same time, a theory should be developed on the basis of these hybrid systems, which can be continuously updated as social anthropology 4.0. This social anthropology 4.0 should likewise focus on the relationships between the actors, the processes taking place, the characteristics of the system, and the influences from outside the system.

Numerous problem areas outlined in this article can help us to understand human and machine as a system and to develop innovative approaches that drive the system forward. Examples include the inclusion of afferent signals or efferent reactions in feedback loops, the creation of transparency about the uncertainty of subjective factors such as phenomenal values, or the creation of intransparency in the case of information that is not relevant for decisions but nevertheless influences them, such as age or gender. Similarly, indications of cooperation blockages can be visualized, as can errors and resource consumption in calculation or thinking. At the same time, unfavourable dynamics both inside and outside the system can be made visible, and thus neutralized, by means of visualizations such as sociometric representations.
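In the spirit of Moreno's sociometry, such a representation can start from a simple choice matrix. The following sketch uses hypothetical actors and data purely for illustration: it computes each actor's sociometric status (how often the others choose it as a cooperation partner) and flags never-chosen actors as potential cooperation blockages in the hybrid swarm.

```python
# Hypothetical sociometric sketch: rows choose columns
# (1 = "prefers to cooperate with"). Actors may be human
# or machine members of an HCO.
actors = ["human_A", "human_B", "machine_X", "machine_Y"]
choices = [
    [0, 1, 1, 0],  # human_A chooses human_B and machine_X
    [1, 0, 1, 0],  # human_B chooses human_A and machine_X
    [1, 1, 0, 0],  # machine_X chooses both humans
    [0, 0, 1, 0],  # machine_Y only chooses machine_X
]

# Sociometric status: how often each actor is chosen by the others.
status = {a: sum(row[i] for row in choices) for i, a in enumerate(actors)}

# Actors that are never chosen are potential cooperation blockages.
isolated = [a for a, s in status.items() if s == 0]

print(status)    # machine_X is the most-chosen actor
print(isolated)  # machine_Y receives no choices
```

Plotted as a directed graph, the same matrix yields the classical sociogram, in which such isolates become visible at a glance.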

As a result, many of the tasks and questions raised here will probably have to wait a while for adequate answers, such as the nature of the communication processes between the hybrid actors. What remains important is that an existing hybrid system such as an HCO should not be interfered with from the outside. Only by creating offers or surrogates can changes be made possible, though not guaranteed. This in turn requires a science that creates an understanding of the system, its structures and interrelationships as well as its environment, in order to develop optimal offers and assess their potential effects.


Corresponding author: Mandy Balthasar, University of the Bundeswehr Munich Faculty of Computer Science, Neubiberg, Germany, E-mail: 

About the author

Mandy Balthasar

Mandy Balthasar researches and teaches at the University of the Bundeswehr Munich at the Institute for Software Technology and in the Usable Security and Privacy Group of the Research Institute Cyber Defense (CODE). Her research interests focus on the complex processes of joint decision-making in technical and socio-technical systems, especially in critical environments.

Acknowledgments

Very special thanks to the we4bee project 127 for the opportunities as part of the digital hives network.

  1. Research ethics: Not applicable.

  2. Informed consent: Not applicable.

  3. Author contributions: The author has accepted responsibility for the entire content of this manuscript and approved its submission.

  4. Competing interests: The author states no conflict of interest.

  5. Research funding: None declared.

  6. Data availability: Not applicable.

References

1. Malone, T. W.; Woolley, A. W. Collective Intelligence. In Cambridge Handbooks in Psychology. The Cambridge Handbook of Intelligence; Sternberg, R. J., Ed., 2nd ed.; Cambridge University Press: Cambridge, 2020; pp. 780–801. https://doi.org/10.1017/9781108770422.033.

2. Kapetanios, E. Quo Vadis Computer Science: From Turing to Personal Computer, Personal Content and Collective Intelligence. Data Knowl. Eng. 2008, 67 (2), 286–292. https://doi.org/10.1016/j.datak.2008.05.003.

3. Lykourentzou, I.; Vergados, D. J.; Loumos, V. Collective Intelligence System Engineering. In MEDES '09: Proceedings of the International Conference on Management of Emergent Digital EcoSystems; Association for Computing Machinery: New York, NY, USA, 2009; Article 20, pp. 134–140. https://doi.org/10.1145/1643823.1643848.

4. Card, S. K.; Moran, T. P.; Newell, A. The Psychology of Human-Computer Interaction; CRC Press: Boca Raton, 1983.

5. Licklider, J. Man-Computer Symbiosis. IRE Transactions on Human Factors in Electronics, HFE-1, 1960; pp. 4–11. https://doi.org/10.1109/THFE2.1960.4503259.

6. Minsky, M. L. The Society of Mind; Simon & Schuster: New York, NY, USA, 1988.

7. Bonabeau, E.; Dorigo, M.; Theraulaz, G. Swarm Intelligence: From Natural to Artificial Systems. Santa Fe Institute Studies in the Sciences of Complexity; Oxford University Press: New York, 1999. https://doi.org/10.1093/oso/9780195131581.001.0001.

8. Simon, H. A. Rational Choice and the Structure of the Environment. Psychol. Rev. 1956, 63 (2), 129–138. https://doi.org/10.1037/h0042769.

9. Bazerman, M. H.; Moore, D. A. Judgment in Managerial Decision Making, 8th ed.; Wiley: Hoboken, New Jersey, USA, 2013.

10. Kahneman, D.; Sibony, O.; Sunstein, C. R. Noise: A Flaw in Human Judgment; Little, Brown & Co: Boston, 2021.

11. Kahneman, D.; Tversky, A. Choices, Values, and Frames. Am. Psychol. 1984, 39 (4), 341–350. https://doi.org/10.1037/0003-066X.39.4.341.

12. Galton, F. Vox Populi. Nature 1907, 75 (1949), 450–451. https://doi.org/10.1038/075450a0.

13. Valentini, G.; Hamann, H.; Dorigo, M. Efficient Decision-Making in a Self-Organizing Robot Swarm: On the Speed versus Accuracy Trade-Off. In Proceedings of the 14th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2015; International Foundation for Autonomous Agents and Multiagent Systems: Richland, SC, 2015; pp. 1305–1314.

14. Davis, J. H.; Hulbert, L.; Au, W. T.; Chen, X.; Zarnoth, P. Effects of Group Size and Procedural Influence on Consensual Judgments of Quantity: The Examples of Damage Awards and Mock Civil Juries. J. Pers. Soc. Psychol. 1997, 73 (4), 703–718. https://doi.org/10.1037/0022-3514.73.4.703.

15. Kerr, N. L.; Tindale, R. S. Group Performance and Decision Making. Ann. Rev. Psychol. 2004, 55, 623–655. https://doi.org/10.1146/annurev.psych.55.090902.142009.

16. Rosenberg, L.; Willcox, G.; Schumann, H.; Mani, G. Conversational Swarm Intelligence Amplifies the Accuracy of Networked Groupwise Deliberations. In IEEE 14th Annual Computing and Communication Workshop and Conference (IEEE CCWC 2024), Las Vegas, USA, 2024. https://doi.org/10.48550/arXiv.2401.04112.

17. van Schaik, C.; Michel, K. Mensch sein: Von der Evolution für die Zukunft Lernen. [Being Human: Learning from Evolution for the Future]; Rowohlt: Hamburg, 2023.

18. Tump, A. N.; Pleskac, T. J.; Kurvers, R. H. J. M. Wise or Mad Crowds? The Cognitive Mechanisms Underlying Information Cascades. Sci. Adv. 2020, 6 (29), eabb0266. https://doi.org/10.1126/sciadv.abb0266.

19. Giddings, F. H. Pluralistic Behavior: A Brief of Sociological Theory Restated. Am. J. Sociol. 1920, 25 (5), 539–561. https://doi.org/10.1086/213051.

20. Dorigo, M.; Maniezzo, V.; Colorni, A. Ant System: Optimization by a Colony of Cooperating Agents. IEEE Trans. Syst. Man Cybern. Part B Cybern. 1996, 26 (1), 29–41. https://doi.org/10.1109/3477.484436.

21. Bonabeau, E.; Dorigo, M.; Theraulaz, G. Inspiration for Optimization from Social Insect Behaviour. Nature 2000, 406 (6791), 39–42. https://doi.org/10.1038/35017500.

22. Dorigo, M.; Di Caro, G.; Gambardella, L. M. Ant Algorithms for Discrete Optimization. Artif. Life 1999, 5 (2), 137–172. https://doi.org/10.1162/106454699568728.

23. Dorigo, M.; Birattari, M.; Stutzle, T. Ant Colony Optimization. IEEE Comput. Intell. Mag. 2006, 1 (4), 28–39. https://doi.org/10.1109/MCI.2006.329691.

24. Floreano, D.; Mattiussi, C. Bio-inspired Artificial Intelligence: Theories, Methods, and Technologies. Intelligent Robotics and Autonomous Agents; MIT Press: Cambridge, MA, 2008.

25. Kennedy, J.; Eberhart, R. Particle Swarm Optimization. In Proceedings of IEEE International Conference on Neural Networks (ICNN'95), Perth, WA, Australia, 1995; Vol. 4, pp. 1942–1948. https://doi.org/10.1109/ICNN.1995.488968.

26. Ullmann-Margalit, E. Big Decisions: Opting, Converting, Drifting. In Normal Rationality: Decisions and Social Order; Ullmann-Margalit, E.; Margalit, A.; Sunstein, C. R., Eds.; Oxford University Press: Oxford, UK, Vol. 1, 2017; pp. 157–172. https://doi.org/10.1017/CBO9780511599736.009.

27. Moravec, H. The Future of Robot and Human Intelligence; Harvard Univ. Press: Cambridge, Massachusetts, USA, 1995.

28. Zador, A. M. A Critique of Pure Learning and What Artificial Neural Networks Can Learn from Animal Brains. Nat. Commun. 2019, 10 (1), 3770. https://doi.org/10.1038/s41467-019-11786-6.

29. Rawls, J. A Theory of Justice; Belknap Press of Harvard Univ. Press: Cambridge, Massachusetts, USA, 1999.

30. Fink, S. B. Die Schwierigkeit, für sich selbst zu entscheiden: Transformativität und Unvorhersehbarkeit. [The Difficulty of Deciding for Oneself: Transformativity and Unpredictability]. Was Bedeutet das Alles? Nr. 19654; Reclam: Ditzingen, 2020.

31. Schopenhauer, A. Die Welt als Wille und Vorstellung. [The World as Will and Representation]; Zweiter Teilband. Zürcher Ausgabe. Diogenes: Zürich, 2017.

32. Song, Y.; Millidge, B.; Salvatori, T.; Lukasiewicz, T.; Xu, Z.; Bogacz, R. Inferring Neural Activity before Plasticity as a Foundation for Learning beyond Backpropagation. Nat. Neurosci. 2024, 27, 348–358. https://doi.org/10.1038/s41593-023-01514-1.

33. Strang, S.; Hoeber, C.; Uhl, O.; Koletzko, B.; Münte, T. F.; Lehnert, H.; Dolan, R. J.; Schmid, S. M.; Park, S. Q. Impact of Nutrition on Social Decision Making. Proc. Natl. Acad. Sci. U. S. A. 2017, 114 (25), 6510–6514. https://doi.org/10.1073/pnas.1620245114.

34. May, R. M. More Evolution of Cooperation. Nature 1987, 327 (6117), 15–17. https://doi.org/10.1038/327015a0.

35. Axelrod, R.; Hamilton, W. D. The Evolution of Cooperation. Science 1981, 211 (4489), 1390–1396. https://doi.org/10.1126/science.7466396.

36. Ramge, T. Augmented Intelligence: Wie wir mit Daten und KI Besser Entscheiden. [Augmented Intelligence: How we Make Better Decisions with Data and AI]. Was Bedeutet das Alles? Nr. 19689; Reclam: Ditzingen, 2020.

37. Cardon, D. Vier Typen Digitaler Informationsberechnung. [Deconstructing the Algorithm: Four Types of Digital Information Computation]. In Kulturen der Gesellschaft, Vol. 26: Über die rechnerische Konstruktion der Wirklichkeit; Seyfert, R.; Roberge, J., Eds.; transcript Verlag: Bielefeld, 2017; pp. 131–150. https://doi.org/10.1515/9783839438008-005.

38. Müller-Mall, S. Freiheit und Kalkül: Die Politik der Algorithmen. [Freedom and calculation: the politics of algorithms]. Reclams Universal-Bibliothek Was bedeutet das alles? Nr. 14043; Reclam: Ditzingen, 2020.

39. Sorkin, A. R. Too Big to Fail: The Inside Story of How Wall Street and Washington Fought to Save the Financial System - and Themselves (Updated and with a new afterword); Penguin Books: London, 2010.

40. Epstein, D. J. Range: Why Generalists Triumph in a Specialized World; Riverhead Books: New York, NY, USA, 2019.

41. Onken, R.; Schulte, A. System-Ergonomic Design of Cognitive Automation. Dual-Mode Cognitive Design of Vehicle Guidance and Control Work Systems. In Studies in Computational Intelligence SCI; Springer: Berlin, Vol. 235, 2010. https://doi.org/10.1007/978-3-642-03135-9.

42. Ockenfels, A.; Raub, W. Rational und Fair. [Rational and Fair]. Kölner Z. Soziol. Sozialpsychol. 2010, 50, 119–136.

43. Milinski, M. Gossip and Reputation in Social Dilemmas. In The Oxford Handbook of Gossip and Reputation; Giardini, F.; Wittek, R., Eds.; Oxford University Press: Oxford, UK, 2019; pp. 192–213. https://doi.org/10.1093/oxfordhb/9780190494087.013.11.

44. EU Regulation (EU) 2024/… of the European Parliament and of the Council of Laying Down Harmonised Rules on Artificial Intelligence and Amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act), 2024. https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138-FNL-COR01_EN.pdf.

45. Bengio, Y.; Hinton, G.; Yao, A.; Song, D.; Abbeel, P.; Harari, Y. N.; Zhang, Y.-Q.; Xue, L.; Shalev-Shwartz, S.; Hadfield, G.; Clune, J.; Maharaj, T.; Hutter, F.; Baydin, A. G.; McIlraith, S.; Gao, Q.; Acharya, A.; Krueger, D.; Dragan, A.; Mindermann, S. Managing Extreme AI Risks amid Rapid Progress. Science 2024, 384, 842–845. https://doi.org/10.1126/science.adn0117.

46. Simon, J. “KI ist ein sehr konservatives Instrument”: Interview mit Judith Simon. [“AI is a very conservative instrument”: Interview with Judith Simon]. Int. Politik 2023 (06), 25–29.

47. Wellmer, A. Konsens als Telos der sprachlichen Kommunikation? [Consensus as the telos of linguistic communication?]. In Suhrkamp-Taschenbuch Wissenschaft: Vol. 1019. Kommunikation und Konsens in modernen Gesellschaften: Beiträge einer Tagung zum Thema “Kommunikation und Konsens” am 20. und 21. April 1990 in Marburg; Giegel, H.-J., Ed.; Suhrkamp: Berlin, 1992; pp. 18–30.

48. Smith, A. The Wealth of Nations; Wiley & Sons: Hoboken, NJ, USA, 2021.

49. Moreno, J. L. Die Grundlagen der Soziometrie: Wege zur Neuordnung der Gesellschaft. [The Basics of Sociometry: Ways to Reorganize Society]; VS Verlag für Sozialwissenschaften: Berlin/Heidelberg, 1996. https://doi.org/10.1007/978-3-663-09720-4.

50. Balthasar, M. Balancing Strengths and Weaknesses in Human-Machine Decision Making. In Mensch und Computer 2023: Workshopband. GI. MCI-WS16 - UCAI 2023: Workshop on User-Centered Artificial Intelligence. 03.-06. September 2023 Rapperswil (SG); Fröhlich, P.; Cobus, V., Eds.; Gesellschaft für Informatik e.V, 2023.

51. Evans, D. Risk Intelligence. In Springer Reference. Handbook of Risk Theory: Epistemology, Decision Theory, Ethics, and Social Implications of Risk; Roeser, S., Ed.; Springer: Berlin, 2012; pp. 603–620. https://doi.org/10.1007/978-94-007-1433-5_23.

52. Hügle, T. Learning from Chess Engines: How Reinforcement Learning Could Redefine Clinical Decision-Making in Rheumatology. Ann. Rheum. Dis. 2022, 81 (8), 1072–1075. https://doi.org/10.1136/annrheumdis-2022-222141.

53. Amershi, S.; Cakmak, M.; Knox, W. B.; Kulesza, T. Power to the People: The Role of Humans in Interactive Machine Learning. AI Mag. 2014, 35 (4), 105–120. https://doi.org/10.1609/aimag.v35i4.2513.

54. Teso, S.; Hinz, O. Challenges in Interactive Machine Learning. KI Künstliche Intell. 2020, 34 (2), 127–130. https://doi.org/10.1007/s13218-020-00662-x.

55. OpenAI Inc. Generative Pre-trained Transformer 3 (GPT-3); 2020. https://github.com/openai/gpt-3.

56. Binz, M.; Schulz, E. Using Cognitive Psychology to Understand GPT-3. Proc. Natl. Acad. Sci. U. S. A. 2023, 120 (6), e2218523120. https://doi.org/10.1073/pnas.2218523120.

57. Moon, A. Negotiating with Robots: Meshing Plans and Resolving Conflicts in Human-Robot Collaboration; University of British Columbia, 2017. Retrieved July 17, 2024, from https://open.library.ubc.ca/collections/ubctheses/24/items/1.0348225.

58. Krügel, S.; Ostermaier, A.; Uhl, M. ChatGPT’s Inconsistent Moral Advice Influences Users’ Judgment. Sci. Rep. 2023, 13 (1), 4569. https://doi.org/10.1038/s41598-023-31341-0.

59. Bussone, A.; Stumpf, S.; O’Sullivan, D. The Role of Explanations on Trust and Reliance in Clinical Decision Support Systems. In 2015 International Conference on Healthcare Informatics, Dallas, TX, USA, 2015; pp. 160–169. https://doi.org/10.1109/ICHI.2015.26.

60. Hölldobler, B.; Wilson, E. O. The Superorganism: The Beauty, Elegance, and Strangeness of Insect Societies; Norton: New York City, 2008.

61. Kelly, K. Out of Control: The Rise of Neo-Biological Civilization; Addison-Wesley: Boston, 1994.

62. Bakker, K. The Sounds of Life: How Digital Technology Is Bringing Us Closer to the Worlds of Animals and Plants; Princeton University Press: Princeton, New Jersey, 2022. https://doi.org/10.1515/9780691240985.

63. Balthasar, M. Aspects of Decision-Making in Human-Machine Teaming. In Advances in Social Simulation. Proceedings of the 18th Social Simulation Conference, Glasgow, UK, Springer Proceedings in Complexity; Elsenbroich, C.; Verhagen, H., Eds.; Springer: Cham, 2024. https://doi.org/10.1007/978-3-031-57785-7_43.

64. Wolpert, D. H.; Tumer, K. An Introduction to Collective Intelligence. In Computing Research Repository (CoRR); 1999. https://doi.org/10.48550/arXiv.cs/9908014.

65. Ferron, M.; Massa, P.; Odella, F. Analyzing Collaborative Networks Emerging in Enterprise 2.0: The Taolin Platform. Procedia Soc. Behav. Sci. 2011, 10, 68–78. https://doi.org/10.1016/j.sbspro.2011.01.010.

66. Kauffman, S.; Clayton, P. On Emergence, Agency, and Organization. Biol. Philos. 2006, 21 (4), 501–521. https://doi.org/10.1007/s10539-005-9003-9.

67. Hayek, F. A. Scientism and the Study of Society. Economica 1942, 9 (35), 267–291. https://doi.org/10.2307/2549540.

68. Troitzsch, K. G. Individuelle Einstellungen und kollektives Verhalten. [Individual attitudes and collective behavior]. In Universal-Bibliothek: Vol. 9434. Chaos und Ordnung: Formen der Selbstorganisation in Natur und Gesellschaft [Nachdr.]; Küppers, G., Ed.; Reclam: Ditzingen, 1997; pp. 200–228.

69. Heuser-Keßler, M. L., Ed. Die Produktivität der Natur: Schellings Naturphilosophie und das neue Paradigma der Selbstorganisation in den Naturwissenschaften. [The Productivity of Nature: Schelling’s Philosophy of Nature and the New Paradigm of Self-Organization in the Natural Sciences]. Erfahrung und Denken; Duncker & Humblot: Berlin, Vol. 69, 1986. https://doi.org/10.3790/978-3-428-46079-3.

70. Hörz, H. Selbstorganisation Sozialer Systeme: Ein Verhaltensmodell Zum Freiheitsgewinn. [Self-Organization of Social Systems: A Behavioural Model for Gaining Freedom]. In Selbstorganisation Sozialer Prozesse; Lit: Münster, Vol. 1, 1993.

71. Haken, H. Erfolgsgeheimnisse der Natur: Synergetik: die Lehre vom Zusammenwirken. [Nature’s Secrets of Success: Synergetics: The Science of Interaction]; Deutsche Verlags-Anstalt: München, 1981.

72. Haken, H. Synergetik: Eine Einführung. Nichtgleichgewichts-Phasenübergänge und Selbstorganisation in Physik, Chemie und Biologie. [Synergetics: An Introduction. Non-equilibrium Phase Transitions and Self-Organization in Physics, Chemistry and Biology]; Springer: Berlin, Heidelberg, 1990. https://doi.org/10.1007/978-3-662-10186-5.

73. Ebeling, W.; Feistel, R. Selbstorganisation in Natur und Gesellschaft und Strategien zur Gestaltung der Zukunft. [Self-organization in nature and society and strategies for shaping the future]. Beitrag zur Konferenz “Die Welt des Menschen: Unbestimmtheit als Herausforderung: Zum 90. Geburtstag von Hermann Haken und dem 100. Geburtstag von Ilya Prigogine”, Moskau 21. Nov. 2017. Leibniz Online, Nr. 28.

74. Bender, C. Selbstorganisation in Systemtheorie und Konstruktivismus. [Self-organization in systems theory and constructivism]. In Konstruktivismus und Sozialtheorie. Suhrkamp-Taschenbuch Wissenschaft, Vol. 1099; Rusch, G.; Schmidt, S. J., Eds.; Suhrkamp, 1994; pp. 263–281.

75. Bolbrügge, G. Selbstorganisation und Steuerbarkeit sozialer Systeme. [Self-organization and controllability of social systems]. Zugl.: Paderborn, Univ., Diss., 1997 u.d.T.: Bolbrügge, Gisela: Selbstorganisation in systemtheoretischen Konzepten. Dt. Studien-Verlag: Weinheim, 1997.

76. Dahme, C. Selbstorganisation und Tätigkeitstheorie. [Self-organization and activity theory]. In Selbstorganisation Vol. 1.: Selbstorganisation und Determination; Niedersen, U.; Pohlmann, L., Eds.; Duncker & Humblot, Verlag: Berlin, 1990.

77. Krohn, W.; Küppers, G. Die Selbstorganisation der Wissenschaft. [The self-organization of science]. In Suhrkamp-Taschenbuch Wissenschaft; Suhrkamp: Berlin, Vol. 776, 1987.

78. Krohn, W.; Küppers, G.; Paslack, R. Selbstorganisation: Zur Genese und Entwicklung einer wissenschaftlichen Revolution. [Self-organization: The genesis and development of a scientific revolution]. In Der Diskurs des Radikalen Konstruktivismus; Schmidt, S. J., Ed.; Suhrkamp: Berlin, 1992; pp. 441–465.

79. Kather, R. Die Wiederentdeckung der Natur: Naturphilosophie im Zeichen der Ökologischen Krise. [The Rediscovery of Nature: Natural Philosophy in the Face of the Ecological Crisis]; WBG Wiss. Buchges: Darmstadt, 2012.

80. Dress, A. W. M.; Dress, A. W., Eds. Selbstorganisation: Die Entstehung von Ordnung in Natur und Gesellschaft. [Self-organization: The emergence of order in nature and society]; Piper: München, 1986.

81. Nicolis, G.; Prigogine, I. Self-organization in Nonequilibrium Systems: From Dissipative Structures to Order through Fluctuations; John Wiley & Sons, Ltd.: Hoboken, New Jersey, 1977.

82. Mainzer, K. Thinking in Complexity: The Computational Dynamics of Matter, Mind, and Mankind; Springer: Berlin Heidelberg, 2007.

83. Wen, J.; He, L.; Zhu, F. Swarm Robotics Control and Communications: Imminent Challenges for Next Generation Smart Logistics. IEEE Commun. Mag. 2018, 56 (7), 102–107. https://doi.org/10.1109/MCOM.2018.1700544.

84. Whittlesey, R. W.; Liska, S.; Dabiri, J. O. Fish Schooling as a Basis for Vertical Axis Wind Turbine Farm Design. Bioinspiration Biomimetics 2010, 5 (3), 35005. https://doi.org/10.1088/1748-3182/5/3/035005.

85. Truszkowski, W.; Hinchey, M.; Rash, J.; Rouff, C. NASA’s Swarm Missions: The Challenge of Building Autonomous Software. IT Prof. 2004, 6 (5), 47–52. https://doi.org/10.1109/MITP.2004.66.

86. Innocente, M. S.; Grasso, P. Self-organising Swarms of Firefighting Drones: Harnessing the Power of Collective Intelligence in Decentralised Multi-Robot Systems. J. Comput. Sci. 2019, 34, 80–101. https://doi.org/10.1016/j.jocs.2019.04.009.

87. Al-Hudhud, G. On Swarming Medical Nanorobots. Int. J. Bio-Sci. Bio-Technol. 2012, 4 (1), 75–89.

88. Soto, F.; Wang, J.; Ahmed, R.; Demirci, U. Medical Micro: Nanorobots in Precision Medicine. Adv. Sci. 2020, 7 (21), 2002203. https://doi.org/10.1002/advs.202002203.

89. Krammer, A. Die Bedeutung von Instabilitäten für die Entstehung neuer Strukturen. [The importance of instabilities for the formation of new structures]. In Grundprinzipien der Selbstorganisation; Kratky, K. W.; Wallner, F., Eds.; Wissenschaftliche Buchgesellschaft: Darmstadt, 1990; pp. 59–76.

90. Wolf, G. Gestalten von Komplexität durch Netzwerk-Management. [Shaping complexity through network management]. In Grundprinzipien der Selbstorganisation; Kratky, K. W.; Wallner, F., Eds.; Wissenschaftliche Buchgesellschaft: Darmstadt, 1990; pp. 103–126.

91. Paslack, R. Urgeschichte der Selbstorganisation: Zur Archäologie eines wissenschaftlichen Paradigmas. [Prehistory of self-organization: The archaeology of a scientific paradigm]. In Wissenschaftstheorie, Wissenschaft und Philosophie; Vieweg: Wiesbaden, Vol. 32, 1991. https://doi.org/10.1007/978-3-322-88776-4.

92. Malik, F. Selbstorganisation im Management. [Self-organization in management]. In Grundprinzipien der Selbstorganisation; Kratky, K. W.; Wallner, F., Eds.; Wissenschaftliche Buchgesellschaft: Darmstadt, 1990; pp. 96–102.

93. Mainzer, K. Zeit: Von der Urzeit zur Computerzeit. [Time: From Prehistoric Times to the Computer Age]. In C. H. Beck Wissen in der Beck’schen Reihe; Beck: München, Vol. 2011, 1995.

94. Leibniz, G. W. Monadologie [Monadology]: Die erste deutsche Übersetzung von Heinrich Köhler von 1720 (Berliner Ausgabe); Holzinger: Berlin, 2017.

95. Kohonen, T. Self-organization and Associative Memory. In Springer Series in Information Sciences, 3rd ed.; Springer: Berlin, Heidelberg, Vol. 8, 1989. https://doi.org/10.1007/978-3-642-88163-3.

96. Kohonen, T. Self-organizing Maps. In Springer Series in Information Sciences; Springer: Berlin, Heidelberg, Vol. 30, 1995.10.1007/978-3-642-97610-0Search in Google Scholar

97. Küppers, G., Ed. Chaos und Ordnung: Formen der Selbstorganisation in Natur und Gesellschaft [Chaos and order: Forms of self-organization in nature and society]. Universal-Bibliothek; Reclam: Ditzingen, 9434, 1997.Search in Google Scholar

98. Partl, Q. Förderung der Selbstorganisation sozialer Makrosysteme. [Promoting the self-organization of social macrosystems]. In Selbstorganisation Sozialer Prozesse; Lit-Verl: Münster, Vol. 4, 1997.Search in Google Scholar

99. Graham, R.; Haken, H. Laserlight: First Example of a Second-Order Phase Transition Far Away from Thermal Equilibrium. Z. Phys. 1970, 237 (1), 31–46. https://doi.org/10.1007/BF01400474.Search in Google Scholar

100. Haken, H.; Wunderlin, A. Synergetik: Prozesse der Selbstorganisation in der belebten und unbelebten Natur [Synergetics: Processes of Self-Organization in Animate and Inanimate Nature]. In Selbstorganisation: Die Entstehung von Ordnung in Natur und Gesellschaft; Dress, A. W. M.; Dress, A. W., Eds.; Piper: München, 1986; pp 35–60.

101. Tschacher, W. Interaktion in selbstorganisierten Systemen: Grundlegung eines dynamisch-synergetischen Forschungsprogramms in der Psychologie [Interaction in Self-Organized Systems: Foundations of a Dynamic-Synergetic Research Program in Psychology]. Doctoral dissertation, Universität Tübingen, 1990. Forschung Psychologie; Asanger: Kröning, 1990.

102. Ebeling, W.; Scharnhorst, A. Modellierungskonzepte der Synergetik und der Theorie der Selbstorganisation [Modeling Concepts of Synergetics and the Theory of Self-Organization]. In Handbuch Modellbildung und Simulation in den Sozialwissenschaften; Braun, N.; Saam, N. J., Eds.; Springer Fachmedien: Wiesbaden, 2015; pp 419–452. https://doi.org/10.1007/978-3-658-01164-2_15.

103. Prigogine, I.; Stengers, I. Dialog mit der Natur: Neue Wege naturwissenschaftlichen Denkens [Dialogue with Nature: New Ways of Scientific Thinking]; Piper: München, 1986.

104. Leiber, T. Vom mechanistischen Weltbild zur Selbstorganisation des Lebens: Helmholtz’ und Boltzmanns Forschungsprogramme und ihre Bedeutung für Physik, Chemie, Biologie und Philosophie [From a Mechanistic Worldview to the Self-Organization of Life: Helmholtz’s and Boltzmann’s Research Programs and Their Significance for Physics, Chemistry, Biology, and Philosophy]. Habilitation thesis, Universität Augsburg, 1998. Alber-Reihe Thesen, Vol. 6; Alber: Baden-Baden, 2000.

105. Tschacher, W. Prozeßgestalten: Die Anwendung der Selbstorganisationstheorie und der Theorie dynamischer Systeme auf Probleme der Psychologie [Process Gestalts: Applying Self-Organization Theory and Dynamical Systems Theory to Problems in Psychology]; Hogrefe Verlag für Psychologie: Göttingen, 1997.

106. Tschacher, W.; Brunner, E. J. Empirische Studien zur Dynamik von Gruppen aus der Sicht der Selbstorganisationstheorie [Empirical Studies on the Dynamics of Groups from the Perspective of Self-Organization Theory]. Z. Sozialpsychol. 1995, 26 (2), 78–91.

107. Horn, E.; Gisi, L. M., Eds. Schwärme: Kollektive ohne Zentrum [Swarms: Collectives without a Center]. Masse und Medium, Vol. 7; Transcript: Bielefeld, 2015. https://doi.org/10.14361/9783839411339-intro.

108. Reynolds, C. W. Boids: Flocks, Herds, and Schools: A Distributed Behavioral Model, 1995. https://www.red3d.com/cwr/boids/.

109. Barretto, F. d. P.; Venturelli, S. Zer0: An Emergent and Autopoietic Multi-Agent System for Novelty Creation in Game Art through Gesture Interaction. Procedia Manuf. 2015, 3, 850–857. https://doi.org/10.1016/j.promfg.2015.07.341.

110. Varela, F. G.; Maturana, H. R.; Uribe, R. Autopoiesis: The Organization of Living Systems, Its Characterization and a Model. Biosystems 1974, 5 (4), 187–196. https://doi.org/10.1016/0303-2647(74)90031-8.

111. Tautz, J. Phänomen Honigbiene [The Honeybee Phenomenon]; Elsevier, Spektrum Akademischer Verlag, 2007.

112. Horkheimer, M. Zur Kritik der instrumentellen Vernunft [On the Critique of Instrumental Reason]; Fischer: Frankfurt am Main, 2007 (Fischer-Taschenbücher, Vol. 17820).

113. Seeley, T. D. Honeybee Democracy; Princeton University Press: Princeton, NJ, 2010.

114. Foss, R. A Self Organising Network Model of Information Gathering by the Honey Bee Swarm. Kybernetes 2015, 44 (3), 353–367. https://doi.org/10.1108/K-11-2014-0264.

115. Castellani, B.; Gerrits, L. Map of the Complexity Sciences; Art and Science Factory, LLC, 2021. https://www.art-sciencefactory.com/MAP2021Sharing.pdf.

116. Habermas, J. Theorie des kommunikativen Handelns, Band 1: Handlungsrationalität und gesellschaftliche Rationalisierung [Theory of Communicative Action, Vol. 1: Rationality of Action and Societal Rationalization]; Suhrkamp: Frankfurt am Main, 2019a (Suhrkamp-Taschenbuch Wissenschaft, Vol. 1175).

117. Habermas, J. Theorie des kommunikativen Handelns, Band 2: Zur Kritik der funktionalistischen Vernunft [Theory of Communicative Action, Vol. 2: On the Critique of Functionalist Reason]; Suhrkamp: Frankfurt am Main, 2019b (Suhrkamp-Taschenbuch Wissenschaft, Vol. 1175).

118. Bayertz, K. Die instrumentelle Rationalität der Wissenschaft: Eine Metakritik [The Instrumental Rationality of Science: A Meta-Critique]. In Rationalität und Irrationalität in den Wissenschaften; Arnswald, U.; Schütt, H.-P., Eds.; VS Verlag für Sozialwissenschaften: Wiesbaden, 2011; pp 160–172. https://doi.org/10.1007/978-3-531-93347-4_8.

119. Geertz, C. The Interpretation of Cultures; Basic Books: New York, 2017.

120. Knorr, A. Cyberanthropology; Hammer: Wuppertal, 2011 (Edition Trickster).

121. Puzio, A. Über-Menschen: Philosophische Auseinandersetzung mit der Anthropologie des Transhumanismus [Superhumans: A Philosophical Examination of the Anthropology of Transhumanism]; Transcript: Bielefeld, 2022 (Edition Moderne Postmoderne). https://doi.org/10.14361/9783839463055.

122. Nowak, P. Humans 3.0: The Upgrading of the Species; HarperCollins: London, 2015.

123. Weiser, M. The Computer for the 21st Century. ACM SIGMOBILE Mob. Comput. Commun. Rev. 1999, 3 (3), 3–11. https://doi.org/10.1145/329124.329126.

124. Myerscough, M. R. Dancing for a Decision: A Matrix Model for Nest-Site Choice by Honeybees. Proc. Biol. Sci. 2003, 270 (1515), 577–582. https://doi.org/10.1098/rspb.2002.2293.

125. Buzsáki, G. Rhythms of the Brain; Oxford University Press: Oxford, UK, 2001.

126. Marshall, J. A.; Bogacz, R.; Dornhaus, A.; Planqué, R.; Kovacs, T.; Franks, N. R. On Optimal Decision-Making in Brains and Social Insect Colonies. J. R. Soc. Interface 2009, 6 (40), 1065–1074. https://doi.org/10.1098/rsif.2008.0511.

127. Tautz, J. Digital Hives Network; We4bee project, 2023. https://we4bee.org.

Received: 2024-02-11
Accepted: 2024-06-24
Published Online: 2024-07-22

© 2024 the author(s), published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.