
Missed encounters: what may be relevant for an AI is not for a human being

Filippo Silvestri
Published/Copyright: October 22, 2024

Abstract

The World Wide Web has been a fundamental part of our daily lives for years. Its algorithmic framework ensnares our online journeys in an “endless recurrence” of the “same” by creating multiple filter bubbles. Digital algorithms establish a precise “order of discourse,” leaving little to no room for deviation. Functioning as a colossal machinic apparatus, the web embodies the culmination of Artificial Intelligence (AI), transforming every piece of posted content into a database that profiles our online behavior and activities. This article explores whether approaches that describe everyday human communication, such as the theories of relevance developed by Alfred Schutz, or by Dan Sperber and Deirdre Wilson, can be applied to the realm of digital algorithmic grammars governing the web, and to the semiospheres that animate it. The conclusions raise some doubts: a gap exists between algorithmic language and its historical counterparts, a disparity between logical-mathematical grammar and linguistic-historical-natural grammars. While these two types of language coexist within the digital landscape, their relevance differs; what may be “relevant” in an algebraic context may not necessarily translate to our live conversational exchanges, and vice versa. Although merged in the digital sphere, these languages operate distinctly, and proficiency in one does not guarantee adeptness in interpreting the other accurately.

1 Some philosophical introductory observations

Jacques Derrida wrote: “D’abord la chose est l’autre, le tout autre qui dicte ou qui écrit la loi … une injonction infiniment, insatiablement impérieuse à laquelle je dois m’assujettir” (Derrida 1984: 13).[1] Today, the “chose” we encounter is the vast expanse of the World Wide Web. This network lays down the laws that regulate our online interactions, imposing an infinite “injunction” upon us – a mandate as boundless as the numerical and algorithmic progression to which we surrender, to which we “subject” ourselves while navigating these realms. The “insatiable” nature of the mechanism-dispositif – the master-slave dynamic – to which we adhere is governed by a voracious “artificial intelligence”[2] that consumes all available data, perpetually absorbing every upload to the web. Within this context, the man-machine digital relationship is shrouded in opacity, lacking a transparent framework that would elucidate our online actions and their repercussions. We surrender ourselves to machines instrumentally, under the illusion of control, only to find ourselves in the service of, indeed dependent on, them. Furthermore, in our interactions with digital choses, we are never afforded a holistic understanding of them, as we only get to see the “back” (Bloch 1959 [1930]) of these things-not-things that are all around us. The World Wide Web largely remains inaccessible to decipherment, an enigmatic codex.

The man-machine relationship is never a real one, as the web-machine exists not as a personal entity – a “you” – but rather as an impersonal “Es.” It is not even a thing; it is a “non-thing” (Han 2021) existing purely as abstract “information-communication.” Functioning primarily as a structure-system, once algorithmically programmed, the Web operates autonomously with minimal interruption.[3] The contents we upload are translated into data, and we, in turn, are translated into filter bubbles – numerical computations that encapsulate our actions. As we retreat into our social media platforms, which are like rooms lined with mirrored walls, we are endlessly confronted with reflections of ourselves and our myriad online personas, perpetuating a cycle of self-repetition. Echoing Marshall McLuhan’s seminal notion that “the medium is the message” (1994 [1964]), we agree to communicate algorithmically, accepting the endless repetition of the numbers that let us upload content, resulting in an eternal echo chamber where our voices reverberate ad infinitum and no genuine dialogue ever takes place. The Web has no ears, no mouth, no nose, no face, no hand to shake. It acts like the elusive “Nobody” (Ulysses and his companions)[4] of Greek mythology who blinded Polyphemus.

In a manner reminiscent of Kafka’s Odradek (2022 [1920]), the Web engulfs us as we pronounce our monologues in front of digital interfaces. It is an “object devoid of meaning yet complete,” extraordinarily mobile, perpetually eluding our grasp, “lacking a fixed abode,” slipping into the interstices of our lives to occupy the fluid realm of cyberspace. We find ourselves unable to shake hands with this impersonal digital Es – an inhuman, algebraic and algorithmic “Langue”[5] that shapes our online relationships. Our physical engagement with this digital Odradek is confined to the tips of the fingers we use to compute, compulsively write, communicate, and inform our “onlife” (Floridi 2014), tapping away on keyboards, whether on a computer or a smartphone. We sway to the rhythm of our information-communication, dancing on the tips of our fingers (almost, therefore, upside-down), fingers tapping on plastic keys, for an ever more virtual pragmatics of our communication-information in a fully digitized world, where the sense of ourselves as otherwise conditioned, limited, and always contextualized beings is lost (Han 2021; Heidegger 2015 [1950]). Within cyberspace, our presumed relationships lack the tactile friction of physical collisions and encounters with others, fostering a phantasmagorical “Will to power” (Nietzsche 2017) and a belief in one’s omnipotence within the virtual landscape.

Like unwitting flies, we are caught in an intricate spiderweb-network of digital connectivity, of machinic “nonsense.” That is because, unlike human agents, the inhuman machine lacks the capacity to imbue the fabric of existence with meaning, whereas within this very network we pursue meaning and purpose fervently. Our digital journeys across this spiderweb-network unfold in a distracted, almost passive, “studium” (Barthes 1980), characterized by shallow and mindless scrolling and “liking” that has no genuine emotional resonance. The absence of a “punctum” reduces our lives to a monotonous “uniformity,” inducing a soporific effect. As observed by Han (2021), our relentless self-exposure has been stripped of all allure and eroticism, descending into an alienating pornographic totality. Within this digital realm, where everyone sees everyone and everything, life is laid bare and there are no “blind spots” that might give the imagination fertile ground in which to flourish. Everything is consumed in continuous exposure, with no interruptions, without true surprises: in our “filter bubbles” almost everything is predictable. After all, nothing truly surprises us, because we navigate a mathematical-algorithmic flow that calculates our every move, every signal and behavior of ours, to always present us with the same path, the same 0–1.

The digital algorithmic language, composed of digits, exudes a phantasmatic neutrality, its transparency epitomized by the binary code 0–1 that underpins its representing-processing function. However, it lacks the corporeal dimension of our pragmatics, the sensual complexity of face-to-face interactions or signs. It offers no hint of even vaguely artistic nuance. We cannot expect calculations to transcend their nature, nor can we imbue ghosts with bodies they never possessed. Those who entrust their thoughts to digital, algorithmically developed media should acknowledge that everything will be translated into the hard language of data, often serving the utilitarian ends of marketing agendas. Yet conception and execution are inseparable, as there is no conception that can precede its execution (Merleau-Ponty 1996 [1962]). Meaning is not independent of the signs that represent it, just as a musical score can only be brought to life by a musician’s interpretation. The medium is the message: the neutrality of algorithmic writing mirrors our lives, which we force online, emptying them of many otherwise vital reflections.

Transparency, neutrality. In the expanse of the algorithmic high seas, everything appears within reach. Yet, beneath this veneer lies a realm of ambiguity where the true meaning and implications of the algorithms governing our digital lives remain elusive, obscured by the simplistic Boolean language of 0s and 1s. Moreover, beyond the enigma of the algorithms’ network itself lies the question of who truly controls it, beyond the usual suspects. Leaving aside the financial returns accrued through the constant profiling of behaviors, once we shift our gaze to the underlying political agenda driving our digital translation, we encounter a myriad of unanswered questions, multiplying in complexity. While everything seems transparent in the realm of numbers, the underlying “cypher-matics” (Ponzio 2008) remains inscrutable, governed by unknown entities. Is the algorithm a white “tyranny” (Benasayag 2021), in which transparent masks conceal the identities of nameless entities? Some hold this pessimistic view, while others seek less apocalyptic interpretations and solutions.

In any case, and beyond what has been argued so far, it becomes apparent that we are experiencing a constant semantic shift made of metaphors that should be returned to their metaphorical function. When we speak of “Artificial Intelligence” (AI), we should remember that we are not referring to actual “intelligence” but rather to something altogether different. Similarly, the term “machine learning” does not refer to actual “learning,” but rather denotes something else. A possible explanation lies in our language’s “economy,” which accommodates shifts in meaning, contributing to some confusion and leading us to attribute to machinic artificial intelligences the ability to learn, when, in fact, they lack true intelligence and the capacity for genuine learning.

2 Establishing what is truly relevant within the semantic web of our searches is challenging

Let us begin by noting that our current interactions with AI mirror the user-system dualism (Saracevic 2007), a digital version of the classic parole-langue duality. Drawing a parallel between the user-system model and the Saussurean parole-langue model is complex. For example, the algorithmic system is much more locked down in its semiological development than de Saussure’s langue. The latter is an “open work” (Eco 1962), much more “open” than any algorithmic system, regardless of all the machine-learning updates that enhance and advance such systems.

However, this interaction is not uniform; it unfolds within the intricate fabric of the semantic web, where phenomena akin to “loose coding” (Wilson and Sperber 2012: 333) are prevalent. That causes us[6] to struggle to grasp the context in which we operate, impeding our interpretation and positioning within this complex framework. In any case, and to clarify our position, reflecting on system and user in light of the relationship between the system-web and its users, we follow Jan Strassheim:

A recurring theme in the discussion has been the interplay of ‘system’ relevance and ‘user’ relevance, which developed more and more into a tension. An algorithm, based on what the system treats as relevant to the user’s query, can supply data which the user finds irrelevant. Mismatches of this kind (which we continue to experience with search engines and social networking services today) had sparked discussions about relevance, or rather irrelevance, in the early 1950s. (Saracevic 2007; Strassheim 2018: 10)

The author also writes: “On the system side, systems are made, updated and evaluated by people. On the user’s side, some relevances may be more robust than others, for example those of experts on the topic in question (Hjørland 2010), or those related to the user’s overall task as opposed to their momentary satisfaction” (Soergel 1976; Strassheim 2018: 11). As is evident, this system-users confrontation relates to a long history of critical studies, which can be brought to bear today when applied to the study of our articulated relationship with the Web in all its various ramifications.
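To fix ideas about this system/user tension, here is a minimal sketch in Python, a toy illustration of our own that assumes a simple term-overlap score rather than any real engine’s ranking algorithm. A system that ranks documents by how many query words they contain can declare “relevant” a document that the user, whose actual task differs from the literal query, experiences as irrelevant.

```python
# Toy contrast between "system relevance" and "user relevance"
# (hypothetical example; not any real search engine's ranking function).

def system_relevance(query: str, document: str) -> float:
    """Score a document by the fraction of query terms it contains."""
    q_terms = set(query.lower().split())
    d_terms = set(document.lower().split())
    return len(q_terms & d_terms) / len(q_terms) if q_terms else 0.0

documents = {
    "car page": "jaguar speed the top speed of the jaguar sports car",
    "animal page": "the jaguar is a big cat native to the americas",
}

query = "jaguar speed"  # this user actually wants the animal, not the car

ranked = sorted(documents, key=lambda name: system_relevance(query, documents[name]), reverse=True)
print(ranked)  # the car page scores higher: "relevant" to the system, irrelevant to this user
```

The mismatch Strassheim describes is not a failure of the arithmetic: the scoring runs exactly as written, yet what it computes is relevance to the query string, not relevance to the person behind it.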

Our confinement within filter bubbles constrains orientation, while the interweaving of diverse semiospheres online is commonplace, fostering missed connections and misunderstandings. Social media are a case in point, mirroring the disjointed nature of human behavior, characterized by erratic shifts between unrelated topics and shaped by well-managed schizophrenic impulses. It is worth looking more closely into the human-machine relationship and the presumed confrontation-exchange with the AIs that govern our continuous “on-living.” Drawing from studies on relevance and pertinence (Strassheim and Nasu 2018), and acknowledging that any attempt at “typification” to simplify our cognitive and non-cognitive realms proves vague, any machine learning system, as it stands and given how it is programmed, falls short of achieving the level of “personification” necessary to disambiguate certain digital passages. In any context, the cognitive processes that disambiguate a living confrontation between physical individuals always require a real dialogue between persons able to manage and interpret unexpected shifts within the exchange and to steer them in the right direction. There are doubts as to whether any machine can effectively handle an unexpected change of direction in an argumentative process attempted by a human. In other words, to take a trivial example, present-day assistants like Alexa cannot offer true answers to our questions, adapt to our needs, or navigate inherently vague contexts. Alexa gives responses that are consistently coherent yet strictly “literal,” demanding precise queries without the capacity for nuanced understanding or adaptation. Furthermore, our online searches are confined within a predefined scope, with refinements offering only limited improvement of results. Within the grammatical mesh of the Web and its archives, access to the desired information is not guaranteed.

The “iterative process of approximating the truth” of what is discussed in real life (Wilson and Sperber 2012: ch. 3) is hindered within the dialogical realm of social media, where interactions are “frozen” in a semiotic interval made up of hashtags, photos, posts, memes, etc., so we often must settle for the “literal meaning” without being able to “disambiguate” certain passages. Everyday life encompasses a multitude of “provinces of meaning” (Barber 2018), characterized by blurred boundaries, overlaps, and translations across semiospheres, contingent upon our interlocutors. The reference to the different “provinces” in which meanings are found has a purpose: to remind us that human intelligences move from one province of meaning to another via continuous metaphorical transitions and translations, whereas networked algorithmic AIs can only follow the directions-tracks on which they have been programmed to travel. Internet algorithms always answer our questions in the same way, like HAL, the computer guiding the spacecraft in 2001: A Space Odyssey.

The algorithms governing the operations of the HALs of our Web always yield the same data. Why is that? Because they do not know how to move between dictionaries and encyclopedias (Eco 1976, 1997), or how to make the random leaps typical of abductions. AI works through deductions and inductions. It does not know how to access a neighboring province of meaning and return to its point of departure, which means that it can offer only interpretations by approximation. AI does not move. AIs are not living beings. They are not even things (Han 2021), even though they are the engine that runs the machine-web of our searches and our entertainment. Envisioning an AI capable of seamlessly achieving these translations is daunting, as certain metaphorical nuances elude disambiguation while constituting an integral part of human discourse. Presumably intelligent AIs lack the capacity to listen to us. They do not inhabit a Lebenswelt (Husserl 1976), nor do they make cognitive progress beyond what is programmed for them within well-defined limits. AIs are not living intelligent beings who can address the intricacies of human conversation: they govern our online searches in a machinic way that implies neither progress nor reactions.

Of course, we are referring here to a naive and widespread approach to using social networks, and to a simplistic way of navigating Google. Along these specific routes, the algorithms, having computed our entire browsing history, proceed with calculations based on straightforward analogical similarity ratios. In doing so, these algorithms tend to propose the same routes repeatedly, almost as if gently educating us to what is new while avoiding abrupt transitions. A significant percentage of Internet users rely on this kind of algorithm-governed Web. Of course, there is an increasing number of users who maintain an active relationship with the Web and use the tools at their disposal to their own advantage, refusing to let those tools dictate to them. Certainly, when we refer to a passive approach to the Web, we are not discussing tools like Python, which exist to be used like any practical tool for a job. Yet, even in these cases, the web-painter using Python is compelled to work with the available colors and canvases he or she finds online. In this context, even the most creative artists must adapt to the materials and tools at their disposal, which do not bend to whatever form they want to give them (Addis et al. 2023).
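The repetitiveness just described can be caricatured in a few lines of Python. The sketch below is our own illustration, assuming a naive cosine-similarity recommender over hand-made topic weights; it is not the code of any actual platform. Its only point is that ranking new items purely by their closeness to the average of a browsing history keeps returning more of the same.

```python
# Naive similarity-driven recommender (hypothetical illustration,
# not any platform's actual code). Candidates are ranked purely by their
# cosine similarity to the average of the user's browsing history.

import math

def cosine(u: dict, v: dict) -> float:
    """Cosine similarity between two sparse topic-weight vectors."""
    dot = sum(u.get(k, 0.0) * v.get(k, 0.0) for k in set(u) | set(v))
    norm_u = math.sqrt(sum(x * x for x in u.values()))
    norm_v = math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

def profile(history: list) -> dict:
    """Average the topic weights of everything already seen."""
    keys = {k for item in history for k in item}
    return {k: sum(item.get(k, 0.0) for item in history) / len(history) for k in keys}

history = [
    {"football": 1.0, "gossip": 0.2},
    {"football": 0.8, "gossip": 0.4},
]
candidates = {
    "yet another football clip": {"football": 0.9, "gossip": 0.3},
    "long read on phenomenology": {"philosophy": 1.0},
}

user_profile = profile(history)
ranking = sorted(candidates, key=lambda c: cosine(user_profile, candidates[c]), reverse=True)
print(ranking)  # the near-duplicate of the history comes first, the unfamiliar item last
```

Real recommender systems are far more elaborate, but the bias is structural: whatever most resembles the archive of past behavior is, by construction, what gets shown next.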

In any case, if we confine ourselves to the social environment, here too every form of communication is circumscribed by a predefined “set of resources” that delineates a pragmatic perimeter of action. Within this domain, we all acquiesce to a set of constraining conditions that contribute to shaping our identities (Yus 2018: 122–123). These algorithmic barriers limit our interactions. Jean Baudrillard correctly suggested that our cyber-pragmatics are the result of a combination of hyper-real, real, and digital instances that collectively construct meaning (Baudrillard 1997). When we enter the Web, we make a transition from “orality” to “literacy” (Ong 1982) that leaves no room for maneuver, apart from the conventions inherent in remote relationships, with all the advantages these entail and the misunderstandings linked to a form of communication that remains indirect. Verba volant, scripta manent: in its immobility, writing has no room for adjustment.

It is worth noting that this constraint extends beyond textual communication. Yus (2018: 125) convincingly illustrates how the interplay of “denotative and connotative combinations,” through “implications” and “explicitations,” permeates our continuous image sharing online, engendering a semiospheric mechanism via an open series of “verbal-visual-multimodal discourses.” Within this milieu, we collectively “generate content” (Dayter 2016: 17; Yus 2018: 126), whether through images or traditional text constrained by the “cues-filtered quality of typed texts” (Yus 2018: 126), the same texts teenagers often employ – and hide behind – to navigate their relationships (Gonzales 2014: 198; Huang 2016: 123; Yus 2018: 127). Those texts are also the ones behind which, in a post-pandemic world, we have all decided to keep a safe distance from one another. The political effects of this distancing cause a perspective distortion whereby our “list of Friends” mirrors our “intended public” (Boyd 2010: 43; Yus 2018: 127), leading to a spectacle of our lives that is commensurate with the imagined distance we have placed between ourselves and the world.

Further analysis gives a sense of how being together online with others does not mean truly being together. Yus (2018: 133) revisits the key notion of “ambient awareness,” previously explored by Thompson (2008) in a fragmentary manner (Lin et al. 2016). We are within a classic social dynamic, whereby a fragmented community of online users congregate on social media platforms like Facebook, Instagram, and X, scrolling through their own profiles, engaging in discussions and contributing to various topics in a casual manner, often jumping compulsively from one topic to another. This behavior epitomizes the digital dynamic, wherein we experience a continuous “presence in the absence” (Zappavigna 2016: 272) through pseudo-conventional entrances and exits, often carrying no contextual cues. In a schizophrenic semantic dance, we express our opinions on diverse topics through short, impromptu posts, increasingly often intertwined with a logic characterized by fierce “hate speech” (Guillén-Nieto 2023).

Nevertheless, we strive to adhere to certain classic parameters, again echoing the theory of Sperber and Wilson, who assert that “the human cognitive system is automatically set up to attend to relevant information in the environment. Our perceptual mechanisms are geared to select relevant stimuli, including utterances, from the environment. Memory is programmed to select from its vast databases only relevant assumptions that would enable comprehension” (Sperber and Wilson 2002: 6). Guided by these theoretical principles, the game of relevance governing our logical-perceptual processes primarily consists of recurrences. In a live dialogic exchange between acquainted individuals, characterized by such recurrences, the error rate in achieving mutual understanding tends to be low. However, in a scenario governed by machine-learning AIs, the dynamics shift. While an AI/web machine may accurately process and govern exchanges or searches, its comprehension abilities are confined to a vast database, constrained by linguistic games referencing the operational constraints guiding its functions. While an AI can correctly replicate what can be considered relevant within predefined parameters on a large scale, its ability to generate genuine, real exchange remains limited.
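Sperber and Wilson define relevance comparatively, as a trade-off between cognitive effects and processing effort, not as a number to be computed. Purely as a caricature of the machine side of this contrast, and emphatically not as their formalism or any assistant’s actual code, one can imagine “relevance” calculated as a ratio of pre-assigned effects to pre-assigned effort over a stored set of assumptions. The selection then runs flawlessly at scale, but every quantity has to be supplied in advance, and nothing in the procedure listens.

```python
# Caricature of relevance as pre-assigned effects divided by pre-assigned effort
# (a toy model of our own, not Sperber and Wilson's theory and not any real system).

from dataclasses import dataclass

@dataclass
class Assumption:
    content: str
    expected_effects: float   # how much the assumption would change the context
    processing_effort: float  # how costly it is to access and use

def toy_relevance(a: Assumption) -> float:
    return a.expected_effects / a.processing_effort

database = [
    Assumption("the speaker is asking about today's weather", 3.0, 1.0),
    Assumption("the speaker is alluding to an old shared joke", 5.0, 8.0),
]

best = max(database, key=toy_relevance)
print(best.content)  # the cheap, literal assumption wins
```

The cheap, literal assumption always wins here; an allusion that would be highly relevant in a live exchange never surfaces, because its cost was fixed before the conversation began.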

The Internet of Things has always operated on a foundation of continuous algorithmic self-generation underpinning its autonomous growth: a process known as “machine-learning.” Yet, as early as 2008, Miller observed:

We see a shift from dialogue and communication between actors in a network, where the point of the network was to facilitate an exchange of substantive content, to a situation where the maintenance of a network itself has become the primary focus. Here communication has been subordinated to the role of the simple maintenance of ever expanding networks and the notion of a connected presence. (Miller 2008: 398; cf. also Yus 2018: 130)

Humans find themselves embedded within this highly self-referential algorithmic system, akin to pieces of a self-sufficient Matrix, in a sort of structuralist triumph with no true referents, constructing new hyperreal ontologies, with high digital constitution rates. We have entrusted our memory to this self-generating system, allowing it to craft a digital twin of ourselves. Everything is dominated by an expansive language modeling, resembling something more than a matrix, almost a macro-semiosphere capable of shaping us before keeping us tied to linguistic peripheries governed by their own grammars.

In online communication, we contribute to an extensive archive of content that is immediately translated into data. Independent of our individual intentions, by uploading diverse content onto the web of our presumed relationships, we feed a network-matrix that multiplies through digital ramifications. Yus (2018: 132–133) introduces two concepts that warrant exploration: “interface-centered non-propositional effects” and “user-centered non-propositional effects.” These concepts underscore that within digital networks, communicative processes transcend mere “propositionality.” Instead, they adhere to grammars-pragmatics involving dynamics in which the totality of interfaces plays a pivotal role in shaping the meaning of online interactions. Thus, understanding how to interface with the inhuman web-digital-Odradek (Kafka 2022 [1920]) becomes more crucial than the content actually shared, regardless of its potential value or interest to potential readers. Yus writes:

The use of an interface may produce a number of non-propositional effects in the user. In general, the ability to use the menus, frames, tabs, links, etc. properly generates an offset of positive effects in terms of self-concept, while an interface lacking the necessary degree of usability may increase mental effort gratuitously, thus affecting the user’s feeling of control over the interface … In a pragmatic analysis of virtual communication, non-intended non-propositional effects are often the key to an explanation of why Internet-mediated interactions turn out (ir)relevant regardless of the actual value of the content transferred to the other users. Several of these effects have an impact of user’s self-concept or overall sense of identity. (Yus 2018: 132–33)

Do we all occupy a new semiotic hierarchy, no longer governed by propositional content, but by a semiotics composed of links, and thus governed by “icons” and “indexes” in the Peircean sense of the two expressions we are using here? Perhaps so. And that is not necessarily a bad thing, for we are likely in a new phase of the phenomenology of the last new media read à la McLuhan (1994 [1964]).

3 Why our AIs fail to understand when we speak to them

The grammars of online navigation are no longer bound to a system comprised solely of words but are instead processed and reduced to data (Eugeni 2021). This is a major semiotic shift – again, the medium being the message, it shapes the content it conveys. Words are transformed into numbers-data. For years, we have been going through an epochal turning point, an informational-communicative revolution, which we seem to fully acknowledge only when scandals erupt concerning the opaque management of online privacy. Expanding the discourse to encompass a real-virtual-hyperreal spectrum, as per Baudrillard’s teachings (Baudrillard 1997), we cannot overlook our immersion in a quantum dimension where life intersects with artificial creation. “Digital point of views” versus “point of Being” (de Kerckhove and de Almeida 2014)? Derrick de Kerckhove’s assertion that everything has “liquefied” (Bauman 2000) into a “quantum dimension” where life mixes with data and data translate life – constantly shifting between materiality and information,[7] from a “point of being” to “digital point of views” on that “being” – encapsulates an informational-communicative realm devoid of material consistency.

Immersed within the quantum-digital dimension, akin to a new Lebenswelt that envelops us, we find that the once clear distinction between ourselves and the matrix-network has dissolved. In this fluid realm, where we are like inhabitants of a world we no longer fully dominate,[8] we often find ourselves adrift, losing sight of the contextual framework we discuss – an absence of context that resembles the loss of an “oikos” (Amendolagine and Cacciari 1975). Within the Web-spiderweb, we are constantly displaced by shifting perspectives, to the extent that we struggle to locate our self-I-center, our home-Heimat, which is no longer solely confined within our mind-body but scattered across the various semiotic currents in which we are entangled. The lack of full control over our digital interactions engenders a sense of passive participation, akin to a doubling of our unconscious (de Kerckhove and de Almeida 2014). This digital unconscious augments the unconscious that governs our real-life dimension, intensifying the feeling of passivity that permeates our existence and logical processes.[9] What does this lack of control entail? Rather than offering a psychological explanation, we turn to Floridi (2019), who observes that alongside our traditional “common sense,” an “algorithmic common sense” has emerged, consisting of statistical-algebraic calculations – a fundamentally “unconscious process” (de Kerckhove and de Almeida 2014) – that shapes our behaviors and steers our navigations towards new commonplaces in which our judgments become entangled.

The “abilities and preferences” (Wilson and Sperber 2012: 7) of users conducting online searches are aggregated into a maximal computation, which tends towards a strong “typification” (Husserl 1948) of our behaviors, an algorithmic characteristic that will then guide all our further searches. This may not pose a problem for those who prefer mathematical models of searching and use them knowing they will likely find what an AI system has determined they should find. However, this is not mainstream knowledge. Not everyone knows that the search yields only the results that users are allowed to find. Everyone ends up consuming media as long as they remain interesting. Fads die out in the endless repetition of a style. It will not be any different with a logic limited by the Web’s filter bubbles, which tend to constantly reproduce the same topics. We could again speak of a “mal d’archive” (Derrida 2008). Our “machine learning,” one of the mysteries of the age of AI, the engine that updates our contemporary archives, is posited as the ideal solution, the silver bullet for all ills, even for the inhuman algorithmic order that gives the same responses to different queries. However, to reiterate what has been maintained so far, this is not true learning, but rather a broken record. If a machine does not make mistakes, it is not clear what it is truly capable of learning.

The numbers organizing the digital dialogues-searches wield a certain power, lacking the inherent unpredictability of our Lebenswelt. Within the dynamics of online bubbles, every signal is subdued, filtered through statistical lenses to ascertain relevance. In this digital realm we inhabit while navigating online, “empirical testability” becomes elusive, as nothing truly embodies reality within our digital interactions. Of course, that is not the case with our GPS systems, whose functional ability to geolocate us, providing a semblance of logical-spatial context and orientation, is not in doubt.[10] But once we push this discourse past the issue of geolocalization, the inability of AIs to contextualize presented information persists. They lack the capacity to adjust based on real-world dimensions, or to discern the truthfulness of our statements, programmed solely for the continuous translation into data. The web seldom presents straightforward facts, because it is a semiospheric bubble in which information ricochets without definitive grounding. Queries regarding truth or falsehood often go unanswered, as the essence of reality resides beyond the realm of algorithmic calculation. The web becomes a breeding ground for post-truth narratives (Lorusso 2018) and fake news, like a closed chamber adorned with mirrors.

4 Why our AIs don’t speak with us

In our daily navigation of life, we encounter myriad seemingly insignificant details that we either disregard or re-interpret within the conversations we partake in, adapting them as needed. Genuine human conversations are rife with implicit meanings, woven into the fabric of our choices and interpretations. In contrast, experimental inquiries rigorously test proposed solutions (Schutz 2011 [1951]) to assess their robustness, with any inconsistencies prompting necessary revisions. The real semiospheres we inhabit are richly nuanced, defying straightforward predictive, algorithmic, or statistical computation, except in highly relative contexts. The web-network of our relations often falls short of mirroring the multifaceted dimensions of life (Barber 2015, 2018: 51; Schutz 1962), where we constantly navigate “multiple realities” and discern what is relevant. Within the digital realm, much of this discernment and interpretation is preconstructed algorithmically, presented to us under the guise of machine-learning mechanisms tasked with regulating the transformations of the various semantic webs. However, genuine “polyphonic dialogue,” as theorized by Mikhail Bakhtin (1975), is characterized by anticipations, delays, and perpetual adjustments – an interactional complexity beyond the capabilities of AIs, which lack the depth of understanding required to navigate the depths and intricacies of human communication.

In the algorithmic landscape we inhabit, simplicity becomes imperative. While live conversations are nuanced, not all expressions neatly translate into linguistic-literal terms (Sperber and Wilson 1995; Sperber and Wilson 2002: 82). And yet, this is precisely what is demanded of us online. Within the semantic web of our relationships, the prevailing principle is that of “minimal interpretative effort.” The demand for simplicity dictates that what is obvious becomes the norm, with interactions happening at a pace that admits neither implicit meaning nor allusion. Interpretive efforts that may prove demanding are eschewed wherever possible. In the digital realm, responses are expected urgently and are typically concise, pertinent, and pragmatic, resembling a chat-like model that leaves no room for misunderstandings. The algorithms governing online interactions ensure a continuous stream of “literal responses,” fostering a conversational environment that lacks nuance or depth. We are not discussing those who go online to consult books, articles, or catalogues. What cannot cater to everyone is the Web modeled after Wikipedia, useful for smartphone searches but potentially perilous in the long run if adopted as a gnoseological model. The “minimal-interpretative-effort” model works for the fast-paced nature of online information-communication but lacks the richness inherent in human interaction. Individuals who embrace this approach to problem simplification risk succumbing to a form of illiteracy that aligns with the mainstream semantics of the web, with far-reaching political consequences already beginning to surface.

AI-mediated interactions obviously lack the living, physical presence of those who seek to understand each other. Within the Web, a “mutual tuning-in relationship” (Schutz 1964a: 161) remains unattainable, because everything is reduced to a form of indirect communication. Despite the simplifications inherent in digital relationships, the “indeterminacy effects” theorized by Wilson and Sperber (2012: 16) persist, lying beyond the scope of algorithmic calculation. However, paradoxically, reflections on the digital world often align with a phenomenological take on the quasi-algorithmic logics governing part of our everyday communicative calculation. In what sense is this theoretical alignment observed? Building a framework of relevancies relies on the semantic structuring of a field through its articulated “topicalization,” and the progressive establishment of “stocks of knowledge” (Schutz 1967: 9–15). These two keywords help describe the different phases that constitute cognitive processes, phenomenologically understood. This intricate but classic mechanism, facilitating broad communication in daily life, has been scaled up by the AI systems that govern the inter-web of our relationships and searches. These systems translate everything into a rigid chain of meaning based on a behavioral calculation of all possible domains of interest-attention relevant to various online searches. This is similar to the mechanism underlying marketing, which treats clients-users as data-commodities for economic and commercial gain (Kotler 2023).

Underscoring this point is far from futile. Reducing our experiences to “types” (Strassheim 2016) is a mechanism of salvation that lets us rationalize our cognitive efforts. Similarly, translating this interpretative work into a memory that streamlines various processes, allowing for automatic passive syntheses, is another crucial aspect of our cognitive framework. This capacity aids us in interpreting different situations, focusing on what is and is not relevant and sparing us the need to reconstruct scenarios as if encountering them for the first time (Schutz 1964b; Schutz and Luckmann 1973). Mechanical and algorithmic “machine learning” operates on similar principles online, albeit with much more automation and fewer moments of creative correction. It could not be otherwise, as every form of writing, including the algorithmic variety, is essentially a reflection of its creator. Between the individual who contemplates and the machine that reasons, a semiotic mirror exists. Each machine functions in accordance with the intentions of its programmer.

5 Conclusions resonating as morals

The world of our daily lives is intertwined with a neuromantic dimension, encompassing both real and virtual territories of meaning. Mastering a universe like the digital realm proves to be an insurmountable task, as it constitutes an infinite semiotic web with continuous references ad infinitum – a sprawling rhizome with many non-human shadow zones. As observed, not everything within this digital infinity lends itself to clear explication, or to an explanation that fully satisfies the relevance criteria applicable in a live dialogical context.

In our contemporary langue-parole confrontation, now experienced in its algorithmic version, we feel a sense of “fragility” (Monico 2020) characteristic of moments of genuine transition. What bewilders us in the man-machine-algorithm relationship is the absence of meaningful interpretation, with algorithms merely pointing us in a direction-sense of navigation without engaging in dialogue, thought, or mediation. Navigating the Web’s semiospheres, we are caught in a maelstrom that plunges us into a whirlpool of logical-mathematical passages, compelled to comply with a grammar formed by complex combinations of historical, big, and synthetic data. Much of our interaction occurs through mechanistic relationships, adhering to digital orders and algorithmic commands that we often obey in a passive fashion. Whereas classical writing hinted at linear directions in our reading and reasoning journeys, algorithmic writing operates through rhizomatic relationships, making us easily lose track of discourse as we go down different paths at each node-point in the network (Buffardi and de Kerckhove 2011). We are continuously propelled forward by algorithmic inertias, uncertain of our destination (Accoto 2011). As Derrick de Kerckhove remarked in a long interview with Dionisio Ciccarese, we are experiencing a real semiotic turning point – a transition from word to algorithm (de Kerckhove and Ciccarese 2022: 55).

Indeed, the web functions not only as an economic network but also as a political one. However, it is crucial to recognize that no machine can govern the world without human intervention. Nancy Fraser’s (2022) concept of cannibal capitalism intersects with Shoshana Zuboff’s (2019) surveillance capitalism, with the latter serving as a tool for the former. The cannibal capitalists who surveil us are the primary operators of this system, creating a dynamic of new slaves and new masters. In this complex scenario, we are not necessarily relegated to a passive role. Through tagging and striving to open new paths, everyone can actively participate. However, the grammar of algorithmic discourse remains stringent.

Two key points are worth highlighting in conclusion. Firstly, there are no questions posed that do not elicit a response. That is because the web-archive-repertoire consisting of big data always provides a reply, albeit with varying degrees of sophistication, as it remains a machine responding to human queries. We are required to maintain a necessary simplicity when posing questions, if we expect to receive an effective reply. Secondly, the web entangles us in a contradictory dynamic, capturing significant portions of our attention and interests. While it offers immediate intervention, allowing our thoughts to become public and potentially viral, we often lose control of the repercussions of this immediacy. Surrendering ourselves to this dynamic entails a relinquishment of control, as the fate of our online expressions is sealed instantly and permanently. The Web operates as a quintessential machinic dispositif, characterized by a fast, instantaneous, yet potentially eternal timescale. It functions as an archive-library, recording everything indefinitely on behalf of unknown third parties.

As implied by the title of these conclusions and in line with the literary style of this article, we propose some unscientific conclusions (Kierkegaard 1992). We have advanced certain assumptions that govern the definition of what is relevant and what is not in a mobile-dialogical-living context between people in their cognitive traversal of the Lebenswelt. These living mechanisms do not work if the encounter is between a human being and a machine, regardless of how much “learning” the machine engages in. No positive solution can be offered for the “missed encounters” between humans and machines, between the astronaut in 2001: A Space Odyssey and HAL, the computer that drives the humans adrift. We can only ever establish an instrumental relationship with AI, as Maurizio Ferraris (2021) insists in his latest book. Even if these AIs continue to mediate among us like modern telegraphs, carrying information from one side of the ocean to the other, we can never avoid asking our distant interlocutor to see us in person, to clarify together what is and is not relevant to us. It is a happy destiny to which we are consigned: coming back, immer wieder, to be with each other to talk, knowing that being close to each other can be exhausting. But trusting only the algorithms that govern our social relationships has already taught us how tedious it is to read and see the same things repeatedly, things that are not things, but “Un-dinge” (Han 2021). These Un-dinge are our writings, mediated by other algorithmic writings, which in a machinic way repeat the same 0–1 sequence, over and over again. This cannot be enough for us and is often truly irrelevant.


Corresponding author: Filippo Silvestri, Dipartimento di Scienze della Formazione, Psicologia, Comunicazione, Università degli Studi di Bari Aldo Moro, Via Crisanzio 42, Bari, 70122, Italy, E-mail:

References

Accoto, Cosimo. 2011. Il mondo dato. Cinque brevi lezioni di filosofia digitale. Milan: Egea.

Addis, Maria Cristina, Giorgia Costanzo, Dario Mangano & Elisa Sanzeri. 2023. Il discorso dei materiali: Senso e significazione. E|C 39(1–2).

Amendolagine, Francesco & Massimo Cacciari. 1975. Oikos. Da Loos a Wittgenstein. Rome: Officina Edizioni.

Bakhtin, Mikhail. 1975. Questions of literature and aesthetics. Moscow: Progress.

Barber, Michael. 2015. Making humor together: Phenomenology and interracial humor. SocietàMutamentoPolitica 6(12). 43–65.

Barber, Michael. 2018. Finite provinces of meaning: The expansive context of relevance. In Jan Strassheim & Hisashi Nasu (eds.), Relevance and irrelevance: Theories, factors, and challenges, 51–68. Berlin & Boston: De Gruyter Saur. https://doi.org/10.1515/9783110472509-003.

Barthes, Roland. 1980. La chambre claire: Note sur la photographie. Paris: Gallimard Seuil.

Baudrillard, Jean. 1997. Le crime parfait. Paris: Galilée.

Bauman, Zygmunt. 2000. Liquid modernity. Malden, MA: Polity Press.

Benasayag, Miguel. 2021. The tyranny of algorithms. New York: Europa Compass.

Bender, Emily. 2023. You are not a parrot. The Intelligencer. https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html (accessed 21 June 2024).

Bloch, Ernst. 1959 [1930]. Spuren. Frankfurt am Main: Suhrkamp.

Boyd, Danah. 2010. Social network sites as networked publics: Affordances, dynamics, and implications. In Zizi Papacharissi (ed.), A networked self, 47–66. London: Routledge. https://doi.org/10.4324/9780203876527-8.

Buffardi, Annalisa & Derrick de Kerckhove. 2011. Il sapere digitale. Pensiero ipertestuale e conoscenza connettiva. Naples: Liguori.

Dayter, Daria. 2016. Discursive self in microblogging: Speech acts, stories, and self-praise. Amsterdam & Philadelphia: John Benjamins. https://doi.org/10.1075/pbns.260.

de Kerckhove, Derrick & Cristina Miranda de Almeida. 2014. The point of being. Newcastle upon Tyne: Cambridge Scholars.

de Kerckhove, Derrick & Dionisio Ciccarese. 2022. Siamo uomini o digitali? Rome: Castelvecchi.

Derrida, Jacques. 1984. Signéponge. New York: Columbia University Press.

Derrida, Jacques. 2008. Mal d’archive: Une impression freudienne. Paris: Galilée.

Eco, Umberto. 1962. Opera aperta. Milan: Bompiani.

Eco, Umberto. 1964. Apocalittici e integrati. Milan: Bompiani.

Eco, Umberto. 1976. Trattato di semiotica generale. Milan: Bompiani.

Eco, Umberto. 1997. Kant e l’ornitorinco. Milan: Bompiani.

Eugeni, Ruggero. 2021. Capitale algoritmico. Cinque dispositivi postmediali (più uno). Brescia: Morcelliana.

Ferraris, Maurizio. 2021. Documanità. Filosofia del nuovo mondo. Rome & Bari: Laterza.

Floridi, Luciano. 2014. The onlife manifesto: Being human in a hyperconnected era. London: Springer Open. https://doi.org/10.1007/978-3-319-04093-6.

Floridi, Luciano. 2019. The logic of information. Oxford: Oxford University Press.

Foucault, Michel. 1966. Les mots et les choses. Paris: Éditions Gallimard.

Foucault, Michel. 1969. L’archéologie du savoir. Paris: Éditions Gallimard.

Foucault, Michel. 1972. Histoire de la folie à l’âge classique. Paris: Éditions Gallimard.

Fraser, Nancy. 2022. Cannibal capitalism: How our system is devouring democracy, care, and the planet and what we can do about it. London & New York: Verso.

Gonzales, Amy L. 2014. Text-based communication influences self-esteem more than face-to-face or cellphone communication. Computers in Human Behavior 39. 197–203. https://doi.org/10.1016/j.chb.2014.07.026.

Guillén-Nieto, Victoria. 2023. Hate speech: Linguistic perspectives. Berlin & Boston: De Gruyter. https://doi.org/10.1515/9783110672619.

Han, Byung-Chul. 2021. Undinge: Umbrüche der Lebenswelt. Berlin: Ullstein.

Heidegger, Martin. 2015 [1950]. Holzwege. Frankfurt am Main: Klostermann. https://doi.org/10.5771/9783465142362.

Hjørland, Birger. 2010. The foundation of the concept of relevance. Journal of the American Society for Information Science and Technology 61(2). 217–237. https://doi.org/10.1002/asi.21261.

Huang, Hsin-Yi. 2016. Examining the beneficial effects of individual’s self-disclosure on the social network site. Computers in Human Behavior 57. 122–132. https://doi.org/10.1016/j.chb.2015.12.030.

Husserl, Edmund. 1948. Erfahrung und Urteil. Untersuchungen zur Genealogie der Logik. Hamburg: Claassen Verlag.

Husserl, Edmund. 1966. Analysen zur passiven Synthesis. Dordrecht: Kluwer Academic.

Husserl, Edmund. 1976. Die Krisis der europäischen Wissenschaften und die transzendentale Phänomenologie. Eine Einleitung in die phänomenologische Philosophie. The Hague: Martinus Nijhoff. https://doi.org/10.1007/978-94-010-1335-2.

Kafka, Franz. 2022 [1920]. Ein Landarzt. Kleine Erzählungen. Prague: Vitalis Verlag.

Kierkegaard, Søren. 1992. Concluding unscientific postscript to “Philosophical fragments.” Princeton, NJ: Princeton University Press. https://doi.org/10.1515/9781400846993.

Kotler, Philip. 2023. Marketing 6.0: The future is immersive. Hoboken, NJ: John Wiley.

Lin, Ruoyun, Ana Levordashka & Sonja Utz. 2016. Ambient intimacy on Twitter. Cyberpsychology: Journal of Psychosocial Research on Cyberspace 10(1). Article 6. https://doi.org/10.5817/cp2016-1-6.

Lorusso, Anna Maria. 2018. Postverità. Rome & Bari: Laterza.

McLuhan, Marshall. 1994 [1964]. Understanding media: The extensions of man. Cambridge, MA: MIT Press.

Merleau-Ponty, Maurice. 1996 [1962]. Sens et non-sens. Paris: Gallimard Éditions. https://doi.org/10.14375/NP.9782070743551.

Miller, Vincent. 2008. New media, networking, and phatic culture. Convergence 14. 387–400. https://doi.org/10.1177/1354856508094659.

Monico, Francesco. 2020. Fragile. Un nuovo immaginario del progresso. Rome: Meltemi.

Nietzsche, Friedrich. 2017. The will to power. London: Penguin Classics.

Ong, Walter J. 1982. Orality and literacy: The technologizing of the word. London: Routledge. https://doi.org/10.4324/9780203328064.

Paolucci, Claudio. 2017. Umberto Eco. Tra ordine e avventura. Milan: Bompiani.

Ponzio, Augusto. 2008. La dissidenza cifrematica. Bari: B.A. Graphis Spirali.

Saracevic, Tefko. 2007. Relevance: A review of the literature and framework for thinking on the notion in information science. Part II: Nature and manifestation of relevance. Journal of the American Society for Information Science and Technology 58(13). 1915–1933. https://doi.org/10.1002/asi.20682.

Schutz, Alfred. 1962. On multiple realities. In Collected papers 1: The problem of social reality, 207–259. The Hague: Martinus Nijhoff. https://doi.org/10.1007/978-94-010-2851-6_9.

Schutz, Alfred. 1964a. Making music together: A study in social relationship. In Collected papers 2: Studies in social theory, 159–178. The Hague: Martinus Nijhoff. https://doi.org/10.1007/978-94-017-6854-2_8.

Schutz, Alfred. 1964b. The problem of rationality in the social world. In Collected papers 2: Studies in social theory, 64–88. The Hague: Martinus Nijhoff. https://doi.org/10.1007/978-94-017-6854-2_3.

Schutz, Alfred. 1967. The phenomenology of the social world. Evanston, IL: Northwestern University Press.

Schutz, Alfred. 2011 [1951]. Reflections on the problem of relevance. In Collected papers 5: Phenomenology and social sciences, 93–109. Dordrecht: Springer. https://doi.org/10.1007/978-94-007-1515-8_4.

Schutz, Alfred & Thomas Luckmann. 1973. The structures of the life-world, 1. Evanston, IL: Northwestern University Press.

Silvestri, Filippo. 2010. Segni significati intuizioni. Sul problema del linguaggio nella fenomenologia di Edmund Husserl. Milan: Mimesis.

Silvestri, Filippo. 2012. Sulla costituzione nell’esperienza di alcune logiche del pensiero. In costante riferimento ad Esperienza e giudizio di Husserl. Lecce: Pensa Multimedia.

Soergel, Dagobert. 1976. Is user satisfaction a hobgoblin? Journal of the American Society for Information Science 27(4). 256–259. https://doi.org/10.1002/asi.4630270411.

Sperber, Dan & Deirdre Wilson. 1995. Relevance: Communication and cognition, 2nd edn. Oxford: Blackwell.

Sperber, Dan & Deirdre Wilson. 2002. Pragmatics, modularity and mind-reading. Mind & Language 17. 3–23. https://doi.org/10.1111/1468-0017.00186.

Strassheim, Jan. 2016. Type and spontaneity: Beyond Alfred Schutz’s theory of the social world. Human Studies 39(4). 493–512. https://doi.org/10.1007/s10746-016-9382-8.

Strassheim, Jan. 2018. Relevance and irrelevance. In Jan Strassheim & Hisashi Nasu (eds.), Relevance and irrelevance: Theories, factors, and challenges, 1–18. Berlin & Boston: De Gruyter. https://doi.org/10.1515/9783110472509-001.

Strassheim, Jan & Hisashi Nasu (eds.). 2018. Relevance and irrelevance: Theories, factors, and challenges. Berlin & Boston: De Gruyter. https://doi.org/10.1515/9783110472509.

Thompson, Clive. 2008. Brave new world of digital intimacy. The New York Times. https://www.nytimes.com/2008/09/07/magazine/07awareness-t.html (accessed 30 September 2024).

Wilson, Deirdre & Dan Sperber. 2012. Meaning and relevance. Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9781139028370.

Yus, Francisco. 2018. Relevance from beyond propositions: The case of online identity. In Jan Strassheim & Hisashi Nasu (eds.), Relevance and irrelevance: Theories, factors, and challenges, 119–140. Berlin & Boston: De Gruyter. https://doi.org/10.1515/9783110472509-006.

Zappavigna, Michele. 2016. Social media photography: Construing subjectivity in Instagram images. Visual Communication 15(3). 271–292. https://doi.org/10.1177/1470357216643220.

Zlatev, Jordan. 2023. The intertwining of bodily experience and language: The continued relevance of Merleau-Ponty. Histoire Épistémologie Langage 45(1). 41–63. https://doi.org/10.4000/hel.3373.

Zuboff, Shoshana. 2019. The age of surveillance capitalism: The fight for a human future at the new frontier of power. Rome: Luiss University Press.

Received: 2024-09-03
Accepted: 2024-10-05
Published Online: 2024-10-22
Published in Print: 2024-09-25

© 2024 the author(s), published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.
