Abstract
The key role played by online platforms in the neo-intermediation of public debate requires a review of the current tools for mapping the digital information ecosystem, and highlights the political nature of such an analysis. Starting from a synoptic overview of the main models of platform governance, we ask whether the ongoing European shift towards the Limited Government Regulation (LGR) model will be able to counterbalance the “systemic opinion power” of the giant platforms and restore the “health” of the digital information ecosystem. A close analysis of the European Digital Services Act (DSA) highlights some limitations in achieving these goals, owing to the features of the LGR model on the one hand, and to the disruptive features of algorithmic neo-intermediation on the other. We therefore suggest a tripartite regulatory model, which can be defined as “neo-editorial accountability.” Increasing users’ critical algorithmic awareness, however, is an essential prerequisite both for implementing the suggested template and for mitigating a persistent side effect of the LGR model: the normalization of the ideological assumptions underlying informational capitalism.
1 Introduction
The renewed global debate over the organization of the communication order has necessarily extended media analysis to what may be labeled the neo-intermediation of information flows, in the context of what is variously referred to as global “platform capitalism” (Srnicek, 2017), “informational capitalism” (Cohen, 2019), or “big data capitalism” (Fuchs, 2019). The digital metamorphosis of structural power under the new systemic constraints requires a review of the current tools for mapping and investigating the digital information ecosystem. Highlighting the political nature of such an analysis means recognizing that relations of authority are being reconstructed, new political entities created, and novel interpretative frameworks established. The enormous volume of online activity occurring through “big tech,” and the opaque way in which such content is managed, curated, and distributed to multiple users, have made evident the de facto outsourcing of the monitoring of online public debate to a small number of private media corporations that support and govern the overall global space that people use to communicate (Balkin, 2018; Langvardt, 2019). As a result of the growing power of platforms – not only as major information intermediaries on a global scale, but also as providers of infrastructure technology (Webb, 2019) – legacy media are also in a worrying state of dependence. Digital platforms are able to redefine the conditions under which legacy media operate, directing and shaping the ways in which political power is exercised, whether public or private (Helberger, 2020). Indeed, it is now clear that there is a need to address the metamorphosis of power relations between the social players behind the “hybrid forms of control” (Cobbe, 2020) of content production and access to digital infrastructure.
Emergency scenarios in which information wars have had cross-border impacts on global geopolitics, from the pandemic information crisis to the Russo-Ukrainian war, have forced policy makers to focus, in the short term, on the role of platforms in the supply of disinformation. As a result, the push toward automated content moderation has been reinforced (Meyer and Hanot, 2020). Yet an almost fully automated system of self-policing carries with it the danger of hiding the inner workings of platforms as “governors” of public speech, in which the fundamentally political task of content moderation is executed by algorithms (Gillespie, 2018; Gorwa et al., 2020).
Thus, a cross-disciplinary approach is necessary to develop adequate regulatory proposals and to draw researchers’ attention to the ethical challenges underlying the functioning of datafication, commodification, and selection algorithms (Burrell and Fourcade, 2021; Kitchin, 2017). These are, indeed, the analytical prisms that help us understand the way in which the ecosystem modifies power relations: The analysis of the forms and algorithmic strategies of dissemination of public discourse, under the new systemic constraints, sheds light on the nature and kinds of responsibilities that we must attribute to platforms in addressing rights and public values (and the debates surrounding them).
Starting from a comparative analysis of the growing international “regulatory field” of the subject of platform governance (Schlesinger and Kretschmer, 2020), this essay will focus on the ongoing European Commission strategy to build a European digital sovereignty: In particular, the European regulatory reshaping of platforms’ editorial accountability will be under scrutiny.
The purpose is to address the following questions:
What are the limits and potentialities of the new European digital governance model in dealing with such a disruptive phenomenon as algorithmic neo-intermediation of information?
What might be the potential effects of the new legislation’s enforcement on the health of the digital information ecosystem, and, thus, on the resilience of an autonomous and authentically plural public sphere?
This contribution is organized in two sections: The first provides a brief synoptic framework of the main models of platform governance that emerge from a review of the literature, in order to frame the ongoing European digital strategy from a theoretical point of view. The second centers on a close analysis of some crucial provisions of the Digital Services Act (DSA, Regulation 2022/2065), which dictates the new rules for hosting providers by reshaping their liability regime. It will become clear that the limitations of this regulatory framework are inherent both to the type of governance under which Regulation 2022/2065 falls and to the special features of neo-intermediation. Therefore, by focusing on the regulatory challenges raised by algorithmic moderation systems (the so-called recommender systems [RS]) – which represent one of the main breaking points compared to traditional forms of information broadcasting – we will seek to predict their likely impact on the cognitive pluralism of the “neo-intermediated” public information space. Regulatory processes, in fact, co-produce the algorithms they regulate (Campo et al., 2018; Cheney-Lippold, 2018). Recognizing algorithms as socio-technical constructs (Beer, 2017; Bucher, 2018; Airoldi, 2021; Hildebrandt, 2022) is thus the first step in an analytical approach that makes it possible to examine the cognitive nexus in which these elements are bound together. In doing so, it enables a multilevel analysis capable of contributing to the development of an integrated regulatory framework at the supranational level.
As for methodology, the comparative analysis of the main models of platform governance elaborated in the doctrine will take Drahos’ (2017) broad theory of regulation as its parameter. In this multidimensional view, the state is seen as part of a network in which the tasks of regulation are redistributed in various ways among the actors within it. Next, we will seek to shed light on the possible consequences for the health of the digital information ecosystem of what, at an abstract level, is a shift toward the Limited Government Regulation (LGR) model resulting from the introduction of the new legislation. In other words, we will try to understand whether the (re)positioning of public authority will result in an effective rebalancing of the power of the actors involved in governance in favor of users’ freedom of expression, an essential prerequisite for ensuring the resilience of an authentically democratic digital public sphere.
In a subsequent phase, we analyze the reasons why the enforcement of public intervention in the form of LGR fails to counterbalance the “systemic opinion power” (Helberger, 2020) of platforms, confirming the limitations of the model already identified in the doctrine at a general level.
A solution will therefore be proposed that can obviate the criticalities peculiar to the LGR: a multisourcing model of regulation, which redistributes both editorial accountability and regulatory production, in line with the “regulatory tripartism” described by Ayres and Braithwaite (1992) within the theory of “responsive regulation” (Drahos and Krygier, 2017). Finally, the need for “critical data literacy” (Nichols and Smith, 2021) will be emphasized both as a prerequisite for making the proposed model of accountability – which we call “neo-editorial” – feasible, and as a means of obviating the principal limitation of the LGR, that is, the depoliticization of the ideological assumptions that underlie informational capitalism.
2 Preliminary questions. Platform accountability: Beyond the opinion power
Content moderation and the prescriptive regulation of media accountability have a long history. The “opinion power” assigned to the media, and, thus, the ability to affect social and political institutions, is the main reason they have always been subject to a different regime of liability and transparency, compared to private companies operating in different sectors (Garton Ash, 2016).
Media accountability can be defined as those “voluntary or involuntary processes by which media are directly or indirectly accountable to their society for the quality and/or consequences of publication” (McQuail, 2005, p. 207). As such, it is part of inter-institutional accountability, a dimension that influences the quality of democracy (Morlino, 2012). At the same time, it involves the relationship between the media and their audiences, as the latter can hold the media directly accountable without the intermediation of public institutions (McQuail, 2005).
However, any statement about the principles that should inform the regulatory policies of platforms inevitably raises the complex question of what a media company is, and whether digital platform companies can be considered as such (van Drunen, 2020). Precisely because of their “amphibious” nature, digital platforms have rarely been discussed from the perspective of opinion power (Helberger, 2020). Since they do not fall under the concept of media in the traditional sense, insofar as they do not publish their own content, they have traditionally been considered neutral hosting providers, falling within the area of the so-called “safe harbor” principle. On the basis of this principle, the E-commerce Directive 2000/31 granted them a regime of exemption from liability for illegal or harmful third-party content. Thus, the first issue that current regulatory approaches to platform accountability need to address is a reframing of the meaning of “media companies” to include global-scale digital information gatekeepers (Gillespie, 2014). However, this is only a preliminary issue.
The neutral appearance of platforms’ activities is the major feature of digital content moderation based on algorithmic data mining. Thus, the main challenge for current regulatory attempts aimed at introducing new liability regimes is the extent and pervasiveness with which digital platforms can algorithmically control, shape, and personalize the global information flow, through predictive analysis based on behavioral data and the full quantification of human experience (Napoli, 2019; Hildebrandt, 2022). Beyond explicit exercises of power, the ability to shape and direct the flow of political information and communication, as well as the power to amplify or downgrade particular voices, can be as much a consequence of algorithmic ordering (which is never neutral) as of conscious design. As a result of this discontinuity with the traditional content intermediation historically carried out by media companies, we prefer to speak of “neo-intermediation” instead of “reintermediation” (Giacomini, 2018) to better define the activity carried out by platforms. Although the gatekeeping process has been extensively studied by multiple disciplines, in the digital ecosystem some important changes have occurred: (a) the editorial role is delegated to algorithms, (b) audiences play a growing role as secondary gatekeepers, such that users co-determine what makes the news, and (c) the journalist’s position shifts from gatekeeper to gatewatcher.
In particular, from the moment it is born to when it reaches its widest audience, information is modeled, filtered, and hidden within a dense mixture of elements that come together in the algorithmic infrastructure of social media and digital platforms (Moeller and Helberger, 2018).
Thus, the concept of neo-intermediation is intended to focus on the central role of recommendation and personalization algorithms – the new gatekeeping infrastructures – in combination with the role played by third-party mediators, also known as data brokers. From a theoretical point of view, identifying the phenomenon of neo-intermediation and the limits of the concept of disintermediation allows us to focus our attention on the processes of selection and dissemination of information content, so much so that we can speak of “algorithmic publishing” (the platform press) and of the social and political responsibility of online platforms towards the public sphere. The gatekeeping process, which determines how certain events come to be deemed more newsworthy than others, involves both the ways in which institutions or influential individuals decide what information to convey to recipients, and a moral perspective with its own stereotypes and biases. Indeed, a notion of “algorithmic public opinion” (Airoldi and Gambetta, 2018) emerges, one that is inevitably influenced by the governance policies of online platforms (Friedman et al., 2013). As suggested by Natalie Helberger (2020), the “systemic opinion power” hidden beneath the phenomenon of algorithmic neo-intermediation based on datafication, commodification, and selection “is not only the power to influence political processes (such as democratic will formation) but it is political power in its own right.”
In this scenario, the EU strategy to build a so-called “European Digital Sovereignty” is the turning point toward the “new internet era” (Floridi, 2021) in the European context. The current Digital Services Act is mainly aimed at trying to counterbalance the infrastructural platform power, in order to ensure a safer transnational digital space, in which fundamental rights are protected. The European model represents an attempt to democratize and “constitutionalize” the internet (De Gregorio, 2020; Santaniello, 2021). While promoting the values and principles of liberal democracies, it opposes isolationist instances with a polyphonic approach, which is also open to input from civil society (Santaniello, 2022).
Nevertheless, a fundamental premise of such sovereignty is to ensure the conditions for thinking and reshaping a space that could properly be defined as a European digital public sphere, or rather, a transnational public sphere (Schlesinger, 2020). This would be a precondition for legitimizing the sovereignty of supranational institutions in a post-Westphalian context of supposedly sovereign states. The metaphorical space of the public sphere is indeed a crucial hermeneutical grid for thinking about tactics of media governance; the possibility of the resilience of its autonomy must therefore remain the counterfactual parameter against which their results are evaluated, since the media are, in turn, the primary means of creating a public sphere and practicing self-government (Garton Ash, 2016).
3 Comparative analysis: Platform accountability models review
The aim of this section is to give a synoptic framework of the main models of platform governance elaborated in the doctrine so far, and to show how there has been a shift, along an ideal continuum, that takes European regulation from Industry Self-Regulation to a second model: Limited Government Regulation (LGR), as mentioned above. Indeed, the EU claims its role as rule maker, dictating a coercive ex ante discipline aimed at regulating, among other things, the private activity of information intermediation. The aim of this intervention is to safeguard fundamental rights related to freedom of expression and information, which need to be carefully balanced with interrelated rights such as privacy, dignity, equality, and non-discrimination.
We define regulation – as suggested by Drahos and Krygier (2017, p. 4) – as “a dynamic multilevel process” in which “the state becomes part of a network and the tasks of regulation are redistributed in various ways among actors within the network.” This includes non-legal forms of norm making, which opens up a number of possible entry points for empirical study. The concept describes a system capable of generating regulation from many actors, at different levels, and using a variety of instruments to communicate and enforce their chosen norm (Drahos, 2017, p. 764).
Recent studies, conducted over the period from 2017 to 2020, have mapped the international regulatory landscape through quantitative processing techniques, allowing interventions and relationships to be categorized into three macro areas according to the homogeneity of drivers (Flew and Gillett, 2021): competition, moderation, and rights. On the basis of these studies, we will go on to enucleate the operations and relationships of the agencies in charge of prescriptive regulation of cultural and informational content, as they exercise power over the production, circulation, and consumption of the communication stream.
With specific reference to the theme of the Social Platform Accountability, the most recent literature (De Blasio and Selva, 2021) has identified four ideal regulation paradigms from empirical evidence at the European level: (a) accountability set by law, (b) co-decided accountability, (c) regulated self-regulation, and (d) pure self-regulation.
They are based on three elements: (a) principles of media legislation; (b) main actors, identifying the role played by public institutions, media companies, professionals and audiences; and (c) tools used to ensure media accountability (Eberwein et al., 2018; Fengler et al., 2014), as well as “any non-state means to empower the media to the public” (Bertrand, 2000, p. 108).
Nevertheless, in order to frame the current European strategy from a political point of view, we will follow an approach which, while encompassing the described models, is mainly based on the criterion of the overall expansiveness of governance tactics (Rochefort, 2020; Gorwa, 2019; Rahman, 2018). This allows us to screen the purposes and effectiveness of the interventions by assessing, as a counterfactual perspective, their impact on the constraints of informational capitalism.
The main models of platform governance that have emerged on the basis of this criterion are (a) industry self-regulation, (b) limited government regulation, and (c) comprehensive government regulation.
Industry self-regulation
Industry self-regulation is a minimalist form of public governance, focused on freedom of information, freedom from state interference, and the direct responsiveness of the media to their audiences. Private actors voluntarily adopt rules that go beyond existing regulatory requirements, or set new standards in areas where government regulations or standards are lacking (Haufler, 2001).
It includes:
Pure self-regulation. The United States and the Scandinavian countries have historically been oriented towards this model.
Regulated self-regulation. This may include public-private partnerships that bring civil society organizations, academics, and other stakeholders together to establish regulatory cooperation between state authorities and platform companies (De Blasio and Selva, 2021), such as the European Code of Practice on Disinformation agreed in 2018 by EU institutions and the big media players. The Code sets self-regulatory standards to fight disinformation and is intended to work in combination with the DSA and the upcoming regulation on the Transparency and Targeting of Political Advertising. It is a model of governance involving players from government and the market, where decision-making is distributed through a polycentric arrangement: Implementation responsibilities are largely taken on by companies, while state authorities act mainly as orchestrators that steer cooperation between companies, academia, and the civil society organizations that agree on principles and procedural mechanisms. However, the lack of enforceable sanctions by public authorities has limited the effectiveness of these models (Di Mascio et al., 2021).
Limited government regulation
Limited government regulation refers to the application by public authorities of legally defined standards of conduct for the sector (Kraft and Furlong, 2013), in the form of (a) co-decided accountability, and (b) accountability set by law. Examples of this model are: (a) the French Loi Avia (n. 2020-766); (b) the German NetzDG (Network Enforcement Act, 2017); and (c) EU Regulation 2022/2065 (DSA). However, limited regulation can be conceived as lying along a spectrum of policy actions, ranging from higher levels of coercion and intervention to forms that leave online platforms ample freedom to determine the exact shape that oversight takes.
Comprehensive government regulation
Comprehensive government regulation would attempt to reorganize the entire system with the aim of remedying the causes of the dysfunction rather than merely mitigating its symptoms. This tactic is similar to what has been defined as structuralist regulation (Rahman, 2018), aimed at limiting the very structure and business models of online platforms by altering the dynamics of the markets in which they operate.
The following options fall under this model:
Antitrust laws are the primary instrument for dealing with the immense power of platforms whose editorial decisions affect the public discourse in the countries in which they operate, due to their sheer scale (Napoli, 2019). This is also known as the Breaking Model (Rochefort, 2020). In this view, dispersing excessive concentration of power is essential to preventing private companies from becoming guardians of public interest (Helberger et al., 2021).
Another way is to consider social media platforms as public utilities (also known as the public service media [PSM] approach). Like the first option, this would imply comprehensive regulation of platforms as indispensable infrastructure for the modern economy. A public utility approach could involve separating the conduit functions of platforms from their paid services (Rahman, 2018). It would also involve dictating coercive rules for certain areas, aimed at balancing news personalization with goals of universality and diversity.
The third variation of this approach might include the creation of new public platforms offering an alternative to private companies (Helberger et al., 2021).
Since it appears that using recommender systems (RS) for online streaming is becoming the new norm also for PSM (Hildén, 2022), future research could explore how news apps in particular are designed, and how PSM try to balance news personalization with goals of universality and diversity. These systems, and the ethical issues involved, are in fact the disruptive feature of neo-intermediation, and still need in-depth, cross-disciplinary examination, whichever governance tactics would be adopted.
4 From Industry Self-Regulation to Limited Government Regulation: The end of an era
The DSA regulatory watershed: What is changing?
The DSA marks a shift from the industry self-regulation model to the model of limited government regulation in the form of “accountability set by law,” providing for ex ante regulation of digital platforms’ activities for the first time.
The approval and enforcement of the DSA, aimed at the prescriptive regulation of the circulation of information content, is mainly focused on holding platforms accountable for the circulation of illicit and harmful content. The reform imposes new obligations on service providers proportionate to their role, size, and impact on the market (European Commission, 2020).
Thus, for the first time, the safe harbor principle, which exempts intermediaries from liability for illegal or harmful third-party content (Directive 2000/31/EC), will be amended.
Nevertheless, a close analysis of the regulation clearly shows the critical points of the Limited Government Regulation approach adopted by the Commission in its new policy on platform governance. In order to expose and analyze these limitations and critical findings, we will proceed to an analytical review of some key provisions of the DSA aimed at redefining the liability regime of the large online platforms, focusing on moderation and rights.
On the basis of the literature review and the related documents concerning the DSA, the main provisions aimed at holding platforms accountable can be summarized as follows:
Definition of platform liability, in relation to the effective control exercised over content, and providing for a scaled mechanism in relation to their function and size;
obligation to remove illegal content and other restrictive measures (Art. 16);
proactive role of platforms in countering disinformation (Art. 7);
specific provisions aimed at implementing algorithm transparency (Art. 27) as a form of user empowerment;
orientation to a multilateral code of conduct according to the model of co-decided responsibility.
The reframing of platform accountability
Article 16 of the DSA introduces – for the first time at a pan-European level – the so-called “notification and takedown” mechanism, already adopted by the German NetzDG of 2017 and the French Avia law of 2020 (Heldt, 2019). It is typical of the second governance model described above, in the form of accountability set by law. It seems to introduce a kind of culpa in vigilando, imposing on platforms a legal duty to quickly analyze and, if necessary, remove not only any content notified as illegal, but also content flagged as harmful, or simply not compliant with community standards.
This model, however, has some limitations. First, there is no clear definition of what is harmful and what is illegal. The task of defining the area of unlawful content is left to national regulations, with the effect of segmenting the moderation policies of intermediaries within the legislative boundaries dictated by the Member States, even though the intention clearly pursued is the harmonization of the overall European regulatory landscape. Above all, it remains unclear whether the focus must be only on illegal content, or also on “non-standard” content (Stolton, 2020).
However, the subsequent Article 17(4)(e) provides that: “Where the decision [to remove content] is based on the alleged incompatibility of the information with the terms and conditions of the provider, a reference to the contractual ground relied on and explanations as to why the information is considered to be incompatible with that ground” [must be given]. From the combined analysis of the two provisions, therefore, we must implicitly infer that the right/duty to act and, if necessary, remove content applies to both illegal and non-standard content (Turillazzi et al., 2022).
This may involve the risk of drifting toward precautionary censorship by media companies that often lack the skills, time, as well as the democratic legitimacy to carry out this kind of legal screening. In this way, the binding nature of their own political and editorial line seems to be ratified by law. The result could be a “sanification” (Cobbe, 2020) and standardization of public discourse to a private corporation’s political-editorial line, excluding alternative voices or non-mainstream communities.
We should also add that, on the basis of Article 23(4), platforms shall suspend users from their services in the case of repeated misuse. The exact conditions under which behavior constitutes misuse are to be determined by the policies of the platforms themselves (Buri and van Hoboken, 2022). Legitimizing a well-established praxis – also labelled “deplatformization” (Van Dijck et al., 2021) – platform operators are thus given not only the right/duty to set, on the basis of their own general conditions, the limits of what is acceptable and what is not, and of what is understood to be correct and what is not, but also the power to decide which users are left free to publish and which are condemned to digital ostracism. In sum, as some scholars have already noted, such an approach might be potentially counterproductive, bringing with it, as a collateral effect, the reinforcement – through its legitimization – of the platforms’ “systemic opinion power” and thereby their political power (Helberger, 2020).
Nor does Article 8 seem capable of counterbalancing this side effect: In order to protect freedom of expression, it rules out general monitoring or proactive fact-finding obligations for platforms over the information flow. Platforms are, in fact, increasingly bound to a very short time window for content takedowns, which effectively necessitates their use of ex ante algorithmic content filtering systems to avoid incurring penalties. According to European Digital Rights (2020), the DSA seems to follow the principle of “delete first, think later,” which would create a system of privatized content control with arbitrary rules beyond judicial and democratic scrutiny (Gawer and Srnicek, 2021). In doing so, the DSA seems to reinforce the push toward automated content moderation systems, carrying with it the danger of hiding the fundamentally political nature of content moderation executed by algorithms (Gorwa et al., 2020; Kramsch, 2020).
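To make this dynamic concrete, the following is a minimal, purely hypothetical sketch (in Python) of the kind of ex ante filtering logic that the “delete first, think later” criticism points to. Nothing in it reproduces any platform’s actual system or any requirement of the DSA text; the classifier score, thresholds, and deadlines are invented for illustration. It simply shows how a tight legal takedown window, combined with the threat of penalties, can make precautionary removal the rational default for borderline content.

# Hypothetical sketch (not any platform's actual system): a toy ex ante filter
# in which a tight takedown deadline plus the threat of fines makes removal
# the default for anything an (assumed) classifier is unsure about.
from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    risk_score: float  # output of an assumed illegal/harmful-content classifier, 0..1

def moderate(item, remove_threshold=0.9, review_threshold=0.5,
             review_backlog_hours=48.0, deadline_hours=24.0):
    """Return 'publish', 'remove', or 'human_review' for a single notified item."""
    if item.risk_score >= remove_threshold:
        return "remove"  # confidently flagged: taken down ex ante
    if item.risk_score >= review_threshold:
        # Borderline content would ideally go to a human reviewer, but when the
        # review queue cannot be cleared before the legal deadline, precautionary
        # removal becomes the cheaper option for the platform.
        return "human_review" if review_backlog_hours <= deadline_hours else "remove"
    return "publish"

for score in (0.95, 0.6, 0.2):
    print(score, moderate(Item("post", score)))

The point of the sketch is not the code itself but the asymmetry it encodes: the cost of over-removal is borne by the speaker, while the cost of under-removal is borne by the platform.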
The third way: Shared responsibility and risk
The state regulatory option is not always the best, or at any rate the only, solution, since on its own it cannot penetrate some areas. In our case, for example, it leaves considerable discretion to platforms in their intermediation activities, with an effect that is paradoxically contrary to its intent. It would, however, be difficult, if not impossible, to dictate universally valid, normatively binding standards defining once and for all what is appropriate and what is not (Art. 16). Equally, it would not be possible to replace or appropriate proprietary platform algorithms in order to regulate implicit neo-intermediation (Art. 27). Moreover, traditional top-down control systems, typical of publisher accountability, might no longer be a viable answer. Therefore, given the multisourcing and multilevel nature of governance, which necessarily involves a redistribution of regulatory sources, a shared (tripartite) approach that also covers accountability could overcome the limitations of the model. Such an approach would be better adapted to the disruptive features of neo-intermediation listed above, going beyond the host-editor dichotomy.
In order to avoid drifts toward censorship, a third way could be a regulation of neo-intermediation based on risk and liability sharing between users and platforms. This would exempt platforms from legal sanctions – and, therefore, from their censorship power-duty regarding “sensitive” content – in the event of free and secure user identification. The model would be based on users’ free choice to identify themselves, and thus to assume legal liability for their own content. The hypothesis supported here, based on a choice of identification, seems able to guarantee a relationship of transparency and accountability between users and platforms (Bosshard, 2020). Such an approach, which we could call the “neo-editorial accountability” model, should be framed as a third alternative to the publisher-hosting provider dichotomy. We argue that it might be better suited to the new reality and to the active engagement of users in both the production and dissemination of content. Indeed, the traditional top-down control systems typical of publisher liability can no longer be a viable answer.
Neo-editorial accountability does not intend to exempt platforms from liability and thus from control of the information flows conveyed through them. On the contrary, it intends to distribute liability in such a way that the attribution of a power-duty to respond does not translate into a right to censor or to increase discretion. This is without prejudice to the platforms’ obligation to remove content that violates civil and criminal laws, including through automated “emergency” filtering systems. Automated monitoring systems would then remain fundamental for detecting illegal content, and their conduct would be as automated as that carried out by “bots.” However, added to this would be, as we shall see, the “antibodies” created by strengthening critical media literacy for self-defense against fraudulent and manipulative behavior.
Shared responsibility best fits the concept of neo-intermediation, in which the issuer and the recipient of information can coincide. However, this also means that users must be aware of the role they play as both consumers and producers of information, and of the consequences (including legal ones) of their activity. The so-called “prosumer” (Sorice, 2022) must be aware of the possible harmfulness of the content they upload to the Web, for which they assume legal responsibility through identification. In addition, it is crucial that they also have a critical awareness of the fact that implicit data uploading, or simple information consumption – the so-called “behavioral surplus” (Zuboff, 2019) – feeds personalization algorithms, helping to indirectly determine their own agenda setting. Indeed, explicit moderation, which we have just analyzed through a reading of Article 16, is not the only form of moderation that characterizes neo-intermediation. The other is an implicit form based on algorithmic profiling and personalization systems.
Recommender systems (RS) represent the most important personalization engines (Hildebrandt, 2022). By RS we mean data-driven, computer-based software tools and techniques that provide suggestions for items that may be of value to a user (Ricci et al., 2015). Given that personalization can be implemented in many ways, it is useful to start with a brief synoptic overview of the main models through which RS actually work (a minimal code sketch of one of these models, collaborative filtering, follows the list below).
Recommendations can be based on:
previous media use and the similarity of consumed content (behavioral and content-based filtering) (Montaner et al., 2003);
active user feedback (knowledge-based filtering) (Burke et al., 2011);
social networks, where users’ media consumption is also recommended to their social networking site connections (social-based recommender systems) (Sun et al., 2015);
comparability, or even similarity (across simplified categories), of the user with others (collaborative filtering). The basic idea behind this is that users with similar consumption patterns tend to like similar content. The result is selective exposure to media content, which contributes to societal polarization (Airoldi and Gambetta, 2018). By constructing, manipulating, and strengthening these homogenizing categories, data-driven personalization therefore works on the premise of “divide and rule”: The audiences of the platforms are fragmented into homogeneous categories whose boundaries are established, and known, only by the platform managers.
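To make the collaborative-filtering logic just described more tangible, the following is a minimal, purely illustrative sketch in Python. It is not drawn from any platform’s actual recommender, which is proprietary and far more complex; the toy interaction matrix and the user-based cosine-similarity approach are assumptions chosen only to show the “users like you” mechanism on which such categorization rests.

# A minimal sketch of user-based collaborative filtering (illustrative only).
import numpy as np

# Toy interaction matrix: rows = users, columns = items, 1 = consumed/liked.
R = np.array([
    [1, 1, 0, 0, 1],
    [1, 1, 1, 0, 0],
    [0, 0, 1, 1, 0],
    [0, 1, 0, 1, 1],
], dtype=float)

def cosine_sim(matrix):
    """Pairwise cosine similarity between the rows (users) of `matrix`."""
    norms = np.linalg.norm(matrix, axis=1, keepdims=True)
    normed = matrix / np.where(norms == 0, 1, norms)
    return normed @ normed.T

def recommend(user, k=2):
    """Score unseen items by the similarity-weighted behavior of other users."""
    sim = cosine_sim(R)[user]
    sim[user] = 0.0                # ignore self-similarity
    scores = sim @ R               # aggregate what similar users consumed
    scores[R[user] > 0] = -np.inf  # do not re-recommend items already seen
    return list(np.argsort(scores)[::-1][:k])

print(recommend(user=0))  # items that statistically similar users consumed

Even in this toy form, the homogenizing premise is visible: a user is only ever shown what statistically similar users have already consumed.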
Taking up the well-known Habermasian hermeneutic paradigm, RS seem to add, to the information asymmetry of top-down traditional broadcasting, the strategic and potentially manipulative action of peer-to-peer communication, no longer tempered by the rational validation constraint “imposed by heterogeneity and the unknowability of the mass audience” (Habermas, 1984). Through so-called psychographic data collection techniques and “hyper-nudging” (Yeung, 2017), which act on the totality of information and not only on statistical samples, the actor gains a profound knowledge of the “citizen-user,” who is then easily manipulated by targeted and sectoral communication (Giacomini, 2018, 2020). The effect is a segmentation of audiences capable of breaking up the control traditionally exercised by the “autonomous public sphere” which, according to the Habermasian ideal, communicatively generates a critical check on the institutions of the center while legitimizing their power.
Making the invisible functioning of RS explicit is the first step toward rebalancing this information asymmetry, and the European legislator seems to have taken note of this need. In response to the recent information crisis, Article 27 of the DSA seeks to tackle transparency issues concerning personalization systems by encouraging platforms to make the profiling parameters they use transparent, and possibly to provide users with a choice between multiple options.
Even though the European Data Protection Supervisor’s (EDPS) Opinion (1/2021), which suggested a general ban on profiling in commercial and political advertising, has been disregarded, Article 27 contains a list of provisions related to information disclosure. It requires platforms to set out the main parameters of their RS and to give users the opportunity to choose between different options. Nevertheless, transparency can be understood in very different ways. The provision is silent on what the other options should be, or how they should align with public values and fundamental rights. Above all, it does not oblige platforms to enable users to choose recommendation algorithms from “neutral” third parties. At the same time, the non-profiling option laid down by Article 38 – but only in relation to very large online platforms (VLOPs, cf. DSA) – could be difficult, if not impossible, to use in the mare magnum of online information (Hildebrandt, 2022). In sum, the provision seems to fall short of its very objective: Transparency cannot be considered an end in itself, and information is not enough without a real possibility of action and choice. Algorithmic RS may continue to speak the same hegemonic vocabulary as the platforms in which they operate, with the same cognitive biases and manipulative mechanisms. The risks this involves for fundamental rights and the autonomy of the public sphere are now well known. If not constrained, pervasive data collection and monitoring will remain the pattern and lead to surveillance by online platforms (Helberger et al., 2021). The provision does not seem to offer a real chance to challenge the dominant RS on platforms, or a real alternative to their hegemonic vocabulary. Consequently, the article seems to obscure, once again, the fundamentally political nature of profiling and personalization executed on a large scale (Gorwa et al., 2020).
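What such a disclosure might look like in practice is left entirely open by the provision. The following is a purely hypothetical sketch, not anything prescribed by the DSA text: the option names, signals, and structure are invented. It is meant only to illustrate the limit discussed above, namely that even a transparent menu of options remains a menu whose every entry is defined by the platform itself.

# Purely hypothetical illustration of what an Article 27-style disclosure of
# "main parameters", plus a user-facing choice of options, might look like.
# Nothing here is prescribed by the DSA; names, signals, and options are invented.
RECOMMENDER_OPTIONS = {
    "engagement_ranking": {   # the platform's default, profiling-based feed
        "signals": ["watch_time", "likes", "inferred_interests"],
        "uses_profiling": True,
    },
    "chronological": {        # a non-profiling alternative (cf. Art. 38 for VLOPs)
        "signals": ["publication_time", "followed_accounts"],
        "uses_profiling": False,
    },
}

def describe(option):
    cfg = RECOMMENDER_OPTIONS[option]
    return f"{option}: signals={cfg['signals']}, profiling={cfg['uses_profiling']}"

for name in RECOMMENDER_OPTIONS:
    print(describe(name))

Exposing this kind of menu would satisfy a literal reading of the transparency requirement while leaving the underlying “hegemonic vocabulary” of the dominant option untouched.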
5 Concluding remarks
Through the concept of discursive closure, we can see how particular values and (infra)structures are naturalized, neutralized, and legitimated, with the active marginalization of likely alternatives, even on an imaginal level. Approaches based on technological solutionism (Dencik and Hintz, 2017), like the Limited Government Regulation, which inspires the ongoing “European digital package”, would not address regulatory issues inherent in fundamental problems such as the systemic opinion power of big platforms. None of the provisions of the ongoing regulation are aimed at creating countervailing powers, even though this is essential to preventing certain social media platforms from becoming “quasi-governments” of online speech (Cohen, 2019; Helberger, 2020). Their vocabulary, in turn, enforces the discursive depoliticization of structural phenomena such as “Dataism”, the gradual normalization of datafication, as a new paradigm in science and society (Van Dijck, 2014; Dencik, 2018). In so doing, it seems to lack a crucial requirement for considering the democratic tactics of governance in the digital ecosystem: the restitution of the intellectual privacy of individuals (Eskens, 2020), understood as control of both “incoming” and “outgoing” information flow (Rodotà, 2014). This is a primary condition for ensuring the resilience of autonomy and cognitive self-sovereignty, and, therefore, for reflecting on the possible resilience of an autonomous public sphere.
While we are aware that a sectoral approach cannot be a definitive answer to the problem, we have described a shared model of accountability that seeks to ensure that the empowerment of platforms does not result in a right to censorship or go on to increase discretion, if not arbitrariness, in defining the standards allowed in public discourse.
Furthermore, empowering “prosumers” means that, at the moment they choose to identify themselves, they should be enabled to assess the potential harmfulness of their own content through adequate disclosure and media literacy. The enforcement of the latter should be a prerequisite for the feasibility of the model outlined in this paper. For the implementation of such media literacy, a form of triangulation can be envisaged: bodies specialized in specific issues acting as independent intermediaries between regulators and platforms. They would aim to assist producers, at their request, in assessing the compliance or non-compliance of their content. Depending on the nature of the filtering powers attributed to them on a case-by-case basis, these intermediaries could, if authorized, be given responsibility for publications. These actors, therefore, would be both regulators and regulated. This would be a further application of the model of “regulatory tripartism” (Ayres and Braithwaite, 1992), which, recognizing that society cannot rely exclusively on law and its implementing agencies, conceives of models of regulation that draw third parties into the circle of regulation (Drahos and Krygier, 2017). This also requires that public interest groups receive the information available to a regulator and the opportunity to participate in regulatory negotiation. They would then have a role not only in enforcement and compliance, but also in increasing the regulatory capacity of the network society.
In our case, these intermediate bodies would constitute a third, neutral element which, both through the production of rules “from below” and through the enforcement of users’ critical media literacy (Nichols and Smith, 2021; Markham, 2019), could help the digital ecosystem immunize itself, creating a survival habitat for a democratic public sphere.
We argue that the main objective in ensuring the effectiveness of any tactic of governance should be, first and foremost, to find tools capable of unveiling the new forms of structural oppression exercised through cultural hegemony. Unlike in the past, this oppression can be exercised in an invisible, and therefore more pervasive and insidious, manner by the seemingly neutral algorithms that neo-intermediate public discourse (Kramsch, 2020; Amoore, 2020). This also means challenging the “normalizing” perceptual paradigms (Mertala, 2020) imposed by the dominant political discourse of both private corporations and government institutions. Finally, it should be added that carrying out research in this field also means unveiling such ideological normalizations, which in turn justify the construction of “data relations” (Couldry and Mejias, 2019), ensuring the “natural” conversion of daily life into a data stream, without which neo-intermediation could not exist in its current form.
References
Airoldi, M. (2021). The machine habitus. Towards a sociology of algorithms. John Wiley & Sons.
Airoldi, M. (2020). The spectrum of the algorithm and the social sciences. Critical perspectives on intelligent machines and automation of polis inequalities. Polis (Italy), XXXIV, 111–128. https://doi.org/10.1424/96442
Airoldi, M., & Gambetta, D. (2018). On the myth of algorithmic neutrality. The Lab’s Quarterly, 20(3), 25–46.
Amoore, L. (2020). Cloud ethics. Algorithms and the attributes of ourselves and others. Duke University Press. https://doi.org/10.1215/9781478009276
Ayres, I., & Braithwaite, J. (1992). Responsive regulation. Oxford University Press. https://doi.org/10.1093/oso/9780195070705.001.0001
Balkin, J. M. (2018). Free speech is a triangle. Columbia Law Review, 118(7), 2011–2056.
Bertrand, C. J. (2000). Media ethics and accountability systems. Transaction Publishers.
Bucher, T. (2018). If… then. Algorithmic power and politics. Oxford University Press. https://doi.org/10.1093/oso/9780190493028.001.0001
Burrell, J., & Fourcade, M. (2021). The society of algorithms. Annual Review of Sociology, 47(1), 213–237. https://doi.org/10.1146/annurev-soc-090820-020800
Beer, D. (2017). The social power of algorithms. Information, Communication & Society, 20(1), 1–13. https://doi.org/10.1080/1369118X.2016.1216147
Bosshard, M. (2020). La “censura privata” dei contenuti politici sui social network tra mito sociale e realtà giuridica [The “private censorship” of political content on social networks between social myth and legal reality]. Fondazione David Hume. https://www.fondazionehume.it/
Buri, I., & van Hoboken, J. (2022, June 24). The DSA supervision and enforcement architecture. DSA Observatory. https://dsa-observatory.eu/2022/06/24/the-dsa-supervision-and-enforcement-architecture/
Burke, R., Felfernig, A., & Göker, M. H. (2011). Recommender systems: An overview. AI Magazine, 32(3), 13–18. https://doi.org/10.1609/aimag.v32i3.2361
Campo, E., Martella, A., & Ciccarese, L. (2018). Algorithms as a social construction. Neutrality, power and opacity. The Lab’s Quarterly, 20(3), 47–72.
Cheney-Lippold, J. (2018). We are data: Algorithms and the making of our digital selves. NYU Press. https://doi.org/10.2307/j.ctt1gk0941
Cobbe, J. (2020). Algorithmic censorship by social platforms: Power and resistance. Philosophy & Technology, 34, 739–766. https://doi.org/10.1007/s13347-020-00429-0
Cohen, J. (2019). Between truth and power: Legal constructions of informational capitalism. Oxford University Press. https://doi.org/10.1093/oso/9780190246693.001.0001
Couldry, N., & Mejias, U. A. (2019). The costs of connection: How data is colonising human life and appropriating it for capitalism. Stanford University Press. https://doi.org/10.1515/9781503609754
De Blasio, E., & Selva, D. (2021). Who is responsible for disinformation? European approaches to social platforms’ accountability in the post-truth era. American Behavioral Scientist, 65(6), 825–846. https://doi.org/10.1177/0002764221989784
De Gregorio, G. (2020). The rise of digital constitutionalism in the European Union. International Journal of Constitutional Law, 19(1), 41–70. https://doi.org/10.1093/icon/moab001
Dencik, L. (2018). Surveillance realism and the politics of imagination: Is there no alternative? Krisis, Journal for Contemporary Philosophy, 1–3. https://doi.org/10.21827/krisis.38.1.38829
Dencik, L., & Hintz, A. (2017). Civil society in an age of surveillance: Beyond techno-legal solutionism? Civil Society Futures. https://civilsocietyfutures.org/civil-society-in-anage-of-surveillance-beyondtechno-legalsolutionism
Di Mascio, F., Barbieri, M., Natalini, A., & Selva, D. (2021). Covid-19 and the information crisis of liberal democracies: Insights from anti-disinformation action in Italy and EU. Partecipazione e Conflitto, 14(1), 221–240.
Drahos, P. (2017). Regulating capitalism’s processes of destruction. In P. Drahos (Ed.), Regulatory theory: Foundations and applications (pp. 761–784). ANU Press. https://doi.org/10.22459/RT.02.2017.43
Drahos, P., & Krygier, M. (2017). Regulation, institutions, and networks. In P. Drahos (Ed.), Regulatory theory: Foundations and applications (pp. 1–22). ANU Press. https://doi.org/10.22459/RT.02.2017.01
Eberwein, T., Fengler, S., & Karmasin, M. (Eds.). (2018). European handbook of media accountability. Routledge.
Eskens, S. (2020). The personal information sphere: An integral approach to privacy and related information and communication rights. Journal of the Association for Information Science and Technology, 71, 1116–1128. https://doi.org/10.1002/asi.24354
European Commission. (2020, December 15). The Digital Services Act: Ensuring a safe and accountable online environment. European Commission. https://ec.europa.eu/info/strategy/priorities-2019-2024/europe-fit-digital-age/digital-
European Data Protection Supervisor. (2021). EDPS Opinions on the Digital Services Act and the Digital Markets Act. European Data Protection Supervisor. https://edps.europa.eu/press-publications/press-news/press-releases/2021/edps-opinions-digital-services-act-and-digital_en
European Digital Rights. (2020). The EU’s attempt to regulate Big Tech: What it brings and what is missing. European Digital Rights (EDRi). https://edri.org/our-work/eu-attempt-to-regulate-big-tech/
Fengler, S., Eberwein, T., Mazzoleni, G., & Porlezza, C. (Eds.). (2014). Journalists and media accountability. Peter Lang. https://doi.org/10.3726/978-1-4539-1247-8
Flew, T., & Gillett, R. (2021). Platform policy: Evaluating different responses to the challenges of platform power. Journal of Digital Media and Policy, 12(2), 231–246. https://doi.org/10.1386/jdmp_00061_1
Floridi, L. (2021). The end of an era: From self regulation to hard law for the digital industry. Philosophy & Technology, 34, 619–622. https://doi.org/10.1007/s13347-021-00493-0
Friedman, B., Kahn, P. H., Borning, A., & Huldtgren, A. (2013). Value sensitive design and information systems. In N. Doorn, D. Schuurbiers, I. van de Poel, & M. Gorman (Eds.), Early engagement and new technologies: Opening up the laboratory (pp. 55–95). Springer. https://doi.org/10.1007/978-94-007-7844-3_4
Fuchs, C. (2019). Karl Marx in the age of big data capitalism. In D. Chandler & C. Fuchs (Eds.), Digital objects, digital subjects: Interdisciplinary perspectives on capitalism, labour and politics in the age of big data (pp. 53–71). University of Westminster Press. https://doi.org/10.16997/book29.d
Garton Ash, T. (2016). Free speech. Ten principles for a connected world. Atlantic Books.
Gawer, A., & Srnicek, N. (2021). Online platforms: Economic and societal effects. Study Panel for the Future of Science and Technology, EPRS | European Parliamentary Research Service.
Giacomini, G. (2020). Habermas 2.0. A philosophical approach to neo-intermediation and to the (enhanced) return of strategic action. Reasoning Practice, 1/2020, 31–50.
Giacomini, G. (2018). Towards neo-intermediation. The power of large digital platforms and the public sphere. Iride: Filosofia e discussione pubblica, 2018(3), 457–468. https://doi.org/10.1414/92394
Gillespie, T. (2018). Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media. Yale University Press. https://doi.org/10.12987/9780300235029
Gillespie, T. (2014). The relevance of algorithms. In T. Gillespie, P. J. Boczkowski, & K. A. Foot (Eds.), Media technologies: Essays on communication, materiality, and society (pp. 167–193). MIT Press. https://doi.org/10.7551/mitpress/9780262525374.003.0009
Gorwa, R. (2019). What is platform governance? Information, Communication & Society, 22(11), 854–871. https://doi.org/10.1080/1369118X.2019.1573914
Gorwa, R., Binns, R., & Katzenbach, C. (2020). Algorithmic content moderation: Technical and political challenges in the automation of platform governance. Big Data & Society, 7. https://doi.org/10.1177/2053951719897945
Habermas, J. (1984). Theory of communicative action, volume one: Reason and the rationalization of society. Beacon Press.
Haufler, V. (2001). A public role for the private sector: Industry self-regulation in a global economy. Carnegie Endowment for International Peace. https://doi.org/10.2307/j.ctt6wpjtw
Helberger, N. (2020). The political power of platforms: How current attempts to regulate misinformation amplify opinion power. Digital Journalism, 8, 842–854. https://doi.org/10.1080/21670811.2020.1773888
Helberger, N., Van Drunen, M., Vrijenhoek, S., & Möller, J. (2021). Regulation of news recommenders in the Digital Services Act: Empowering David against the very large online Goliath. Internet Policy Review. https://policyreview.info/articles/news/regulation-news-recommenders-digital-services-act-empowering-david-against-very-large
Heldt, A. (2019). Reading between the lines and the numbers: An analysis of the first NetzDG reports. Internet Policy Review, 8(2), 1–18. https://doi.org/10.14763/2019.2.1398
Hildebrandt, M. (2022). The issue of proxies and choice architectures. Why EU law matters for recommender systems. Frontiers in Artificial Intelligence, 5, 789076. https://doi.org/10.3389/frai.2022.789076
Hildén, J. (2022). The public service approach to recommender systems: Filtering to cultivate. Television & New Media, 23(7), 777–796. https://doi.org/10.1177/15274764211020106
Kitchin, R. (2017). Thinking critically about and researching algorithms. Information, Communication & Society, 20(1), 14–29. https://doi.org/10.1080/1369118X.2016.1154087
Kraft, M. E., & Furlong, S. R. (2013). Public policy: Politics, analysis and alternatives. CQ Press.
Kramsch, C. (2020). The political power of the algorithm. Technology and Language, 1(1), 45–48. https://doi.org/10.48417/technolang.2020.01.1
Langvardt, A. W. (2019). Business law: The ethical, global, and e-commerce environment (17th ed.). McGraw-Hill.
Markham, A. N. (2019). Critical pedagogy as a response to datafication. Qualitative Inquiry, 25(8), 754–760. https://doi.org/10.1177/1077800418809470
McQuail, D. (2005). McQuail’s mass communication theory (5th ed.). Sage. https://doi.org/10.4135/9780857024374
Mertala, P. (2020). Data (il)literacy education as a hidden curriculum of the datafication of education. Journal of Media Literacy Education, 12(3), 30–42. https://doi.org/10.23860/JMLE-2020-12-3-4
Meyer, T., & Hanot, C. (2020, September 28). How platforms are responding to the “disinfodemic”. Disinfo.eu. https://www.disinfo.eu/publications/how-platforms-are-responding-to-the-disinfodemic
Moeller, J., & Helberger, N. (2018). Beyond the filter bubble: Concepts, myths, evidence and issues for future debates. University of Amsterdam.
Montaner, M., López, B., & Esteva, J. L. (2003). A taxonomy of recommender agents on the internet. Artificial Intelligence Review, 19, 285–330. https://doi.org/10.1023/A:1022850703159
Morlino, L. (2012). Changes for democracy: Actors, structures, processes. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199572533.001.0001
Napoli, P. M. (2019). Social media and the public interest: Media regulation in the disinformation age. Columbia University Press. https://doi.org/10.7312/napo18454
Nichols, T. P., & Smith, A. (2021). Critical literacy, digital platforms, and datafication. In Handbook of critical literacies. https://doi.org/10.4324/9781003023425-40
Rahman, K. S. (2018). Regulating informational infrastructure: Internet platforms as the new public utilities. Georgetown Law and Technology Review, 2(2), 234–248. https://ssrn.com/abstract=3220737
Ricci, F., Rokach, L., & Shapira, B. (2015). Recommender systems: Introduction and challenges. In F. Ricci, L. Rokach, & B. Shapira (Eds.), Recommender systems handbook (pp. 1–34). Springer. https://doi.org/10.1007/978-1-4899-7637-6_1
Rochefort, A. (2020). Regulating social media platforms: A comparative policy analysis. Communication Law and Policy, 25(2), 225–260. https://doi.org/10.1080/10811680.2020.1735194
Rodotà, S. (2014). Il mondo nella rete: Quali i diritti, quali i vincoli [The world in the net: What rights, what constraints]. Laterza.
Santaniello, M. (2022). Sovranità digitale e diritti fondamentali: Un modello europeo di Internet governance [Digital sovereignty and fundamental rights: A European model of Internet governance]. Rivista italiana di informatica e diritto, 4(1), 5–5. https://doi.org/10.32091/RIID0058
Santaniello, M. (2021). La regolazione delle piattaforme e il principio della sovranità digitale [Platform regulation and the principle of digital sovereignty]. Rivista di Digital Politics, 3, 579–600. https://doi.org/10.53227/103806
Schlesinger, P. (2020). After the post-public sphere. Media, Culture & Society, 42(7–8), 1545–1563. https://doi.org/10.1177/0163443720948003
Schlesinger, P., & Kretschmer, M. (2020). The changing shape of platform regulation. Media@LSE. https://blogs.lse.ac.uk/medialse/2020/02/18/the-changing-shape-of-platform-regulation/
Sorice, M. (2022). Comunicazione politica e opinione pubblica [Political communication and public opinion]. In L. Gherardi (Ed.), Lezioni brevi sull’opinione pubblica (pp. 33–43). Meltemi.
Srnicek, N. (2017). Platform capitalism. Polity Press.
Stolton, S. (2020). Digital Services Act should avoid rules on “harmful” content, Big Tech tells EU. Euractiv. https://www.euractiv.com/section/digital/news/digital-services-act-should-avoid-rules-on-harmful-content-big-tech-tells-eu/
Sun, Z., Han, L., Huang, W., Wang, X., Zeng, X., Wang, M., & Yan, H. (2015). Recommender systems based on social networks. Journal of Systems and Software, 99, 109–119. https://doi.org/10.1016/j.jss.2014.09.019
Turillazzi, A., Casolari, F., Taddeo, M., & Floridi, L. (2022). The Digital Services Act: An analysis of its ethical, legal, and social implications. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4007389
Van Dijck, J., de Winkel, T., & Schäfer, M. T. (2021). Deplatformization and the governance of the platform ecosystem. New Media & Society, 0(0). https://doi.org/10.1177/14614448211045662
Van Dijck, J. (2014). Datafication, dataism and dataveillance: Big data between scientific paradigm and ideology. Surveillance & Society, 12(2), 197–208. https://doi.org/10.24908/ss.v12i2.4776
Van Drunen, M. Z. (2020). The post-editorial control era: How EU media law matches platforms’ organisational control with cooperative responsibility. Journal of Media Law, 12(2), 166–190. https://doi.org/10.1080/17577632.2020.1796067
Webb, A. (2019). The Big Nine. How the tech titans & their thinking machines could warp humanity. Public Affairs.
Yeung, K. (2017). “Hypernudge”: Big data as a mode of regulation by design. Information, Communication & Society, 20(1), 118–136. https://doi.org/10.1080/1369118X.2016.1186713
Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. Public Affairs.
© 2023 the author(s), published by De Gruyter.
This work is licensed under the Creative Commons Attribution 4.0 International License.