Article, Open Access

Tackling Hate Speech in the Digital Space: Germany’s Plans on an Act Against Digital Violence and its Impact on Ethno-Cultural Minorities

  • Kyriaki Topidi

    Dr. habil. Kyriaki Topidi is the head of the research cluster Culture & Diversity and joined ECMI in 2019. She holds a degree in law from the Robert Schuman Faculty of Law in Strasbourg, an MA in International Studies from the University of Birmingham, a PhD in European Studies from Queen’s University Belfast, and a habilitation in Comparative Constitutional Law from the University of Fribourg. She has lectured extensively and conducted research in the areas of Public International Law, European Law, Human Rights, and Comparative Law. In the past, she has held research positions in various institutions and was a senior lecturer at the Faculty of Law of the University of Lucerne in Switzerland. She has also served as the managing director of the Centre for Comparative Constitutional Law and Religion at the same institution. She has been a guest scholar at Fordham University (US), the IDC (Israel), the Max Planck Institute in Halle (Germany), the Institute of Law and Religion of the University of Fribourg (Switzerland), the Institute of Comparative Law of the University of Paris 2 – Assas (France), as well as the National Law School Delhi (India), among others. Her research interests focus on diversity management, minority protection rights and mechanisms, with a special interest in religion. She is the author and editor of several volumes, including “EU Law, Minorities and Enlargement” (Intersentia 2010), “Constitutional Evolution in Central and Eastern Europe: Expansion and Integration in the EU” (Ashgate 2011), “Transnational Legal Process and Human Rights” (Ashgate, 2013), and “Religion as Empowerment: Global Legal Perspectives” (Routledge 2016). More recently, she has edited a collection on “Normative Pluralism and Human Rights” (Routledge 2018). Her latest monograph focuses on “The Right to Difference and Comparative Religious Diversity in Education” (Routledge 2020), and her latest co-edited work focuses on “Minority Recognition and the Diversity Deficit” (Hart 2022). At the ECMI, she is currently researching minority identity and digital governance, as well as the intersection of minorities with social movements, with an upcoming edited volume on “Minority Rights and Social Change: Norms, Actors, and Strategies” (Routledge 2023). She is the co-editor of the book series “Routledge Advances on Minority Studies” (RAMS).

    and Moritz Malkmus

    Moritz Malkmus (Dipl.-Jur.) is a research assistant to Professor Rainer Hofmann at Goethe University Frankfurt. He studied law in Frankfurt and Vilnius, earning his First State Examination with distinction in 2021. His doctoral research, part of the LOEWE research focus “Minority Studies: Language and Identity” (funded by the Hessian Ministry of Higher Education, Research, Science and the Arts until August 2024), explores the protection of new minorities under international law. His previous publications cover (EU) fundamental rights and minority issues as well as humanitarian law.

Published/Copyright: 19 November 2024

Abstract

The increasing significance of online communication services is not yet matched by adequate protections for particularly vulnerable groups, such as minorities, who are disproportionately affected by various forms of digital violence. This article examines a legislative initiative introduced by the German government in 2023, which aims to supplement the EU’s Digital Services Act (DSA) and address certain enforcement gaps in safeguarding individual rights. The analysis reveals that the initiative still faces substantial legal challenges, especially given the harmonizing nature of the DSA, which may require a more refined approach to its ambitious goals. However, a review of European Court of Human Rights (ECtHR) case law concerning the state’s obligation to protect individuals from hate speech suggests that the primary issues outlined in the draft should continue to be pursued. The article contextualizes the initiative within the socio-technical environment where ethno-cultural minorities are targeted online, particularly through hate speech. By situating the initiative within the current socio-legal framework and incorporating a comparative perspective – drawing on regulatory models from Canada, the UK, and Australia – it highlights key challenges that lie ahead.

1 Introduction

Digital tools have expanded the scope for digital forms of violence with real-life implications. Such tools include, for instance, stalkerware or spy apps that have notoriously proliferated gender-based violence (Dritter Gleichstellungsbericht 2022). Digital violence is growing particularly fast in the contexts of online hate speech, video gaming and employment. As is widely debated (DeCook et al. 2022; Pennington 2018, 620–636; Zheng and Walsham 2021), the specific features of the internet make it easy for offenders to commit acts of aggression against the most vulnerable, including minority group members. Anonymity, rapid dissemination and complicated content-removal processes are among the most frequently cited features of the digital ecosystem that facilitate digital violence.

For minority and vulnerable groups that come under attack online due to protected characteristics such as race, ethnicity, gender or religion, hate speech and other forms of digital violence often imply a withdrawal from democratic participation processes[1] or a denial of fundamental rights such as expression or association, in an attempt to limit the exposure of themselves and their families to digital harm and violence. HateAid (2021, 4) reports, for instance, that more than 70 per cent of the hate content reported to the German Federal Police Department has been attributed to right-wing extremist groups attacking members of such groups. The harm caused to vulnerable users is multifaceted: beyond the violation of their personal rights, it extends to their mental health and physical well-being, but also to economic loss where, for instance, victims are engaged in digital work. Digital violence may also be treated, in a number of cases, as the preliminary stage to physical, offline violence.

For the moment, digital violence, as described above, appears to develop within a legal vacuum. Regulating digital spaces thus becomes a pressing need: it raises anew the questions of content moderation and evidence gathering against potential offenders, but also extends to the discussion of new types of digitally enabled offences and the harms they cause. Law enforcement authorities commonly fail to distinguish between digitally enabled offences and more conventional ones, leading to reduced reporting rates for digital violence that undermine the development of coherent regulatory policies in Germany and elsewhere. In the absence of robust action by social media companies in this direction, the current initiative of the German government brings to the fore some of the most challenging concerns surrounding regulation in this area: the need to effectively combat digital violence against vulnerable users, including minority groups; the oversight of algorithm-driven and hybrid techniques for moderating harmful content; the collection of evidence in cases of digital violence; and the need for research that would allow better calibrated regulatory interventions.

Within this framework, the aim of the current contribution is twofold: first, it provides an embedded legal analysis of the context in which the initiative is rooted. Second, it connects the initiative with the socio-technical frame within which ethno-cultural minorities are victimized online, most evidently through hate speech. In this way, we attempt to situate the initiative in socio-legal terms within the current regulatory environment, including from a comparative perspective, highlighting the major challenges ahead.

2 The Federal Government’s Initiative

2.1 Legislative Context

On 17 February 2024, the European Union’s (EU) Digital Services Act (DSA)[2] became applicable. Together with the Digital Markets Act (DMA),[3] which has applied since 2 May 2023, it marks a comprehensive revision of the EU’s regulatory framework for digital platforms.[4] In the wake of these sweeping legislative changes, the German government launched an initiative to supplement EU legislation with specific measures to enhance the judicial enforcement of the subjective rights of those affected by digital violence.[5] These plans include replacing certain provisions of the 2017 Netzwerkdurchsetzungsgesetz (Network Enforcement Act),[6] which the DSA has largely rendered obsolete, with the Digitale-Dienste-Gesetz (Act on the implementation of the DSA) and the Gesetz gegen digitale Gewalt (Act against Digital Violence). While the Federal Government submitted a draft of the former bill to the Bundesrat in December 2023 (Drucks. 676/23), the draft of the latter bill, originally announced for fall 2023, is still pending.[7] As of now, there is only a concept paper[8] published by the Federal Ministry of Justice in April 2023, which outlines the key elements of the latter bill and has already attracted considerable academic attention (Härting and Adamek 2023, 316–320; Lück 2023, 740–746; Maurer 2024, 257–263; Panahi 2023, 556–62; Schäfer 2023, 734–40; Sehl, Pfleger, and Suliak 2023; Valerius 2023, 142; Weck 2023, 12–15). The Ministry’s paper proposes strengthening private information procedures, allowing victims to obtain court orders to suspend social media accounts, and requiring social networks to appoint a person in Germany authorised to receive service. These measures are general in nature and not specifically tailored to the needs of minorities. However, if thoughtfully implemented, they could bolster efforts against digital hate speech and enhance the protection of individual rights, thereby having a significant impact, particularly for minority groups, who are disproportionately targeted online (cf. Topidi 2019; The UN Special Rapporteur on Minority Issues 2021, 21; Faloppa et al. 2023; FRA 2023, 35 et seq.).

2.2 Scope of the Initiative

Unlike existing approaches to platform regulation that primarily focus on the behavior of gatekeepers, the initiative shifts focus to the (judicial) enforcement of subjective rights. As outlined by Schäfer (2023, 734 et seq.), Panahi (2023, 557), Lück (2023, 741), Valerius (2023, 142–3) and Maurer (2024, 259), the Ministry’s initiative draws on a notion previously unfamiliar to the German legal system: “digital violence”. This term is paraphrased as “violations of the right of personality in the digital space” (BMJ 2023a, 1), with examples including “insults, threats or defamation” (BMJ 2023b, 1 [translations by the authors]). In addition, one proposal regarding the private information procedure even extends to all absolute rights, including the right to establish and carry on a business (Recht am eingerichteten und ausgeübten Gewerbebetrieb) (BMJ 2023a, 3). Overall, however, the paper should not be interpreted as introducing a new category of individual rights infringements, nor as altering the established rules of democratic discourse (BMJ 2023a, 2; Schäfer 2023, 734–5; Weck 2023, 12; critically, Panahi 2023, 562).

This approach has been criticized, e.g. by GFF (2023, 1, 3), Maurer (2024, 259–60) and Lück (2023, 742), for being overly broad: by citing an untruthful restaurant review as a violation of the right to establish and carry on a business, the paper invokes an example that lacks any discernible connection to the protection against ‘digital violence’. Conversely, the proposal has been considered too narrowly focused on individual rights, leaving a gap where hateful contents do not violate subjective rights but rather general legal interests protected by objective law, e.g. in cases of incitement (GFF 2023, 1, 3, 5; Lück 2023, 742–3). On the other hand, Schäfer (2023, 736–7) argued that the concept of personality rights, as recognized by the German Federal Court of Justice (Judgement of 14 June 2022 – VI ZR 172/20), may be broad enough to cover hateful content affecting entire groups, provided that members of that group can be individualized. Examining similar debates, one might even consider whether alternative concepts, such as ‘online harm’ (as discussed by Farrand 2024), would be more suitable for effectively addressing the issue at hand. This range of opinions underscores the need for further conceptual clarification. In particular, the Ministry will need to consider whether its focus on individual rights sufficiently addresses all behaviours it seeks to regulate under the notion of ‘digital violence’ (cf. Maurer 2024, 260).

2.3 Proposed Measures

The Ministry proposed three specific measures: The first is to modify the existing information procedure under the Telekommunikation-Telemedien-Datenschutz-Gesetz (Telecommunications and Telemedia Data Protection Act)[9] to facilitate the identification of individuals publishing infringing content. This would be achieved by expanding the scope of the information procedure to the disclosure of user data, e.g. IP addresses, and by extending its application to messenger services (BMJ 2023a, 2–4. Cf. Panahi 2023, 557–8; Schäfer 2023, 735 et seq.; Lück 2023, 742 et seq.; Weck 2023, 12; Maurer 2024, 257–8). However, as Lück (2023, 743) points out, in some cases (e.g. shared access), disclosing IP addresses does not necessarily lead to an individualized person, and Maurer (2024, 260) notes that there are relatively simple ways to hide IP addresses. In addition, the information procedure is to be extended to infringements of all absolute rights (BMJ 2023a, 3). In this context, concerns have been raised that this extended procedure could be misused to target anonymous speech that, while unwelcome, does not violate any rights (GFF 2023, 1, 3, 5; Lück 2023, 742–3; Maurer 2024, 261).

The second proposal introduces a new instrument allowing courts, upon request, to order social networks to suspend accounts used to disseminate hateful content (BMJ 2023a, 4–5; Lück 2023, 742 et seq.; Maurer 2024, 258; Panahi 2023, 559–60; Schäfer 2023, 738–9; Weck 2023, 12–3). According to the concept paper, this measure is subject to the following conditions: First, content moderation must have proven insufficient, and there must be a risk of recurring serious violations of personality rights. Second, the account holder must be notified of the blocking request and given an opportunity to be heard. Third, the suspension must be proportionate and limited to a reasonable duration. Given the severity of this measure, it has been appropriately noted that the legal threshold should be set relatively high (Schäfer 2023, 739). However, the proposal lacks clarity on whether only a specific account can be blocked, or if other accounts held by the same individual (potentially even on related platforms) can also be suspended to reduce the risk of circumvention. Furthermore, it is unclear under which circumstances a suspension would be possible for a first offence (Lück 2023, 744; Schäfer 2023, 739).

The third proposal requires social networks to appoint a person in Germany authorized to receive service of process (BMJ 2023a, 6; Lück 2023, 746; Maurer 2024, 258; Panahi 2023, 560; Schäfer 2023, 739–40; Weck 2023, 12–3). While this obligation already existed under previous legislation,[10] the proposal now extends it to the service of extrajudicial documents, such as requests to remove illegal content. The Ministry considers this to be an essential prerequisite for informing service providers of infringing statements and for properly documenting this information in the event of legal disputes with foreign-based providers.[11]

2.4 Legal Obstacles to the Initiative

2.4.1 International Jurisdiction of German Courts for Information Procedures

While the legal proposals outlined in the concept paper are certainly open to debate, several fundamental obstacles remain to be addressed. The first proposal, which aims to extend the existing information procedure, raises the question of whether German courts would even have jurisdiction (ratione loci) to issue corresponding information orders to platform service providers based in other EU countries (cf. Roth 2024). Such jurisdictional questions are governed, in principle, by Regulation (EU) No 1215/2012.[12] Under Article 4 para. 1 of the said Regulation, persons domiciled in a Member State shall, subject to the provisions of this Regulation, be sued in the courts of that Member State. Special jurisdictions may be established under Articles 7 et seq. of the same Regulation. However, a recent decision by the German Federal Court of Justice (III ZB 25/21) indicates that establishing such jurisdiction for the information sought under the first proposal may be challenging (cf. Roth 2024; Glocker 2024, 52). In the cited case, the plaintiff, who sold products through an online platform, sought a court order for the disclosure of basic subscriber information (Bestandsdaten) against a Luxembourg-based platform operator. The requested information pertained to customers who had reported allegedly false violations of the platform’s guidelines by the seller. The Federal Court of Justice dismissed the plaintiff’s appeal, ruling that the assertion of tortious claims against a third party is insufficient to establish jurisdiction under Regulation (EU) No 1215/2012 for proceedings concerning the disclosure of such basic subscriber information (Case III ZB 25/21, para. 17–9; Glocker 2024, 52; Roth 2024). A similar situation could arise under the planned Digital Violence Act, if platform users request the disclosure of data to assert infringements of their absolute rights by third parties (cf. Roth 2024).

2.4.2 Country-of-Origin Principle

Another obstacle identified by Cole and Ukrow (2023, 17), Mantz (2024, 36) and Roth (2024) is the country-of-origin principle, enshrined in Article 3 of the E-Commerce Directive,[13] which continues to apply[14] following the DSA’s entry into force. As the Court of Justice of the EU (CJEU) recalled in a recent preliminary ruling procedure discussed by Raue (2024, 204–5), Mantz (2024, 34–7) and Roth (2024), the said principle states that “information society services should be supervised at the source of the activity” (Case C-376/22, 40, 50 [emphasis added]). Deviations from this principle are permissible only under certain conditions, such as those set out in Article 3 para. 4 of the said Directive. In the legal dispute underlying the preliminary ruling, Austria had sought to bring foreign platform providers within the scope of national legislation[15] that aimed at protecting users of such platforms, relying on Article 3 para. 4 of the E-Commerce Directive. However, the CJEU held that this provision

must be interpreted as meaning that general and abstract measures aimed at a category of given information society services described in general terms and applying without distinction to any provider of that category of services do not fall within the concept of measures taken against a ‘given information society service’ within the meaning of that provision. (Case C-376/22, 64)

Thus, the CJEU reaffirmed the fundamental principle of the above-mentioned EU legislation, which limits the general and abstract regulation of these services to the Member State in which they are established (Raue 2024, 205). While it has been rightly pointed out, prior to the above decision, that Article 14 para. 3 of the E-Commerce Directive allows for some executive, judicial and (presumably) legislative discretion at the national level (Cole and Ukrow 2023, 20), it should be noted that this provision was not carried over verbatim into Article 6 para. 4 of the DSA (Hofmann 2023, 6). Unlike Article 14 para. 3 of the E-Commerce Directive, Article 6 para. 4 of the DSA no longer permits Member States to establish procedures governing the removal or disabling of access to information. This is likely to have the effect that such measures will now have to be assessed in terms of the extent to which they are covered (and excluded) by Article 16 of the DSA. Otherwise, the regulatory competence for general and abstract measures remains with the country of origin.

2.4.3 Relationship to the DSA

The remaining questions concern the compatibility of the Ministry’s proposals with the DSA, in particular the extent to which certain provisions of the DSA, within their scope of application, afford Member States legislative latitude. If the DSA excluded complementary national rules, the proposed legislation would conflict with EU law, which enjoys primacy of application.[16] There is currently a detailed study of these issues by Cole and Ukrow (2023), cited and (partly) confirmed by several authors (Lück 2023, 745–6; Panahi 2023, 559 [with regards to Article 23 para. 1 DSA]; Schäfer 2023, 738, 740; Weck 2023, 13–4), which argues that the proposed Digital Violence Act would be compatible with the DSA, while others (Härting and Adamek 2023, 316–320; Maurer 2024, 259; Panahi 2023, 560–2 [with regards to the third proposal]) doubt this assessment (at least for some of the proposals). The Ministry, largely following Cole and Ukrow (2023), assumes that the Act against Digital Violence will be applicable alongside the DSA, as the latter provides only limited guidance on the private rights of users and certainly no information about the enforcement of such rights (BMJ 2023c, 6; Libor 2023, 234; Lück 2023, 745). Since the compatibility of the first proposal with the DSA has not been contested, even by more critical commentators (Härting and Adamek 2023, 319), subsequent discussions will focus on proposals two and three.

With regard to the court-ordered suspension of accounts (second proposal), it has been argued that the potentially conflicting norm, Article 23 para. 1 of the DSA, would only harmonise the responsibility of service providers to take certain measures and protection against misuse, which should be distinguished from the judicial enforcement of personality rights (Cole and Ukrow 2023, 1, 16, 34 et seq.; Panahi 2023, 559; Schäfer 2023, 738–9). In support of this view, reference was made to Article 6 para. 4 of the DSA, which explicitly provides for “the possibility for a judicial or administrative authority, in accordance with a Member State’s legal system, to require the service provider to terminate or prevent an infringement”. Therefore, the DSA would leave room for national measures to enforce personality rights, as provided for in the second proposal. This reading may also be supported by the case law of the CJEU on the corresponding provision of Article 14 of the E-Commerce Directive (Cole and Ukrow 2023, 20; Schäfer 2023, 740). In interpreting Article 14 para. 3 of the Directive, which largely corresponds to Article 6 para. 4 of the DSA (Cole and Ukrow 2023, 20; Hofmann 2023, 6), the CJEU stated that the liability exception for hosting providers “is without prejudice to the power of the national courts or administrative authorities to require the host provider concerned to terminate or prevent an infringement, including by removing the illegal information or by disabling access to it” (Case C-18/18, 24). This case law was cited, for example, by the Oberlandesgericht Frankfurt am Main (6 U 154/22, 88–90) in a decision on a national regulation[17] containing elements structurally similar to those proposed for the Digital Violence Act. However, it should be borne in mind that any comparison with the legal situation under the E-Commerce Directive, though still applicable, is not necessarily conclusive for the DSA. The primary objection to the compatibility of the second proposal was that it would impose additional (national) requirements on service providers that exceed existing obligations under Article 23 para. 1 of the DSA, potentially undermining its harmonising nature (cf. Härting and Adamek 2023, 319–20; Götz 2023, 453–4). Consequently, lawmakers face the challenge of crafting a legal basis for the proposed court orders that does not interfere with the (harmonised) liability regime applicable to service providers (cf. Panahi 2023, 559). In this context, it has been suggested that, instead of blocking a user’s account, such orders should be limited to the distribution of individual illegal content (Mantz 2024, 36), although this would probably reduce the effectiveness of that instrument.

Serious doubts were also expressed as to the compatibility of the third proposal with Articles 12 (on points of contact for recipients of the service) and 13 (on legal representatives) of the DSA (Härting and Adamek 2023, 319; Panahi 2023, 560 et seq.). These provisions oblige service providers to “designate a single point of contact to enable recipients of the service to communicate directly and rapidly with them” (Article 12 para. 1 of the DSA) and, for services established outside the EU, to designate a “legal representative in one of the Member States where the provider offers its services” (Article 13 para. 1 of the DSA). Some have argued that these provisions have a fully harmonising effect on users’ ability to contact intermediary services (Härting and Adamek 2023, 319; Panahi 2023, 560 et seq.) or claimed that such a national obligation to appoint additional persons authorised to receive service would contribute to the fragmentation of the internal market, which the DSA seeks to prevent (Panahi 2023, 561–2). Furthermore, critics assert that this obligation would undermine the ‘notice and action’ mechanisms established under Article 16 of the DSA, as it would enable users to notify service providers of illegal content, thereby triggering their liability under Article 6 para. 1 of the DSA. From this point of view, there would only be room for national legislation under the assumption that the DSA (Articles 12 and 13) does not provide for formal service (Schäfer 2023, 740).

As this brief overview illustrates, several obstacles remain, indicating a need to streamline the project. Whether the planned Act against Digital Violence is compatible with the above-mentioned provisions of the DSA can therefore only be determined once the bill has been drafted (cf. Panahi 2023, 559). In doing so, the Ministry will have to prioritize compatibility with the DSA and the most recent case law, carefully considering the issues raised above.

2.5 Fundamental Rights Obligations

The Ministry’s proposal also necessitates a balance among several fundamental rights, including the general right of personality, freedom of speech, informational self-determination, and the privacy of telecommunications. While some of these issues have been outlined in the above-mentioned works (Cole and Ukrow 2023, 41–2; Lück 2023, 743–4; Panahi 2023, 558, 560; Weck 2023, 13–4), little attention[18] has been paid to the specific situation of minority groups, particularly regarding potential fundamental rights requirements for strengthening enforcement measures against hate speech. The foundation for such considerations lies in the well-researched (e.g. FRA 2023, 22; Frosio and Geiger 2023, 31–77; Grote and Wenzel 2022, 124; Korpisaari 2022, 352–377; Topidi 2019) case law of the European Court of Human Rights (ECtHR) on hate speech and freedom of expression.[19] In this context, it is firmly established that the most serious forms of hate speech fall outside the protection of Article 10 by virtue of Article 17 of the European Convention on Human Rights (ECHR).[20] Expressions that do not meet this threshold may be restricted in accordance with the general principles outlined in Article 10 para. 2 of the ECHR (cf. Grote and Wenzel 2022, 124; FRA 2023, 22). In addition, the Strasbourg Court has substantiated its case law in two relevant aspects: first, regarding the liability of private entities, i.e. platform administrators and account holders, and second, concerning the possible obligations of the state to take action against hate speech.

With its landmark decision in Delfi v Estonia (appl. no. 64569/09), the Grand Chamber of the ECtHR provided notable clarifications on the balance between freedom of expression and the protection against hate speech in the digital space (including the liability of platform administrators), which were more recently refined in a different factual context in Sanchez v France (appl. no. 45581/15; cf. Kupsch 2021; Korpisaari 2022, 352–77; FRA 2023, 22–3). In the latter case, after recalling the general principles governing the question whether a particular interference is “necessary in a democratic society” (Article 10 para. 2 of the ECHR), the ECtHR stated:

Since tolerance and respect for the equal dignity of all human beings constitute the foundations of a democratic, pluralistic society, it follows that, in principle, it may be considered necessary in certain democratic societies to penalise or even prevent all forms of expression that propagate, encourage, promote or justify hatred based on intolerance (including religious intolerance), provided that any ‘formalities’, ‘conditions’, ‘restrictions’ or ‘penalties’ imposed are proportionate to the legitimate aim pursued (appl. no. 45581/15, para. 149).

The Court then reiterates its principles on hate speech, as set out in Perinçek v Switzerland (appl. no. 27510/08, 204–8), and emphasizes:

Bearing in mind the need to protect the values underlying the Convention, and considering that the rights under Articles 10 and 8 of the Convention deserve equal respect, a balance must be struck that retains the essence of both rights. While the Court acknowledges that important benefits can be derived from the Internet in the exercise of freedom of expression, it has also found that the possibility of imposing liability for defamatory or other types of unlawful speech must, in principle, be retained, constituting an effective remedy for violations of personality rights (appl. no. 45581/15, 162 [emphasis added]; see also appl. no. 64569/09, 110).

In Sanchez v France, the Court even went so far as to extend private liability to the (non-)removal of hateful third-party comments on the politician’s Facebook page, which he administered (cf. Kupsch 2021; Korpisaari 2022, 369). The idea that states may impose liability on platform administrators who fail to remove hateful content was further reinforced in the case of Zöchling v Austria (appl. no. 4222/18; critically Tuchtfeld 2023). While it is important to bear in mind the differences between platform administrators, as addressed in the above-mentioned cases, and hosting service providers, who are covered by the DSA’s liability privilege, the ECtHR provides some clarity on the criteria for identifying unlawful contents, on which the DSA is silent (see Frosio and Geiger 2023, 65; Korpisaari 2022, 372 et seq.).

However, in addition to issues of liability and content moderation – both of which are governed by different legal standards depending on the actors involved – ECHR case law could support calls to strengthen national legislation on the enforcement of individual rights in cases of hate speech. A particularly relevant case in this context is Beizaras and Levickas v Lithuania (appl. no. 41288/15), which dealt with hateful and homophobic comments on a social networking platform (cf. Kundrak 2020, 19–39; Peters and Altwicker 2022, 104). After emphasising that “pluralism and democracy are built on genuine recognition of, and respect for, diversity” (para. 107), the ECtHR recalled

the States’ positive obligation to secure the effective enjoyment of the rights and freedoms under the Convention. This obligation is of particular importance for persons holding unpopular views or belonging to minorities, because they are more vulnerable to victimisation […].

Positive obligations on the State are inherent in the right to effective respect for private life under Article 8; these obligations may involve the adoption of measures even in the sphere of the relations of individuals between themselves. While the choice of the means to secure compliance with Article 8 in the sphere of protection against acts of individuals is in principle within the State’s margin of appreciation, effective deterrence against grave acts where essential aspects of private life are at stake requires efficient criminal-law provisions […].

The Court has acknowledged that criminal sanctions, including against the individuals responsible for the most serious expressions of hatred, inciting others to violence, could be invoked only as an ultima ratio measure […] (appl. no. 41288/15, 108, 110, 111).[21]

The Court went on to recall its case law on the positive dimension of Article 14 of the ECHR, the notion of discrimination and the required burden of proof. It concluded that the relevant authorities failed “to discharge their positive obligation to investigate in an effective manner whether the comments regarding the applicants’ sexual orientation constituted incitement to hatred and violence” (appl. no. 41288/15, 129). Although it has been argued that it would be “far-fetched” to conclude that the ECtHR has generally created a “new positive obligation to investigate and prosecute hate speech” (Kundrak 2020, 35), the case demonstrates that states that fail to take action against clearly hateful content after being informed of it are in breach of the Convention.

This case law could encourage member states to explore new avenues for the effective protection of personality rights online, especially where the current regime of content moderation and criminal sanctions (with their limited deterrent effects), proves insufficient (see the data at Sehl, Pfleger, and Suliak 2023). Specifically, in cases of manifestly unlawful hate speech against particularly vulnerable persons, such as members of minorities, fundamental rights considerations may therefore argue for the introduction of a legal basis for court orders aimed at terminating or preventing an infringement, as envisaged by the German government. Furthermore, it would make a notable contribution to Germany’s compliance with its obligations under international law, such as those arising from Article 6 of the Framework Convention for the Protection of National Minorities.[22] For the reasons outlined above, it is important to ensure that such legislation is limited to this purpose and does not override the liability regime applicable to service providers as such, as provided for in the DSA.

Against this background, the proposal’s victim-centered approach to strengthening individual rights enforcement adds a crucial layer to the complex interplay among gatekeepers, regulatory authorities, users and courts. By equipping those affected with the necessary legal instruments, this proposal empowers individual users to assert their rights through state courts, particularly in cases where previous content moderation and mitigation strategies have proven inadequate. Although this represents only a small fraction of the broader legislative efforts to foster a socially acceptable (i.e. non-harmful) online environment, it encourages the actorness of the corresponding stakeholders. Enhancing the role of users – especially those from minority groups frequently exposed to harmful content – by improving their access to effective legal remedies therefore offers a promising complement to existing approaches.

3 The Socio-Technical Context of the German Initiative and its Minority-Relevant Dimensions

3.1 Comparative Perspectives

Most national and supra-national legislative initiatives aimed at addressing digital forms of violence include within their regulatory scope content that sexually victimizes children and intimate content circulated without consent. However, they are equally concerned with hate speech, incitement to violence and the promotion of genocide or terrorism, all of which are directly linked to ethno-cultural minority groups and their members.

The German government’s concept paper on digital violence (BMJ 2023a) comes at a time when digital patterns of anti-minority discrimination and hate speech highlight stigmatisation, assaults on individuals’ worth, and dehumanisation. Racialisation processes online additionally affect their victims regardless of whether they constitute instances of everyday racism or more significant attacks, and they tend to conflate ethnic, racial, and religious backgrounds within racialised hostility. Just as worryingly, minority group members can come to understand online hate speech and racism as inevitable, a direct consequence of one’s online visibility (Bivens 2017, 880–898; Christin and Lu 2023, 1–24; Duffy, Poell, and Nieborg 2019, 1–8; Topidi and Metcalfe 2024a). Within this framework, intentionality to cause harm is relevant and can be understood as the extent to which those targeted experience online hate as reflecting discriminatory intentions on an individual or a group basis.

Within a complex frame where the online-offline dimensions of discrimination and harm become blurred, the proposed legislation contributes to the discussion about the role of and expectations from technology when regulating harm and violence produced online. At the same time, the legislative project in question confirms the observation that technology has shifted from being a distinct sector of regulation to becoming an additional policy layer for all publicly regulated processes and goods.

From the perspective of ethno-cultural minority groups, the online space has become, as mentioned, a source of discrimination (see the important findings of Haimson et al. 2021; Haugen 2023; Noble 2018; Topidi and Metcalfe 2024b). The question that the proposed legislation raises anew is what the adequate means might be to support the exercise of rights by vulnerable individuals and groups experiencing harm and oppression online. More broadly, the same set of proposed rules forms part of the effort to strike the right balance between innovation and rights protection. It is also notable that the projected legislation comes after the enactment of the EU DSA, suggesting the willingness of the German legislator to complement EU law with national legislation, with all the legal caveats already discussed in the previous sections.

Overall, the approach pursued by the previous attempts at regulation, the NetzDG as well as the more recent DSA at EU level, relies on a strict regulatory frame based on the ‘notice and action’ model. The model obliges big platforms to block or delete harmful content within prescribed periods. They are also required to have in place accessible and user-friendly individual complaint mechanisms for illegal online content. Lastly, they are under an obligation to report criminal content to law enforcement authorities. This is not, however, the only regulatory approach currently adopted by states: the UK and Canada have opted for a more ‘system-based’ approach, characterized by a duty of platforms to act responsibly and a propensity to look at the issue from a more systemic and ex ante decision-making perspective (Rinceanu and Stephenson 2024). Such a duty implies detailed risk assessments, mitigation strategies and regular evaluation of their efficiency (Rinceanu and Stephenson 2024). For example, the Canadian Bill C-63 seeking to enact the Online Harms Act, tabled in February 2024,[23] provides for the establishment of a distinct regulatory body, the Digital Safety Commission, tasked with considering complaints and conducting independent investigations in ways that facilitate regulatory oversight of platforms and Big Tech while aiming to distribute regulatory power more evenly. Compared to other similar online safety acts, the Canadian example adopts a vague definition of regulated services, leaving considerable room for implementation.[24] It additionally emphasizes the need for platforms to report on their moderation systems and practices in a way that allows a better understanding of the interrelationships at play and helps preempt future regulatory challenges. It has drawn inspiration from Australia’s Online Safety Act (OSA) 2021.

The earlier Australian Act against online harm was designed around the expectation that online service providers should become proactive in protecting users against harmful online content.[25] The Act similarly takes a risk- and harms-based approach coordinated through an eSafety Commissioner.[26] The latter’s tasks include prevention, protection and the identification of emerging trends in online harm. The Commissioner also enjoys discretionary powers, including rule-making powers. The Australian Act additionally introduces the power to require internet service providers to block access to violent content, the duty of online platforms and service providers to detect and remove illegal or restricted content, as well as duty-based measures requiring tech companies to take reasonable steps to maintain an online space that is safe and free from violence. The Act is currently subject to an independent review of its effectiveness by virtue of Section 239A of the OSA, with findings expected in late October 2024 (Global Compliance News 2024). The review was brought forward by one year, among other reasons, due to the need to enhance the protection against hate speech in conjunction with other legislative initiatives. Indicatively, the Australian government is considering enhancing hate speech legislation through a planned Religious Discrimination Bill. The regulatory intersection between the OSA and religious discrimination at the federal level exposes the limitations of the current national regime against online harm and violence. It could also be construed as a warning calling for comprehensive and coordinated regulatory regimes in this area.

In sum, the common feature observed in both the Canadian and Australian regulatory attempts is a designated regulatory body that enjoys broad authority with little oversight, relying on a widely framed duty of care imposed on tech companies and platforms (Anjum 2024). At the moment, the broad common direction of regulatory efforts within national contexts suggests a shift in the basis of access to information in cases of digital violence from platforms’ internal terms of service to the law, a constant claim of vulnerable groups in the past few years. Such a shift has repercussions for the regulatory regime of the online space, as it subjects platforms to increased accountability for the content they moderate.

Still, regardless of the approach chosen to mitigate online harm, the key regulatory issues concern the definition of online violence and harmful content; the economic incentives for platforms to moderate content more efficiently; the risks of platform overreach, in terms of potential rights violations and the chilling effect on the exercise of individual rights; and transparency and accountability as matters of good governance. The extent to which any approach treats the regulation of online spaces as an issue of ‘mass speech administration’ as opposed to an individualized system of error correction is not irrelevant either (Hasinoff and Schneider 2022; Peterson-Salahuddin 2024, 1–13). This is because the implications of online harm and violence go beyond individual cases of offenders and victims and point to a more complex web of systemic interrelationships that largely replicate discriminatory patterns of the offline world. Broadly speaking, the definition of harmful content, including in Canadian Bill C-63 or the German proposal, focuses on harms that users experience when viewing content posted by others. It neglects how platforms contribute to the virality and proliferation of harmful content through algorithmic and other moderation tools (Hrynyshyn 2024). The systemic dimensions of stigmatizing and discriminating processes online thus echo the discrimination and disadvantage targeting minority groups in the physical world.

In comparative terms, among the most crucial hurdles to the regulation of online violence one can additionally identify the failure to provide legal grounds obliging online platforms to cooperate in identifying perpetrators (HateAid 2021), together with the absence of a definition of digital violence, which obstructs the maintenance of user safety online. So far, platforms have arbitrarily withheld information from law enforcement authorities, even following the issuing of court orders; only exceptionally, where negative publicity is at stake, is their cooperation forthcoming (Topidi and Metcalfe 2024a). In the absence of knowledge about how digital technologies affect the social fabric of societies, a question of direct relevance and impact to minority groups, digital violence continues to spread through polarization, alienation and mental as well as physical forms of violence. Outstanding issues largely left unaddressed by the initiatives in question concern victim-centered, intersectional regulatory solutions to online violence, requiring frameworks built around consent, confidentiality, intersectionality and accountability.

3.2 Responses to the Public Consultation Process: The Minority Group Angle in the German Context

A survey of the reactions to the public consultation process launched in 2023, sourced from a variety of civil society actors engaged in ethno-cultural minority policy making and representation, reveals a number of recurring structural policy concerns with the current proposal, in terms similar to those discussed in the previous section. These concerns are conceptual, procedural and governance-related, and arise particularly from the perspective of ethno-cultural minority groups in Germany.

3.2.1 Conceptual Challenges Highlighted by Minority Civil Society Bodies

The proposal, as it stands, raises the already mentioned concern regarding how the concept of digital violence is defined in the text (Amadeu Antonio Stiftung 2023, 1; Amnesty International 2023, 1). The absence of a clear conceptualization of what constitutes digital violence for the purposes of regulation results in a lack of clarity about the kinds of offences that fall within the scope of the projected legislation. On this point, the Zentralrat Deutscher Sinti und Roma (2023, 1) suggests extending the coverage of the concept to “any form of illegal hate crime and the spreading of fake news” (alle Formen von Hasskriminalität und strafbaren Falschnachrichten).

The draft document of the intended legislation refers only vaguely to the ‘unlawful infringement of absolute rights’ as the area of concern of the regulation. Such a broad description of the law’s area of application may, however, produce adverse effects. As the experience from the online content moderation processes of major platforms has shown (Bivens 2017; DeCook et al. 2022, 63–78; Gillett, Stardust, and Burgess 2022, 1–12; Kadri 2022), it can be used abusively by hate speakers to limit freedom of speech and silence minorities, jeopardize the safety of whistle-blowers or even become an object of manipulation in the hands of extremist and polarizing users (Amadeu Antonio Stiftung 2023, 1; Amnesty International 2023, 2, 3). This explains the repeated calls for measures to prevent abuses of the law that would contribute to the further silencing of vulnerable users.

Furthermore, the question of enlarging the list of digital offences against personal rights has been put forward, extending the criminalization of offences to doxing, stalking, blackmail and incitement to violence (Amnesty International 2023, 2). The list aptly reflects the types of criminally liable conduct that disproportionately affect vulnerable users such as minority group members or women.

3.2.2 Procedural Challenges

Several procedural concerns have been flagged by minority civil society actors in relation to the upcoming legislative framework. The first concerns the invitation to include in the law the possibility for civil society actors to represent victims of online violence in legal proceedings. This point has been raised jointly by the Zentralrat Deutscher Sinti und Roma (2023, 2) and the Amadeu Antonio Stiftung (2023, 4).

Connected to the issue of legal representation are, secondly, the financial accessibility of legal proceedings against online violence for all and, just as importantly, the development of resources, both material and immaterial, within law enforcement agencies to address these incidents. As mentioned in the Canadian and British contexts, the regulation of online harms commonly includes the establishment of specialized bodies that either exercise oversight in this area or, more narrowly, offer services to victims. Similar proposals have been put forward in the German context by some of the stakeholders with exposure in this field (Amnesty International 2023, 4).

A third salient concern raised in relation to the proposed legislation touches upon the regulation of anonymity in digital spaces. An integral feature of the online ecosystem, anonymity cuts both ways: it emboldens anonymous speakers to inflict online harm and violence on minority group members, but it can also penalize minority group members who post legal content anonymously out of fear for their own security. In the latter respect, evidence related to content removals affecting members of racial minorities suggests that takedowns are connected to instances where users are expressing personal identities (Haimson et al. 2021, 2). Content criticizing the ‘dominant’ group or racial injustice, or emphasizing feminist viewpoints, has been removed without clear justification, decreasing the agency and opportunities for online participation of minority group speakers. The impact of these moderation policies is particularly acute for women of colour (Salty 2019). The removal of anonymous accounts that are the subject of complaints must therefore be carefully calibrated against the risk of over-eliminating accounts that extremist speakers, as part of strategic campaigns, falsely target as sources of hateful content.

A fourth set of procedure-related challenges concerns the centre of gravity of the proposed legislation. Some civil society organisations participating in the public consultation process have been explicit that the law should embrace a more victim-centered approach, since regulating online hate and the harm it creates requires greater attention to victims as vulnerable segments of society (Amadeu Antonio Stiftung 2023, 1). Observing racist content or being the target of such comments directly invites the inquiry into how online racism is experienced by its victims (Nadim 2023, 4928–45). The conflation of ethnic, religious and racial difference by victims (Garner and Selod 2015, 9–19; Nadim 2023, 4929–30) leads to confusion: put simply, it is not always clear to them whether they are targeted on the basis of their religious or their ethnic difference. This is a recurring point within regulatory efforts (e.g. see the discussion of the Online Harms Bill in the UK). Moreover, the need to consider the group implications of online hate, which concerns (and targets) entire communities as opposed to individualized victims, has also been put forward (Zentralrat Deutscher Sinti und Roma 2023, 2). In this respect, there is an underlying call for empowerment measures in favour of those targeted disproportionately by online hate and violence.

4 Concluding Remarks, or Why the Proposed Legislation Matters

New forms of communication between minority community members and renewed opportunities for belonging often come at the cost of exposure to considerable and disproportionate amounts of harmful content online (cf. UN Report of the Special Rapporteur on Minority Issues 2021, 21; Faloppa et al. 2023; FRA 2023, 35 et seq.; Topidi 2019). Online discourses both reflect and construct social reality, which is why the proposed legislation is timely and pressing from a minority rights perspective.

From the perspective of major tech platforms, hate speech is framed as a harm and safety issue (Gillett, Stardust, and Burgess 2022, 4). Alarming as the phenomenon is, major platforms have so far chosen to treat it as a matter of unacceptable expression by individual users rather than as a more structural concern. This framing was deliberately preferred to justify the use of content moderation tools, but also the push for users to take responsibility for their own exposure to harm (Gillett, Stardust, and Burgess 2022, 5). The choice is also aligned with a more permissive approach to hateful expressions, especially when the expressions in question generate user engagement.

Perceiving the digital space as part of the public sphere, and hence as a space where the state is expected to guarantee the protection of fundamental rights, creates the possibility and even the obligation for states, as the recent ECtHR case law analyzed above seems to hint, to reflect on the pervasive inequalities among categories of online users, taking particular account of how one’s online presence can amplify messages, opinions or ideas. Repetitive targeting leading to stigmatization, ‘othering’ and stereotyping through abusive and provocative language shows the collapse of the line between the offline and online impact of hate speech, especially in connection with intimidation and abuse (Awan and Zempi 2015).

In fact, the correlation between hate speech and ethnic minorities is neither a novel nor an exclusively digital occurrence: it has been studied since the 1980s (Greenberg and Pyszczynski 1985, 61–72). The prevailing hateful nature of speech against minority groups online, whether through subtle and ambiguous stigmatization, micro-aggressions or higher-intensity incitement to violence, is conducive to the creation of new descriptive norms within online spaces whereby hate against minority groups becomes normalized and, as such, more impactful (Soral, Bilewicz, and Winiewski 2018, 136–146). Against the backdrop of the research thesis that the use of digital media has created the conditions for the normalization of violence against minorities (Soral, Liu, and Bilewicz 2020), online expressions of prejudice against minority groups entail considerable human costs for their victims (Vidgen et al. 2021, 1667–82). They additionally affect victims’ mental health, integration and relations with the rest of society due to their systematic stigmatization.

With user engagement linked to vitriolic comments (The Wall Street Journal 2018), increased platform usage leads to increased targeted advertising and income for platforms. Such increases, however, come at the cost of digital victimhood, as harmful content has been connected with more intensive use of platforms (Topidi 2024). The victims’ wellbeing and autonomy are also affected (Awan and Zempi 2015), with online hate speech also laying the basis for serious limitations on the freedom of expression, freedom of association and political participation of vulnerable segments of contemporary European societies (ECRI 2022). Hateful online content is strikingly harmful to the social and cultural capital of the communities concerned: it often spills over into the offline world in the form of physical violence, discrimination and marginalization (Perrigo 2020). In some cases, such content succeeds in constructing narratives that spread fear and anger against minorities (Evolvi 2022, 9–25). These narratives, while initially alternative, become mainstreamed and ultimately ‘normalized’ through carefully designed social media strategies.[27] Depending on the particular context, there is a general tendency on the most used platforms, such as Twitter and Facebook (Meta), for right-wing populist parties and outlets (or individuals who identify with them) to use their access to large audiences on social media to amplify such hate speech.[28]

At the group level, the impact of online hate speech has additional implications: anti-minority online hate speech reinforces group-based divisions and disintegration processes (Vidgen et al. 2021). In response to targeting and stigmatization, experiences of online violence and discrimination contribute to the formation of minority-group defensive self-identification mechanisms built around ‘oppositional consciousness’. This kind of consciousness can be described as ‘an empowering mental state that prepares members of an oppressed group to act to undermine, reform or overthrow a system of human domination’ (Mansbridge and Morris 2001, 4).

Any attempt to regulate hate speech without a fuller understanding of its effects within digital spaces remains a challenge in the absence of comparable data on its impact on the individuals who use those spaces (Belfer Centre and Shorenstein Centre 2023, 17). Evidence of causality between hate speech and hate crimes (i.e. the establishment of a clear causal relationship between hate speech and specific harms) is also largely missing, especially as online experiences may vary across individuals and the interplay between the offline and the online worlds remains blurred (Belfer Centre and Shorenstein Centre 2023, 20). To the extent that the projected law would encourage more clarity on the causal link between hate speech and violations of fundamental rights, it could meaningfully contribute to the development of social consensus on why reversing hate speech both online and offline represents a worthwhile public policy objective. It could also mitigate the withdrawal and self-censorship of targeted victims from the digital public space.[29] This kind of clarity is particularly relevant for the German context, where online hate speech has been found to be produced and distributed by a limited number of users with the intention of creating a distorted representation of public opinion.[30] Subject to the grey zones highlighted above from both a legal and a diversity management perspective, the regulation of online violence remains salient insofar as hate speech attempts to shape social consensus on the acceptability of harming people of different ethnicity and/or faith (Kreißel et al. 2018, 25).


Corresponding author: Dr. habil. Kyriaki Topidi, Head of the Research Cluster Culture & Diversity, ECMI, Flensburg, Germany.


  1. Research ethics: Not applicable.

  2. Informed consent: Not applicable.

  3. Author contributions: The authors have accepted responsibility for the entire content of this manuscript and approved its submission.

  4. Use of Large Language Models, AI and Machine Learning Tools: None declared.

  5. Conflict of interest: The authors state no conflict of interest.

  6. Research funding: None declared.

  7. Data availability: Not applicable.

References

Angst, Doris. 2018. “Commentary of Article 6 of the Framework Convention.” In The Framework Convention for the Protection of National Minorities, edited by R. Hofmann, T. H. Malloy, and D. Rein, 148–66. Leiden, Boston: Brill/Nijhoff. https://doi.org/10.1163/9789004339675_011.

Anjum, Samaya. 2024. “The Global Human Rights Implications of Canada’s Online Harms Act.” Via Confluence Blog, 18 April 2024. https://globalnetworkinitiative.org/the-global-human-rights-implications-of-canadas-online-harms-act/ (accessed August 21, 2024).

Awan, Imran, and Irene Zempi. 2015. We Fear for Our Lives: Online and Offline Experiences of Anti-Muslim Hate Crime. London: Tell MAMA. https://irep.ntu.ac.uk/id/eprint/25975 (accessed August 15, 2024).

Belfer Centre and Shorenstein Centre. 2023. Democracy and Internet Governance Initiative: Towards Digital Platforms and Public Purpose. Final Report. https://www.belfercenter.org/sites/default/files/files/publication/DIGI%20Final%20Report_July5_2023.pdf (accessed May 15, 2024).

Bernhard, Lukas, and Lutz Ickstadt. 2024. Lauter Hass – leiser Rückzug: Wie Hass im Netz den demokratischen Diskurs bedroht. https://kompetenznetzwerk-hass-im-netz.de/wp-content/uploads/2024/02/Studie_Lauter-Hass-leiser-Rueckzug.pdf (accessed April 3, 2024).

Bivens, Rena. 2017. “The Gender Binary Will Not Be Deprogrammed: Ten Years of Coding Gender on Facebook.” New Media & Society 19 (6): 880–98. https://doi.org/10.1177/1461444815621527.

Christin, Angèle, and Yingdan Lu. 2023. “The Influencer Pay Gap: Platform Labor Meets Racial Capitalism.” New Media & Society 26: 1–24. https://doi.org/10.1177/14614448231164995.

Cole, Marc D., and Jörg Ukrow. 2023. Der EU Digital Services Act und verbleibende nationale (Gesetzgebungs-)Spielräume. Saarbrücken. https://freiheitsrechte.org/uploads/documents/Demokratie/Marie-Munk-Initiative/DSA_Gutachten_Cole_Ukrow.pdf (accessed January 20, 2024).

DeCook, Julia R., Kelley Cotter, Shaheen Kanthawala, and Kali Foyle. 2022. “Safe from ‘Harm’: The Governance of Violence by Platforms.” Policy & Internet 14 (1): 63–78. https://doi.org/10.1002/poi3.290.

Duffy, Brooke E., Thomas Poell, and David B. Nieborg. 2019. “Platform Practices in the Cultural Industries: Creativity, Labor and Citizenship.” Social Media and Society 5 (4): 1–8. https://doi.org/10.1177/2056305119879672.

Eckes, Christine, Tobias Fernholz, Daniel Geschke, Anja Klaßen, and Matthias Quent. 2018. “Hass im Netz – der schleichende Angriff auf die Demokratie.” https://www.idz-jena.de/fileadmin/user_upload/Bericht_Hass_im_Netz.pdf (accessed September 15, 2024).

Eifert, Martin, Axel Metzger, Heike Schweitzer, and Gerhard Wagner. 2021. “Taming the Giants: The DMA/DSA Package.” Common Market Law Review 58 (4): 987–1028. https://doi.org/10.54648/cola2021065.

ESafety Commissioner. 2024. “Learn About the Online Safety Act.” https://www.esafety.gov.au/newsroom/whats-on/online-safety-act (accessed August 22, 2024).

EU Agency for Fundamental Rights (FRA). 2023. Online Content Moderation – Current Challenges in Detecting Hate Speech. Luxembourg: EU Publications Office. https://data.europa.eu/doi/10.2811/923316.

Evolvi, Giulia. 2022. “Religion and the Internet: Digital Religion, (Hyper)mediated Spaces and Materiality.” Zeitschrift für Religion, Gesellschaft und Politik 6 (1): 9–25. https://doi.org/10.1007/s41682-021-00087-9.

Faloppa, Federico, Antonio Gambacorta, Richard Odekerken, and Robert van der Noordaa. 2023. Study on Preventing and Combating Hate Speech in Times of Crisis. Strasbourg: Council of Europe. https://rm.coe.int/-study-on-preventing-and-combating-hate-speech-in-times-of-crisis/1680ad393b (accessed August 15, 2024).

Farrand, Benjamin. 2024. “How do we Understand Online Harms? The Impact of Conceptual Divides on Regulatory Divergence Between the Online Safety Act and Digital Services Act.” Journal of Media Law: 1–23. https://doi.org/10.1080/17577632.2024.2357463.

Frosio, Giancarlo, and Christophe Geiger. 2023. “Taking Fundamental Rights Seriously in the Digital Services Act’s Platform Liability Regime.” European Law Journal 29 (1–2): 31–77. https://doi.org/10.1111/eulj.12475.

Garner, Steve, and Saher Selod. 2015. “The Racialization of Muslims: Empirical Studies of Islamophobia.” Critical Sociology 41 (1): 9–19. https://doi.org/10.1177/0896920514531606.

Gillett, Rosalie, Zahra Stardust, and Jean Burgess. 2022. “Safety for Whom? Investigating How Platforms Frame and Perform Safety and Harm Interventions.” Social Media + Society 8 (4): 1–12. https://doi.org/10.1177/20563051221144315.

Global Compliance News. 2024. “Australia: Review of the Online Safety Act and Related Reforms on Online Harms.” https://www.globalcompliancenews.com/2024/03/15/https-insightplus-bakermckenzie-com-bm-data-technology-australia-review-of-the-online-safety-act-and-related-reforms-on-online-harms_02202024/ (accessed August 22, 2024).

Glocker, Felix. 2024. “Keine internationale Zuständigkeit deutscher Gerichte für die Bestandsdatenauskunft bei Diensteanbietern aus EU-Ausland.” Gewerblicher Rechtsschutz und Urheberrecht in der Praxis 16 (2): 52. https://www.beck-shop.de/GRUR-Prax/product/30753?

Götz, Johanna. 2023. “Rechtsdurchsetzung von „meldenden Personen“ gegenüber Online-Plattformen nach dem DSA, Zur abschließenden Regelung der Rechtsbehelfe durch den DSA.” Computer und Recht 39 (7): 450–5. https://doi.org/10.9785/cr-2023-390715.

Greenberg, Jeff, and Tom Pyszczynski. 1985. “The Effect of an Overheard Ethnic Slur on Evaluations of the Target: How to Spread a Social Disease.” Journal of Experimental Social Psychology 21 (1): 61–72. https://doi.org/10.1016/0022-1031(85)90006-X.

Grote, Rainer, and Nicola Wenzel. 2022. “Kapitel 18: Die Meinungsfreiheit.” In EMRK/GG: Konkordanzkommentar zum europäischen und deutschen Grundrechtsschutz, edited by O. Dörr, R. Grote, and T. Marauhn. Tübingen: Mohr Siebeck.

Guerin, Cécile, and Zoé Fourel. 2021. “A Snapshot Analysis of Anti-Muslim Mobilisation in France After Terror Attacks.” https://www.isdglobal.org/digital_dispatches/a-snapshot-analysis-of-anti-muslim-mobilisation-in-france-after-terror-attacks/.

Haimson, Oliver L., Daniel Delmonaco, Peipei Nie, and Andrea Wegner. 2021. “Disproportionate Removals and Differing Content Moderation Experiences for Conservative, Transgender and Black Media Users: Marginalization and Moderation Gray Areas.” Proceedings of the ACM on Human-Computer Interaction 5 (CSCW2): 1–13. https://doi.org/10.1145/3479610.

Härting, Niko, and Max Valentin Adamek. 2023. “Lässt der Digital Services Act Raum für ein ‚Gesetz gegen digitale Gewalt‘.” Computer und Recht 39 (5): 316–20. https://doi.org/10.9785/cr-2023-390510.

Hasinoff, Amy A., and Nathan Schneider. 2022. “From Scalability to Subsidiarity in Addressing Online Harm.” Social Media + Society 8 (3). https://doi.org/10.1177/20563051221126041.

HateAid. 2021. “Statement – Consultation of the Council of Europe/GREVIO – General Recommendation on the Digital Dimension of Women.” https://hateaid.org/wp-content/uploads/2021/05/Protection-of-women-against-digital-violence.pdf (accessed July 22, 2024).

Haugen, Frances. 2023. The Power of One: Blowing the Whistle on Facebook. London: Hodder & Stoughton.

Heger, Alexander, and Raven Kirchner. 2023. “Die Europäische Union auf dem Weg in eine europäische digitalisierte Gesellschaft?” In Digitalisierung im Recht der EU, edited by R. Kirchner, A. Heger, R. Hofmann, and S. Kadelbach, 19–136. Baden-Baden: Nomos. https://doi.org/10.5771/9783748939986-19.

Heger, Alexander, and Moritz Malkmus. 2024. “Realignment of the German Fundamental Rights Review: Implications of the ‘Right to be Forgotten’ Decisions for the Application of the CFR as a Relevant Standard.” In On the Relation Between the EU Charter of Fundamental Rights and National Fundamental Rights: A Comparative Analysis in the European Multilevel Court System, edited by A. Heger, and M. Malkmus, 101–29. Cham: Springer. https://doi.org/10.1007/978-3-031-52685-5_6.

Hofmann, Franz. 2023. “Artikel 6.” In Digital Services Act, edited by Franz Hofmann, and Benjamin Raue. Baden-Baden: Nomos.

Hong, Mathias. 2022. “Hassrede und Desinformation als Gefahr für die Demokratie – und die Meinungsfreiheit als gleiche und positive Freiheit im Zeitalter der Digitalisierung.” Rechtswissenschaft 13 (1): 126–74. https://doi.org/10.5771/1868-8098-2022-1.

Hrynyshyn, Derek. 2024. “The Online Harms Act Doesn’t Go Far Enough to Protect Democracy in Canada.” Via The Conversation, 19 March 2024. https://theconversation.com/the-online-harms-act-doesnt-go-far-enough-to-protect-democracy-in-canada-224929 (accessed August 15, 2024).

Kadri, Thomas. 2022. “Juridical Discourse for Platforms.” Harvard Law Review Forum 163 (2): 163–204.

Korpisaari, Päivi. 2022. “From Delfi to Sanchez – When can an Online Communication Platform be Responsible for Third-Party Comments? An Analysis of the Practice of the ECtHR and Some Reflections on the Digital Services Act.” Journal of Media Law 14 (2): 352–77. https://doi.org/10.1080/17577632.2022.2148335.

Kreißel, Philip, Julia Ebner, Alexander Urban, and Jakob Guhl. 2018. Hate at the Push of a Button: Right-Wing Troll Factories and the Ecosystem of Coordinated Hate Campaigns Online. ISD Report. https://www.isdglobal.org/isd-publications/hate-at-the-push-of-a-button/ (accessed August 15, 2024).

Kundrak, Viktor. 2020. “Beizaras and Levickas v. Lithuania: Recognizing Individual Harm Caused by Cyber Hate?” East European Yearbook on Human Rights 3 (1): 19–39. https://doi.org/10.5553/EEYHR/258977642020003001002.

Kupsch, Frederic. 2021. “Watch Your Facebook Comment Section! Holding Politicians Criminally Liable for Third Parties’ Hate Speech – No Violation of Freedom of Expression Under the ECHR.” Via Völkerrechtsblog. https://doi.org/10.17176/20211015-164744-0 (accessed January 19, 2024).

Libor, Christine. 2023. “BMJ: Eckpunktepapier gegen digitale Gewalt.” Archiv für Presserecht 54 (3): 234. https://doi.org/10.9785/afp-2023-540313.

Lück, Benjamin. 2023. “Kritische Würdigung des Eckpunktepapiers des BMJ für ein Gesetz gegen digitale Gewalt im Lichte der aktuellen Herausforderungen.” Zeitschrift für Urheber- und Medienrecht 67 (11): 740–6.

Mansbridge, Jane J., and Aldon Morris, eds. 2001. Oppositional Consciousness: The Subjective Roots of Social Protest. Chicago: University of Chicago Press. https://doi.org/10.7208/chicago/9780226225784.001.0001.

Mantz, Reto. 2024. “Herkunftslandprinzip versus NetzDG – Wie geht es weiter mit den Pflichten von Diensteanbietern?” Gewerblicher Rechtsschutz und Urheberrecht 126 (1–2): 34–6.

Maurer, Johannes. 2024. “Ein Gesetz gegen digitale Gewalt? Das Eckpunktepapier des BMJ zwischen offenen Fragen und falschen Hoffnungen.” Neue Juristische Online-Zeitschrift 24 (10): 257–63.

Nadim, Marjan. 2023. “Making Sense of Hate: Young Muslims’ Understandings of Online Racism in Norway.” Journal of Ethnic and Migration Studies 49 (19): 4928–45. https://doi.org/10.1080/1369183X.2023.2229522.

Noble, Safiya Umoja. 2018. Algorithms of Oppression. New York: New York University Press.

Panahi, Tahireh. 2023. “Gesetz gegen digitale Gewalt – auf Kollisionskurs mit dem DSA?” Multimedia und Recht 26 (8): 556–62.

Pennington, Rosemary. 2018. “Social Media as Third Spaces? Exploring Muslim Identity and Connection in Tumblr.” International Communication Gazette 80 (7): 620–36. https://doi.org/10.1177/1748048518802208.

Perrigo, Billy. 2020. “It Was Already Dangerous to Be Muslim in India. Then Came the Coronavirus.” https://time.com/5815264/coronovirus-india-Islamophobia-coronajihad (accessed August 15, 2024).

Peters, Anne, and Tilmann Altwicker. 2022. “Das Diskriminierungsverbot.” In EMRK/GG: Konkordanzkommentar zum europäischen und deutschen Grundrechtsschutz, edited by O. Dörr, R. Grote, and T. Marauhn. Tübingen: Mohr Siebeck.

Peterson-Salahuddin, Chelsea. 2024. “Repairing the Harm: Toward an Algorithmic Reparations Approach to Hate Speech Content Moderation.” Big Data & Society 11 (2): 1–13. https://doi.org/10.1177/20539517241245333.

Peukert, Alexander. 2023. “Modi der Plattformregulierung in den Bereichen Urheberrecht, Hassrede und Desinformation.” In Digitalisierung im Recht der EU, edited by R. Kirchner, A. Heger, R. Hofmann, and S. Kadelbach, 137–68. Baden-Baden: Nomos. https://doi.org/10.5771/9783748939986-137.

Raue, Benjamin. 2024. “Grenzen staatlicher Regulierung von Online-Diensten.” Neue Juristische Wochenschrift 77 (4): 204–5.

Rinceanu, Johanna, and Randall Stephenson. 2024. “Differential Diagnosis in Online Regulation: Reframing Canada’s Systems-Based Approach.” Eucrim Issue 3/24. https://eucrim.eu/articles/differential-diagnosis-in-online-regulation/ (accessed July 22, 2024). https://doi.org/10.30709/eucrim-2024-007.

Roth, Anne. 2024. “Ein Jahr kein Digitale-Gewalt-Gesetz.” Via Netzpolitik.org. https://netzpolitik.org/2024/bundesjustizministerium-ein-jahr-kein-digitale-gewalt-gesetz/ (accessed May 13, 2024).

Salty. 2019. “Exclusive: An Investigation into Algorithmic Bias in Content Policing in Instagram.” https://saltyworld.net/algorithmicbiasreport-2 (accessed June 30, 2024).

Schäfer, Alexander. 2023. “Das Eckpunktepapier des Bundesjustizministeriums für ein Gesetz gegen digitale Gewalt.” Zeitschrift für Urheber- und Medienrecht 67 (11): 734–40.

Schmuck, Desirée, Jörg Matthes, and Frank Hendrik Paul. 2017. “Negative Stereotypical Portrayals of Muslims in Right-Wing Populist Campaigns: Perceived Discrimination, Social Identity Threats, and Hostility Among Young Muslim Adults.” Journal of Communication 67 (4): 610–34. https://doi.org/10.1111/jcom.12313.

Sehl, Markus, Linda Pfleger, and Hasso Suliak. 2023. “Gutachten zu Spielräumen nach DSA: Wie das Gesetz gegen digitale Gewalt aussehen könnte.” Legal Tribune Online. https://www.lto.de/persistent/a_id/51497/ (accessed January 19, 2024).

Soral, Wiktor, Michał Bilewicz, and Mikołaj Winiewski. 2018. “Exposure to Hate Speech Increases Prejudice through Desensitization.” Aggressive Behavior 44 (2): 136–46. https://doi.org/10.1002/ab.21737.

Soral, Wiktor, James Liu, and Michał Bilewicz. 2020. “Media of Contempt: Social Media Consumption Predicts Normative Acceptance of Anti-Muslim Hate Speech and Islamoprejudice.” International Journal of Conflict and Violence 14 (1): 1–13. https://doi.org/10.4119/ijcv-3774.

The Wall Street Journal. 2018. The Facebook Files. https://www.wsj.com/articles/the-facebook-files-11631713039?reflink=share_mobilewebshare (accessed August 15, 2024).

Topidi, Kyriaki. 2019. “Words that Hurt (1): Normative and Institutional Considerations in the Regulation of Hate Speech in Europe.” ECMI Working Paper No. 118. https://ssrn.com/abstract=3488707 (accessed January 19, 2024). https://doi.org/10.2139/ssrn.3488707.

Topidi, Kyriaki. 2024. “Attacking Muslim Minority Women Online: An Intersectional and Multi-Actor Framework on Digital Hate Speech in the UK.” Journal of Religion in Europe. Special Issue on Digital Religion. https://doi.org/10.1163/18748929-bja10125.

Topidi, Kyriaki, and Jody Metcalfe. 2024a. “Digital Self-Representation of Minority Wom*n: An Intersectional Analysis of Sámi Content Creators.” In Minority Women and Intersectionality: Agency, Power, and Participation, edited by A. C. Budabin, J. Metcalfe, and S. Pandey. Routledge.

Topidi, Kyriaki, and Jody Metcalfe. 2024b. “Digital (Mis)-Representations: Understanding Ethno-Cultural Minority Identity Formation Online.” Digital Society 3: 45. https://doi.org/10.1007/s44206-024-00133-y.

Tuchtfeld, Erik. 2023. “Be Careful What You Wish For: The Problematic Desires of the European Court of Human Rights for Upload Filters in Content Moderation.” VerfBlog 2023/9/23. https://verfassungsblog.de/be-careful-what-you-wish-for/ (accessed August 15, 2023).

Valerius, Brian. 2023. “Das geplante ‚Gesetz gegen digitale Gewalt‘.” Zeitschrift für Rechtspolitik 56 (5): 142–4.

Vidgen, Bertie, Tristan Thrush, Zeerak Waseem, and Douwe Kiela. 2021. “Learning from the Worst: Dynamically Generated Datasets to Improve Online Hate Detection.” In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 1667–82. https://doi.org/10.18653/v1/2021.acl-long.132.

Weck, Thomas. 2023. The DSA and Digital Violence: Entrepreneurial Freedom and Special Responsibility. https://doi.org/10.2139/ssrn.4667280 (accessed January 19, 2024).

Zheng, Yingqin, and Geoff Walsham. 2021. “Inequality of What? An Intersectional Approach to Digital Inequality under Covid-19.” Information and Organization 31 (1): 100341. https://doi.org/10.1016/j.infoandorg.2021.100341.

Case Law

(German) Federal Constitutional Court, Order of the Second Senate of 14 October 2004 – 2 BvR 1481/04.

(German) Federal Court of Justice, Decision of 28 September 2023 – III ZB 25/21.

(German) Federal Court of Justice, Judgment of 14 June 2022 – VI ZR 172/20.

CJEU, Case C-376/22 Google Ireland and Others ECLI:EU:C:2023:835.

CJEU, Case C-18/18 Glawischnig-Piesczek v Facebook ECLI:EU:C:2019:821.

ECtHR, Judgment of the Grand Chamber, 15 March 2012, application no. 41029/04, Aksu v Turkey.

ECtHR, Judgment of the Grand Chamber, 16 June 2015, application no. 64569/09, Delfi v Estonia.

ECtHR, Judgment of the Grand Chamber, 15 October 2015, application no. 27510/08, Perinçek v Switzerland.

ECtHR, Judgment of the Fourth Section, 12 April 2016, application no. 64602/12, R.B. v Hungary.

ECtHR, Judgment of the Fourth Section, 5 September 2023, application no. 4222/18, Zöchling v Austria.

ECtHR, Judgment of the Second Section, 14 January 2020, application no. 41288/15, Beizaras and Levickas v Lithuania.

ECtHR, Judgment of the First Section, 16 February 2021, application no. 12567/13, Budinova and Chaprazov v Bulgaria.

ECtHR, Judgment of the Fourth Section, 16 February 2021, application no. 29335/13, Behar and Gutman v Bulgaria.

ECtHR, Judgment of the Grand Chamber, 15 May 2023, application no. 45581/15, Sanchez v France.

Oberlandesgericht Frankfurt a. M., Judgment of 21 December 2023 – 6 U 154/22.

Oberlandesgericht Frankfurt a. M., Judgment of 25 January 2024 – 16 U 65/22.

International Treaties and National Laws

(Australian) Online Safety Act 2021, No. 76, 2021. https://www.legislation.gov.au/C2021A00076/latest/text (accessed August 15, 2024).

Bill C-63, An Act to enact the Online Harms Act, to amend the Criminal Code, the Canadian Human Rights Act and An Act respecting the mandatory reporting of Internet child pornography by persons who provide an Internet service and to make consequential and related amendments to other Acts. https://www.parl.ca/DocumentViewer/en/44-1/bill/C-63/first-reading (accessed August 15, 2024).

Convention for the Protection of Human Rights and Fundamental Freedoms, European Treaty Series No. 5.

Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market, OJ L 178, 17.7.2000, 1–16.

Framework Convention for the Protection of National Minorities, European Treaty Series No. 157.

International Convention on the Elimination of All Forms of Racial Discrimination, 660 UNTS 195.

Kommunikationsplattformen-Gesetz (Austrian Law on measures for the protection of users of communications platforms), BGBl. I 151/2020.

Netzwerkdurchsetzungsgesetz (German Network Enforcement Act), BGBl. I 2017, 3352.

Regulation (EU) 2022/1925 of the European Parliament and of the Council of 14 September 2022 on contestable and fair markets in the digital sector and amending Directives (EU) 2019/1937 and (EU) 2020/1828 (Digital Markets Act), OJ L 265, 12.10.2022, 1–66.

Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market For Digital Services and amending Directive 2000/31/EC (Digital Services Act), OJ L 277, 27.10.2022, 1–102.

Regulation (EU) No 1215/2012 of the European Parliament and of the Council of 12 December 2012 on jurisdiction and the recognition and enforcement of judgments in civil and commercial matters (recast), OJ L 351, 20.12.2012, 1–32.

Telekommunikation-Telemedien-Datenschutz-Gesetz (German Telecommunications and Telemedia Data Protection Act), BGBl. I 2021, 1982 and I 2022, 1045.

Telemediengesetz (German Telemedia Act), BGBl. I 2007, 179, 251; 2021 I, 1380.

Legislative Materials

Amadeu Antonio Stiftung. 2023. „Stellungnahme zum Eckpunktepapier für ein Gesetz gegen digitale Gewalt.“ https://www.bmj.de/SharedDocs/Gesetzgebungsverfahren/DE/2023_Digitale_Gewalt.html (accessed January 31, 2024).

Amnesty International. 2023. „Stellungnahme zum Eckpunktepapier für ein Gesetz gegen Digitale Gewalt (04/2023).“ https://www.bmj.de/SharedDocs/Gesetzgebungsverfahren/DE/2023_Digitale_Gewalt.html (accessed January 31, 2024).

Bundesministerium der Justiz (BMJ). 2023a. „Eckpunkte für ein Gesetz gegen digitale Gewalt.“ https://www.bmj.de/SharedDocs/Gesetzgebungsverfahren/DE/2023_Digitale_Gewalt.html (accessed January 19, 2024).

Bundesministerium der Justiz (BMJ). 2023b. „Gesetz gegen digitale Gewalt – Kurzpapier zum Eckpunktepapier.“ https://www.bmj.de/SharedDocs/Gesetzgebungsverfahren/DE/2023_Digitale_Gewalt.html (accessed January 19, 2024).

Bundesministerium der Justiz (BMJ). 2023c. „Gesetz gegen digitale Gewalt – Erläuterungspapier.“ https://www.bmj.de/SharedDocs/Gesetzgebungsverfahren/DE/2023_Digitale_Gewalt.html (accessed January 19, 2024).

Entwurf eines Gesetzes zur Durchführung der Verordnung (EU) 2022/2065 des Europäischen Parlaments und des Rates vom 19. Oktober 2022 über einen Binnenmarkt für digitale Dienste und zur Änderung der Richtlinie 2000/31/EG sowie zur Durchführung der Verordnung (EU) 2019/1150 des Europäischen Parlaments und des Rates vom 20. Juni 2019 zur Förderung von Fairness und Transparenz für gewerbliche Nutzer von Online-Vermittlungsdiensten und zur Änderung weiterer Gesetze, Bundesrat Drucks. 676/23.

Gesellschaft für Freiheitsrechte (GFF). 2023. „Stellungnahme zu den Eckpunkten des Bundesministeriums der Justiz zum Gesetz gegen digitale Gewalt.“ https://www.bmj.de/SharedDocs/Gesetzgebungsverfahren/DE/2023_Digitale_Gewalt.html (accessed January 31, 2024).

Zentralrat Deutscher Sinti und Roma. 2023. „Stellungnahme zum Eckpunktepapier des Bundesministeriums der Justiz für ein Gesetz gegen digitale Gewalt.“ https://www.bmj.de/SharedDocs/Gesetzgebungsverfahren/DE/2023_Digitale_Gewalt.html (accessed January 31, 2024).

(International) Monitoring and Expert Organs

Advisory Committee on the Framework Convention for the Protection of National Minorities, 5th Opinion on Germany, 3 February 2022, ACFC/OP/V(2021)6.

Dritter Gleichstellungsbericht. 2022. Digital Violence, Fact Sheet 12. https://www.dritter-gleichstellungsbericht.de/kontext/controllers/document.php/208.2/4/edf4ec.pdf.

European Commission against Racism and Intolerance (ECRI) Annual Report. 2022. https://rm.coe.int/ar2022-ecri23-16-eng/1680ab5b52 (accessed August 15, 2024).

Report of the Special Rapporteur on minority issues, 3 March 2021, A/HRC/46/57.

Received: 2024-09-23
Accepted: 2024-10-16
Published Online: 2024-11-19
Published in Print: 2024-10-28

© 2024 the author(s), published by De Gruyter on behalf of Zhejiang University

This work is licensed under the Creative Commons Attribution 4.0 International License.
