
Online hate: A European communication perspective

Heidi Vandebosch and Tobias Rothmund
Published/Copyright: September 5, 2024

1 Research on online hate in the last decades: Overview and gaps

Previous reviews of the existing scientific literature on online hate illustrate that this topic has increasingly attracted scholarly attention from different disciplines (Paz et al., 2020; Tontodimamma et al., 2021; Vergani et al., 2024; Waqas et al., 2019). The origin of research on online hate is often situated in the first decade of the 21st century (especially from 2005 onwards), while the second decade is described as a period of growth and consolidation. Currently, investigating online hate is a central endeavor of many researchers, including communication scholars, as this special issue also demonstrates.

Communication research primarily contributes to the social (sciences) perspective on online hate, one of three main perspectives identified in bibliometric and systematic review studies (Gracia-Calandín and Suárez-Montoya, 2023; Paz et al., 2020; Tontodimamma et al., 2021). The legal perspective on online hate often starts from the concept of “online hate speech.” Online hate speech then refers to any form of web-based communication that disparages a person or a group on the basis of “protected” characteristics such as race, ethnicity, gender, sexual orientation, or religion. Central questions posed by legal scholars are: Should online hate speech be criminalized or not? How should such legislation be balanced against the legislation protecting freedom of speech? And exactly which types of online hate speech should be criminalized?

The technological perspective in the scientific study of online hate often focuses on the development and evaluation of systems for the automatic detection of online hate in text (Jahan and Oussalah, 2023). Online hate is then sometimes defined more strictly (e.g., as “criminal hate speech”) or more broadly (also incorporating other types of online aggression, such as cyberbullying, that do not necessarily relate to a person’s social identity). The automatic systems often rely on machine learning approaches trained on sets of examples labeled by humans. They aim to facilitate the screening of the large amounts of content uploaded to social media platforms daily (Van Royen et al., 2015).

The social (sciences) perspective emphasizes a socio-ecological approach to online hate (Bührer et al., 2024; Weber et al., 2024). In line with this approach, online hate can be explained by factors at, and has consequences on, different levels: the personal level, the interpersonal level, the community level, and the societal level. The social perspective also comes with diverging conceptualizations of online hate. Sometimes online hate refers to online aggression based on people’s belonging to a certain group (cf. the legal definitions of online hate speech; Frischlich, 2023). The use of more specific labels (“racism,” “misogyny,” …), related to the social identity characteristics that are the basis for the aggression, is then even more common (see, for instance, Bliuc et al., 2018). In other instances, “online hate” is used as an umbrella term that includes not only the former types of aggression, but also “cyberbullying” and “online harassment in the context of romantic relationships” (Bührer et al., 2024). The social perspective likewise brings its own research methods, both quantitative (e.g., quantitative content analyses, surveys, social network analyses, experiments) and qualitative (e.g., discourse analyses, qualitative content analyses, in-depth interviews, focus groups; Bliuc et al., 2018; Vergani et al., 2024). Finally, the social perspective does not consider legislation or automatic detection the only solutions to the problem. On the contrary, it emphasizes the complexity of the problem and the range of actors involved, and promotes “systemic” changes as well as, for instance, (online media) literacy and counter speech (Blaya, 2019; Gagliardone et al., 2015).
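As a rough illustration of the machine-learning detection approach described above, the following minimal sketch shows how a simple classifier can be trained on human-labeled examples and used to flag new posts for review. It is purely illustrative and not drawn from any of the cited systems: Python with scikit-learn, the toy texts, the labels, and the model choice (TF-IDF features with logistic regression) are all assumptions made for the example. Operational systems rely on much larger labeled corpora and more sophisticated models, and, as noted above, are meant to facilitate rather than replace human screening.

# Minimal illustrative sketch (assumption: Python with scikit-learn installed);
# the toy texts and labels below are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Human-labeled training examples: 1 = hateful, 0 = not hateful.
texts = [
    "Group X should be banned from this country",
    "I disagree with this policy proposal",
    "People like you do not belong here",
    "Thanks for sharing this interesting article",
]
labels = [1, 0, 1, 0]

# Convert text to TF-IDF features, then fit a linear classifier on the labeled set.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Screen a new post: posts predicted as hateful would be flagged for human review.
new_posts = ["Group X ruins everything and should be banned"]
print(model.predict(new_posts))  # e.g., [1] -> flagged for review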

The systematic reviews mentioned above also reveal a number of trends and gaps in the current literature and make suggestions for future research. They note, for instance, a need for consistent definitions of online hate (speech) and related concepts (Bührer et al., 2024; Matamoros-Fernández and Farkas, 2021). They also call for attention to the different parties involved: Not only perpetrators, but also victims and bystanders should be investigated (e.g., Who are they? What are the personal and environmental factors that drive their behaviors?; Vergani et al., 2024). Furthermore, they point to the need to investigate (the impact of) different types of expressions of online hate. While research in the past mainly focused on analyzing text and blatant hate speech posted on a single platform (mainly Twitter), new studies should also include visual messages and more ordinary, everyday expressions on different communication platforms (Matamoros-Fernández and Farkas, 2021). Moreover, they should consider the interaction between online and offline hate. In addition, the systematic reviews call for research that evaluates the impact of different types of interventions (Blaya, 2019). Finally, they plead for more (interdisciplinary) research on online hate (speech) that makes use of different types of methods (e.g., going beyond textual analysis and cross-sectional surveys based on self-reports; Bührer et al., 2024) and that focuses on wider geographical contexts (not only the US) and inter-country comparisons (Matamoros-Fernández and Farkas, 2021; Paz et al., 2020; Rawat et al., 2024; Waqas et al., 2019). The latter is important, as the causes and effects of online hate, as well as the solutions that are being created, are, at least to some degree, related to specific offline geopolitical, legal, and cultural contexts, as well as to the prevalent digital platform ecosystems in these contexts.

2 Filling the gaps: The importance of a (European) communication perspective

Communication scholars who study online hate (speech) often start from the social (sciences) perspective, but also bring their own theoretical frameworks, concepts, and approaches to this field. A common way to study online hate (speech), then, is to think of it as a form of “communication” and to describe the phenomenon in terms of “senders/producers,” “messages/texts,” “channels/media,” “receivers/audiences,” and “effects/reception” (see also Rieger et al., 2018). This approach makes it possible to further specify, and already partly fill, some of the gaps mentioned above.

In the current special issue, five articles showcase the added value of a European communication perspective. They represent a colorful, diverse palette of theoretical approaches, methods, and research populations. The authors also use different concepts (hate speech, anti-immigrant rhetoric, visual hate, online hate, digital hate against migrants) to cover more narrowly or more broadly defined phenomena.

Kuřík et al. (2024) unravel what it means to be the victim of hate speech, based on semi-structured interviews (N = 33) with people from four EU member states (Italy, Germany, the Czech Republic, and Portugal). They describe hate speech as a continuous everyday experience on the part of the “receivers,” one that moves across offline and online contexts, rather than as a sequence of separate speech acts. In this way, they provide an important contribution to the ongoing debate about how hate speech can be defined and conceptualized. They also illustrate how their approach differs from the legal stance on online hate speech.

Klein (2024) investigates the anti-immigrant rhetoric of prominent radical right populist leaders in the Netherlands, Belgium, France, Germany, Italy, and the UK across X, Facebook, and Instagram. She does this from the perspective of mediatization theory, linking the strategic motivations of political leaders to the affordances of the social media platforms on which their anti-immigrant expressions appear: Which affordances increase the likelihood of the use of pathos, logos, and ethos? The anti-immigrant rhetoric that is produced and spread by these “senders” could increase anti-immigrant attitudes and online hate speech amongst their followers. In this way, Klein acknowledges some of the “environmental” influences on online hate speech and answers the call to conduct multi-platform research.

Oehmer-Pedrazzi and Pedrazzi (2024) analyze the characteristics of different types of visual hate “messages,” including their channels, intensity, sources, and targets, through a standardized manual content analysis. In this way, they complement the existing research, which mainly focuses on textual messages or very specific types of visual messages (e.g., memes). They also acknowledge, like previous communication research (Said-Hung et al., 2024), that the sources of online hate speech can be collective organizations (e.g., political parties and media organizations) as well as individuals. They collect data through the citizen science approach of data donation, in collaboration with established civil society organizations in Switzerland, and also pay attention to the internationalization of online hate speech, which might be particularly pronounced in a small country with larger neighboring states.

Hansen and colleagues (2024) explore the potential of crowd moderation (i.e., user-assisted moderation through reporting or counter-speech) to tackle online hate. Starting from a public goods perspective, they test their hypotheses using data from a large, nationally representative survey of Danish social media users (N = 24,996). Their research thus examines a potential solution to the problem of online hate that relies on the actions of platform users who witness online hate speech directed at known or unknown others. In addition, they investigate how the presence of online hate on nine widely used platforms (i.e., Facebook, Twitter, TikTok, Instagram, YouTube, Tumblr, LinkedIn, Snapchat, and WhatsApp) affects bystanders’ emotional reactions (e.g., feeling angry, sad, or scared) and their desire to participate in online debates.

The fifth and final article, by Kirchmair et al. (2024), also considers the bystanders of digital hate. Drawing on theories related to interpersonal and intergroup behavior, they investigate the effects of personality traits (i.e., empathy and identity insecurity) and attitudes (i.e., anti-migration attitudes and social dominance orientation) on the perceived severity of digital hate against immigrants in Austria using two-wave panel data. These authors also underline how the insights from their research might benefit the development of better automatic detection systems, as machine learning approaches rely on labeling by humans, who might hold diverging views on what constitutes (severe) digital hate.

What role do we see for communication scholars in the future? Tackling a complex and constantly evolving problem such as online hate (speech) requires an evidence-based, interdisciplinary, and multi-stakeholder approach that takes into account the specificities of the local context. We believe that communication scholars from different subfields could contribute to the further analysis of the problem and to its solution.

Communication scholars are especially well placed to investigate how media organizations and social media platforms operate: What are the legal, political, technological, and social pressures they face? What logics do they follow? How do these influence the policies and the concrete products they create? And how might these factors explain the occurrence and impact of online hate (Weber et al., 2023)? For instance, might (social) media “profit” from negative content that attracts attention and leads to increased user engagement and interactions? And how effective are legal initiatives (e.g., the European Digital Services Act; Turillazzi et al., 2023) in influencing platforms’ policies and actions against online hate speech (Dubois and Reepschlager, 2024)?

Furthermore, communication scholars can rely on diverse approaches and theoretical frameworks to study how audiences or users navigate (social) media environments: What makes them (passively or more actively) use (social) media? How do they process the content and interactions they are exposed to or engage with? What are the “effects” thereof? These insights might form a basis for further analysis of the determinants and impact of online hate victimization, bystandership, and perpetration. For instance, what are the motives of social media users who engage in online hate (e.g., what are the perceived gratifications)? How might moods and emotions (e.g., boredom and the related need for thrill) explain their behavior (Poels et al., 2022)? How are new technologies (e.g., chatbots, artificial intelligence) being used to create and disseminate online hate, but also to cope with it (Tan et al., 2024)?

We also believe that communication scholars are well placed to contribute further to the solution of the problem. Systematic approaches that are commonly used in the field of persuasive (health) communication, for instance, make it possible to develop and evaluate evidence-based communication interventions and to position them among other types of interventions (e.g., legislation) that are necessary to reduce the prevalence and impact of online hate speech. A good example of an existing overview is the “online hate speech interventions map” (Bojarskich et al., 2023).

Finally, we think that communication scholars can and should act as bridges between disciplines, and between theory and practice. In this way they can promote the collaboration, diversity, and participation that is necessary to investigate online hate speech (solutions) and to bring about real social change.

References

Blaya, C. (2019). Cyberhate: A review and content analysis of intervention strategies. Aggression and Violent Behavior, 45, 163–172. https://doi.org/10.1016/j.avb.2018.05.006

Bliuc, A., Faulkner, N., Jakubowicz, A., & McGarty, C. (2018). Online networks of racial hate: A systematic review of 10 years of research on cyber-racism. Computers in Human Behavior, 87, 75–86. https://doi.org/10.1016/j.chb.2018.05.026

Bojarskich, V., Freihse, C., Brömme, N., Gleiß, H., & Rothmund, T. (2023). The Online Hate Speech Interventions Map [Online application]. https://nethate-itn.eu/applications/

Bührer, S., Koban, K., & Matthes, J. (2024). The WWW of digital hate perpetration: What, who, and why? A scoping review. Computers in Human Behavior, 159, 108321. https://doi.org/10.1016/j.chb.2024.108321

Dubois, E., & Reepschlager, A. (2024). How harassment and hate speech policies have changed over time: Comparing Facebook, Twitter and Reddit (2005–2020). Policy & Internet, poi3.387. https://doi.org/10.1002/poi3.387

Frischlich, L. (2023). Hate and harm. Freie Universität Berlin. https://doi.org/10.48541/DCR.V12.10

Gagliardone, I., Gal, D., Alves, T., & Martinez, G. (2015). Countering online hate speech. United Nations Educational, Scientific and Cultural Organization.

Gracia-Calandín, J., & Suárez-Montoya, L. (2023). The eradication of hate speech on social media: A systematic review. Journal of Information, Communication and Ethics in Society, 21(4), 406–421. https://doi.org/10.1108/JICES-11-2022-0098

Hansen, T. M., Lindekilde, L., Karg, S. T., Petersen, M. B., & Rasmussen, S. H. R. (2024). Combatting online hate: Crowd moderation and the public goods problem. Communications: The European Journal of Communication Research, 49(3), 444–467. https://doi.org/10.1515/commun-2023-0109

Jahan, M. S., & Oussalah, M. (2023). A systematic review of hate speech automatic detection using natural language processing. Neurocomputing, 546, 126232. https://doi.org/10.1016/j.neucom.2023.126232

Kirchmair, T., Koban, K., & Matthes, J. (2024). Four eyes, two truths: Explaining heterogeneity in perceived severity of digital hate against immigrants. Communications: The European Journal of Communication Research, 49(3), 468–490. https://doi.org/10.1515/commun-2023-0133

Klein, O. (2024). Anti-immigrant rhetoric of populist radical right leaders on social media platforms. Communications: The European Journal of Communication Research, 49(3), 400–420. https://doi.org/10.1515/commun-2023-0113

Kuřík, B., Heřmanová, M., & Charvát, J. (2024). Living hated: Everyday experiences of hate speech across online and offline contexts. Communications: The European Journal of Communication Research, 49(3), 378–399. https://doi.org/10.1515/commun-2023-0110

Matamoros-Fernández, A., & Farkas, J. (2021). Racism, hate speech, and social media: A systematic review and critique. Television & New Media, 22(2), 205–224. https://doi.org/10.1177/1527476420982230

Oehmer-Pedrazzi, F., & Pedrazzi, S. (2024). “An image hurts more than 1000 words?” Sources, channels, and characteristics of digital hate images. Communications: The European Journal of Communication Research, 49(3), 421–443. https://doi.org/10.1515/commun-2023-0117

Paz, M. A., Montero-Díaz, J., & Moreno-Delgado, A. (2020). Hate speech: A systematized review. SAGE Open, 10(4), 215824402097302. https://doi.org/10.1177/2158244020973022

Poels, K., Rudnicki, K., & Vandebosch, H. (2022). The media psychology of boredom and mobile media use: Theoretical and methodological innovations. Journal of Media Psychology, 34(2), 113–125. https://doi.org/10.1027/1864-1105/a000340

Rawat, A., Kumar, S., & Samant, S. S. (2024). Hate speech detection in social media: Techniques, recent trends, and future challenges. WIREs Computational Statistics, 16(2), e1648. https://doi.org/10.1002/wics.1648

Rieger, D., Schmitt, J. B., & Frischlich, L. (2018). Hate and counter-voices in the Internet: Introduction to the special issue. Studies in Communication | Media, 7(4), 459–472. https://doi.org/10.5771/2192-4007-2018-4-459

Said-Hung, E., Montero-Díaz, J., & Sánchez-Esparza, M. (2024). The promotion of hate speech: From a media and journalism perspective. Journalism Practice, 18(2), 217–223. https://doi.org/10.1080/17512786.2023.2288918

Tan, Y., Vandebosch, H., Pabian, S., & Poels, K. (2024). A scoping review of technological tools for supporting victims of online sexual harassment. Aggression and Violent Behavior, 78, 101953. https://doi.org/10.1016/j.avb.2024.101953

Tontodimamma, A., Nissi, E., Sarra, A., & Fontanella, L. (2021). Thirty years of research into hate speech: Topics of interest and their evolution. Scientometrics, 126(1), 157–179. https://doi.org/10.1007/s11192-020-03737-6

Turillazzi, A., Taddeo, M., Floridi, L., & Casolari, F. (2023). The Digital Services Act: An analysis of its ethical, legal, and social implications. Law, Innovation and Technology, 15(1), 83–106. https://doi.org/10.1080/17579961.2023.2184136

Van Royen, K., Poels, K., Daelemans, W., & Vandebosch, H. (2015). Automatic monitoring of cyberbullying on social networking sites: From technical feasibility to desirability. Telematics and Informatics, 32(1), 89–97. https://doi.org/10.1016/j.tele.2014.04.002

Vergani, M., Perry, B., Freilich, J., Chermak, S., Scrivens, R., Link, R., Kleinsman, D., Betts, J., & Iqbal, M. (2024). Mapping the scientific knowledge and approaches to defining and measuring hate crime, hate speech, and hate incidents: A systematic review. Campbell Systematic Reviews, 20(2), e1397. https://doi.org/10.1002/cl2.1397

Waqas, A., Salminen, J., Jung, S., Almerekhi, H., & Jansen, B. J. (2019). Mapping online hate: A scientometric analysis on research trends and hotspots in research on online hate. PLOS ONE, 14(9), e0222194. https://doi.org/10.1371/journal.pone.0222194

Weber, I., Vandebosch, H., Poels, K., & Pabian, S. (2023). Features for hate? Using the Delphi method to explore digital determinants for online hate perpetration and possibilities for intervention. Cyberpsychology, Behavior, and Social Networking, 26(7), 479–488. https://doi.org/10.1089/cyber.2022.0195

Weber, I., Vandebosch, H., Poels, K., & Pabian, S. (2024). The ecology of online hate speech: Mapping expert perspectives on the drivers for online hate perpetration with the Delphi method. Aggressive Behavior, 50(2), e22136. https://doi.org/10.1002/ab.22136

Published Online: 2024-09-05
Published in Print: 2024-09-04

© 2024 Walter de Gruyter GmbH, Berlin/Boston
