Abstract
Drawing on theories related to interpersonal and intergroup behavior, this study investigated effects of personality traits (i.e., empathy and identity insecurity) and attitudes (i.e., anti-migration attitudes and social dominance orientation) on the perceived severity of digital hate against immigrants in Austria. Findings of autoregressive path modeling using two-wave panel data revealed that empathic suffering and egalitarianism positively predicted perceived severity, while anti-migrant attitudes negatively predicted it. In terms of interactions between personality and attitudes, we observed that the prediction of empathic suffering became less relevant for egalitarian individuals, indicating an overwriting process that might be a promising way to counteract socially harmful digital hate perceptions. Implications for research on annotation tasks and hate interventions are discussed.
1 Introduction
In recent years, hateful online communication has become so widespread that it is now part of users’ everyday lives. For instance, Castellanos et al. (2023) showed that 65 % of their student sample had witnessed online hate speech during 2020 and 2021. Numbers for adults (36.2 % and 68.8 % for online hate and cyberbullying, respectively; Rudnicki et al., 2023) and, most importantly, subtler hate (e.g., dark inspiration, Frischlich, 2021) may be even higher. While repeated exposure to digital hate content has serious implications for people’s mental health (Lo Moro et al., 2023), it also affects societal debates (Brüggemann and Meyer, 2023) as well as societal cohesion, given that (digital hate) content exposure exerts significant cognitive, attitudinal, emotional, and behavioral effects (e.g., concerning immigration, Ziegele et al., 2018). Immigrants are a persistent target of digital hate, where they are often described as a threat to national, cultural, economic, and personal security (Costello et al., 2021). This negative sentiment towards immigrants is widespread in the European Union (EU), as reflected in a recent EU-wide representative survey, which found that 43 % of EU citizens and even 49 % of Austrians perceive immigration as the primary challenge facing the EU (European Commission, 2021). Although numbers on the prevalence of digital hate against immigrants are sparse and should be considered with care, figures of 16–24 % and 4.4–9.5 % have been reported for Czech and German anti-Muslim Facebook groups, respectively (Hanzelka and Schmidt, 2017). Other research across European countries reports even higher numbers in a sample of immigration-related Tweets (e.g., 88 % in Lithuania in 2018 or 53 % in Cyprus in 2020; Arcila-Calderón et al., 2022). Such prevalences are alarming and require further in-depth analysis.
What reported prevalences tend to obscure is that people differ substantially in whether they experience supposedly hateful content as such. In other words, what researchers may categorize, for instance, as online hate speech might not be considered severe, or even hateful at all, by some social media users. Differences in users’ severity perception of digital hate have been documented for different contents (Kümpel and Unkel, 2023) and across dispositional and sociodemographic variables (such as extraversion and social background; Akhtar et al., 2020; Sang and Stanton, 2022). Heterogeneous perceptions are highly problematic for social media platforms, which are obliged to remove certain content, as well as for law enforcement agencies that aim to oversee and track digital hate moderation practices, complicating their efforts and undermining external credibility. This perceptual heterogeneity also has significant implications for the computational approaches necessary for digital hate detection and moderation at scale, given the importance of consistently labeled training data (Waseem, 2016). Here, research has consistently revealed disagreements among anonymous annotators concerning what is hate speech and what is not (Yin and Zubiaga, 2021). While some data-driven approaches were designed to account for these disagreements and, as a result, showed improved algorithm performance (Akhtar et al., 2020), there is a pressing need for empirical insights into what determines regular users’ varying sensitivities to the severity of digital hate content (i.e., the subjective evaluation of seriousness, harm, or impact associated with instances of hateful online content before higher-level categorization or volition comes into play). Such insights would provide another layer that might help improve training data and, subsequently, algorithms that aim to be sensitive to perceptual nuances and contingently engage in the moderation best suited for certain content.
Such algorithms are supposed to follow a human-centered approach to increase well-being, aligning with central values of the European Union (i.e., fundamental rights, human agency and oversight, transparency, nondiscrimination and fairness, societal wellbeing; see European Parliament, 2023) in order to “serve the needs of the society and the common good” (p. 9), as described in the AI Act. This recently introduced legislation on artificial intelligence (AI) in the EU aims to provide a unifying framework for the adoption of human-centric and trustworthy AI that ensures the protection of health, safety, fundamental rights, and environmental concerns through harmonized rules for AI systems’ market placement that prohibit certain practices, impose requirements for high-risk AI systems, establish transparency regulations, regulate general-purpose AI models, implement market monitoring and enforcement measures, and support innovation (European Parliament, 2023).
Considering different perceptions when creating AI-driven hate detection models is a prerequisite for alignment with the values of the EU and, subsequently, the AI Act. This approach would acknowledge the complexity of human interactions and ensure that AI models account for a wide range of cultural, social, and contextual factors, thereby promoting fairness, accuracy, and effectiveness in identifying digital hate while minimizing biases and potential harm. Furthermore, by incorporating and considering interindividual differences, such algorithms would likely increase transparency, interpretability, and reproducibility by enabling a thorough understanding, contextual adaptation, personalized detection strategies, explanatory feedback, user-centric evaluation metrics, user-driven model development, and community engagement. By accounting for individual variations in perception, algorithmic detection models can better align with user expectations and societal norms, most likely increasing public support through higher accuracy (Nussberger et al., 2022).
This paper provides a threefold contribution to the existing body of research. First, by directly investigating heterogeneity of perceived severity of digital hate against immigrants in a quota-based sample from Austria, a Central European country with comparatively firm digital hate regulations yet below-average scores on migrant empowerment indices (Solano and Huddleston, 2020), we address one of the most problematic issues in both conceptual discussions about digital hate typologies and practical applications of algorithmic detection and moderation systems by testing a current classification taxonomy. Second, we do so by accounting for personality traits and attitudes (as well as their interaction) with predictions being derived not from an overarching theoretical framework but instead from domain-level approaches contributing different pieces of the puzzle that facilitate a better understanding of digital hate perception. Third, we employ an advanced methodological approach by conducting an autoregressive path model for a non-student population sample that is less likely to suffer from various common methodological biases.
2 Theoretical background
Digital hate as an umbrella term encapsulates hostile behaviors that have been characterized by ambiguity in conceptual boundaries (Matthes et al., 2023). More specifically, defined as all types of norm-violating hostile expressions disseminated digitally and directed either against a specific person or groups, digital hate comprises constructs like online hate speech, cyberbullying or cyberaggression, as well as other more vaguely defined phenomena such as diverse forms of hostile actions and modes of offensive speech whose conceptualizations typically overlap up to the point where differentiation occasionally becomes arbitrary. As such, it is not necessarily limited to hate based upon group characteristics (e.g., gender, ethnicity, sexual orientation; see Gelber, 2021) but also includes other online hostilities that share an ordinary understanding of hate (see Brown, 2017) in an attempt to partially unify partially overlapping constructs in this broad field. With this overarching conceptual perspective as a starting point, we more specifically follow a recently introduced approach by Rossini (2022) who differentiates between two distinct categories of hateful speech: Incivility, characterized as violations of interpersonal communication norms, and intolerance, which pertains to expressions that threaten democratic principles and values such as equality, diversity, and freedom. These two overarching categories can be further operationalized into profanities, insults, expressions of outrage, and character assassination as subcategories of incivility, and discrimination and hostility as subcategories of intolerance (Bianchi et al., 2022).
This set of hateful behaviors is reflected in computational detection approaches, which are typically centered around specific phenomena such as hate speech (Yin and Zubiaga, 2021) or cyberbullying (Perera and Fernando, 2021) but can also be more holistic, for instance, by using toxicity labels (Sheth et al., 2022). Machine learning (ML) approaches, which require labels and annotated features for supervised learning (e.g., support vector machines, naive Bayes, logistic regression, decision trees, k-nearest neighbors; Mullah and Zainon, 2021), and deep learning (DL) algorithms, which require only labels (e.g., convolutional neural networks, long short-term memory (LSTM) networks, bi-directional LSTMs; Malik et al., 2022), are the most widely established tools for these tasks. ML is a field of AI that enables systems to statistically learn from data, while DL is a subset of ML that involves neural networks for pattern recognition (Dargan et al., 2020). Advances in DL text representation have resulted in transformer-based embedding techniques that fuel developments in the field; however, performance remains improvable (Yin and Zubiaga, 2021).
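To make the supervised paradigm concrete, the following is a minimal sketch of one of the classic approaches named above, a multinomial naive Bayes classifier with Laplace smoothing, written in pure Python. The toy comments, labels, and the binary hate/neutral scheme are invented for illustration and do not represent the study’s data or any production pipeline.

```python
from collections import Counter, defaultdict
import math

def tokenize(text):
    # Naive whitespace tokenizer; real systems use far richer preprocessing.
    return text.lower().split()

def train(docs):
    """docs: list of (text, label) pairs. Returns log-priors, per-class
    smoothed log-likelihoods, and the vocabulary."""
    class_counts = Counter()
    word_counts = defaultdict(Counter)
    vocab = set()
    for text, label in docs:
        class_counts[label] += 1
        for tok in tokenize(text):
            word_counts[label][tok] += 1
            vocab.add(tok)
    priors = {c: math.log(n / len(docs)) for c, n in class_counts.items()}
    likelihoods = {}
    for c in class_counts:
        total = sum(word_counts[c].values())
        # Laplace (add-one) smoothing avoids zero probabilities.
        likelihoods[c] = {
            w: math.log((word_counts[c][w] + 1) / (total + len(vocab)))
            for w in vocab
        }
    return priors, likelihoods, vocab

def predict(text, priors, likelihoods, vocab):
    # Pick the class maximizing log-prior plus summed token log-likelihoods.
    scores = {}
    for c in priors:
        scores[c] = priors[c] + sum(
            likelihoods[c][tok] for tok in tokenize(text) if tok in vocab
        )
    return max(scores, key=scores.get)

# Hypothetical training comments (invented for this sketch):
train_docs = [
    ("they are a threat to our country", "hate"),
    ("deport them all now", "hate"),
    ("welcome refugees with open arms", "neutral"),
    ("immigration policy needs honest debate", "neutral"),
]
priors, likelihoods, vocab = train(train_docs)
print(predict("they are a threat", priors, likelihoods, vocab))  # → hate
```

Even this toy example makes the dependence on labeled data visible: every probability the model learns is driven entirely by the annotations supplied, which is why annotation quality and annotator disagreement matter so much downstream.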
Model performance is inherently dependent on the underlying dataset (Hettiachchi et al., 2023). Dataset creation often follows a top-down methodology, whereby researchers select data, inadvertently introducing bias (van Rosendaal et al., 2020), which is then annotated by crowdworkers (Akhtar et al., 2020) adhering to predetermined definitions. This approach tends to overlook individual perspectives. Once user ratings are aggregated for subsequent analysis, lower inter-rater reliability emerges together with unknown biases (Kocoń et al., 2021). Current research accounts for annotators’ topical knowledge, their rating confidence, as well as their demographics, leading to improved models (Akhtar et al., 2020); however, personality traits and attitudes are still neglected (Kocoń et al., 2021).
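The inter-rater reliability mentioned above is commonly quantified with chance-corrected agreement statistics such as Cohen’s kappa. A minimal pure-Python sketch for two annotators over nominal labels follows; the ten hypothetical labels are invented for illustration.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two annotators rating the same items
    with nominal labels: (observed - expected) / (1 - expected)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    # Chance agreement from the raters' marginal label frequencies.
    expected = sum(freq_a[label] * freq_b[label] for label in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical labels from two crowdworkers on ten comments:
a = ["hate", "hate", "ok", "ok", "hate", "ok", "ok", "hate", "ok", "ok"]
b = ["hate", "ok", "ok", "ok", "hate", "ok", "hate", "hate", "ok", "ok"]
print(round(cohens_kappa(a, b), 2))  # → 0.58
```

Here the raw agreement is 80 %, yet kappa drops to .58 once chance agreement is removed, illustrating how aggregation can mask substantial annotator disagreement of the kind the studies above report.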
Varying severity perceptions of both incivility and intolerance can be theorized through different theoretical lenses. Social constructionism broadly argues that sociocultural and historical factors shape individuals’ conceptions of reality (Burr, 2015). From this perspective, digital hate content is subject to varying interpretations because of individuals’ socialization and enculturation. This broad theoretical position can be supplemented by micro-level theories from psychology. Cognitive appraisal theory (Watson and Spence, 2007) helps elaborate how these individual realities vary from person to person. The theory posits that any stimulus evaluation varies as a function of subjectively perceived (mis-)alignment with personal goals and expectations, controllability, and causal attribution. These propositions can be transferred to perceptions of hate stimuli both offline and online, opening up the field for personality and attitudes as determinants of varying cognitive appraisals. Together, heterogeneous constructions of reality, along with their subsequent distinctive appraisals, may provide a sound theoretical background for examining varying perceptions.
Research has already identified a few factors influencing severity perceptions of digital hate incidents, including sociodemographic (e.g., age, Sang and Stanton, 2022; gender, Binns et al., 2017; education, Cowan and Hodge, 1996; culture, Lee et al., 2023; race, Sap et al., 2022; political leaning, Hettiachchi et al., 2023) and dispositional factors (e.g., extraversion and agreeableness, Sang and Stanton, 2022; moral integrity and sexist attitudes, Hettiachchi et al., 2023; knowledge about hate speech, feminist and anti-racist attitudes, Waseem, 2016; and altruism, free speech attitudes, racist beliefs, traditionalism and language purism, Sap et al., 2022). Given the challenges with heterogeneous perceptions in annotation, it is necessary to advance further through theoretically rigorous social scientific investigations. The present study addresses this need by examining how severity perceptions are predicted by personality traits and attitudes (see Figure 1 for the theoretical model).

Figure 1: Theoretical model.
Personality traits
Personality traits as stable, internal cognitive, affective, and behavioral tendencies are considered common predictors of people’s perception. Aside from five core traits (i.e., extraversion, agreeableness, conscientiousness, emotional stability, and openness to experience), literature offers a great variety of lower-level characteristics that have been closely examined within digital hate perceptions, such as trait anger (Veenstra et al., 2018). However, other lower-level traits that are more directly involved in interpersonal and intergroup contexts are sometimes neglected in favor of these dominant constructs. From the broad spectrum of possible choices, we selected for further investigation trait empathy as arguably one of the most relevant socio-affective dispositional tendencies involved in the evaluation of interpersonal situations as well as identity insecurity as a timely self-related personality trait with documented consequences for perceptions of intergroup situations.
Within various interpersonal contexts, trait empathy is particularly notable given its documented link to bystander interventions (Wachs et al., 2023). Conceptually, trait empathy involves cognitive and affective processes wherein one’s own emotional experience tends to be closely aligned with emotions perceived and understood in others in response to a stimulus, without confusing them with one’s own (Cuff et al., 2016). In other words, people who score higher on trait empathy are typically more sensitive to others’ emotional experiences and, thus, more likely to take them into account in their situational evaluation and decision-making. With respect to this tendency, extant evidence has established several associations between trait empathy and digital hate, including more intense perceived harms (Cowan and Khatchadourian, 2003) and a higher likelihood of countering intentions and behaviors (Wachs et al., 2023). Additionally, empathy-based counterspeech has been shown to reduce anti-immigrant hate speech (Hangartner et al., 2021). We followed this documented trend and hypothesized:
H1: Trait empathy positively predicts the perceived severity of digital hate targeting immigrants.
Identity insecurity, considered a foundational component of an individual’s personality, holds particular relevance due to its influence on intergroup relationships (Hogg, 2007). This lower-level trait is defined as a state of uncertainty about one’s personal and social identities that typically comes with a tendency to conceal one’s true identity in public contexts and is accompanied by feelings of anxiety and mistrust towards individuals who differ from oneself (Massey and Cionea, 2023). Uncertainty identity theory (Wagoner and Hogg, 2017) postulates that individuals engage in social group identification to ease feelings of self-related uncertainty. Accordingly, identity-insecure people align their perceptions, emotions, attitudes, and behaviors with established group norms and prescriptions (Hogg, 2007), which might lead to enhanced in-group and out-group biases, including xenophobia, dehumanization, and collective violence (Wagoner and Hogg, 2017). Given that prevailing attitudes towards immigration in Austria tend toward being somewhat negative (Czymara, 2021) and policies rather strict (Solano and Huddleston, 2020), we assumed:
H2: Identity insecurity negatively predicts the perceived severity of digital hate targeting immigrants.
Attitudes
Personal attitudes comprise a set of learned cognitive, affective, and behavioral elements that represent individual positions toward external stimuli and shape their perception. In the context of the present study, thematically pertinent attitudes include anti-migrant attitudes, as they bear relevance to the target group of interest, as well as social dominance orientation (SDO), which further relates to group dynamics. Anti-migrant attitudes and SDO both relate to outgroup evaluations and are thus correlated; however, they describe distinct concepts (Panno, 2018). Migration attitudes are influenced by, among others, sociodemographics (e.g., gender, Valentova and Alieva, 2014; or age, Schotte and Winkler, 2018) and dispositional factors (e.g., dark triad traits, Pruysers, 2023; or political and religious affiliation, Al-Kire et al., 2022). Social identity theory (Tajfel and Turner, 2004) centers around the general proposition that individuals perceive themselves as members of distinct groups rather than as unique individuals. Grounded in in-group and out-group perceptions, biases emerge that reflect both a preference for one’s own group and, more importantly, negative evaluations of other groups, which can culminate in denigrating hostilities against, for instance, immigrants (Markowitz and Slovic, 2020). Since we expect that intergroup biases affect how people perceive the severity of incivility and intolerance, we concluded:
H3: Anti-migrant attitudes negatively predict the perceived severity of digital hate targeting immigrants.
According to social dominance theory, group-based hierarchies are maintained via three primary mechanisms: institutional discrimination, aggregated individual discrimination, and behavioral asymmetry (Sidanius et al., 2016). Within this theoretical framework, SDO serves as a social-attitudinal manifestation that signifies the degree to which an individual endorses notions of group hierarchy within society and dominance of particular groups over others (La Macchia and Radke, 2020). These notions have been connected to individual acts of discrimination and participation in discriminatory intergroup processes (Pratto et al., 2006). SDO is typically divided into two subdimensions. Dominance is one’s preference for some groups to dominate others and egalitarianism is one’s preference for equity and equality among different social groups within society (Ho et al., 2012).
Applied to a diverse range of subjects, SDO has been utilized for investigating emotions, social perception, dehumanization, prejudice, and discrimination (Sidanius et al., 2016). Generally speaking, it has been shown to influence various perceptions (e.g., violence against women; Rollero et al., 2021) and behaviors related to digital hate (e.g., sexual harassment; Tang et al., 2020). More specifically, dominance has been associated with endorsements of immigrant persecution, traditional racism, and support for warfare; conversely, egalitarianism is correlated with subtle hierarchy-enhancing legitimizing beliefs and social policies aimed at reducing social stratification (Ho et al., 2012). Accordingly, it is plausible that both dimensions also affect people’s perceptions of anti-migrant content. Thus, we assume:
H4a: Dominance negatively predicts the perceived severity of digital hate targeting immigrants.
H4b: Egalitarianism positively predicts the perceived severity of digital hate targeting immigrants.
In general, attitudes often interact with personality traits, that is, they can attenuate or amplify the traits’ influence on a given outcome. The same might be true for anti-migration attitudes and SDO, which may possibly overwrite more general dispositional tendencies. More specifically, anti-migrant attitudes have been observed to operate as a moderating factor for several topically close relationships, for instance, biasing the impact of immigrant interactions on individuals’ inclinations to support immigrants (Graf and Sczesny, 2019), strengthening the link between bullshitting behavior and anti-migrant attitudes (Čavojová and Brezina, 2021), or reinforcing connections between socioeconomic deprivation and citizens’ endorsement of fringe movements (Kleinert and Schlueter, 2022).
SDO has likewise been reported to be subject to interactions with a variety of overarching factors, including group position, social context, personality, gender-related aspects, and socialization processes (Pratto et al., 2006). Most relevant for this study, SDO has been shown to moderate outcomes of empathy (Sidanius et al., 2013) and identity insecurity (Carnelley and Boag, 2019). Given their impact, it is therefore imperative to conduct further investigation into moderating effects on the perceived severity of anti-migrant speech to explore whether said findings can be transferred to this novel outcome. We ask:
RQ1: How do (a) anti-migrant attitudes, (b) dominance, and (c) egalitarianism moderate the relationships between personality traits and perceived severity of digital hate?
3 Method
This project was part of a comprehensive two-wave panel survey amongst Austrians that was conducted between July 27 and August 5, 2023, for Wave 1 (W1) and September 27 and October 6, 2023, for Wave 2 (W2). The entire survey was screened to be of minimal ethical risk by the IRB of the Department of Communication at the University of Vienna (ID: 20230705_029). Additional materials (including datasets and analysis scripts) are available at https://osf.io/ynuxq/.
Participants
We recruited a quota sample representative of the Austrian population in terms of gender, age, and education through the market research institute Gapfish. Participants were eligible if they (a) were Austrian citizens, (b) were at least 16 years old, and (c) provided informed consent. We excluded participants who (a) stated that they were not willing to answer questions about hostile online content, (b) dropped out before completion, or (c) failed to correctly answer two attention check items in W1. We also had to exclude (d) a small number of participants who self-identified as non-binary/diverse (n = 3) because, with only three cases, analyzing this category as a covariate level would be either futile or, worse, inherently flawed.
Our final sample for W1 included N = 1522 participants (age M = 48.46, SD = 15.28, range: 16–89 years), out of which n = 780 (51.3 %) identified as female and n = 742 (48.8 %) as male. Furthermore, the sample consisted of n = 1161 (76.3 %) participants without and n = 361 (23.7 %) with completed university education. After re-contact, N = 1033 participants (age M = 50.26, SD = 14.84, range: 17–89 years) completed W2, of which n = 520 (50.3 %) identified as female and n = 513 (49.7 %) as male and n = 815 (78.9 %) stated not having and n = 218 (21.1 %) having a university degree.
When compared with participants who completed both waves, the dropout sample (n = 489) was significantly younger, t(920.99) = 6.67, p < .001, Cohen’s d = .37, had fewer post-secondary degrees, χ2(1) = 12.15, p < .001, Cramér’s V = .09, and scored higher on empathetic feelings, t(908.49) = –2.74, p = .006, d = –.15, and identity insecurity, t(887.58) = –2.02, p = .044, d = –.11. Since these differences are negligible or small (in the case of age) in effect size, dropout does not appear to raise major concerns.
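For readers unfamiliar with the effect sizes reported above, Cohen’s d standardizes a mean difference by a pooled standard deviation. A minimal Python sketch with invented toy samples (the study’s own analysis was conducted in R, and this assumes the common pooled-SD formula):

```python
import math

def cohens_d(x, y):
    """Cohen's d for two independent samples:
    mean difference divided by the pooled (n-1) standard deviation."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    pooled_sd = math.sqrt(((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2))
    return (mx - my) / pooled_sd

# Invented ages: completers vs. dropouts (purely illustrative numbers).
completers = [50, 52, 48]
dropouts = [40, 42, 38]
print(cohens_d(completers, dropouts))  # → 5.0
```

By common rules of thumb, |d| around .2 counts as small and around .5 as medium, which is why the reported dropout differences (d between .11 and .37) are read as negligible to small.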
Procedure
Participants were informed about the re-contact procedure and survey content, including a trigger warning concerning hostile social media messages and self-help resources. If they indicated their willingness to proceed, we provided them with a brief overview of their rights and asked for consent.
The survey project included scales for several distinct subprojects concerning various facets of digital hate from different viewpoints to address specific research questions. Notably, only scales relevant to the present study will be reported here. Generally, we asked participants about their demographics and general social media use, followed by blocks about (a) their experience with digital hate (not included), (b) personality traits (including but not limited to empathy and identity insecurity), (c) exposure to digital hate aimed at certain targets (including but not limited to immigrants), and (d) attitudes (including but not limited to anti-migration attitudes and SDO). Scales within each block and items within each scale were randomized. Both surveys were identical, except that the personality block was only presented in W1; consequently, the attitudes block was placed between the digital hate blocks in W2.
Measures
Since answering surveys involves considerable effort, we aimed to minimize response fatigue by adapting existing scales through the selection of high-loading items. Note that we used German-translated items (for the German versions and item-level statistics, see OSF Appendix Table 1). Unless indicated otherwise, all constructs were measured on 5-point Likert scales. Descriptive information concerning averaged means is presented in Table 1.
Personality traits. Trait empathy was captured using two slightly adapted subscales, empathic suffering (i.e., a tendency to experience genuine emotions after realizing how someone might feel) and feeling others (i.e., a tendency to mirror how someone seems to feel), from the multidimensional emotional empathy scale (Alloway et al., 2016). Both subscales asked participants how much four statements apply to them (1 = not at all; 5 = completely). Reliability coefficients suggested good scores for empathic suffering, Cronbach’s α = .80, and acceptable scores for feeling others, α = .77.
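Reliability coefficients like the Cronbach’s α values reported throughout this section can be computed directly from raw item scores. A minimal pure-Python sketch (the respondent data used in the test are invented for illustration; the study’s analysis was conducted in R):

```python
def cronbachs_alpha(items):
    """Cronbach's alpha from a list of per-item score lists,
    each ordered by the same respondents:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)."""
    k = len(items)          # number of items in the scale
    n = len(items[0])       # number of respondents

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_variances = sum(var(item) for item in items)
    totals = [sum(items[i][j] for i in range(k)) for j in range(n)]
    return k / (k - 1) * (1 - item_variances / var(totals))

# Hypothetical 4-item, 5-respondent example (invented scores):
demo_items = [
    [4, 3, 5, 2, 4],
    [5, 3, 4, 2, 4],
    [4, 2, 5, 3, 5],
    [4, 3, 4, 2, 4],
]
print(round(cronbachs_alpha(demo_items), 2))
```

When items are perfectly correlated, the formula yields α = 1; values around .8, as for the empathy subscales here, indicate that the items share most of their variance.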
Table 1: Means, Standard Deviations, and Zero-Order Correlations.

| Variable | M (SD) | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|---|---|---|---|---|---|---|---|---|
| 1. Perceived Severity (W1) | 4.01 (1.02) | | | | | | | |
| 2. Perceived Severity (W2) | 3.98 (1.02) | .55** | | | | | | |
| 3. Empathic Suffering | 4.09 (0.78) | .45** | .42** | | | | | |
| 4. Feeling Others | 3.17 (0.88) | .25** | .22** | .58** | | | | |
| 5. Identity Insecurity | 1.81 (0.99) | -.02 | -.03 | -.03 | .20** | | | |
| 6. Anti-Migration Attitudes | 2.78 (1.34) | -.28** | -.32** | -.12** | < \|.01\| | < \|.01\| | | |
| 7. SDO: Dominance | 2.30 (1.05) | -.24** | -.22** | -.13** | .06* | .02 | .53** | |
| 8. SDO: Egalitarianism | 3.97 (0.94) | .42** | .36** | .32** | .17** | -.03 | -.38** | -.34** |

Note: * p < .05, ** p < .01. W1 = Wave 1, W2 = Wave 2, SDO = Social Dominance Orientation.
Identity insecurity was assessed using the individual identity insecurity subscale of the identity insecurity scale (Massey and Cionea, 2023), again asking participants how much statements apply to them (1 = not at all; 5 = completely). Reliability coefficients indicated excellent scores, α = .91.
Attitudes. To assess SDO, we used two four-item subscales for dominance and egalitarianism of the SDO7 scale (Ho et al., 2015). Specifically, we asked participants to indicate their level of agreement (1 = not at all; 5 = fully) with items about (pro-trait) dominance and con-trait anti-egalitarianism (henceforth egalitarianism). Reliability was good for both subscales, α = .83 for dominance and α = .84 for egalitarianism.
Anti-migration attitudes were measured by an adapted four-item version of the exclusionist populist attitudes scale (Hameleers et al., 2017), again asking for participants’ level of agreement (1 = not at all; 5 = fully). Reliability was excellent, α = .93.
Perceived severity of digital hate. Based on the multi-label-classifier categories reported by Bianchi et al. (2022), we constructed six brief descriptions of different digital hate manifestations: profanity (“Vulgar language, curses, or other coarse expressions”), insults (“Expressions intended to offend a person”), character assassination (“Untrue statements intended to damage the reputation of a person”), outrage (“Expressions of strong consternation and anger intended to carry others away”), discrimination (“Expressions that discriminate against certain groups of people”), and hostility (“Hateful expressions that dehumanize and threaten persons or groups of persons”). Each description was used as an item asking participants how severely they perceive such content or language when they see it in social media posts about immigrants (1 = not at all, 5 = very). We compared a single-factor solution with the two-factor solution proposed by Bianchi et al. (2022); the two did not differ significantly, Δχ2(1) < 0.01, p = .967 for W1 and Δχ2(1) = 0.237, p = .626 for W2. Accordingly, we used the parsimonious one-factor solution that does not differentiate between incivility and intolerance. Reliability was excellent for both waves, αs = .93.
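The model comparisons above are chi-square difference tests with Δdf = 1, whose p-values follow from the chi-square survival function (for one degree of freedom, P(X > x) = erfc(√(x/2))). A small Python check, purely illustrative since the original analysis was run in R:

```python
import math

def chi2_sf_df1(x):
    """Survival function of the chi-square distribution with 1 df:
    P(X > x) = erfc(sqrt(x / 2)), using the identity chi2(1) = Z^2."""
    return math.erfc(math.sqrt(x / 2))

# Δχ²(1) = 0.237 from the W2 comparison of one- vs. two-factor solutions:
print(round(chi2_sf_df1(0.237), 3))  # → 0.626, matching the reported p-value
```

The familiar critical value also falls out: a Δχ²(1) of about 3.84 corresponds to p = .05, so both reported differences (< 0.01 and 0.237) are far from significance, supporting the more parsimonious one-factor model.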
Covariates. In addition to these variables, we entered sociodemographics as covariates. In light of prior studies indicating potential impacts on perceived hate severity, we used gender (Binns et al., 2017), age (Sang and Stanton, 2022), education (Cowan and Hodge, 1996), and political leaning (Hettiachchi et al., 2023). To capture gender and age, we asked participants with which gender they identify (1 = male, 2 = female, 3 = diverse) and how old they are, respectively. Education was measured by asking for participants’ highest completed level of education, which was then coded into groups with and without a university degree. Lastly, participants were presented with a brief description of what left and right typically refer to in political contexts and subsequently instructed to position themselves on a 7-point semantic differential (1 = very left, 7 = very right; M = 3.84, SD = 1.26).
Statistical analysis
We conducted a path analysis in R using maximum likelihood estimation with robust standard errors (MLR) and the full-information maximum likelihood procedure (FIML) for handling missing data (see Lee and Shi, 2021 for a detailed discussion). Variables included in interaction terms were mean-centered. In addition to entering gender, age, education, and political leaning as covariates, we controlled for the autoregressive path of perceived severity.
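Mean-centering before forming interaction terms, as described above, simply subtracts each predictor’s mean so that products are taken between deviation scores. A minimal Python sketch with invented scores (the actual modeling was done in R):

```python
def mean_center(xs):
    """Subtract the sample mean so the centered scores sum to zero."""
    m = sum(xs) / len(xs)
    return [x - m for x in xs]

# Hypothetical scores for two predictors entering an interaction term:
empathic_suffering = [4.0, 3.5, 5.0, 2.5]
egalitarianism = [4.5, 3.0, 4.0, 2.5]

centered_es = mean_center(empathic_suffering)
centered_eg = mean_center(egalitarianism)

# The product of the centered predictors forms the interaction term.
interaction = [a * b for a, b in zip(centered_es, centered_eg)]
```

Centering leaves the interaction coefficient itself unchanged but makes the lower-order coefficients interpretable as effects at the sample mean of the other predictor, and it reduces nonessential collinearity between the predictors and their product term.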
4 Results
Zero-order correlations are displayed in Table 1. Both empathy dimensions as well as anti-migrant attitudes and the SDO dominance subscale were highly correlated (r > .5). Since predictors were entered simultaneously into the path model, shared variance between them was partialed out, which may, at least to some extent, alter construct meanings. Notably, assumption checks indicated some deviations from residual normality and decreasing residual variances, possibly introducing bias (see OSF Appendix Figures 3a–e).
Table 2: Results from Path Analysis.

Outcome: Perceived Severity (W2)

| Predictors | B (SE) | β | p |
|---|---|---|---|
| Perceived Severity (W1) | **.36 (.04)** | .36 | < .001 |
| Gender | **.14 (.06)** | .07 | .010 |
| Age | **.01 (.002)** | .09 | .001 |
| Education Group | .01 (.06) | .01 | .823 |
| Political Leaning | **-.06 (.03)** | -.07 | .024 |
| Empathic Suffering | **.25 (.05)** | .19 | < .001 |
| Feeling Others | -.03 (.04) | -.03 | .427 |
| Identity Insecurity | -.01 (.03) | -.01 | .876 |
| Anti-Migration Attitudes | **-.10 (.03)** | -.13 | < .001 |
| SDO: Dominance | .02 (.03) | .02 | .470 |
| SDO: Egalitarianism | **.08 (.04)** | .07 | .039 |
| Adj. R² | .37 | | |
| Empathic Suffering × Anti-Migration Attitudes | -.01 (.04) | -.01 | .856 |
| Feeling Others × Anti-Migration Attitudes | -.03 (.04) | -.03 | .488 |
| Identity Insecurity × Anti-Migration Attitudes | -.04 (.03) | -.05 | .256 |
| Empathic Suffering × SDO: Egalitarianism | **-.14 (.05)** | -.11 | .004 |
| Feeling Others × SDO: Egalitarianism | .06 (.06) | .05 | .257 |
| Identity Insecurity × SDO: Egalitarianism | -.05 (.03) | -.05 | .214 |
| Empathic Suffering × SDO: Dominance | .04 (.05) | .03 | .407 |
| Feeling Others × SDO: Dominance | .02 (.04) | .02 | .545 |
| Identity Insecurity × SDO: Dominance | .01 (.03) | .01 | .846 |
| Adj. R² | .38 | | |

Notes: Bold indicates significant path coefficients. W1 = Wave 1, W2 = Wave 2, SDO = Social Dominance Orientation.
Main effects
Results concerning the main effects from the path analysis are summarized in Table 2 as well as visualized in Figure 1. Concerning trait empathy (H1), we only found a significant positive path for empathic suffering, B = .25, SE = .05, β = .19, p < .001, but not for feeling others, B = –.03, SE = .04, β = –.03, p = .427. That is, participants who stated that they tend to experience genuine emotions when realizing how other people might feel perceived digital hate against immigrants as more severe while those who showed a predisposition to mirror others’ feelings when observing them did not (at least given that shared variance was controlled for). For the second trait of interest, identity insecurity (H2), our analysis revealed no significant path, B = –.01, SE = .03, β = –.01, p = .876.
When it comes to anti-migrant attitudes (H3) and SDO (H4), a negative path from anti-migrant attitudes, B = –.10, SE = .03, β = –.13, p < .001, and a positive path from egalitarianism to perceived severity were significant, B = .08, SE = .04, β = .07, p = .039. Keeping in mind the abovementioned shared variance with anti-migrant attitudes, the path between dominance and perceived severity was not significant, B = .02, SE = .03, β = .02, p = .470. In other words, participants who stated that they consider immigrants a menace to society also perceived hateful attacks against them on social media as less problematic. Conversely, participants who held an egalitarian attitude towards different social groups tended to perceive anti-migrant digital hate as more severe.
Interaction effects between personality and attitudes
We asked whether participants’ attitudes might interact with trait empathy and identity insecurity with regard to perceived severity (RQ1a–c). Results showed a significant interaction between empathic suffering and egalitarianism, B = –.14, SE = .05, β = –.11, p = .004, indicating that the more people reported a tendency to empathically suffer with others, the weaker the link became between their belief that social groups are equal and should have the same chances in life and their severity perceptions (see OSF Appendix for figures). No other interactions across personality and attitudinal traits were significant (see Table 2).
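The shape of this interaction can be illustrated with the reported unstandardized coefficients; the helper below is a hypothetical simple-slope calculation in centered scale units, not part of the authors' analysis.

```python
def simple_slope(egalitarianism_c, b_main=0.25, b_interaction=-0.14):
    """Conditional slope of empathic suffering on perceived severity,
    using the reported coefficients B = .25 (main effect) and
    B = -.14 (interaction); egalitarianism_c is mean-centered."""
    return b_main + b_interaction * egalitarianism_c

# One unit below, at, and above the egalitarianism mean:
print(round(simple_slope(-1.0), 2))  # -> 0.39
print(round(simple_slope(0.0), 2))   # -> 0.25
print(round(simple_slope(1.0), 2))   # -> 0.11
```

In this illustration, the empathic-suffering slope shrinks from .39 one unit below the egalitarianism mean to .11 one unit above it, consistent with the "less relevant for egalitarian individuals" reading in the abstract.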
Covariates
Concerning covariates, we found a significant association with gender, B = .14, SE = .06, β = .07, p = .010, age, B = .01, SE = .002, β = .09, p = .001, and political leaning, B = –.06, SE = .03, β = –.07, p = .024. Participants who identified as female, were older, and leaned politically to the left were more likely to perceive digital hate against immigrants as severe. Whether or not participants had completed university education made no difference, B = .01, SE = .06, β = .01, p = .823.
5 Discussion
This study’s objective was to investigate varying severity perceptions of digital hate against immigrants by assessing the impact of personality traits (i.e., trait empathy and identity insecurity) and attitudes (i.e., anti-migration attitudes and SDO) based on domain-level theories, in order to advance human-centered classification algorithms as the EU’s AI Act requires. Results revealed that empathic suffering displayed the predicted positive association with perceived severity, whereas feeling others did not. These differential findings emphasize the importance of considering empathy facets when exploring perceptions of digital hate (Wachs et al., 2023). It is evident from our data that individuals with a heightened propensity to experience genuine emotions because they perceive the suffering of others are more likely to perceive digital hate directed at immigrants as severe. This suggests that emotional activation (rather than emotion mirroring) plays a more vital role in shaping one’s perception of digital hate. Together, these results prompt further investigation into the intricate relationship between empathy and varying digital hate perceptions, particularly when directed at vulnerable groups.
Conversely, we found no support for a relationship between identity insecurity and perceived severity. This may carry implications for uncertainty identity theory, since it postulates that individuals experiencing insecurity align with their in-groups to mitigate uncertainty, potentially leading to phenomena like xenophobia (Wagoner and Hogg, 2017). Given this lack of alignment, it might be that perceptions, such as the perceived severity of digital hate, may only be partially influenced by bystanders’ identity security, perhaps particularly when it is directed at marginalized groups like immigrants. Further research is needed to explore possibly more nuanced mechanisms underlying the proposed relationship, for example by considering potential moderating variables.
When it comes to the prediction via attitudes, anti-migrant attitudes turned out as expected, negatively predicting perceived severity, suggesting that topically coherent prejudices (here against immigrants) appear to color how digital hate is perceived. This aligns with prior research, which has demonstrated a connection between negative attitudes toward immigrants and their dehumanization (Markowitz and Slovic, 2020). When immigrants are perceived as lesser humans, reduced perceived severity of digital hate might be the logical consequence. Future research could explore the generalizability of this finding to other target groups of digital hate and may properly test for potential mediators being at work.
Findings partially supported the predictive role of SDO as we only found a significant positive relationship for egalitarianism. These results again highlight how differently constructed subdimensions may affect evaluations of digital hate targeting immigrants. The statistical non-significance of dominance suggests that the active subjugation of outgroups may not be as prominent as one might expect. This challenges the assumption that individuals endorsing this aspect of SDO would consistently perceive digital hate targeting immigrants as less severe (Ho et al., 2012). Conversely, the positive relationship between egalitarianism and perceived severity indicates that less confrontational hierarchy-enhancing ideologies may be an important driver for increased severity perceptions. In other words, individuals who subscribe to egalitarian but still socially stratified systems may be potential allies when it comes to digital hate against immigrants – a proposition, however, that requires more evidence.
No significant interactions between personality traits and anti-migrant attitudes were identified. This has two key implications. First, it highlights the enduring positive influence of empathic suffering on digital hate perception, regardless of contradictory attitudes. In essence, individuals with higher levels of empathic suffering tend to consistently perceive digital hate as more severe, irrespective of their anti-migration attitudes. Second, identity insecurity consistently did not affect digital hate perception, independent of anti-migrant attitudes.
Finally, results provided partial support for interaction effects between SDO and personality traits. Again, dominance was not a meaningful predictor, even in interaction with empathy and identity insecurity. In contrast, a significant negative interaction between empathic suffering and egalitarianism was observed. Broadly, this result indicates that greater or weaker egalitarian attitudes are less relevant in determining whether or not one evaluates hateful communication as problematic for those people who have a strong dispositional tendency to share the suffering of others, thus partially explaining attitude-incongruent evaluations and perhaps even behaviors related to severity perceptions (e.g., bystander interventions). If this could be replicated on a state level, it might also further explain the varying effectiveness of messaging strategies. That is, triggering or inhibiting empathic suffering via certain message cues may make anti-egalitarian audience members more or less aware of harms related to digital hate, respectively, presenting an intriguing avenue for future research. Given that this interaction can also be read the other way around, our finding might also suggest that cultivating egalitarian attitudes may be crucial for countering detrimental but hardly changeable personality traits. This is essential because empathic suffering’s positive relationship with severity perception also means that low-scoring individuals are more likely to disregard attacks on immigrants. Egalitarian attitudes may overwrite this socially undesirable disposition.
Regarding theoretical implications, our findings demonstrate that social constructionist (Burr, 2015) and cognitive appraisal lenses (Watson and Spence, 2007) provide solid frameworks for predicting perceptual differences of digital hate. Both personality traits and attitudes selectively influence perceived severity, occasionally interacting with each other. Concerning previously proposed typologies (Bianchi et al., 2022), we show that people do not seem to differentiate between the different types of hate when targeted against immigrants, at least not similarly as researchers do based on fine-grained conceptual distinctions.
We further advance our understanding of heterogeneous digital hate perceptions, which yields critical implications for detection methods, public policy, and social media platform governance. More specifically, we show that not only sociodemographics but also personality traits and attitudes can predict between-person differences in perceived severity. These findings offer implications for strategies aimed at sensitizing individuals to digital hate. Notably, emphasizing the role of empathic suffering, which might be triggered as a temporary state, may contribute to enhancing people’s capacity to recognize instances of digital hate as problematic and encourage intervention. Conversely, providing egalitarian framings may be promising to counteract the impact of inhibitory triggers of empathic (non-)suffering. In the context of dataset creation for detection tasks, it appears, based on our findings, increasingly necessary to carefully choose annotators by also considering personality and attitudinal characteristics to account for perceptual heterogeneity. Moreover, there is a necessity for scholarly and public discourse concerning the selection of appropriate thresholds for automated detection tools, as there exists considerable variability in individuals’ sensitivities to digital hate.
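The point about threshold selection can be made concrete: the share of content an automated tool flags depends directly on where the severity cut-off sits. A minimal sketch with hypothetical classifier scores (not real model output):

```python
import numpy as np

# Hypothetical severity scores for five comments (illustrative only).
scores = np.array([0.20, 0.45, 0.55, 0.70, 0.90])

for threshold in (0.5, 0.7):
    share = float((scores >= threshold).mean())
    print(f"threshold {threshold}: {share:.0%} flagged")
```

Raising the cut-off from 0.5 to 0.7 here drops the flagged share from 60 % to 40 %, which is why the choice of threshold is itself a normative decision worth public debate.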
6 Limitations
From a theoretical perspective, forthcoming research could consider incorporating additional dimensions of empathy (e.g., emotional attention, Alloway et al., 2016) and identity insecurity (e.g., insecurity due to dissimilar others; Massey and Cionea, 2023), which we only covered selectively. Further, a variety of other personality traits and attitudes (e.g., toward free speech) might be relevant for severity perceptions. Additionally, we examined de-contextualized severity perceptions of social media users as potential bystanders, which might differ from when participants are instructed to imagine themselves in other roles (e.g., as moderators or annotators). Another limitation relates to our focus on attitudes and traits that naturally disregards other important factors determining severity perceptions (such as, e.g., social norms).
Methodological constraints include the assessment of perceived severity through verbal descriptions rather than tangible examples and the reliance on self-reports. While using descriptions instead of stimuli is a typical approach in sensitive contexts where forced exposure comes with ethical concerns, gauging perceived severity without direct exposure may introduce bias and might hurt external validity. On a related note, the collapse of the two-factor solution might also be induced by the specific target group investigated, showcasing that digital hate against disempowered targets is not dependent on the specific content type. Additionally, self-reports are commonly challenged regarding whether participants can (or want to) accurately specify their perceptions in such a manner. To improve, it is imperative to sensitively expose participants to authentic digital hate and consider psychophysiological or behavioral measurements in addition to self-reports. Besides measurement limitations, cross-construct dynamics, especially regarding anti-migration attitudes and SDO, might have impacted our model.
7 Conclusion
Social media companies increasingly rely on automated detection methods for digital hate and, in doing so, struggle with heterogeneous perceptions among annotators. In this study, we demonstrate the predictive value of empathic suffering and egalitarian attitudes for users’ perceptions of digital hate targeting immigrants. Our findings can be read as a call for data annotation efforts to consider these factors more thoroughly, for instance by grouping and weighting individual responses accordingly in ML and DL experiments. In this way, the goal of increasing social acceptance of algorithmic moderation and improving detection performance might be closer in reach.
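One way such grouping and weighting of annotator responses could look in practice is sketched below; the weighting scheme (deriving weights from personality/attitude profiles) is hypothetical, not a procedure from this study.

```python
from collections import defaultdict

def weighted_label(votes, weights):
    """Aggregate binary hate/no-hate annotations by weighted vote.

    votes: annotator -> label (0 or 1); weights: annotator -> weight,
    e.g. derived from personality/attitude profiles (hypothetical scheme).
    """
    tally = defaultdict(float)
    for annotator, label in votes.items():
        tally[label] += weights.get(annotator, 1.0)
    return max(tally, key=tally.get)

votes = {"a1": 1, "a2": 0, "a3": 0}
print(weighted_label(votes, {a: 1.0 for a in votes}))            # unweighted majority -> 0
print(weighted_label(votes, {"a1": 2.5, "a2": 1.0, "a3": 1.0}))  # a1 up-weighted -> 1
```

The same three raw votes yield different aggregate labels depending on the weights, illustrating how annotator characteristics can be folded into dataset creation rather than averaged away.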
References
Akhtar, S., Basile, V., & Patti, V. (2020). Modeling annotator perspective and polarized opinions to improve hate speech detection. In L. Aroyo & E. Simperl (Eds.), Proceedings of the Eighth AAAI Conference on Human Computation and Crowdsourcing (pp. 151–154). https://doi.org/10.1609/hcomp.v8i1.747310.1609/hcomp.v8i1.7473Search in Google Scholar
Al-Kire, R., Pasek, M., Tsang, J.-A., Leman, J., & Rowatt, W. (2022). Protecting America’s borders: Christian nationalism, threat, and attitudes toward immigrants in the United States. Group Processes & Intergroup Relations, 25(2), 354–378. https://doi.org/10.1177/136843022097829110.1177/1368430220978291Search in Google Scholar
Alloway, T. P., Copello, E., Loesch, M., Soares, C., Watkins, J., Miller, D., Campell, G., Tarter, A., Law, N., Soares, C., & Ray, S. (2016). Investigating the reliability and validity of the multidimensional emotional empathy scale. Measurement, 90, 438–442. https://doi.org/10.1016/j.measurement.2016.05.01410.1016/j.measurement.2016.05.014Search in Google Scholar
Arcila-Calderón, C., Sánchez-Holgado, P., Quintana-Moreno, C., Amores, J.-J., & Blanco-Herrero, D. (2022). Hate speech and social acceptance of migrants in Europe: Analysis of tweets with geolocation. Comunicar: Revista Científica de Comunicación y Educación, 30(71), 21–35. https://doi.org/10.3916/C71-2022-0210.3916/C71-2022-02Search in Google Scholar
Bianchi, F., Hills, S., Rossini, P., Hovy, D., Tromble, R., & Tintarev, N. (2022). “It’s not just hate”: A multi-dimensional perspective on detecting harmful speech online. In Y. Goldberg, Z. Kozareva & Y. Zhang (Eds.), Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (pp. 8093–8099). https://doi.org/10.18653/v1/2022.emnlp-main.55310.18653/v1/2022.emnlp-main.553Search in Google Scholar
Binns, R., Veale, M., Van Kleek, M., & Shadbolt, N. (2017). Like trainer, like bot? Inheritance of bias in algorithmic content moderation. In G. L. Ciampaglia, A. Mashhadi, & T. Yasseri (Eds.), Social informatics (pp. 405–415). Springer International Publishing. https://doi.org/10.1007/978-3-319-67256-4_3210.1007/978-3-319-67256-4_32Search in Google Scholar
Brown, A. (2017). What is hate speech? Part 2: Family resemblances. Law and Philosophy, 36(5), 561–613. https://doi.org/10.1007/s10982-017-9300-x10.1007/s10982-017-9300-xSearch in Google Scholar
Brüggemann, M., & Meyer, H. (2023). When debates break apart: Discursive polarization as a multi-dimensional divergence emerging in and through communication. Communication Theory, 33(2–3), 132–142, https://doi.org/10.1093/ct/qtad01210.1093/ct/qtad012Search in Google Scholar
Burr, V. (2015). Social constructionism. In J. D. Wright (Ed.), International encyclopedia of the social & behavioral sciences (pp. 222–227). Elsevier. https://doi.org/10.1016/B978-0-08-097086-8.24049-X10.1016/B978-0-08-097086-8.24049-XSearch in Google Scholar
Carnelley, K. B., & Boag, E. M. (2019). Attachment and prejudice. Current Opinion in Psychology, 25, 110–114. https://doi.org/10.1016/j.copsyc.2018.04.00310.1016/j.copsyc.2018.04.003Search in Google Scholar
Castellanos, M., Wettstein, A., Wachs, S., Kansok-Dusche, J., Ballaschk, C., Krause, N., & Bilz, L. (2023). Hate speech in adolescents: A binational study on prevalence and demographic differences. Frontiers in Education, 8. https://www.frontiersin.org/articles/10.3389/feduc.2023.107624910.3389/feduc.2023.1076249Search in Google Scholar
Čavojová, V., & Brezina, I. (2021). Everybody bullshits sometimes: Relationships of bullshitting frequency, overconfidence and myside bias in the topic of migration. Studia Psychologica, 63(2), 158–174. https://doi.org/10.31577/sp.2021.02.81810.31577/sp.2021.02.818Search in Google Scholar
Costello, M., Restifo, S. J., & Hawdon, J. (2021). Viewing anti-immigrant hate online: An application of routine activity and Social Structure-Social Learning Theory. Computers in Human Behavior, 124, 106927. https://doi.org/10.1016/j.chb.2021.10692710.1016/j.chb.2021.106927Search in Google Scholar
Cowan, G., & Hodge, C. (1996). Judgments of hate speech: The effects of target group, publicness, and behavioral responses of the target. Journal of Applied Social Psychology, 26(4), 355–374. https://doi.org/10.1111/j.1559-1816.1996.tb01854.x10.1111/j.1559-1816.1996.tb01854.xSearch in Google Scholar
Cowan, G., & Khatchadourian, D. (2003). Empathy, ways of knowing, and interdependence as mediators of gender differences in attitudes toward hate speech and freedom of speech. Psychology of Women Quarterly, 27(4), 300–308. https://doi.org/10.1111/1471-6402.0011010.1111/1471-6402.00110Search in Google Scholar
Cuff, B. M. P., Brown, S. J., Taylor, L., & Howat, D. J. (2016). Empathy: A review of the concept. Emotion Review, 8(2), 144–153. https://doi.org/10.1177/175407391455846610.1177/1754073914558466Search in Google Scholar
Czymara, C. S. (2021). Attitudes toward refugees in contemporary europe: A longitudinal perspective on cross-national differences. Social Forces, 99(3), 1306–1333. https://doi.org/10.1093/sf/soaa05510.1093/sf/soaa055Search in Google Scholar
Dargan, S., Kumar, M., Ayyagari, M. R., & Kumar, G. (2020). A survey of deep learning and its applications: A new paradigm to machine learning. Archives of Computational Methods in Engineering, 27, 1071–1092. https://doi.org/10.1007/s11831-019-09344-w10.1007/s11831-019-09344-wSearch in Google Scholar
European Commission (2021). Special Eurobarometer 500 Report: Future of Europe. https://www.europarl.europa.eu/at-your-service/files/be-heard/eurobarometer/2021/future-of-europe-2021/en-foe-special-eb-report.pdfSearch in Google Scholar
European Parliament (2023). Artificial Intelligence Act. https://www.europarl.europa.eu/doceo/document/TA-9-2023-0236_EN.pdfSearch in Google Scholar
Frischlich, L. (2021). #Dark inspiration: Eudaimonic entertainment in extremist Instagram posts. New Media & Society, 23(3), 554–577. https://doi.org/10/gghnhr10.1177/1461444819899625Search in Google Scholar
Gelber, K. (2021). Differentiating hate speech: A systemic discrimination approach. Critical Review of International Social and Political Philosophy, 24(4), 393–414. https://doi.org/10.1080/13698230.2019.157600610.1080/13698230.2019.1576006Search in Google Scholar
Graf, S., & Sczesny, S. (2019). Intergroup contact with migrants is linked to support for migrants through attitudes, especially in people who are politically right wing. International Journal of Intercultural Relations, 73, 102–106. https://doi.org/10.1016/j.ijintrel.2019.09.00110.1016/j.ijintrel.2019.09.001Search in Google Scholar
Hameleers, M., Bos, L., & de Vreese, C. H. (2017). The appeal of media populism: The media preferences of citizens with populist attitudes. Mass Communication and Society, 20(4), 481–504. https://doi.org/10.1080/15205436.2017.129181710.1080/15205436.2017.1291817Search in Google Scholar
Hangartner, D., Gennaro, G., Alasiri, S., Bahrich, N., Bornhoft, A., Boucher, J., Demirci, B. B., Derksen, L., Hall, A., Jochum, M., Munoz, M. M., Richter, M., Vogel, F., Wittwer, S., Wüthrich, F., Gilardi, F. & Donnay, K. (2021). Empathy-based counterspeech can reduce racist hate speech in a social media field experiment. Proceedings of the National Academy of Sciences, 118(50), e2116310118. https://doi.org/10.1073/pnas.211631011810.1073/pnas.2116310118Search in Google Scholar
Hanzelka, J., & Schmidt, I. (2017). Dynamics of cyber hate in social media: A comparative analysis of anti-Muslim movements in the Czech Republic and Germany. International Journal of Cyber Criminology, 11(1), 143–160. https://doi.org/10.5281/zenodo.495778Search in Google Scholar
Hettiachchi, D., Holcombe-James, I., Livingstone, S., de Silva, A., Lease, M., Salim, F. D., & Sanderson, M. (2023). How crowd worker factors influence subjective annotations: A study of tagging misogynistic hate speech in tweets. arXiv. https://doi.org/10.48550/arXiv.2309.0128810.1609/hcomp.v11i1.27546Search in Google Scholar
Ho, A. K., Sidanius, J., Pratto, F., Levin, S., Thomsen, L., Kteily, N., & Sheehy-Skeffington, J. (2012). Social dominance orientation: Revisiting the structure and function of a variable predicting social and political attitudes. Personality and Social Psychology Bulletin, 38(5), 583–606. https://doi.org/10.1177/014616721143276510.1177/0146167211432765Search in Google Scholar
Ho, A. K., Sidanius, J., Kteily, N., Sheehy-Skeffington, J., Pratto, F., Henkel, K. E., Foels, R., & Stewart, A. L. (2015). The nature of social dominance orientation: Theorizing and measuring preferences for intergroup inequality using the new SDO7 scale. Journal of Personality and Social Psychology, 109(6), 1003–1028. https://doi.org/10.1037/pspi000003310.1037/pspi0000033Search in Google Scholar
Hogg, M. A. (2007). Uncertainty-identity theory. Advances in Experimental Social Psychology, 39, 69–126. https://doi.org/10.1016/S0065-2601(06)39002-810.1016/S0065-2601(06)39002-8Search in Google Scholar
Kleinert, M., & Schlueter, E. (2022). Why and when do citizens support populist right-wing social movements? Development and test of an integrative theoretical model. Journal of Ethnic and Migration Studies, 48(9), 2148–2167. https://doi.org/10.1080/1369183X.2020.176378810.1080/1369183X.2020.1763788Search in Google Scholar
Kocoń, J., Figas, A., Gruza, M., Puchalska, D., Kajdanowicz, T., & Kazienko, P. (2021). Offensive, aggressive, and hate speech analysis: From data-centric to human-centered approach. Information Processing & Management, 58(5), 102643. https://doi.org/10.1016/j.ipm.2021.10264310.1016/j.ipm.2021.102643Search in Google Scholar
Kümpel, A. S., & Unkel, J. (2023). Differential perceptions of and reactions to incivil and intolerant user comments. Journal of Computer-Mediated Communication, 28(4), zmad018. https://doi.org/10.1093/jcmc/zmad01810.1093/jcmc/zmad018Search in Google Scholar
La Macchia, S. T., & Radke, H. R. M. (2020). Social dominance orientation and social dominance theory. In V. Zeigler-Hill, & T. K. Shackelford (Eds.), Encyclopedia of personality and individual differences (pp. 5028–5036). Springer International Publishing. https://doi.org/10.1007/978-3-319-24612-3_126710.1007/978-3-319-24612-3_1267Search in Google Scholar
Lee, S., Baek, H., & Kim, S. (2023). How people perceive malicious comments differently: Factors influencing the perception of maliciousness in online news comments. Frontiers in Psychology, 14. https://doi.org/10.3389/fpsyg.2023.122100510.3389/fpsyg.2023.1221005Search in Google Scholar
Lee, T., & Shi, D. (2021). A comparison of full information maximum likelihood and multiple imputation in structural equation modeling with missing data. Psychological Methods, 26(4), 466–485. https://doi.org/10.1037/met000038110.1037/met0000381Search in Google Scholar
Lo Moro, G., Scaioli, G., Martella, M., Pagani, A., Colli, G., Bert, F., & Siliquini, R. (2023). Exploring cyberaggression and mental health consequences among adults: An Italian nationwide cross-sectional study. International Journal of Environmental Research and Public Health, 20(4). https://doi.org/10.3390/ijerph2004322410.3390/ijerph20043224Search in Google Scholar
Malik, J. S., Pang, G., & van den Hengel, A. (2022). Deep learning for hate speech detection: A comparative study. arXiv. https://doi.org/10.48550/arXiv.2202.09517Search in Google Scholar
Markowitz, D. M., & Slovic, P. (2020). Social, psychological, and demographic characteristics of dehumanization toward immigrants. Proceedings of the National Academy of Sciences of the United States of America, 117(17), 9260–9269. https://doi.org/10.1073/pnas.192179011710.1073/pnas.1921790117Search in Google Scholar
Massey, Z. B., & Cionea, I. A. (2023). A new scale for measuring identity insecurity. Communication Methods and Measures, 17(1), 40–58. https://doi.org/10.1080/19312458.2022.214463110.1080/19312458.2022.2144631Search in Google Scholar
Matthes, J., Koban, K., Bührer, S., Kirchmair, T., Weiß, P., Khaleghipour, M., Saumer, M., & Meerson, R. (2023). The state of evidence in digital hate research: An umbrella review. OSF. https://osf.io/ya456/files/osfstorage/650c22665a0a81246d04487cSearch in Google Scholar
Mullah, N. S., & Zainon, W. M. N. W. (2021). Advances in machine learning algorithms for hate speech detection in social media: A review. IEEE Access, 9, 88364–88376. https://doi.org/10.1109/ACCESS.2021.308951510.1109/ACCESS.2021.3089515Search in Google Scholar
Nussberger, A. M., Luo, L., Celis, L. E. & Crockett, M. J. (2022). Public attitudes value interpretability but prioritize accuracy in artificial intelligence. Nature Communications, 13, 5821. https://doi.org/10.1038/s41467-022-33417-310.1038/s41467-022-33417-3Search in Google Scholar
Panno, A. (2018). Social dominance and attitude towards immigrants: The key role of happiness. Social Sciences, 7(8), 126. https://doi.org/10.3390/socsci708012610.3390/socsci7080126Search in Google Scholar
Perera, A., & Fernando, P. (2021). Accurate cyberbullying detection and prevention on social media. Procedia Computer Science, 181, 605–611. https://doi.org/10.1016/j.procs.2021.01.20710.1016/j.procs.2021.01.207Search in Google Scholar
Pratto, F., Sidanius, J., & Levin, S. (2006). Social dominance theory and the dynamics of intergroup relations: Taking stock and looking forward. European Review of Social Psychology, 17(1), 271–320. https://doi.org/10.1080/1046328060105577210.1080/10463280601055772Search in Google Scholar
Pruysers, S. (2023). Personality and attitudes towards refugees: Evidence from Canada. Journal of Elections, Public Opinion and Parties, 33(4), 538–558. https://doi.org/10.1080/17457289.2020.182418710.1080/17457289.2020.1824187Search in Google Scholar
Rollero, C., Bergagna, E., & Tartaglia, S. (2021). What is violence? The role of sexism and social dominance orientation in recognizing violence against women. Journal of Interpersonal Violence, 36(21–22), NP11349–NP11366. https://doi.org/10.1177/088626051988852510.1177/0886260519888525Search in Google Scholar
Rossini, P. (2022). Beyond incivility: Understanding patterns of uncivil and intolerant discourse in online political talk. Communication Research, 49(3), 399–425. https://doi.org/10.1177/009365022092131410.1177/0093650220921314Search in Google Scholar
Rudnicki, K., Vandebosch, H., Voué, P., & Poels, K. (2023). Systematic review of determinants and consequences of bystander interventions in online hate and cyberbullying among adults. Behaviour & Information Technology, 42(5), 527–544. https://doi.org/10.1080/0144929X.2022.202701310.1080/0144929X.2022.2027013Search in Google Scholar
Sang, Y., & Stanton, J. (2022). The origin and value of disagreement among data labelers: A case study of individual differences in hate speech annotation. In M. Smits (Ed.), Information for a better world: Shaping the global future (pp. 425–444). Springer International Publishing. https://doi.org/10.1007/978-3-030-96957-8_3610.1007/978-3-030-96957-8_36Search in Google Scholar
Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., & Smith, N. A. (2022). Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. arXiv. https://doi.org/10.48550/arXiv.2111.0799710.18653/v1/2022.naacl-main.431Search in Google Scholar
Schotte, S., & Winkler, H. (2018). Why are the elderly more averse to immigration when they are more likely to benefit? Evidence across countries. International Migration Review, 52(4), 1250–1282. https://doi.org/10.1177/019791831876792710.1177/0197918318767927Search in Google Scholar
Sheth, A., Shalin, V. L., & Kursuncu, U. (2022). Defining and detecting toxicity on social media: Context and knowledge are key. Neurocomputing, 490, 312–318. https://doi.org/10.1016/j.neucom.2021.11.09510.1016/j.neucom.2021.11.095Search in Google Scholar
Sidanius, J., Cotterill, S., Sheehy-Skeffington, J., Kteily, N., & Carvacho, H. (2016). Social dominance theory: Explorations in the psychology of oppression. In C. G. Sibley, & F. K. Barlow (Eds.), The Cambridge handbook of the psychology of prejudice (pp. 149–187). Cambridge University Press. https://doi.org/10.1017/9781316161579.00810.1017/9781316161579.008Search in Google Scholar
Sidanius, J., Kteily, N., Sheehy-Skeffington, J., Ho, A. K., Sibley, C., & Duriez, B. (2013). You’re inferior and not worth our concern: The interface between empathy and social dominance orientation. Journal of Personality, 81(3), 313–323. https://doi.org/10.1111/jopy.1200810.1111/jopy.12008Search in Google Scholar
Solano, G. M. & Huddleston, T. (2020). Migrant integration policy index 2020. CIDOB and MPG.Search in Google Scholar
Tajfel, H., & Turner, J. C. (2004). The Social Identity Theory of intergroup behavior. In J. T. Jost, & J. Sidanius (Eds.), The Social Identity Theory of intergroup behavior (pp. 276–293). Psychology Press. https://doi.org/10.4324/9780203505984-1610.4324/9780203505984-16Search in Google Scholar
Tang, W. Y., Reer, F., & Quandt, T. (2020). Investigating sexual harassment in online video games: How personality and context factors are related to toxic sexual behaviors against fellow players. Aggressive Behavior, 46(1), 127–135. https://doi.org/10.1002/ab.21873
Valentova, M., & Alieva, A. (2014). Gender differences in the perception of immigration-related threats. International Journal of Intercultural Relations, 39, 175–182. https://doi.org/10.1016/j.ijintrel.2013.08.010
van Rosendaal, J., Caselli, T., & Nissim, M. (2020). Lower bias, higher density abusive language datasets: A recipe. In J. Monti, V. Basile, M. P. Di Buono, R. Manna, A. Pascucci & S. Tonelli (Eds.), Proceedings of the Workshop on Resources and Techniques for User and Author Profiling in Abusive Language (pp. 14–19). https://aclanthology.org/2020.restup-1.4
Veenstra, L., Bushman, B. J., & Koole, S. L. (2018). The facts on the furious: A brief review of the psychology of trait anger. Current Opinion in Psychology, 19, 98–103. https://doi.org/10.1016/j.copsyc.2017.03.014
Wachs, S., Krause, N., Wright, M. F., & Gámez-Guadix, M. (2023). Effects of the prevention program “Hateless. Together Against Hatred” on adolescents’ empathy, self-efficacy, and countering hate speech. Journal of Youth and Adolescence, 52(6), 1115–1128. https://doi.org/10.1007/s10964-023-01753-2
Wagoner, J. A., & Hogg, M. A. (2017). Uncertainty-identity theory. In V. Zeigler-Hill, & T. K. Shackelford (Eds.), Encyclopedia of personality and individual differences (pp. 1–8). Springer International Publishing. https://doi.org/10.1007/978-3-319-28099-8_1195-1
Waseem, Z. (2016). Are you a racist or am I seeing things? Annotator influence on hate speech detection on Twitter. In D. Bamman, A. S. Doğruöz, J. Eisenstein, D. Hovy, D. Jurgens, B. O’Connor, A. Oh, O. Tsur & S. Volkova (Eds.), Proceedings of the First Workshop on NLP and Computational Social Science (pp. 138–142). https://doi.org/10.18653/v1/W16-5618
Watson, L., & Spence, M. T. (2007). Causes and consequences of emotions on consumer behaviour: A review and integrative cognitive appraisal theory. European Journal of Marketing, 41(5/6), 487–511. https://doi.org/10.1108/03090560710737570
Yin, W., & Zubiaga, A. (2021). Towards generalisable hate speech detection: A review on obstacles and solutions. PeerJ Computer Science, 7, e598. https://doi.org/10.7717/peerj-cs.598
Ziegele, M., Koehler, C., & Weber, M. (2018). Socially destructive? Effects of negative and hateful user comments on readers’ donation behavior toward refugees and homeless persons. Journal of Broadcasting & Electronic Media, 62(4), 636–653. https://doi.org/10.1080/08838151.2018.1532430
© 2024 by the authors, published by Walter de Gruyter GmbH, Berlin/Boston
This work is licensed under a Creative Commons Attribution 4.0 International License.