Abstract
Antonymy is the lexical relation of opposition. The nature of the oppositeness may differ – e.g., contradictory (‘true’–‘false’) or gradable (‘tall’–‘short’) – and the formal encoding of the relation may also vary: the antonyms may be expressed as distinct lexical forms (e.g., true vs. false), or one form may be derived from the other (e.g., true vs. untrue). We investigate the relationship between the two members of 37 antonym pairs across 55 spoken languages in order to see whether there are patterns in how antonymy is expressed and which of the two members is more likely to be derived from the other. We find great variation in the extent to which languages use derivation (labeled “neg-constructed forms”) as an antonym-formation strategy. However, when we do find a derived form, it tends to target the member of the pair that is lower in either valence (positive vs. negative) or magnitude (more vs. less), in line with our hypotheses. We also find that antonym pairs belonging to a core set of property concepts are more likely to be encoded as distinct lexical forms, whereas pairs involving peripheral property concepts are relatively more likely to encode one member with a derived form.
1 Introduction
Derivational negation is famously exploited by some creators of constructed and fictional languages for the expression of antonymy. Orwell’s “Newspeak” uses this device to get rid of unnecessary words and thereby reduce the size of its vocabulary:
In addition, any word – this again applied in principle to every word in the language – could be negatived by adding the prefix un- […] By such methods it was found possible to bring about an enormous diminution of vocabulary. Given, for instance, the word good, there was no need for such a word as bad, since the required meaning was equally well – indeed, better – expressed by ungood. All that was necessary, in any case where two words formed a natural pair of opposites, was to decide which of them to suppress. Dark, for example, could be replaced by unlight, or light by undark, according to preference. (Orwell 2008 [1949]: 315)
In a similar way, in Esperanto, the number of lexical roots to be remembered by the language learner is reduced by the systematic use of a negative prefix (mal-) in the adjectival lexicon: longa–mallonga ‘long–short’, alta–malalta ‘high–low’, varma–malvarma ‘warm/hot–cold’, seka–malseka ‘dry–wet’, etc. The existence of such constructed and fictional examples underlines the systematic relation between the two members of antonymic pairs and their relation to negation. In this paper, we will investigate to what extent natural languages make use of similar derivational mechanisms in their expression of antonymic relations.
Antonymy is widely known as one of many types of lexical relations between words – generally defined as the relation between words with opposite meanings. However, opinions vary on the extent to which antonymy encompasses lexico-semantic versus conceptual versus pragmatic relations between words, on how it can be delimited from related phenomena, and on which subtypes it covers. The common core here is nonetheless a notion of opposition, or oppositeness, although the exact nature of the oppositeness is known to vary across different types of antonyms. For example, two meanings may be contradictory or complementary, such as ‘true’ versus ‘false’, but they can also be contrary or gradable (scalar), such as ‘tall’ versus ‘short’. Given that antonymy involves opposites, it is also central to the field of negation, since negation can be used to express the opposite (negative) counterpart of something. The simplest case of morphologically derived antonyms, illustrated by such pairs as happy versus unhappy or possible versus impossible, does in fact involve what are normally analyzed as negative derivational markers (in these cases prefixes).
The distribution of negative affixes has attracted quite some attention in theoretical morphology, but most of this literature is based on English, with occasional additions of a few other well-described Germanic languages (mainly German and Swedish). Our research to a large extent follows in the footsteps of Zimmer (1964), which focuses on negative affixes across languages, but is also severely limited in its language coverage. In fact, derivational negation on the whole has so far not been subject to any cross-linguistic scrutiny, which is all the more surprising given several large-scale studies on various aspects of negation, such as the negation of declarative main clauses with verbal predicates, i.e., standard negation (e.g., Dahl 1979; Dryer 2013a, 2013b, 2013c; Miestamo 2005; Payne 1985), prohibitives (van der Auwera et al. 2013), negation in stative predications, i.e., ascriptive,[1] existential, locative and possessive negation (Eriksen 2011; Veselinova 2013, 2014), or negative indefinites (Haspelmath 2001; Kahrel 1996) – see Miestamo (2017) for an overview of typological work on negation.
What is well known is that across antonym pairs, there is variation in the formal expression of the members of the pair: they may be expressed as distinct lexical forms (e.g., true vs. false), or one form may be derived from the other (e.g., true vs. untrue) – and, as is clear from these examples, there are antonym pairs for which both expression strategies exist (i.e., untrue ≈ false). This distinction is the overarching issue addressed in this study – that is, we investigate to what extent antonymy can be expressed by distinct lexical items, as in true versus false, by words derivationally related to each other, as in true versus untrue, or both. Rather than zooming in on one particular language, we explore this question cross-linguistically, by systematically comparing the expression of 37 antonym pairs in 55 spoken languages from different families and areas and focusing on one particular research question:
Which types of property words are typically targeted by lexical vs. derivational antonymy, and why?
We approach this question by testing five specific hypotheses, partly based on suggestions in earlier research; these hypotheses will be laid out in Section 2.4. An interesting subsidiary question concerns cross-linguistic variation in the extent of lexical versus derivational antonymy in individual languages.
The issues we are interested in pertain to the broader inquiry into which parts of the vocabulary are expressed by basic/underived versus derived words, what formal devices there are in a language for forming words from other words and what meaning relations can be expressed by such devices. These, in turn, belong to lexical typology, defined as the “systematic study of cross-linguistic variation in words and vocabularies, i.e. the cross-linguistic and typological branch of lexicology” (Koptjevskaja-Tamm 2012: 373; see also Koptjevskaja-Tamm and Veselinova 2020). Our study is in its spirit and methodology inspired by Nichols et al.’s (2004) large-scale cross-linguistic research asking to what extent the members in pairs of intransitive versus transitive verbs (e.g., ‘lie’ vs. ‘lay’, ‘die’ vs. ‘kill’) are involved in derivational relations with each other, as we approach the encoding of the members of antonym pairs in a similar vein.
The structure of the paper will be as follows. In Section 2, we will outline the background to our study: lexical typology (2.1), antonymy in general (2.2), derivational antonymy (2.3), and the main hypotheses explored in this study (2.4). Section 3 is devoted to the data and methods underlying the study: the language sample (3.1), the questionnaire used for elicitation (3.2) and data processing (3.3). Section 4 presents the results of the study, and Section 5 concludes the paper and suggests directions for future research.
2 Background
In the following subsections, we will give an introduction to the field of lexical typology (2.1), a general overview of antonymy (2.2), and a closer look at derivational antonymy (2.3), which will lead us to the motivation of our research design and hypotheses (2.4).
2.1 Lexical typology and research on motivation
Lexical typology has by now firmly established itself as an important field of typological inquiry. Probably most of its significant achievements concern cross-linguistic variation in how languages categorize particular semantic domains (e.g., color, space, temperature, motion, body, etc.) by means of lexical expressions, but there is also a growing body of cross-linguistic research on lexical motivation, such as polysemy patterns, co-lexification, or semantic shifts associated with particular lexical expressions. Following Koch and Marzo (2007: 263), “[a] lexical item L1 is motivated with respect to a lexical item L2, if there is a cognitively relevant relation between the concept C1 expressed by L1 and the concept C2 expressed by L2 and if this cognitive relation is paralleled by a perceptible formal relation between the signifiers of L1 and L2”. Lexical motivation can apply to single lexical expressions, but also to groups and even whole classes of lexical expressions. To illustrate the former, piglet (‘a young exemplar of a pig’) is motivated by pig due to a clear cognitive relation between the concepts expressed by the two words, paralleled by the presence of the suffix -let, and the verb to cup (‘to form one’s hand or hands into the shape of a cup’) is cognitively related to the noun cup, with the relation paralleled by the change of word-class affiliation. Moving to more general patterns of lexical motivation, the Swedish compound päronträd ‘pear tree’ exemplifies a strategy used in the names of several species of trees: compounding the name of the fruit they bear with the word for ‘tree’ – cf. also äppleträd ‘apple tree’, plommonträd ‘plum tree’, etc. Typical derivational categories, such as agent, patient, instrument and locative nominals from verbs, evaluation (augmentative and diminutive), various verbal categories (causative and anti-causative, inchoative, frequentative, etc.), and word-class changing categories, or transpositions in Spencer’s (2014) terms (action nominals, denominal adjectives, etc.), all provide examples of lexical motivation applying to whole classes of lexical expressions.
There is in fact a huge complex of issues related to the question of how the lexicon, or at least some of its subparts, is organized in terms of structurally more basic versus more complex lexical expressions and what systematic motivational patterns there are (see Koptjevskaja-Tamm et al. [2015]; Koptjevskaja-Tamm and Veselinova [2020] for overviews). The relevant research is, however, strikingly uneven. Only a few word-formation strategies, in most cases well known from the more familiar languages, like the ones mentioned above, have been described for many individual languages and language families, but systematic cross-linguistic research on word-formation strategies and their functions has so far been surprisingly modest (cf. Müller et al. [2015]; Štekauer et al. [2012] for important steps in this direction).
An excellent example of lexico-typological exploration along these lines is provided by the systematic cross-linguistic research on form and meaning relationships in pairs of intransitive and transitive verbs, such as the causative synu ‘break (intr.)’ → syn-dyru ‘break (tr.)’ in Kazakh, or the anti-causative lomat’ ‘break (tr.)’ → lomat’-sja ‘break (intr.)’ in Russian. The research tradition goes back to Nedjalkov (1969) and Haspelmath (1993), but was further developed methodologically and extended to larger samples in Nichols et al. (2004). The outcome of Nichols et al.’s (2004) study is a typology of 80 genealogically and areally diverse languages based on their treatment of 18 pairs of what the authors view as semantically basic and almost universal intransitive verbs (‘sit’, ‘fear’, ‘laugh’, ‘fall’, etc.) and their transitive counterparts. The verb pairs have been selected with an eye on various parameters, among others, those known or supposed to influence derivational processes, and the languages in the sample (extended to 200 in Nichols [2018]) turn out to be relatively consistent in whether they treat intransitives as basic and transitives as derived (transitivizing languages), whether they derive intransitives by means of anti-causative morphology (detransitivizing languages), whether both intransitives and transitives are encoded by the same labile verb (neutral languages), or whether both intransitives and transitives have the same status (indeterminate languages).
In this study we want to explore to what extent another central lexical or lexico-semantic relation – antonymy – exhibits systematic formal expression. The next subsections provide a further background for this pursuit.
2.2 Earlier studies on antonymy
Antonymy has been a popular topic in lexicology, semantic theories, and logic; there is also a growing psycholinguistic, neurolinguistic, and corpus-based research literature focusing on antonymy (cf. Jones et al. [2012]; Kotzor [2021] for an overview). Antonymy evokes the notion of opposition, or oppositeness, with the traditional distinction between contradictory (or complementary) versus contrary opposites going back to Aristotle: the former are “either-or” relations, which exhaustively partition a particular domain into two subdomains (dead vs. alive, or even vs. odd), whereas the latter express opposite poles on a scale that also includes a gradient middle ground between them (big vs. small, or good vs. bad) (Horn 1989: 5–45). Researchers differ in how narrowly or broadly they define antonymy: in much of the literature antonymy is understood narrowly, as basically restricted to gradable (scalar), or contrary, opposites, while for others antonymy includes not only contradictory opposites but also reversives (fall vs. rise, or dress vs. undress) and conversives (child vs. parent, or teacher vs. student) (cf. Lehrer and Lehrer 1982).
The crucial notion of oppositeness, underlying antonymic relations, involves two components – logical incompatibility and maximal semantic similarity (see Jones et al. [2012: 3] for a useful account). A stone cannot be both big and small simultaneously, so big and small are logically incompatible. However, logical incompatibility is not sufficient for defining oppositeness – for instance, the fact that a big stone cannot simultaneously be described as generous does not make big and generous semantic opposites. To be opposites, the two expressions need to be semantically quite similar or – seen from the other side – minimally different from each other. For instance, they should be usable in the same contexts and, among other things, be applicable to the same entity. In our example, generous and big are simply too different from each other semantically, and generous cannot apply to stones at all, whether big or small.
In structuralist approaches to the lexicon (cf. Lehrer and Lehrer 1982) and in much lexicographic work – e.g., in the WordNet project (Fellbaum 1998; Miller et al. 1990) – antonymy is taken to be the most basic means of organizing the adjective lexicon. There is also extensive psycholinguistic (and so far relatively modest neurolinguistic) research on the role of antonymic associations in the mental lexicon and memory (e.g., Bentin 1987; Deese 1966; Gross et al. 1989; Herrmann et al. 1986; Jeon et al. 2009; Roehm et al. 2007; see also Hay 2001), as well as a growing body of corpus-based research on the discourse functions of antonymy and its typical manifestations in corpora (e.g., Jones et al. 2007; Kostić 2015; Lobanova et al. 2010; Muehleisen and Isono 2009; Willners and Paradis 2010; Wu 2017). There is thus strong accumulated evidence that antonymy is both a psychologically real and an important relation.[2] In view of what has been mentioned so far, antonymy seems to be a good candidate for giving rise to systematic motivational patterns across languages, whereby a recurrent cognitive relation between the two members in at least some antonymic pairs may be paralleled by a perceptible formal relation between the two (see the definition given in Section 2.1).
However, opinions vary as to what kind of relation antonymy actually is. The traditional view, further developed in the structuralist position, speaks about lexical relations or lexico-semantic relations, i.e., (semantic) relations among words. However, one and the same word can have different antonyms, often corresponding to distinct word senses. These, in turn, can be linked to different entities whose properties are characterized by the different antonym pairs (Herrmann et al. 1986). For instance, in English, tall is opposed to short when talking about people, and to low when talking about buildings, and the opposite of white in the context of wine will be red and not black, which is conceived of as its typical antonym when referring to a color spectrum (Jones et al. 2012: 14). Antonymy is thus not so much about words, but rather about words used in a particular sense. Pursuing this line still further, antonymy can be understood as a conceptual relation, i.e., a relation between construals (Croft and Cruse 2004: 169). It has even been suggested that antonym relations are pragmatically driven and are derived in contexts of use, making them much less dependent on the words’ lexical properties (Murphy 2003).
These differences in opinion are indicative of the inherent heterogeneity among the pairs of linguistic expressions that can be classified as antonymic, with some being antonyms par excellence and others much less so. What counts as minimal difference/maximal similarity is, of course, to a certain extent pragmatically determined, with some contrasts being highly context-dependent and others more or less conventionalized. Much current evidence points to a continuum of goodness of antonymy, in the spirit of Herrmann et al. (1986). According to Herrmann et al. (1986: 134–135), the ability to recognize a pair of words as gradable antonyms depends on two groups of factors: the nature of the dimension of meaning obtaining between them and the relative position of their meanings on it. First, the denotative meanings of the two words should share at least one dimension, and the clearer this dimension is, the more likely the pair is to be perceived as antonymic. For example, good versus bad, sharing a single dimension of goodness, is a better antonym pair than holy versus evil, which involves at least two (goodness as well as moral correctness). The two words should also share all the relevant denotational dimensions: holy versus evil, sharing both dimensions, is a better antonym pair than holy versus bad, in which only the first word includes the element of moral correctness. The two meanings in better antonym pairs should also be on opposite sides of the dimensional midpoint (hot vs. cold is better than cool vs. cold) and preferably at the same distance from it (hot vs. cold is better than hot vs. cool).
It has also been suggested that languages have “canonical antonyms”, i.e., “a limited core of highly opposable couplings that are strongly entrenched as pairs in memory and conventionalized as pairs in text and discourse, while all other couplings form a scale from more to less strongly related” (Paradis et al. 2009: 381). The dimensions that have been suggested, primarily on the basis of detailed studies of English and Swedish, include speed (slow–fast), luminosity (dark–light), strength (weak–strong), size (small–large), width (narrow–wide), merit (bad–good) and thickness (thin–thick) (Paradis et al. 2009; Willners and Paradis 2010). In other research strands it has been argued that dimensional adjectives are particularly well structured with respect to antonymy, i.e., they come in antonymic pairs, whereas oppositions for other domains frequently involve clusters of adjectives at the opposite poles, e.g., brave/bold/courageous versus cowardly/timid/fearful (Bierwisch and Lang 1989; Morzycki 2015: 138–140).
Much of the theoretical discussion on gradable antonymy has centered on the nature of scales (e.g., unbounded/open vs. bounded/closed) and on the asymmetries between the members of different antonym pairs. Cruse (1986: 206–214) and Cruse and Togia (1995) distinguish between three major types of antonymic construal with respect to scales: polar, equipollent, and overlapping. Polar antonyms are monoscalar, whereas equipollent and overlapping antonyms are biscalar. Polar antonyms are arranged along a single scale, e.g., short–long along the scale of length. Equipollent antonyms involve two scales with adjacent zero points pointing in opposite directions, e.g., cold–hot arranged on the scales of coldness and hotness. Finally, overlapping antonyms involve two overlapping scales, one major and one minor; for example, good–bad are arranged on the scale of merit, which is the major scale covering the whole range, and the scale of badness, which is the minor scale ranging from a mid-scalar position to the lower end of the major scale.
As regards asymmetries, research on antonymy frequently evokes the notion of markedness, with one member of an antonymic pair treated as marked and the other as unmarked, and with certain semantic and syntactic properties typically ascribed to or expected from the unmarked member. It is useful to distinguish between morphological markedness, with morphologically marked words containing additional morphs as compared to the unmarked ones (impossible vs. possible), and semantic markedness (see Ingram et al. [2016]; Lehrer [1985] for overviews). Some of the recurrently mentioned indicators of semantic markedness include the neutrality of unmarked gradable antonyms in questions such as “How long/big/old etc. is X” compared to “How short/little/young etc. is X”, as well as differences in frequency and evaluation (valence) between the two members of antonymic pairs. It may be expected that the two types of markedness are related to each other in that morphological markedness would be a symptom of semantic markedness – the idea at the heart of the notions of cross-linguistic (Greenberg 1966) or typological (Croft 2003) markedness, which couples asymmetries in the morpho-syntactic behavior of related categories to their semantic relations or asymmetries. However, the notion of semantic markedness turns out to be fairly elusive when applied to a larger class of contrasting pairs, even within one and the same language, with different tests and properties not necessarily pointing in the same direction (Lehrer 1985). In fact, as shown by Cruse and Togia (1995), most of the tests work best or exclusively for monoscalar antonyms. Semantic markedness has recently been evoked as a potential explanation for preferred antonym sequences (e.g., long preceding short, or alive preceding dead) in various languages – at least in English (Ingram et al. 2016; Jones 2002), Serbian (Kostić 2015) and Mandarin Chinese (Wu 2017) – with different interpretations of the results. The research on markedness-related phenomena in antonymy seems to suggest that markedness is difficult to apply across the board and that it might be easier to break it down into several different factors, especially in cross-linguistic research. It appears that valence (emotional evaluation) and magnitude, when applicable, can explain a large portion of the asymmetries in the behavior and processing of antonym pairs, in particular when combined with frequency differences (which often follow from valence).
Notably, there is also an established tradition of talking about polarity and of applying the terms “positive” versus “negative” antonyms, ambiguously, to at least two different contrasts – one having to do with evaluation (valence) and the other with magnitude. Thus, from the point of view of valence, ‘difficult’ and ‘dirty’ are negative, while ‘easy’ and ‘clean’ are positive. From the point of view of magnitude (as, for example, evident from the neutrality in the above-mentioned questions), ‘short’ and ‘new’ are negative, while ‘tall’ and ‘old’ are positive. The two distinctions do not necessarily point in the same direction: it is not quite clear which of the ‘dirty’ versus ‘clean’ antonyms is positive from the point of view of magnitude; ‘difficult’ is probably evaluated more negatively than ‘easy’, while the evaluation of ‘old’ versus ‘new’ as more positive or more negative is fairly context- and culture-dependent. The problematic and ambiguous use of such terms as “(un)marked”, “positive”, and “negative” has been noted and commented on in various connections (see also Morzycki 2015: 124). However, of special interest for our purposes is the claim that the semantic distinction between the two members in many antonym pairs is one of polarity and involves the presence or absence of a negative element, which is particularly evident in dimensional adjectives (Bierwisch and Lang 1989; Cruse and Togia 1995: 123), but not confined to them (see Cable [2018]; Heim [2019]; Kennedy [2001]; Morzycki [2015: 124–134] for different suggestions on how to account for these facts).
In summary, two points discussed in this section are particularly important for our study:
Given that there is a general cognitive relation of “oppositeness” between the members in (at least prototypical) antonymic pairs, we may expect that this relation is mirrored by a direct formal relation, such as derivation of one from the other, between the two members.
Given that the idea of polarity/negation in a broad sense is often evoked to explain the semantic difference between the members in at least some antonymic pairs, we may expect that such formal markers will often have the semantics of negation – in a broad sense – paralleling im-possible in English or ne-vozmožnyj (‘neg-possible’) in Russian.[3]
2.3 Earlier studies on affixal negation in antonymy
Given that adjectives with negative affixes, akin to the English impossible and its Russian equivalent nevozmožnyj, are widely attested in Germanic, Slavic, and Romance, the phenomenon has attracted considerable attention in linguistic research. Some of the leitmotifs in these studies are the restrictions on the application of negative affixes, on the one hand, and the meaning/interpretation of adjectives with negative affixes, on the other (see Horn [1989: 273–308] for an overview of the field, which to a large extent still holds). To illustrate the first type of query, why are unwise and unhealthy possible, but not *unstupid and *unsick? For a language like English, with multiple negative adjectival affixes (un-, in-, dis-, non-), the additional challenge is to provide a principled analysis of the choice among them, cf. unhappy, inaccurate, dishonest, nonverbal (see Lieber [2004: 111–125, 154–177] for a theoretical account). To illustrate the second type of query, do unhappy and impossible mean the same as not happy and not possible, respectively? Traditionally, the main idea is that affixal negation often creates gradable rather than complementary opposites, but the situation is far more complicated (Horn 1989: 273–308; see also, e.g., Colston 1999; Farshchi et al. 2021).
An interesting case comes from sign languages, which, despite generally being under-researched, have garnered some attention in typological studies of negation and, by extension, antonymy. Because sign languages operate in a visual modality with multiple simultaneous articulators, their morphology is not always as linear or overtly (de)compositional as that of spoken languages, making the definition of a morpheme somewhat complex. Research on negation across sign languages has shown that while some languages make use of manual, linear negators, others favor negation expressed simultaneously through non-manual markers (e.g., headshakes, facial expressions) or changes to the (affirmative/neutral) stem (e.g., changing or reversing the path or direction of the movement, substituting the handshape) (see Zeshan 2006). With regard to antonyms more specifically, some of these patterns have been shown to occur in antonym pairs. For example, some antonymic word pairs in Chinese Sign Language have been shown to have mirrored/reversed articulation or to use opposite dimensions of movement (e.g., horizontal vs. vertical) (Yang and Fischer 2002). A larger study across many sign languages showed a general trend of positive members of antonym pairs being more likely to have upward movement than their negative counterparts, arguably mirroring the spatial metaphor good is up (Börstell and Lepic 2020). In cases of non-linear alternations, it is often more difficult to argue that one form is the more “basic” one and the other derived from it, which is why we focus on spoken languages in this study.
The primary inspiration for the current investigation is Zimmer’s (1964) dissertation, which, in contrast to most of the other studies, is an early attempt to approach the phenomenon of affixal negation in adjectives cross-linguistically and to formulate cross-linguistic generalizations. It is primarily devoted to the distribution of adjectives with negative affixes in English, German, French, and Russian, with the data coming from a systematic search in several dictionaries and texts. In addition to these four well-described European Indo-European languages, the study also contains somewhat less systematic observations from a number of non-Indo-European languages (Jordanian Arabic, Mandarin Chinese, Finnish, Hungarian, Ilokano, Japanese, Kabardian, Tamil, Thai, and Yoruba). Zimmer is chiefly interested in the direction of derivation in those cases where one of the antonyms is derived from the other, i.e., in whether and to what extent it is possible to formulate any “universals” or generalizations regulating this formally asymmetric relation between the members of antonym pairs.
Zimmer starts from the two different versions of what he calls “the derivational universal” – largely inspired by the earlier research on German, Swedish, and English by researchers such as von Jhering, Wundt, van Ginneken, Noreen, and Jespersen – meant to explain the asymmetry of unwise versus *unstupid and unkind versus *unmean:
Negative affixes are used primarily with adjectival stems that have a “positive” value on evaluative scales such as good – bad, desirable – undesirable
Negative affixes are not used with adjectival stems that have a “negative” value on evaluative scales such as good – bad, desirable – undesirable (Zimmer 1964: 15)
It is primarily the second, more careful formulation whose validity Zimmer seeks to test in his study. The challenge, however, is to determine what counts as the positive versus negative value of an adjective. Zimmer’s solution is to restrict the use of the evaluative terms to such clear cases as good versus bad, healthy versus sick, honest versus treacherous, where “positive”, “in a more or less neutral context (such as They are…) would be understood as expressing a favorable judgment, or as describing a state generally considered desirable” (Zimmer 1964: 16). Other adjectives, such as smooth, rough, regular, or sporadic, should be considered as “neutral”. As Zimmer notes, this intuitive solution is not entirely satisfactory, but should probably work quite well for the more obvious positive and negative terms, at least as long as one is dealing with languages and cultures one is familiar with.
Zimmer’s results show that the “derivational hypothesis” does not hold, in either of its two formulations, in the languages under investigation, even though there seems to be some truth to it (English alone already provides counter-examples to both, such as unselfish). In his sample, it is Russian that stands out in being much more tolerant in its derivation of negative adjectives than the Germanic languages and French. However, as Zimmer notes, the Russian negative prefix ne- is homonymous with the negative particle ne, which has a very broad distribution. And this, as he “is tempted to speculate”, may account for its seemingly unrestricted application in that “various ‘odd’ combinations into which the prefix enters might be explained as developments from the homonymous phrases involving the negative particle” (Zimmer 1964: 66). Zimmer therefore suggests the following, more cautious derivational hypothesis in terms of preferences rather than absolute restrictions, which will play an important role in our study:
We could perhaps say that for any given language negative affixes that are distinct from the particle(s) used in sentence negation are likely to have a greater affinity for evaluatively positive adjective stems than for evaluatively negative ones. What this means in practice is that for any language with such negative affixes we would at least expect that the number of “negative” adjectives among their derivatives would exceed the number of “positive” adjectives among their derivatives. (Zimmer 1964: 82)
But even Russian is not completely free in deriving adjectives by means of negative prefixes. Zimmer suggests a local restriction operating in Russian, whereby in antonym pairs designating what he calls “dimensions of length” a ne-adjective can only be derived from the word denoting the larger pole, e.g., vysokij ‘high’ → nevysokij ‘neg.high’, but nizkij ‘low’ → *nenizkij ‘neg.low’ (Zimmer 1964: 64). This observation will also be of interest for our study.
Likewise important for our purposes are Zimmer’s observations that certain terms recur frequently in lists of words with negative affixes in different languages (e.g., words for ‘uncommon’, ‘ignorant’, ‘dishonest’, ‘unjust’). These in turn underlie his suggestions for a future quest for possible generalizations governing the distribution of lexical versus derived antonym pairs across the different concepts.
Another problem for further investigation would be the degree to which there is cross-linguistic similarity in the concepts that are designated by simplex terms, and the degree to which antonym pairs of the schema ‘x vs. un-x’ can be matched in different languages having negative affixes […] The questions to be investigated would be of the following kind: Is it generally true that words for ‘just’ have no simplex antonyms? Are there languages in which ‘common’ is customarily designated by an expression meaning ‘not rare’, or ‘regular’ by an expression meaning ‘not random, not haphazard’? Such questions of lexical universals (whether they be “factual universals” or significant preponderances of certain lexical features) are of considerable interest and can moreover be investigated with a fair degree of ease. (Zimmer 1964: 90)
Our study, seeking to explore which types of property words are typically targeted by derivational versus lexical antonymy, and why, is largely inspired by Zimmer’s observations, hypotheses, and suggestions for further research, which also underlie some of the more specific hypotheses formulated and tested in this study.
2.4 Hypotheses
The hypotheses presented in this section derive from the theoretical discussion in the preceding sections. First, relating to the notion of (semantic) markedness, we propose the hypothesis that semantically marked property concepts will be more likely to be expressed through negative derivation than their unmarked counterparts. We admit that, given the problems associated with the notion of semantic markedness discussed in Section 2.2, this hypothesis may be seen as only weakly justified. Nevertheless, since the notion has been so prominently present in the literature, we do find value in attempting to find a way to address it empirically, and we propose an operationalization of this notion for the purposes of our study in Section 3.2. The next two hypotheses relate to Zimmer (1964) and rest on a firmer theoretical grounding, also in the light of the discussion in Section 2.2. They narrow down the comparison to the two major contrasts behind the notion of semantic markedness: valence (evaluation) and magnitude. We hypothesize that evaluatively negative property concepts and property concepts denoting smaller magnitude will be more likely to be expressed through negative derivation than evaluatively positive ones and ones expressing larger magnitude, respectively. The next hypothesis is likewise inspired by Zimmer’s (1964) suggestions for a future study on possible generalizations governing the distribution of lexical versus derived antonym pairs across the different concepts; for the purposes of this study, this is operationalized as a comparison of property concepts from the different semantic types in Dixon’s (1977) sense. We hypothesize that oppositions involving property concepts from core semantic types will be more likely to be expressed without negative derivation – i.e., with plain lexical forms – than those involving property concepts from the non-core semantic types (cf. Section 3.2 for the details on the semantic types). And finally, based on the principle of economy, we hypothesize that there will be a trade-off between lexical and derivational expression of the property concepts involved in the antonym pairs. This hypothesis is not necessarily based on an expectation that lexicons should follow the principle of economy; surely there are various semantic and pragmatic factors at play as well, and in the case of triads like happy–sad/unhappy the two alternative antonyms of happy are not synonyms. It is, however, interesting to investigate in a broad quantitative perspective to what extent languages follow the principle of economy, and in order to do this, we have formulated the idea as a testable hypothesis. The hypotheses are numbered and formulated as follows.
Hypothesis 1:
Semantically marked members in antonymic pairs should be more likely to accept expression through negative derivation than their semantically unmarked counterparts.
Hypothesis 2:
Evaluatively negative members in antonymic pairs should be more likely to accept expression through negative derivation than their evaluatively positive counterparts.
Hypothesis 3:
Terms denoting smaller magnitude should be more likely to accept expression through negative derivation than their antonymic counterparts denoting greater magnitude.
Hypothesis 4:
Expression through negative derivation would be more likely to be found in oppositions involving property concepts from the category “other semantic types” than in those with property concepts from core semantic types. Antonym pairs from peripheral semantic types would be situated in between the other two.
Hypothesis 5:
There should be a trade-off between expression through negative derivation and expression without such derivation in antonymic pairs, i.e., there should be an inverse correlation between the cross-linguistic frequency of negative-derived versus not negative-derived (“plain”) expression in each antonym pair member.
With the hypotheses now explicitly formulated, we will move on to presenting the material and methodology we use in order to test them. Section 3 will introduce the necessary technical terms we use in operationalizing the hypotheses, most importantly the distinction between neg-constructed and plain lexical forms, as shorthand for words containing a derivational element with the meaning of negation and words lacking such an element, respectively (see Section 3.3).
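To make the trade-off in Hypothesis 5 more concrete, the following minimal sketch (in Python) shows one way such an inverse correlation could be computed. It is purely illustrative and does not reproduce the project’s own analysis scripts: the table layout, the column names (language, member, form_type) and the toy rows (“Lang A”, “Lang B”) are assumptions made for the example, with each attested expression of an antonym pair member in a language counted as either neg-constructed or plain.

```python
# Illustrative sketch only: hypothetical data layout, not the project's dataset or scripts.
import pandas as pd
from scipy.stats import spearmanr

rows = [
    # (language, antonym pair member, form type) -- toy data for illustration
    ("Lang A", "happy",      "plain"),
    ("Lang A", "sad",        "plain"),
    ("Lang A", "sad",        "neg-constructed"),
    ("Lang A", "possible",   "plain"),
    ("Lang A", "possible",   "neg-constructed"),
    ("Lang A", "impossible", "neg-constructed"),
    ("Lang B", "happy",      "plain"),
    ("Lang B", "sad",        "neg-constructed"),
    ("Lang B", "possible",   "plain"),
    ("Lang B", "impossible", "neg-constructed"),
]
responses = pd.DataFrame(rows, columns=["language", "member", "form_type"])

# Cross-linguistic counts of neg-constructed vs. plain expression per pair member.
counts = (
    responses.groupby(["member", "form_type"]).size()
    .unstack(fill_value=0)
    .reindex(columns=["neg-constructed", "plain"], fill_value=0)
)

# Hypothesis 5 predicts a negative (inverse) correlation between the two counts.
rho, p_value = spearmanr(counts["neg-constructed"], counts["plain"])
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3g})")
```

Note that if every language contributed exactly one expression per member, the two counts would sum to a constant and the inverse correlation would hold trivially; it is the availability of alternative expressions (as in the happy–sad/unhappy triad mentioned above) and of gaps in the data that make the hypothesis empirically interesting.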
3 Methodology
In this section, we will describe the data and methods used in our study: Section 3.1 introduces the languages that were sampled for this study; Section 3.2 describes the questionnaire that was sent out to language experts and consultants on the sampled languages in order to collect data on our list of selected antonym pairs; and Section 3.3 describes the data processing in terms of definitions used for interpreting the questionnaire responses and the statistical analyses of the annotated dataset.
All data and scripts for statistical analysis and data visualization can be found through this link: https://osf.io/8kuzh/.
3.1 Language sample
The research reported in this paper is based on questionnaire data from 55 spoken languages across 23 language families (see Figure 1 and Table 1). We have aimed at including languages from all continents and from a wide variety of families and branches in the spirit of variety sampling (see Miestamo et al. 2016). However, as is very often the case in questionnaire-based work, the availability of experts to fill in the questionnaire has led to certain biases in the sample of languages surveyed. Languages of western Eurasia are well represented in the sample, but the other continents much less so. We are aware of the problems that the underrepresentation of these continents and language families will pose to the generalizability of our results. At the same time, the dense coverage of Eurasian, especially Indo-European (n = 13) and Uralic (n = 10), languages has the advantage that it will serve as a basis for addressing diachronic issues and more local contact phenomena in future work. For the purposes of the present paper, we can say that the sample gives a good coverage for western Eurasia and the presence of a fair number of languages from other continents provides preliminary impressions on the degree of the generalizability of the results beyond Eurasia. Note, however, that our statistical models (Section 4) do address the bias, thereby increasing the degree of generalizability. In Table 1, we list the sampled languages by macroarea and family, with the family classification taken from Glottolog 4.7 (Hammarström et al. 2022). In Figure 1, the languages are shown on a world map, illustrating the geographical distribution of our sampled languages. Additionally, the languages are listed in alphabetical order in Appendix A with a listing of the experts who helped us by filling out the questionnaire and answering our subsequent questions. Their contribution is gratefully acknowledged.

Figure 1: The language sample by geographical distribution and genealogical family classification. 1. Amharic, 2. Akan/Twi, 3. Eton, 4. Gurenɛ, 5. isiNdebele, 6. Orungu, 7. Swahili, 8. Wolof, 9. Umpithamu, 10. Warlpiri, 11. Hebrew, 12. Basque, 13. Bulgarian, 14. Dutch (Flemish), 15. German, 16. Italian, 17. Khowar, 18. Lithuanian, 19. Punjabi, 20. Romanian, 21. Russian, 22. Sinhala, 23. Slovak, 24. Spanish, 25. Swedish, 26. Japanese, 27. Georgian (Modern), 28. Korean, 29. Khalkha Mongol, 30. Aghul, 31. Cantonese, 32. Dejongke, 33. Sakha, 34. Turkish, 35. Erzya, 36. Estonian, 37. Finnish, 38. Hungarian, 39. Komi-Zyrian, 40. Mari, 41. Nganasan, 42. North Saami, 43. Udmurt, 44. Veps, 45. West Greenlandic, 46. Yucatec Maya, 47. Choctaw, 48. Indonesian, 49. Kankanaey, 50. Kilmeri, 51. Mian, 52. Bunaq, 53. Mapudungun, 54. Hup, 55. Shipibo-Konibo.
Table 1: The language sample by macroarea and family according to Glottolog 4.7.
Index | Macroarea | Glottolog family | Language | ISO 639-3 | Glottocode |
---|---|---|---|---|---|
1 | Africa | Afro-Asiatic | Amharic | amh | amha1245 |
2 | Africa | Atlantic-Congo | Akan/Twi | aka | akan1250 |
3 | Africa | Atlantic-Congo | Eton | eto | eton1253 |
4 | Africa | Atlantic-Congo | Gurenɛ | gur | fare1241 |
5 | Africa | Atlantic-Congo | isiNdebele | nbl | sout2808 |
6 | Africa | Atlantic-Congo | Orungu | mye | myen1241 |
7 | Africa | Atlantic-Congo | Swahili | swh | swah1253 |
8 | Africa | Atlantic-Congo | Wolof | wol | nucl1347 |
9 | Australia | Pama-Nyungan | Umpithamu | umd | umbi1243 |
10 | Australia | Pama-Nyungan | Warlpiri | wbp | warl1254 |
11 | Eurasia | Afro-Asiatic | Hebrew | heb | hebr1245 |
12 | Eurasia | Basque | Basque | eus | basq1248 |
13 | Eurasia | Indo-European | Bulgarian | bul | bulg1262 |
14 | Eurasia | Indo-European | Dutch (Flemish) | vls | vlaa1240 |
15 | Eurasia | Indo-European | German | deu | stan1295 |
16 | Eurasia | Indo-European | Italian | ita | ital1282 |
17 | Eurasia | Indo-European | Khowar | khw | khow1242 |
18 | Eurasia | Indo-European | Lithuanian | lit | lith1251 |
19 | Eurasia | Indo-European | Punjabi | pan | panj1256 |
20 | Eurasia | Indo-European | Romanian | ron | roma1327 |
21 | Eurasia | Indo-European | Russian | rus | russ1263 |
22 | Eurasia | Indo-European | Sinhala | sin | sinh1246 |
23 | Eurasia | Indo-European | Slovak | slk | slov1269 |
24 | Eurasia | Indo-European | Spanish | spa | stan1288 |
25 | Eurasia | Indo-European | Swedish | swe | swed1254 |
26 | Eurasia | Japonic | Japanese | jpn | nucl1643 |
27 | Eurasia | Kartvelian | Georgian (modern) | kat | nucl1302 |
28 | Eurasia | Koreanic | Korean | kor | kore1280 |
29 | Eurasia | Mongolic-Khitan | Khalkha Mongol | khk | halh1238 |
30 | Eurasia | Nakh-Daghestanian | Aghul | agx | aghu1253 |
31 | Eurasia | Sino-Tibetan | Cantonese | yue | cant1236 |
32 | Eurasia | Sino-Tibetan | Dejongke | sip | sikk1242 |
33 | Eurasia | Turkic | Sakha | sah | yaku1245 |
34 | Eurasia | Turkic | Turkish | tur | nucl1301 |
35 | Eurasia | Uralic | Erzya | myv | erzy1239 |
36 | Eurasia | Uralic | Estonian | ekk | esto1258 |
37 | Eurasia | Uralic | Finnish | fin | finn1318 |
38 | Eurasia | Uralic | Hungarian | hun | hung1274 |
39 | Eurasia | Uralic | Komi-Zyrian | kpv | komi1268 |
40 | Eurasia | Uralic | Mari | mrj | west2392 |
41 | Eurasia | Uralic | Nganasan | nio | ngan1291 |
42 | Eurasia | Uralic | North Saami | sme | nort2671 |
43 | Eurasia | Uralic | Udmurt | udm | udmu1245 |
44 | Eurasia | Uralic | Veps | vep | veps1250 |
45 | North America | Eskimo-Aleut | West Greenlandic | kal | kala1399 |
46 | North America | Mayan | Yucatec Maya | yua | yuca1254 |
47 | North America | Muskogean | Choctaw | cho | choc1276 |
48 | Papunesia | Austronesian | Indonesian | ind | indo1316 |
49 | Papunesia | Austronesian | Kankanaey | kne | kank1243 |
50 | Papunesia | Border | Kilmeri | kih | kilm1241 |
51 | Papunesia | Nuclear Trans New Guinea | Mian | mpt | mian1256 |
52 | Papunesia | Timor-Alor-Pantar | Bunaq | bfn | buna1278 |
53 | South America | Araucanian | Mapudungun | arn | mapu1245 |
54 | South America | Naduhup | Hup | jup | hupd1244 |
55 | South America | Pano-Tacanan | Shipibo-Konibo | shp | ship1254 |
3.2 Questionnaire
As mentioned in Section 2.1, our study is largely inspired by Nichols et al.’s (2004) cross-linguistic research asking to what extent the members of pairs of intransitive and transitive verbs are involved in derivational relations with each other, based on translational equivalents for a set of 18 concept pairs. For our data collection we used a list of 41 property concept pairs that stand in an antonymic relation to one another, accompanied in the questionnaire by one or several nouns that they typically modify, e.g., ‘long versus short stick’, ‘deep versus shallow river/lake’. The questionnaire is addressed to experts in particular languages, who are asked to translate the expressions given in the metalanguage into the language under study, to gloss the data and, if possible, to answer a few further questions to help us in our analysis of the data. (The original questionnaire sent out is in English, but in some cases, language experts semi-informally translated the questionnaire into other languages for eliciting responses from their language consultants.) First, when several different alternatives are given for a particular concept in a language, we are interested in the potential semantic and pragmatic differences among them. We are also interested in the morphosyntactic features of the property concepts in the language, i.e., in their word-class affiliation (whether there is a separate class of adjectives in the language or whether such concepts are lexicalized as verbs and nouns), and in the predicating and modifying constructions used for property concepts in general and for those in the list in particular. In the questionnaire, we also ask several questions pertaining to negation (standard and ascriptive) and to the markers used in the expression of antonymy in case there are such examples in the data. Thus, the data collected could be used to address various additional aspects of antonymy. However, in this study, we focus our attention on the formal distinction between lexical and derived antonymy.
Our method of data collection follows the long tradition of collecting cross-linguistic data by means of concept sets, or word lists, for different purposes, including research on various morphosyntactic and word-formation phenomena (e.g., Beavers et al. 2017, 2021; Haspelmath 1993; van Lier 2016; Ye 2021, among others) – see also List et al. (2016) for examples of different types of concept lists. Remarkably, there is hardly any discussion of the foundational matters in the ample research based on concept lists, and the same concept lists are recycled across multiple studies without further notice. The assumption underlying such endeavors, in most cases tacit, is that the concepts in the lists are easily understandable and easily translatable across languages, which is, of course, a very crude simplification of linguistic reality. On the contrary, as repeatedly confirmed by the accumulated evidence of the growing field of lexical typology, there are very few meanings that translate easily among languages. In fact, so-called translational equivalents in two languages hardly ever mean exactly the same thing; among other things, they almost always differ in their denotational range and range of uses, and, in general, in their place in the lexical system of the language and their relations to other lexical items – cf. Kibrik (2012) for convincing examples of the incongruity between the English list of presumably basic verbal concepts and its correspondences in Russian and Koyukon.
However, the extent to which the (exact) semantics and (precise) semantic identity of the items in the lists matter depends on the research goals. For instance, the 18 verb pairs in Nichols et al. (2004) have been chosen on pragmatic grounds, as representing certain combinations of general parameters, corresponding to frequently encoded situations and having approximate translational equivalents in many languages. Stricter requirements on semantic comparability would create obstacles to achieving the principal objective of the study – cf. Koptjevskaja-Tamm (2008: 35–37) for a discussion of concept lists in lexical typology.
At the same time, it is necessary to avoid obvious pitfalls in the method, stemming from the frequent ambiguity or multifunctionality of many forms. In the extreme case one and the same form in the concept list corresponds to two different homonymic words (e.g., light – ‘not dark’ vs. ‘not heavy’) or has two very different meanings (e.g., dull – ‘blunt, not sharp’ vs. ‘lacking interest or excitement’), and the translation may relate to the wrong one, i.e., not to the one envisaged in the original concept list (cf. List et al. 2016). In other cases, the differences between different senses, readings, or uses of one and the same form are significantly subtler (and therefore not as easily appreciable), but may receive different translations in other languages. For instance, sharp in English corresponds to tranchant and aigu/pointu in French: the first one applies to knives, swords, saws, and other instruments with a blade (a “functional edge”), while the second applies to needles, arrows, and other “instruments with a functional endpoint” (Kyuseva et al. 2022). German and Italian also have special ‘sharp’ adjectives for needles and arrows (spitz and appuntito), as opposed to the more general adjectives scharf and affilato. If ‘sharp’ is included in a list of concepts with no clear indication as to which of its uses is meant, its translations into some languages may end up being incongruent with each other (see Section 2.2 for the discussion of antonymy as applying to words used in particular senses).
One way of improving the applicability of a concept list as a tool for collecting data and its value as tertium comparationis in cross-linguistic research consists in supplying concept labels with short definitions, following the line adopted in Concepticon (List et al. 2016, 2023). Continuing with our example, ‘sharp’ is defined in Concepticon 3.1.0 as ‘having the ability to cut easily’ (List et al. 2023: https://concepticon.clld.org/parameters/1396), which would not apply to sharp arrows and needles.
Our methodology makes use of the frequently noted close connection between the interpretation of a property concept word and the entities it applies to, as seen in the ‘sharp’ example: knives and saws have an edge and can cut (easily), while sharp needles and arrows will be able to pierce something, but hardly ever to cut anything.[4] Following this idea, we have provided the antonym pairs in the list with one or several nouns that they typically modify in a particular reading/use or, in several cases, in two or more particular readings/uses, e.g., ‘sharp versus dull knife/arrow/spear/needle’, which may result in several different translations. By adding nouns from a specific category, we hope to have achieved a sufficient degree of semantic congruity among the translational equivalents in our sample.
As explained in the introduction to the questionnaire, we are not interested in adnominal modification as such. Rather, the reason for collecting the data in the form of noun phrases with property concept modifiers is to avoid confusion with clausal negation. During the project we have become increasingly aware of the fact that not all languages allow or prefer adnominal modification to the same degree. We have therefore kept open the possibility of giving the property concepts in predicative position as well. But in those cases we have to be sure that we are dealing with derivational negation on the word level and not with clausal negation. In other words, the property expression may be given in a predicative position as long as negation is attached to the term on the lexical/phrasal level as in (1a), but the clause may not be negative as in (1b).
(1) a. The president is happy vs. The president is unhappy.
    b. The president is happy vs. The president is not happy.
When the property concepts are given in predicative position, it is especially important that the respondents specify how the negative expressions in the data relate to clausal negation. It is perhaps worth pointing out that clausal negation is a morphosyntactic construction whose function is to negate a clause, and it typically instantiates sentential negation (as defined by Klima [1964]; cf. Jespersen [1917] for “nexal negation”), but sentential negation can be expressed by other means as well (for a discussion, see Miestamo 2005: 3–5, 39–42).
The list of antonym pairs <Antonym 1, Antonym 2> follows the broad semantic classification of property concepts suggested in Dixon’s (1977) seminal work and further developed in Dixon and Aikhenvald (2006). Dixon (1977) differentiates among core semantic types, found in languages with both large and small adjective classes, peripheral semantic types, associated with medium and large adjective classes, and, finally, other semantic types, associated with large adjective classes in some languages. We have aimed at a relatively balanced representation of Dixon’s three classes and have also made sure that all canonical antonyms (cf. Paradis et al. 2009: 381) are included. We have also tried to ensure a good representation of the two main types of opposition distinguishing the two members in an antonym pair: valence (positive vs. negative)[5] and magnitude (more vs. less).[6] We aimed at being as consistent as possible in the choice of the order between the members of an antonym pair, according to the following main principles:
When the two antonyms differ in their valence, Antonym 1 is the more positive one (e.g., ‘good’ vs. ‘bad’, ‘rich’ vs. ‘poor’); this will enable us to test Hypothesis 2.
When the two antonyms differ in their magnitude, Antonym 1 designates the greater degree (e.g., ‘big’ vs. ‘small’, ‘heavy’ vs. ‘light’); this will enable us to test Hypothesis 3.
The two principles together covered the lion’s share of the antonym pairs in the questionnaire. In the few cases where these principles gave contradictory results (e.g., ‘light’ vs. ‘heavy’) or could not apply straightforwardly (e.g., ‘black’ vs. ‘white’), we simply resorted to our intuition as speakers of several European languages, partly supported by the studies on preferred antonym sequencing in English, Serbian, and Mandarin Chinese, such as Jones (2002), Ingram et al. (2016), Kostić (2015), and Wu (2017).[7] On the whole, Antonym 1 and Antonym 2 more or less correspond to what is usually understood as the semantically unmarked and marked members, respectively, and this dichotomy is thus directly relevant to testing Hypothesis 1 as well. It is of course conceivable that semantic markedness relations between antonyms are not always translatable among languages, but we hope that our methodology is sufficiently consistent for the purposes of the study; we will briefly return to this issue in Section 5.
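For illustration, the two ordering principles can be stated as a small decision procedure. The sketch below is hypothetical (it is not part of our questionnaire or analysis materials) and assumes that each member of a pair carries optional valence (‘positive’/‘negative’) and magnitude (‘more’/‘less’) annotations of the kind listed in Table 2 below; pairs where the principles conflict or do not apply are left to be ordered by other means, as described above.

```python
# Hypothetical helper illustrating the two ordering principles; the annotation
# values ("positive"/"negative", "more"/"less") follow the scheme of Table 2.
def order_by_feature(member_a, member_b, feature, first_value):
    """If the two members differ on `feature`, return them ordered so that the
    member carrying `first_value` comes first (as Antonym 1); otherwise None."""
    a, b = member_a.get(feature), member_b.get(feature)
    if a is None or b is None or a == b:
        return None
    return (member_a, member_b) if a == first_value else (member_b, member_a)

def order_antonym_pair(member_a, member_b):
    """Apply the two principles; return None when they give contradictory
    orderings or when neither applies (e.g., 'black' vs. 'white')."""
    by_valence = order_by_feature(member_a, member_b, "valence", "positive")
    by_magnitude = order_by_feature(member_a, member_b, "magnitude", "more")
    if by_valence and by_magnitude and by_valence != by_magnitude:
        return None  # contradictory results: ordering decided by other means
    return by_valence or by_magnitude

# Example: 'good' vs. 'bad' differ in valence, so 'good' is Antonym 1.
good = {"concept": "good", "valence": "positive"}
bad = {"concept": "bad", "valence": "negative"}
antonym_1, antonym_2 = order_antonym_pair(bad, good)
print(antonym_1["concept"], "vs.", antonym_2["concept"])  # good vs. bad
```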
Our original list comprised the following semantic types of property concepts, which are targeted by Hypothesis 4:
Core semantic types:
dimension (1–6)
age (7)
value (8–9, 34, 36)
color (10)
Peripheral semantic types:
physical property (11–18, 25, 27, 28)
human propensity (19–24, 26)
speed (29)
Other semantic types:
difficulty (30)
similarity (31–32)
qualification (33–39)
position (40–41)
The questionnaire sent out to the language experts included a total of 41 antonym pairs <Antonym 1, Antonym 2>. After manually going through the data, we discovered that three of the antonym pairs (‘light’ vs. ‘dim’, ‘like/similar’ vs. ‘unlike/different’, ‘same’ vs. ‘different/other’) received uncertain or misinterpreted responses for multiple languages and were consequently excluded from further analysis. Furthermore, two of the antonym pairs (‘possible’ vs. ‘impossible’ and ‘probable’ vs. ‘improbable’) were in many responses treated as identical, i.e., there was no difference in the lexicalization of the concepts along the lines of distinction made in English. We resolved this by collapsing these two antonym pairs into a single one (‘possible/probable’ vs. ‘impossible/improbable’). The original questionnaire as it was sent out to the language experts is available in the OSF repository (https://osf.io/8kuzh/). Table 2 shows the resulting 37 antonym pairs from our questionnaire data that we used in our analysis, split into antonym (members) 1 and 2, and categorized into groups based on class (core vs. peripheral vs. other) and semantic type. When applicable, the antonym members are provided with their values for valence (positive vs. negative) and magnitude (more vs. less) – one antonym pair, ‘rich/wealthy’ versus ‘poor’, is included in both valence and magnitude subsets.
Table 2: Antonym pairs from the questionnaire data used in the analysis (valence: positive/negative; magnitude: more/less).

| Antonym 1 | Valence | Magnitude | Antonym 2 | Valence | Magnitude | Class | Semantic type |
|---|---|---|---|---|---|---|---|
| large/big | | More | small/little | | Less | core | dimension |
| long | | More | short | | Less | core | dimension |
| wide/broad | | More | narrow | | Less | core | dimension |
| deep | | More | shallow | | Less | core | dimension |
| tall/high | | More | short/low | | Less | core | dimension |
| thick | | More | thin | | Less | core | dimension |
| old | | More | young/new | | Less | core | age |
| good | Positive | | bad | Negative | | core | value |
| beautiful | Positive | | ugly | Negative | | core | value |
| black | | | white | | | core | color |
| hard | | | soft | | | peripheral | physical property |
| heavy | | More | light | | Less | peripheral | physical property |
| sharp | Positive | | dull | Negative | | peripheral | physical property |
| wet | | | dry | | | peripheral | physical property |
| clean | Positive | | dirty | Negative | | peripheral | physical property |
| hot/warm | | | cold/cool | | | peripheral | physical property |
| bright/light | Positive | | dark | Negative | | peripheral | physical property |
| rich/wealthy | Positive | More | poor | Negative | Less | peripheral | human propensity^a |
| happy | Positive | | sad/unhappy | Negative | | peripheral | human propensity |
| clever/wise | Positive | | stupid/unwise | Negative | | peripheral | human propensity |
| kind/friendly | Positive | | mean/hostile | Negative | | peripheral | human propensity |
| good | Positive | | bad/evil | Negative | | peripheral | human propensity |
| generous | Positive | | stingy | Negative | | peripheral | human propensity |
| well/healthy | Positive | | sick/ill | Negative | | peripheral | physical property |
| brave | Positive | | cowardly | Negative | | peripheral | human propensity |
| strong | Positive | | weak | Negative | | peripheral | physical property |
| alive | Positive | | dead | Negative | | peripheral | physical property |
| fast/quick | | | slow | | | peripheral | speed |
| simple/easy | Positive | | difficult/hard | Negative | | other | difficulty |
| true | Positive | | false | Negative | | other | qualification |
| normal | Positive | | strange/odd | Negative | | core | value |
| common | | | uncommon | | | other | qualification |
| important | Positive | | unimportant | Negative | | other | qualification |
| possible/probable | Positive | | impossible/improbable | Negative | | other | qualification |
| correct | Positive | | incorrect | Negative | | other | qualification |
| near | | Less | far-away/distant | | More | other | position |
| right | | | left | | | other | position |
^a The pair ‘rich versus poor’ is not included in Dixon’s (1977) list, but some of the papers in Dixon and Aikhenvald (2004) mention these concepts – e.g., as human propensity in Genetti and Hildebrandt (2004: 81), but as physical property in England (2004: 141).
3.3 Data processing
As mentioned in Section 3.2, the kind of data we have collected with the questionnaire can be used to address many aspects of antonymy. In this paper, we focus on the formal expression of antonymy, paying attention to whether the members of antonym pairs are expressed with neg-constructed lexical forms or with forms that are not neg-constructed, which we will call plain.[8] For any antonym member, a language may have both forms available for a single concept, or only one of them, or neither. We then ask how the plain and neg-constructed forms are distributed between the members of the antonym pairs, across antonym pairs and across languages – that is, for which concepts a word form (or formation strategy) exists. The two strategies are not mutually exclusive by default, as both can co-exist (e.g., sad and unhappy represent plain and neg-constructed forms, respectively, for the same concept in English).
To determine what counts as a plain versus a neg-constructed form, we used the following criteria. We count as neg-constructed those forms in which the added derivational element has the semantics of negation in a broad sense, and as plain those forms that are not derived with an element that has the meaning of negation in a broad sense. Negation in a broad sense includes, in addition to pure negation, cases where the negative force is only partial, such as ‘little’ or ‘few’, cf. the French expression peu profond for ‘shallow’ discussed below. Clear instances of plain forms are found in examples such as big–small and fast–slow, where both terms are morphologically simple. Clear instances of neg-constructed forms are found in examples such as happy–unhappy and wise–unwise, where the latter term is formed by adding an element which clearly has negation in its semantics. Thus, the first term in both of these pairs is referred to as plain, whereas the second is referred to as neg-constructed. The latter is a more technical term for what we referred to as expression via negative derivation in formulating our hypotheses in Section 2.4; conversely, plain corresponds to expression without negative derivation.
Furthermore, to count as neg-constructed, a term does not have to share its stem with its antonym or be derived from it. For example, the Turkish term imkansız ‘impossible’, derived from imkan ‘possibility, facility’ with the caritive/privative suffix -sIz, counts as neg-constructed, even though its opposite olası ‘possible’ is not formally related to it at all.
Note also that a plain form does not have to be simplex, but may be derived, as long as the derivational marker is not negative in its semantics. An example of a derived expression that counts as plain for our purposes is cowardly, which contains the derivational marker -ly, which is not negative in its semantics and does not turn the meaning of its base into its opposite. One of its opposites, fearless, will, however, count as neg-constructed due to the negative (caritive/privative) semantics of -less. In Estonian, both õnnelik ‘happy’ and õnnetu ‘unhappy’ are derived from õnn ‘happiness’, by means of the general adjective suffix -lik and the caritive/privative suffix -tu, respectively. For our purposes, only õnnetu ‘unhappy’ qualifies as neg-constructed due to the negative semantics of its derivational suffix, whereas õnnelik ‘happy’ counts as plain. Examples like this are relatively common in our database.
An example of a less clear case of constructed expression is provided by French peu profond ‘shallow’ (lit. ‘little deep’), where the added element peu ‘little’ is not straightforwardly negative in its semantics. For the purposes of this study, our broad definition of negative semantics includes meanings like ‘little’ that have only partial negative force. Note also that in this example, the added element is not morphologically bound to the base; morphological boundness is not a requirement for a term to count as constructed. What is crucial, however, is the status of the whole combination as a lexicalized property expression.
This brings us to cases like ‘sad’ in Hup hãwɨg hi-hũʔ- [heart/spirit fact-finish] ‘(to have) one’s heart/spirit be ending’, or ‘generous’ in Akan-Twi, yam ye [stomach be.good] ‘(someone’s) stomach is good’. Since we are interested in property words, such idiomatic phrasal expressions are disregarded at an earlier stage of data processing. Examples like these are fairly common in our database, in particular for concepts describing human propensities.
The process of coding the questionnaire data from the language experts into a uniform machine-readable format entailed giving categorical values to each possible word form type per antonym pair. This gives us four cells to fill for every table row representing an antonym pair: one for plain and one for neg-constructed for each member of the pair. In our data coding, we have used the values “yes” (word form exists for the antonym pair member), “no” (word form does not exist for the antonym pair member) and “-” (not applicable to the antonym pair member for the language in question, used if, e.g., the language only has a phrasal expression for the meaning or data is simply missing). The data were coded (i.e., interpreted and transferred from questionnaire responses to our annotated database) by the authors with the help of research assistants. All coded entries were checked and double-checked by the authors, and any inconsistencies or unclear cases were discussed and resolved after mutual consultation and agreement.
Table 3 shows how these different cases presented above are coded in our database.
Table 3: Examples of definitions in coding guidelines.

| Language, example | Antonym pair | Ant1 plain | Ant2 plain | Ant1 Neg-constructed | Ant2 Neg-constructed | Comments |
|---|---|---|---|---|---|---|
| English good versus bad | ‘good’ versus ‘bad’ | Yes | Yes | No | No | English not in sample; included for illustration. |
| English happy versus unhappy/sad | ‘happy’ versus ‘unhappy’ | Yes | Yes | No | Yes | English not in sample; included for illustration. |
| French profond versus peu profond | ‘deep’ versus ‘shallow’ | Yes | No | No | Yes | Peu ‘little’ has partial negative force; morphologically unbound. |
| Estonian õnnelik versus õnnetu | ‘happy’ versus ‘unhappy’ | Yes | No | No | Yes | Both õnnelik and õnnetu are derived from õnn ‘happiness’, with the general adjective suffix -lik and the caritive/privative suffix -tu. |
| Turkish olası versus imkansız | ‘possible’ versus ‘impossible’ | Yes | No | No | Yes | İmkansız is derived from imkan ‘possibility, facility’ with the caritive/privative suffix -sIz. |
| Hup hisoso versus hãwɨg hi-hũʔ- | ‘happy’ versus ‘unhappy’ | Yes | No | No | No | Hisoso ‘happy’ is a property word; ‘unhappy’ is an idiomatic verbal expression: hãwɨg hi-hũʔ- [heart/spirit fact-finish] ‘(have) one’s heart/spirit be ending’. |
For some languages, more than one antonym set is available per antonym pair, for instance when the language has several synonyms that all fall within the concept of the antonym pair as described in the questionnaire. The total number of data points from our questionnaire is 9,032. Since in this study we are interested only in what is possible per antonym pair and language, we conflate all such multiple data points by the presence of any “yes” value: if a language has a plain word form for one antonym set but not for another set, both belonging to the same antonym pair, the value for that language and antonym pair will be “yes” – thus, we give each language the same weight, namely one data point per potential word form per antonym pair. For example, English has clever/wise–stupid–∅–unwise for the antonym pair ‘clever/wise–stupid/unwise’, which would be coded as having a “yes” value for Ant1 plain (clever and wise), Ant2 plain (stupid), and Ant2 neg-constructed (unwise), but “no” for Ant1 neg-constructed (as there is no *unstupid form for the meaning ‘clever/wise’). This coding resulted in a data set of 8,021 data points, representing one value for each of the possible word forms (plain and neg-constructed) for each of the antonym members (1 and 2), for each language (n = 55) and antonym pair (n = 37). In this final data set, used for the subsequent analyses, we have also excluded any “-” values, as they represent absence of data.

The tidying, statistical analysis, and visualization of the data were done with the statistical language R 4.3.2 (R Core Team 2023). For data wrangling and analyses, we used {tidyverse} (Wickham et al. 2019), {readxl} (Wickham and Bryan 2023), {lme4} (Bates et al. 2015), {broom.mixed} (Bolker and Robinson 2022), and {emmeans} (Lenth 2022); for data visualization, we used {tidyverse} (Wickham et al. 2019), {scales} (Wickham and Seidel 2022), {maps} (Becker et al. 2021), {ggnewscale} (Campitelli 2022), {patchwork} (Pedersen 2022), {ggrepel} (Slowikowski 2023), {ggbeeswarm} (Clarke et al. 2023), and {sjPlot} (Lüdecke 2023).
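To make the conflation step concrete, the following is a minimal sketch in R (the language used for our analyses) of how multiple antonym sets can be collapsed into one presence value per language, antonym pair, member, and form type. The data frame and column names (coded, language, pair, member, form, value) are illustrative assumptions, not the actual names used in our database or scripts.

```r
library(tidyverse)

# Hypothetical long-format coding table: one row per language, antonym pair,
# member (ant1/ant2), form type (plain/neg_constructed), and antonym set;
# value is "yes", "no", or NA (our "-" code for missing/not applicable data).
coded <- tribble(
  ~language, ~pair,                       ~member, ~form,             ~value,
  "English", "clever/wise-stupid/unwise", "ant1",  "plain",           "yes",  # clever, wise
  "English", "clever/wise-stupid/unwise", "ant1",  "neg_constructed", "no",
  "English", "clever/wise-stupid/unwise", "ant2",  "plain",           "yes",  # stupid
  "English", "clever/wise-stupid/unwise", "ant2",  "neg_constructed", "yes"   # unwise
)

# Conflate multiple antonym sets: a cell is "yes" if any set provides that
# form, after dropping missing ("-") values.
conflated <- coded %>%
  filter(!is.na(value)) %>%
  group_by(language, pair, member, form) %>%
  summarise(value = if_else(any(value == "yes"), "yes", "no"), .groups = "drop")
```

In the real data, several rows per combination (one per antonym set) collapse into a single row at this step, which is what yields one data point per potential word form, member, antonym pair, and language.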
4 Results
In this section, we will present the results of our study. We will start by looking at individual languages – how frequent plain versus neg-constructed expression is in each sample language – and at individual antonym pairs – how big a proportion of the sample languages have a plain and/or neg-constructed expression for each antonym member concept. Remember that the two are not mutually exclusive as both expressions can exist side-by-side for a single concept in one and the same language, and we are thus looking at the presence of either expression, whether exclusive or co-occurring with the other type of expression. We will then move on to testing the hypotheses introduced in Section 2.4 above.
Let us start by looking at the frequency of plain versus neg-constructed expression in individual languages in our sample. Figures 2 and 3 show the proportion of plain and neg-constructed word forms identified across languages, with Ant1 and Ant2 separated. Looking across languages, we see that while there is variation in the number of antonym concepts expressed by plain (Figure 2) and neg-constructed (Figure 3) forms, there is a clear pattern with regard to the member: Ant2 is never higher than Ant1 in the percentage of plain forms, and Ant1 is never higher than Ant2 for neg-constructed forms (a few languages have no neg-constructed forms, of course). Thus, there is a consistent pattern across languages in the expected distribution and directionality, with the second member of the pair (Ant2) being more likely to have a neg-constructed form compared to the first member (Ant1). We also see that a few languages stand out from the rest in having many neg-constructed forms: namely, Lithuanian, Russian, and Slovak. The two figures show the presence of plain and neg-constructed forms independently of each other, illustrated clearly by a language like Lithuanian, which has both plain and neg-constructed forms for most of the items in the set.

Figure 2: The proportion of plain word forms found by language and antonym member. Black dots show the mean across the two antonym members.

Figure 3: The proportion of neg-constructed word forms found by language and antonym member. Black dots show the mean across the two antonym members.
The percentages in these figures are based on different numbers of concepts in different languages: 48 of the languages have data for all 37 antonym pairs; 4 languages for 36 pairs (Khowar, Kilmeri, Komi-Zyrian, Nganasan); and 3 languages for 35 (Shipibo-Konibo), 34 (Aghul), and 18 (Umpithamu) pairs, respectively – see Table 4. We have chosen not to exclude any of the languages with incomplete data (in particular Umpithamu), since we look at proportions within languages with regard to the relative distribution of plain and neg-constructed forms, and since these languages help diversify the sample beyond Indo-European and Uralic. In the case of Umpithamu, we see no neg-constructed forms whatsoever, but since we also observe this situation in other, unrelated, languages with better coverage – i.e., Gurenɛ, Indonesian, and Kilmeri – the lack of neg-constructed antonymy in Umpithamu is not necessarily due to the small data coverage. The low coverage for some languages is simply due to concepts lacking translations altogether, or to the translations provided not matching our inclusion criteria (e.g., using a clausal rather than lexical expression). Nonetheless, the proportional distribution within a language between plain and neg-constructed forms, as well as between the members, is relevant, and although the forms can co-exist within a single language for one and the same concept, the overall trend aligns with the hypothesis that Ant2 is more likely to have a neg-constructed form than Ant1.
Table 4: The number of antonym pairs with data points per language.
Language | Number of antonym pairs |
---|---|
Akan/Twi, Amharic, Basque, Bulgarian, Bunaq, Cantonese, Choctaw, Denjongke, Dutch (Flemish), Erzya, Estonian, Eton, Finnish, Georgian (Modern), German, Gurenɛ, Hebrew, Hungarian, Hup, Indonesian, isiNdebele, Italian, Japanese, Kankanaey, Khalkha Mongol, Korean, Lithuanian, Mapudungun, Mari, Mian, North Saami, Orungu, Punjabi, Romanian, Russian, Sakha, Sinhala, Slovak, Spanish, Swahili, Swedish, Turkish, Udmurt, Veps, Warlpiri, West Greenlandic, Wolof, Yucatec Maya | 37 |
Khowar, Kilmeri, Komi-Zyrian, Nganasan | 36 |
Shipibo-Konibo | 35 |
Aghul | 34 |
Umpithamu | 18 |
Figures 4 and 5 turn the perspective to individual concepts, showing the percentage of the sample languages with plain and neg-constructed expression for each antonym member concept, Ant1 and Ant2 appearing in separate graphs. As we can see, only two items have more languages using a neg-constructed than plain form: ‘unimportant’ (from the pair ‘important’ vs. ‘unimportant’) and ‘impossible/improbable’ (from the pair ‘possible/probable’ vs. ‘impossible/improbable’).

Figure 4: The proportion of languages that have plain and neg-constructed word forms per Ant1 item.

Figure 5: The proportion of languages that have plain and neg-constructed word forms per Ant2 item.
As is apparent from the figures seen so far, there is a general pattern of Ant1 having more plain but fewer neg-constructed word forms than Ant2. Although the second members (Ant2) of our antonym pairs are more likely to have neg-constructed forms than the first members (Ant1), there is also noticeable variation across items (see Figure 5). Furthermore, while neg-constructed word forms are clearly overrepresented in the antonym pairs lacking a plain word form (e.g., ‘impossible/improbable’ and ‘unimportant’), the distribution is not complementary, as certain antonym pairs (e.g., ‘happy’ vs. ‘sad/unhappy’) can have parallel forms within languages. That is, a language may have a triad of forms (e.g., both plain forms and one neg-constructed form) or a tetrad (all four possible forms attested for an antonym pair), using multiple expressions for one and the same concept.
The general tendencies conform to Hypothesis 1, according to which Ant2 should be more likely to accept neg-constructed expression than Ant1. The proportions of plain (Figure 2) and neg-constructed (Figure 3) word forms are visualized for individual languages, separated by antonym pair member (Ant1 vs. Ant2). Figures 2 and 3 show a striking visual difference between the two antonym members with regard to the prevalence of neg-constructed word forms. It is also apparent that some of the Indo-European languages – namely Lithuanian, Russian, and Slovak – are outliers in that they allow neg-constructed forms to a higher degree than other languages, for both Ant1 and Ant2.
To test our hypothesis statistically, we fitted a logistic mixed effects model comparing the existence of a neg-constructed form (yes vs. no) between antonym members (Ant1 vs. Ant2) – independently of whether a parallel plain form is available or not. The model included language, family, and antonym pair as random effects, with random slopes for language. Our model shows that Ant2 is significantly more likely to appear with a neg-constructed form than Ant1 (logit coefficient: 3.1577, SE = 0.2768, z = 11.41, p < 0.0001): the predicted probability of Ant1 having a neg-constructed form is <1 % (0.006), whereas the predicted probability for Ant2 is 13 % (0.126). We performed a likelihood ratio test of the model with the effect of member (Ant1 vs. Ant2) against a null model without member as an effect, which shows a significant difference between the models (χ²(1) = 98.958, p < 0.0001). Thus, our statistical model supports our hypothesis that Ant2 is more likely to have neg-constructed forms than Ant1 across the languages of our sample. The variance in the random effects is substantial, with language (variance 2.9377, SD = 1.7140) and antonym pair (1.4736, SD = 1.2139) showing the largest influence and family contributing least (0.3334, SD = 0.5774) – see Table 5.
Table 5: Member model statistics for fixed and random effects.

| Predictors | Odds ratios | CI | p |
|---|---|---|---|
| Member [ant2] | 23.52*** | 13.67–40.45 | <0.001 |
| Random effects | | | |
| σ² | 3.29 | | |
| τ00 language | 2.94 | | |
| τ00 antonym_pair | 1.47 | | |
| τ00 family | 0.33 | | |

*p < 0.05, **p < 0.01, ***p < 0.001.
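As an illustration of the modeling approach (not a verbatim reproduction of our analysis scripts, which are available in the OSF repository), a member model of this kind can be specified with {lme4} roughly as follows; the data frame and column names are assumed for the sake of the example.

```r
library(lme4)

# Member model sketch: presence of a neg-constructed form (1/0) predicted by
# antonym member (Ant1 vs. Ant2), with a by-language random slope for member
# and random intercepts for family and antonym pair.
m_member <- glmer(
  neg_constructed ~ member +
    (1 + member | language) + (1 | family) + (1 | antonym_pair),
  data = antonym_data, family = binomial
)

# Null model without the fixed effect of member, and the likelihood ratio
# test reported in the text.
m_null <- update(m_member, . ~ . - member)
anova(m_null, m_member)
```

The odds ratio reported in Table 5 corresponds to exponentiating the logit coefficient given above (e^3.1577 ≈ 23.5).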
Looking at the random effect of language family, Figure 6 illustrates that Indo-European is the family that stands out in the sample, especially due to languages like Lithuanian within the family that have a very high proportion of neg-constructed word forms. However, when the regression model is re-run on the data without the two largest families in the sample (i.e., Indo-European and Uralic), it nonetheless shows a significant difference between members (Ant1 vs. Ant2) in the distribution of neg-constructed word forms.[9]

Figure 6: Language family as a random effect in the member model.
Turning to Hypothesis 2, which says that evaluatively negative terms should be more likely to accept neg-constructed expression than evaluatively positive terms, we look at the effect of valence (positive vs. negative) on the presence of neg-constructed forms in the 21 relevant antonym pairs across our languages – see Table 6.
Table 6: Valence-only antonym pairs.
Antonym 1 | Valence | Antonym 2 | Valence |
---|---|---|---|
good | Positive | bad | Negative |
beautiful | Positive | ugly | Negative |
sharp | Positive | dull | Negative |
clean | Positive | dirty | Negative |
bright/light | Positive | dark | Negative |
rich/wealthy | Positive | poor | Negative |
happy | Positive | sad/unhappy | Negative |
clever/wise | Positive | stupid/unwise | Negative |
kind/friendly | Positive | mean/hostile | Negative |
good | Positive | bad/evil | Negative |
generous | Positive | stingy | Negative |
well/healthy | Positive | sick/ill | Negative |
brave | Positive | cowardly | Negative |
strong | Positive | weak | Negative |
alive | Positive | dead | Negative |
simple/easy | Positive | difficult/hard | Negative |
true | Positive | false | Negative |
normal | Positive | strange/odd | Negative |
important | Positive | unimportant | Negative |
possible/probable | Positive | impossible/improbable | Negative |
correct | Positive | incorrect | Negative |
Figure 7 shows a distribution similar to that of Ant1 versus Ant2, in that negative valence members are more likely to have a neg-constructed form. Each point represents a language, plotted on the y-axis according to the proportion (in percent) of antonym pairs for which it uses a plain and/or neg-constructed form (strategy); again, the proportions are not mutually exclusive due to the presence of parallel forms.

Figure 7: The proportion of plain and neg-constructed word forms found by language and valence (positive/negative). Languages with a proportion more than two standard deviations from the group mean are labeled.
We once more fitted a logistic mixed effects model to compare the presence of a neg-constructed form (yes vs. no) based on antonym valence (positive vs. negative). The model included language, family, and antonym pair as random effects, with random slopes for language. Our model shows that negative antonyms are significantly more likely to appear with a neg-constructed form than positive ones (logit coefficient: 3.3321, SE = 0.3433, z = 9.705, p < 0.0001): the predicted probability of a positive member having a neg-constructed form was 1 % (0.010), whereas the predicted probability for a negative member was 23 % (0.226). We performed a likelihood ratio test of the model with the effect of valence (positive vs. negative) against a null model without valence as an effect, which shows a significant difference between the models (χ²(1) = 93.33, p < 0.0001). Table 7 shows the random effects of the model, which mirror the pattern from the member model. Thus, our statistical model supports our hypothesis that negative antonyms are more likely to accept a neg-constructed form than positive ones, across the languages of our sample.
Table 7: Valence model statistics for fixed and random effects.

| Predictors | Odds ratios | CI | p |
|---|---|---|---|
| Valence [negative] | 28.00*** | 14.29–54.88 | <0.001 |
| Random effects | | | |
| σ² | 3.29 | | |
| τ00 language | 2.81 | | |
| τ00 family | 0.32 | | |
| τ00 antonym_pair | 0.65 | | |

*p < 0.05, **p < 0.01, ***p < 0.001.
Turning to Hypothesis 3, according to which terms denoting smaller magnitude should be more likely to accept neg-constructed expression than terms denoting greater magnitude, we look at the effect of magnitude (more vs. less) on the presence of neg-constructed forms in the ten relevant antonym pairs across our languages – see Table 8.
Table 8: Magnitude-only antonym pairs.
Antonym 1 | Magnitude | Antonym 2 | Magnitude |
---|---|---|---|
large/big | More | small/little | Less |
long | More | short | Less |
wide/broad | More | narrow | Less |
deep | More | shallow | Less |
tall/high | More | short/low | Less |
thick | More | thin | Less |
old | More | young/new | Less |
heavy | More | light | Less |
rich/wealthy | More | poor | Less |
near | Less | far-away/distant | More |
Figure 8 shows a distribution similar to those of the previous comparisons, in that members expressing less magnitude are more likely to have a neg-constructed form. Each point represents a language, plotted on the y-axis according to the proportion (in percent) of antonym pairs for which it uses a plain or neg-constructed form (strategy).

Figure 8: The proportion of plain and neg-constructed word forms found by language and magnitude (more/less). Languages with a proportion more than two standard deviations from the group mean are labeled.
Once more, we fitted a logistic mixed effects model to compare the presence of a neg-constructed form (yes vs. no) based on antonym magnitude (more vs. less). The model included language, family, and antonym pair as random effects. Our model shows that less-antonyms are significantly more likely to appear with a neg-constructed form than more-antonyms (logit coefficient: 2.9554, SE = 0.4417, z = 6.692, p < 0.0001): the predicted probability of a more-antonym having a neg-constructed form was <1 % (0.002), whereas the predicted probability for a less-antonym was 4 % (0.040). We performed a likelihood ratio test of the model with the effect of magnitude (more vs. less) against a null model without magnitude as an effect, which shows a significant difference between the models (χ²(1) = 79.1, p < 0.0001). Thus, our statistical model supports our hypothesis that less-antonyms are more likely to accept a neg-constructed form than more-antonyms, across the languages of our sample. As with the previous models, language has a large influence as a random effect – see Table 9.
Table 9: Magnitude model statistics for fixed and random effects.

| Predictors | Odds ratios | CI | p |
|---|---|---|---|
| Magnitude [less] | 19.21*** | 8.08–45.65 | <0.001 |
| Random effects | | | |
| σ² | 3.29 | | |
| τ00 language | 4.12 | | |
| τ00 family | 0.94 | | |
| τ00 antonym_pair | 0.51 | | |

*p < 0.05, **p < 0.01, ***p < 0.001.
These figures, tables, and statistical tests have shown that we do see the expected patterns across our sampled languages in that specific members of antonym pairs – namely, Ant2/negative/less members – are more likely to have neg-constructed expression than their Ant1/positive/more counterparts. However, we have also seen that there is variation across languages, and that some languages (notably Balto-Slavic ones) are much more likely to have neg-constructed forms than other languages. Figure 9 shows the proportion of neg-constructed forms out of all possible antonyms per language (i.e., relative to the total number of data points per language). We can see from this geographic distribution that Lithuanian and Russian stand out, but also that many of our sampled languages, across the globe, have at least some neg-constructed forms: the languages without any such forms (gray squares on the map in Figure 9) are not localized to a single geographic area or genealogical group.

Figure 9: Proportion of neg-constructed word forms available geographically. Darker red and larger size means a higher proportion of neg-constructed forms available. Languages without any neg-constructed forms are shown as gray squares.
Since our language sample is heavily skewed towards Indo-European and Uralic languages, we can look closer at these two language families. Figure 10 shows the distribution of neg-constructed proportions across subfamilies (groups) of our sample Indo-European and Uralic languages. Once again, the high proportion of Balto-Slavic is visible among the Indo-European languages, but interestingly mostly for three of the four languages: high for Lithuanian, Russian, and Slovak, but not particularly high for Bulgarian. The dashed lines mark the median of neg-constructed proportions across the entire language sample (not only these two language families). Whereas three of the four Indo-European subfamilies are above this line (only Germanic slightly below), the Uralic subfamilies are more evenly distributed around the total sample median. This may suggest a bias towards neg-constructed expression specifically in Indo-European languages. However, as mentioned above, the statistical preference for neg-constructed forms with Ant2 members was observable and significant even when removing Indo-European and Uralic altogether.

Figure 10: Proportion of neg-constructed word forms available by subfamily in Indo-European and Uralic. Dashed line shows median proportion across entire language sample (n = 55).
Hypothesis 4 predicts that Core oppositions should be more likely to be expressed with plain forms, whereas neg-constructed forms should be more likely in the category Other, with Peripheral pairs situated between Core and Other in this respect. Figure 11 shows the proportion of plain and neg-constructed forms across languages and the three classes of antonyms: Core, Peripheral, and Other. Visually, the figure points in the direction of our hypothesis: while there is an overwhelming preference for plain forms over neg-constructed forms, the proportion of neg-constructed forms rises from the Core class of antonyms to Peripheral and Other. Note that Figure 11 shows the proportion of forms out of all the word forms in the data, such that parallel forms (e.g., equivalent to English sad and unhappy) are represented as separate forms in each category.

Figure 11: The proportion of plain versus neg-constructed word forms overall per antonym class.
To test the hypothesis statistically, we constructed a logistic mixed effects model predicting the strategy used (plain vs. neg-constructed) from class (Core vs. Peripheral vs. Other). We performed a likelihood ratio test of the model with the effect of class against a null model without class as an effect, which shows a significant difference between the models (χ²(2) = 68.12, p < 0.0001). To compare the classes pairwise, we calculated estimated marginal means with Bonferroni correction and found significant differences between Core and Other (p < 0.0001) as well as between Peripheral and Other (p < 0.0001), but no significant difference between Core and Peripheral (p = 0.1671) – see Table 10. Thus, our statistical model partially supports our hypothesis in that the class Other is more likely to accept neg-constructed forms than the classes Core and Peripheral.
Table 10: Estimated marginal means with Bonferroni correction between semantic classes.
Pair | Estimate | SE | z ratio | p | |
---|---|---|---|---|---|
Core – Peripheral | −0.226 | 0.118 | −1.913 | 0.1671 | |
Core – Other | −1.003 | 0.129 | −7.766 | <0.0001 | *** |
Peripheral – Other | −0.777 | 0.111 | −7.028 | <0.0001 | *** |
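The pairwise comparisons in Table 10 can be obtained with {emmeans} on top of the fitted class model. The sketch below shows the general recipe, again with assumed data frame and column names rather than our actual scripts.

```r
library(lme4)
library(emmeans)

# Class model sketch: strategy coded as 1 for neg-constructed and 0 for plain,
# predicted by antonym class (Core vs. Peripheral vs. Other), with random
# intercepts for language, family, and antonym pair.
m_class <- glmer(
  is_neg_constructed ~ class + (1 | language) + (1 | family) + (1 | antonym_pair),
  data = strategy_data, family = binomial
)

# Estimated marginal means per class on the logit scale, followed by
# Bonferroni-adjusted pairwise contrasts (as in Table 10).
emm_class <- emmeans(m_class, ~ class)
pairs(emm_class, adjust = "bonferroni")
```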
Hypothesis 5 predicts that, based on the principle of economy, there should be a trade-off between lexical and derivational expression, i.e., there should be an inverse correlation between the frequency of plain versus neg-constructed expression in each antonym pair member/antonym pair/opposition type. Table 11 shows the distribution of double (plain and neg-constructed) versus single (plain or neg-constructed) forms between antonym members across all languages. We can see the previously observed pattern of plain forms being overwhelmingly more frequent than neg-constructed forms, and a plain form without a simultaneous neg-constructed form for a certain item is by far the most common situation. However, when it comes to items that do have a neg-constructed expression in a certain language, we can see that such forms quite often tend to have a simultaneous plain form as well.
Table 11: Number of antonym pairs with double (both plain and neg-constructed) versus single forms for the same item.

| | Neg-constructed: Yes | Neg-constructed: No |
|---|---|---|
| Plain: Yes | 340 | 3,297 |
| Plain: No | 230 | 144 |
This goes partially against the idea of an ‘either/or’ pattern: when a neg-constructed form is present, the parallel existence of two different strategies is in fact the most common case in our data. The general pattern across languages, however, is that single forms dominate over double forms and that the single form is a plain antonym, with the exception of a few languages in our sample – most notably Lithuanian – which are outliers compared to the overall situation. This is seen in Figure 12, which shows the proportion of items with double forms (both plain and neg-constructed) across all languages. Here we can see that only Lithuanian and Russian have more double forms than single forms – the median across languages is shown with a dashed line.

Figure 12: Proportion of items with double (both plain and neg-constructed) versus single forms across languages. Dashed line is the median across languages.
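A minimal sketch of how the counts in Table 11 and the per-language double-form proportions in Figure 12 can be derived, assuming a wide table with one row per language, antonym pair, and member, and one logical column per strategy (the object and column names are illustrative, not those of our actual database):

```r
library(tidyverse)

# strategy_wide: one row per language, antonym pair, and member, with logical
# columns has_plain and has_neg indicating whether each form type exists.

# Cross-tabulation of plain by neg-constructed presence (as in Table 11).
strategy_wide %>%
  count(has_plain, has_neg)

# Per-language proportion of items with double forms, i.e., both a plain and
# a neg-constructed form for the same item (as in Figure 12).
double_by_language <- strategy_wide %>%
  group_by(language) %>%
  summarise(prop_double = mean(has_plain & has_neg), .groups = "drop") %>%
  arrange(desc(prop_double))
```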
Similarly, Figure 13 shows the proportion of languages with a double strategy across items, illustrating that all items have more single than double forms, and that only a small group of items have double forms across more than 20 % of the languages, e.g., ‘sad/unhappy’, ‘incorrect’, ‘strange/odd’, ‘stupid/unwise’, ‘false’, ‘mean/hostile’, ‘ugly’, and ‘uncommon’ (notably all negative valence words).

Figure 13: Proportion of languages with double (both plain and neg-constructed) versus single forms across items. Dashed line is the median across items.
The overall conclusion from Table 11 and Figures 12 and 13 is that there is no sign of a trade-off between lexical and derivational expression in our data, and thus no support for Hypothesis 5.
We have now tested our five hypotheses, most of which relate to plain versus neg-constructed expression of antonymy in different classes of antonym concepts. One further classification we introduced in Section 3.2 above had to do with the semantic domains of antonyms. We did not propose any specific hypotheses related to these semantic domains, but it will nevertheless be interesting to see whether they exhibit any differences as to the frequency of plain versus neg-constructed expression. There are three domains that are represented by more than a couple of antonym pairs in our questionnaire: dimension, human propensity, and physical property. We will now look at their likelihood of showing plain versus neg-constructed forms. Figure 14 visualizes plain and neg-constructed expression in the three domains.

Figure 14: The presence and absence of plain and neg-constructed word forms by semantic type.
As Figure 14 shows, plain forms are most frequent across these three semantic types of antonyms, but slightly less frequent among human propensity antonyms, which instead have a somewhat higher proportion of neg-constructed forms.
5 Discussion and conclusions
As has been shown in Section 4, the majority of our five initial hypotheses are supported in our data. Hypothesis 1 finds support in that semantically marked members in antonym pairs (operationalized as Ant2, i.e., the second member in an antonym pair) are indeed more likely to accept expression through negative derivation than their semantically unmarked counterparts (i.e., Ant1). In most cases this is due to the cumulative effect of the criteria of valence and magnitude. However, this is also true for those cases where these criteria are not relevant, but the latter group is relatively small and fairly heterogeneous and the evidence is relatively weak.[10] Addressing these dimensions specifically, starting with valence (evaluation), Hypothesis 2 is supported as evaluatively negative terms are more likely to accept expression through negative derivation than their evaluatively positive counterparts. Similarly, support for Hypothesis 3 is found as terms denoting smaller magnitudes are indeed more likely to accept expression through negative derivation than their counterparts denoting greater magnitude. As to Hypothesis 4, it is partially supported as oppositions involving property concepts from core semantic types are more likely expressed with non-derived forms than those involving property concepts from the category “other semantic types”. However, there are no statistically significant differences between oppositions involving property concepts from core versus peripheral semantic types, contrary to the expectations. In other words, we do not see the “expected” full cline across the classes Core/Peripheral/Other with regard to the existence of neg-constructed forms. As for Hypothesis 5, according to which there should be a trade-off between expression through negative derivation and expression without such derivation, it does not find support in our data. When a neg-constructed form is present, we normally also find a (near) synonymous plain form, resulting in the simultaneous existence of two different strategies (e.g., happy–sad/unhappy). However, the general pattern across languages is that single antonym forms are dominant over double forms and that the single form is expressed by a plain antonym.
One general pattern that our data show is that neg-constructed forms are fairly rarely used in the expression of antonymy across the languages of our sample. On the basis of what was previously known from European languages, we would have expected a wider use of neg-constructed forms, and it is a reasonable question to ask why this expectation is not borne out. We believe that there are good reasons for this. In our view, there are two different factors (forces) that seem to interact and partly neutralize each other in the shaping of neat antonymic pairs with a recurrent formal relation between the two members: “goodness of antonymy” and textual frequency.
To start with the former, as has been repeatedly pointed out in research within different traditions (see Section 2.2 for details), lexical oppositions taken as antonymy cover a fairly heterogeneous set, with some contrasts being more easily recognized and conventionalized, and the rest much more context dependent and pragmatically determined (Herrmann et al. 1986). Dimensional adjectives have been pointed out as particularly well structured with respect to antonymy and coming in antonymic pairs, with oppositions in many other domains frequently involving clusters of adjectives at the opposite poles (Bierwisch and Lang 1989; Morzycki 2015: 138–140). Oppositions in speed, luminosity, strength, and merit (value) have also been shown to be strongly entrenched in memory and conventionalized as pairs, i.e., to stand out as “canonical antonyms” (Paradis et al. 2009; Willners and Paradis 2010).
However, most of the concepts in these semantic types are also very frequently talked about. It is well known that concepts that are frequently talked about tend to be lexicalized. Zipf’s (1949) insight that the frequency of concepts is crucial for the choice between basic and derived expressions has been confirmed for various phenomena, e.g., for the lexical causatives improve (‘make good’) and reduce (‘make small’), as opposed to the derived causatives sad(d)-en (‘make sad’) and hard-en (‘make hard’) (Haspelmath 2008: 18) – see also Montserrat (2010: 108) for more examples and discussion. In other words, since the best examples of antonymic pairs (found in the core and to a certain extent in peripheral semantic classes) in our questionnaire belong to the most frequently occurring property concepts, it is not surprising that both of their members tend to be lexicalized as plain forms. From this point of view, we would therefore expect to encounter neg-derived expression across languages in antonym pairs for the less frequent concepts, found in the class “other semantic types” and also in many pairs within the peripheral semantic class. However, it is in these categories that the pairing of opposite concepts is much less obvious (cf. brave/bold/courageous vs. cowardly/timid/fearful mentioned in Section 2.2), not to mention the fact that some of them are not necessarily expressed as property words (which also underlies Dixon’s (1977) distinction among the three semantic categories), cf. ‘unhappy/sad’ in Hup hãwɨg hi-hũʔ- [heart/spirit fact-finish] ‘(to have) one’s heart/spirit be ending’, or ‘generous’ in Akan-Twi, yam ye [stomach be.good] ‘(someone’s) stomach is good’, discussed in Section 3.3.
Interestingly, though, there are languages in which even some oppositions in the core semantic type physical dimension are expressed by negative derivation. In our sample, this is found most clearly in Hup and Shipibo-Konibo. Hup has no lexical expression for, e.g., ‘small’ or ‘short’; these are instead expressed by negating ‘big’ and ‘long’, respectively, see (2).
(2)
a. pã̌t (tɨh=)w’ǝt
   hair attr=long
   ‘long hair’
b. pã̌t w’ǝt
   hair long
   ‘the hair is long’
c. pã̌t (tɨh=)w’ǝt ʔap
   hair attr=long neg
   ‘not long hair’
d. pã̌t w’ǝt-nɨh
   hair long-neg
   ‘the hair is not long / is short’
(Patience Epps, p.c.)
Similarly, Shipibo-Konibo expresses the concepts of ‘short’ and ‘narrow’ by negating their respective antonyms ‘long’ and ‘broad’, and Mapudungun expresses, e.g., ‘short’ and ‘narrow’ using an element that counts as negation in a broad sense as defined above, namely ‘small’ as a derivational element. We find several examples of derived expression of oppositions among core oppositions in sampled languages from South America. Outside our sample, we have found derived expression of oppositions in core physical dimensions in a few additional languages of that macroarea, e.g., in Irántxe-Münkü spoken in Brazil (Montserrat 2010: 108): jamã ‘small’ versus jamã-pu ‘big-neg’.
Outside of South America, we find few examples of derivation of oppositions in core physical dimensions. Tlingit, a Na-Dene language spoken in Alaska, British Columbia, and Yukon, expresses the semantically marked member of several antonym pairs in the core and peripheral classes by negating their respective antonyms, among others, ‘bad’ and ‘weak’ (Cable 2018). Derived expression involving negation for some of the core (dimension, age, and value) and peripheral (sharpness) opposites is also found in Mobilian Jargon (or, “Chickasaw–Choctaw trade language”, a pidgin language formerly used on the coast of the Gulf of Mexico), e.g., ʧeto-kʃo ‘big-neg’ for ‘small’ (alongside plain forms esketene, oʃe), fala-kʃo ‘long-neg’ for ‘short’ (alongside yoʃkolole), sepe-kʃo ‘old-neg’ for ‘new’ (alongside hemona), ʧokma-kʃo ‘good-neg’ for ‘bad’, and alokpa-kʃo ‘sharp-neg’ for ‘dull’ (Drechsel 1996). However, derivation in antonymic pairs appears to be uncommon in pidgin and creole languages on the whole (Mikael Parkvall, p.c.). This may come as a surprise with regard to pidgin languages, which are known for their smaller lexica and the ample use of various strategies for “making do” with limited lexical resources (Juvonen 2016). The Andamanese language Akajeru has eleo ‘small’ versus eleo-pʰo ‘big-neg’ (Abbi 2013: 109). A very interesting case comes from the Ni (or, Loloish) group within Lolo-Burmese (Mran-Ni) languages, some of which have developed a systematic formal opposition between the ‘bigger’ and ‘smaller’ dimensional antonyms, as detailed in Bradley (1995). This paradigmatic contrast is particularly salient and systematic in the N(u)oso (Northern Yi) varieties spoken in Sichuan in China, in which the two antonyms for several physical dimensions share the same stem, but differ in their prefix: /Ɂa/ for the ‘bigger’ and /Ɂi/ for the ‘smaller’ member, e.g., Ɂa³³hmu³³ ‘high’ versus Ɂi⁴⁴hmu³³ ‘low’ or Ɂa³³fifi³³ ‘wide’ versus Ɂi⁴⁴fifi³³ ‘narrow’ in Shengza (Bradley 1995: 17).
Finally, one physical dimension concept that is relatively commonly expressed via negative derivation across languages is ‘shallow’, as is well known even in some Romance languages (cf. French peu profond ‘shallow’, lit. ‘little deep’).
The physical dimension can also be used to iconically depict size in the visual modality, but also quantity and valence contrasts through metaphorical use of space. For example, Woodin et al. (2020) show how the size and shape of the hands correspond to numerical quantity in co-speech gesture (e.g., larger distance between the two hands when discussing large quantities, and vice versa), and Börstell and Lepic (2020) show that positive valence concepts across sign languages are more likely to be articulated upwards in space compared to their negative valence antonyms, following the spatial metaphor good is up. These examples show how language can express antonymic relationships in yet another formal contrast (i.e., the iconic use of space and metaphorical mappings) in the gestural–visual modality employed in both (co-speech) gesture and sign languages. However, the non-linear and simultaneous form-distinction used here is different from both plain and neg-constructed expressions as defined in this study: first, the forms may be plain but nonetheless formally related (e.g., have reversed articulation); second, one form may be (historically) derived from the other, but the directionality of derivation may be impossible to establish due to neither form necessarily being more complex (i.e., the magnitude of each phonetic form is the same).
Coming back to Zimmer’s (1964) observations about the relationship between affixal and sentential negation (see Section 2.3 above), our results clearly show that derivational negation varies across languages as to how similar or different it is to standard negation. There are languages in which the derivational expression of antonymic opposition looks identical to standard negation. For example in Lithuanian, the negative prefix ne- attaches to finite verbs to express standard negation and to adjectives to express antonymic opposition. On the other hand, in many languages, the two functions are expressed in completely different negative constructions. In Finnish, for example, standard negation is expressed with a negative auxiliary that takes person-number inflection and the lexical verb is in a non-finite form, while the derivational negator on adjectives is the prefix epä-. Our study does not address this issue systematically, but based on our data, we can make some more impressionistic observations that conform to those made by Zimmer. If we look at the three languages that stand out in Figure 3 as showing an exceptionally high proportion of neg-constructed expression (Lithuanian, Russian, and Slovak) or the four languages that stand out in Figure 12 as showing an exceptionally high proportion of double forms (Lithuanian, Russian, Slovak, and Spanish), we can observe that the derivational construction is quasi-identical to standard negation in these languages. It must of course be noted that these are all Indo-European languages spoken in Europe, and three of them are close to each other areally and genealogically (Balto-Slavic). This suggests that more standard-negation-like derivational constructions may be more productive, but a more systematic look at the data is needed to say anything more definitive about the matter – such an investigation will be left for future work. It is worth mentioning that the co-existence of plain and neg-constructed antonyms resulting in triads and even tetrads of antonyms often creates additional contrasts in functions and meanings among the forms. These include elaboration of gradability (e.g., Russian umnyj ‘clever’ – ne-glupyj ‘neg-stupid = fairly clever’ – ne-umnyj ‘neg-clever = fairly stupid’ – glupyj ‘stupid’), but also specialization of particular antonym forms for distinct word senses or different entities of the semantically unmarked counterpart (e.g., alive – dead – undead in English, where undead ≠ alive, or the Russian contrasts between glubokij ‘deep’ – melkij ‘shallow’ in reference to rivers, lakes, etc. and glubokij ‘deep’ – ne-glubokij ‘neg-deep’ in reference to snow). It is not unreasonable to think that the existence of neg-constructed forms alongside plain forms may increase through analogy within a language, such that the wide use of this strategy leads to it being more productive and employed even more frequently and with more words – which could to some extent explain the large gap between the languages with the most productive use (e.g., Lithuanian) compared to the others.[11]
At the beginning of this paper, we mentioned two constructed or fictional languages, Esperanto and Newspeak, that make use of negative derivation in a systematic way to express antonymic relations. On the basis of our study, we can now conclude that such a strategy is not something we find to any larger extent in natural languages. Even though occasional examples of neg-constructed antonyms occur in various languages across the globe, this strategy does not seem to be employed systematically across different antonymic pairs and different semantic types, at least not in any of the languages considered in this study. This makes antonymy very different from the examples of typical derivational categories used to illustrate the notion of lexical motivation in Section 3 – e.g., the pairs of intransitive and transitive verbs studied in Nichols et al. (2004). It looks like the notions of oppositeness and antonymy are simply too vague, fluid, local, and context dependent to be generalized as a basis for structuring the lexicon and to acquire a dedicated formal marker – an insight which, ultimately, is in line with understanding antonymy as an umbrella term covering many quite different phenomena.
Orwell’s and Zamenhof’s ideas were undoubtedly based on their knowledge of European – primarily Germanic and Slavic – languages, which make regular use of negative affixes and are also the languages most studied in this domain so far. However, there is certainly variation also within languages with regard to the productivity of such word forms. For example, certain dialects and registers of Swedish accept the negative prefix o- ‘un-’ to a higher extent than standardized dictionaries may suggest. In colloquial Swedish, forms such as o-bra ‘un-good’ (a milder or slightly sarcastic ‘bad’) are used, and northern varieties are known to employ this prefix for both lexical and sentential negation, yet this may not be captured when describing standardized varieties or using data with limited register variation. In English, creative synonymy can be observed in, e.g., online language use on social media. For instance, there are many examples of language use that deliberately avoids terms automatically identified and blocked by filters intended to censor, e.g., harmful language on certain platforms. Whereas some of this usage is entirely orthographic (e.g., replacing letters with non-letter symbols), one example of such forms that relates to our research is the use of unalive(d) to refer to ‘kill(ed), die(d), commit(ted) suicide’. Thus, some of the strategies investigated here exhibit variation within languages with regard to frequency and productivity. In order to fully understand to what extent negative derivation is used for antonymy across and within languages, a larger and more balanced sample is needed, with deeper investigations into each individual language.
What we have presented in this article is a typologically broader study of lexical versus derivational antonymy, albeit based on a not entirely balanced sample of a limited number of languages and on a questionnaire that covers a small proportion of the antonymic oppositions that languages express. We regard it as a first step in a larger research project on antonyms across languages. As a next step, we envisage a collection of papers on individual languages from different geographical regions and language families, each written by a linguist specializing in that particular language. With such a cumulative knowledge base, we would be better equipped to make more fine-grained cross-linguistic generalizations on the expression of antonymy.
Funding source: Research Council of Finland
Award Identifier / Grant number: 332529
Funding source: Stockholm University and University of Helsinki
Acknowledgments
We gratefully acknowledge the invaluable contribution of the language experts and consultants who provided the data for this study by responding to our questionnaire (listed in Appendix A), as well as of the research assistants who helped code parts of the data: Heidi Bordal, Vilma Kaijser, Héloïse Calame, Andrei Dumitrescu, and Jaakko Helke. We are also thankful for the comments and feedback received from various people at conferences and events at which (preliminary versions of) this work was presented, and to the two anonymous reviewers for their thought-provoking and helpful comments.
Author contributions: The project idea was devised by MKT and MM. The survey design and data collection were done by MKT and MM. The annotation guidelines and database design were devised by CB in collaboration with MKT and MM. Annotation and data validation were done by MKT and MM with assistance from CB and numerous research assistants. The statistical analyses and data visualizations were done by CB. The paper was drafted and revised jointly by all authors.

Research funding: The three of us are grateful for the financial support received within the funding for collaboration between Stockholm University and the University of Helsinki; MM also acknowledges support from the Research Council of Finland, grant number 332529.

Data availability: Data and code used for this study can be found at: https://osf.io/8kuzh/.
Appendix A: Language experts
Language | Language experts/informants |
---|---|
Aghul | Solmaz Merdanova, Timur Maisak |
Akan/Twi | Victoria Owusu Ansah |
Amharic | Desalegn Asfawwesen |
Basque | Miren Lourdes Oñederra, Iker Salaberri |
Bulgarian | Eti Antonova Baumann |
Bunaq | Antoinette Schapper |
Cantonese | Hilário de Sousa |
Choctaw | Marcia Haag |
Denjongke | Juha Yliniemi |
Dutch (Flemish) | Dana Louagie |
Erzya | Niina Aasmäe |
Estonian | Miina Norvik |
Eton | Mark van de Velde |
Finnish | Matti Miestamo |
Georgian (Modern) | Tamar Makharoblidze, Jakov Testelets |
German | Ida Matysek |
Gurenɛ | Atenga Johnson Asunka, Samuel Atintono |
Hebrew | Ora R. Schwarzwald |
Hungarian | Magdolna Kovács |
Hup | Patience Epps |
Indonesian | Poppy Siahaan |
IsiNdebele | Matti Miestamo, Jaakko Helke
Italian | Francesca Di Garbo |
Japanese | Nobufumi Inaba, Marie Jacquemard |
Kankanaey | Baraquel Managdag, Ria Isabelle Dela Rosa |
Khalkha Mongol | Benjamin Brosig, Dolgor Guntsetseg |
Khowar | Afsar Ali Khan, Henrik Liljegren |
Kilmeri | Claudia Gerstner-Link |
Komi-Zyrian | Paula Kokkonen, Evgeni Cypanov |
Korean | Jae Song |
Lithuanian | Jurgis Pakerys |
Mapudungun | Kayleigh Karinen, Victor Carilfar, Fernando Zúñiga |
Mari | Sirkka Saarinen, Oleg Sergeev |
Mian | Sebastian Fedden |
Nganasan | Sándor Szeverényi |
North Saami | Sierge Rasmus |
Orungu | Odette Ambouroue |
Punjabi | Usman Ashraf, Juozas Alminas |
Romanian | Andrei Călin Dumitrescu |
Russian | Natalia Perkova, Maria Koptjevskaja-Tamm |
Sakha | Sardana Ivanova, Toivo Qiu |
Shipibo-Konibo | Pilar Valenzuela |
Sinhala | Naveen Wijeratne, Julia Veromaa |
Slovak | Lívia Körtvélyessy |
Spanish | Lauri Marjamäki |
Swahili | Lotta Aunio, Rasmus Bernander |
Swedish | Heidi Bordal, Vilma Kaijser |
Turkish | Hatice Zora |
Udmurt | Svetlana Edygarova |
Umpithamu | Jean-Christophe Verstraete |
Veps | Nina Zaytseva, Olga Zaytseva |
Warlpiri | David Nash, Mary Laughren |
West Greenlandic | Michael Fortescue, Naja Blytmann |
Wolof | Olivier Bondéelle |
Yucatec Maya | Olivier Le Guen, Lorena Pool Balam |
References
Abbi, Anvita. 2013. A grammar of the Great Andamanese language: An ethnolinguistic study (Brill’s Studies in South and Southwest Asian Languages). Leiden: Brill. https://doi.org/10.1163/9789004246126.
Auwera, Johan van der, Ludo Lejeune & Valentin Goussev. 2013. The prohibitive. In Matthew S. Dryer & Martin Haspelmath (eds.), The world atlas of language structures online. Leipzig: Max Planck Institute for Evolutionary Anthropology. Available at: https://wals.info/chapter/71.
Bates, Douglas, Martin Mächler, Ben Bolker & Steve Walker. 2015. Fitting linear mixed-effects models using lme4. Journal of Statistical Software 67(1). 1–48. https://doi.org/10.18637/jss.v067.i01.
Beavers, John, Michael Everdell, Kyle Jerro, Henri Kauhanen, Andrew Koontz-Garboden, Elise LeBovidge & Stephen Nichols. 2017. Two types of states: A cross-linguistic study of change-of-state verb roots. Proceedings of the Linguistic Society of America 2(38). 1–15. https://doi.org/10.3765/plsa.v2i0.4094.
Beavers, John, Michael Everdell, Kyle Jerro, Henri Kauhanen, Andrew Koontz-Garboden, Elise LeBovidge & Stephen Nichols. 2021. States and changes of state: A crosslinguistic study of the roots of verbal meaning. Language 97(3). 439–484. https://doi.org/10.1353/lan.2021.0044.
Becker, Richard A., Allan R. Wilks, Ray Brownrigg, Thomas P. Minka & Alex Deckmyn. 2021. maps: Draw geographical maps. Available at: https://CRAN.R-project.org/package=maps.
Bentin, Shlomo. 1987. Event-related potentials, semantic processes, and expectancy factors in word recognition. Brain and Language 31(2). 308–327. https://doi.org/10.1016/0093-934X(87)90077-0.
Bierwisch, Manfred & Ewald Lang (eds.). 1989. Dimensional adjectives: Grammatical structure and conceptual interpretation (Springer Series in Language and Communication 26). Berlin: Springer. https://doi.org/10.1007/978-3-642-74351-1.
Bolker, Ben & David Robinson. 2022. broom.mixed: Tidying methods for mixed models. Available at: https://CRAN.R-project.org/package=broom.mixed.
Börstell, Carl & Ryan Lepic. 2020. Spatial metaphors in antonym pairs across sign languages. Sign Language & Linguistics 23(1–2). 112–141. https://doi.org/10.1075/sll.00046.bor.
Bradley, David. 1995. Grammaticalisation of extent in Mran-Ni. Linguistics of the Tibeto-Burman Area 18(1). 1–28. https://doi.org/10.32655/LTBA.18.1.01.
Cable, Seth. 2018. The good, the “not good”, and the “not pretty”: Negation in the negative predicates of Tlingit. Natural Language Semantics 26(3–4). 281–335. https://doi.org/10.1007/s11050-018-9147-1.
Campitelli, Elio. 2022. ggnewscale: Multiple fill and colour scales in “ggplot2”. Available at: https://CRAN.R-project.org/package=ggnewscale.
Clarke, Erik, Scott Sherrill-Mix & Charlotte Dawson. 2023. ggbeeswarm: Categorical scatter (violin point) plots. Available at: https://CRAN.R-project.org/package=ggbeeswarm.
Colston, Herbert L. 1999. “Not good” is “bad”, but “not bad” is not “good”: An analysis of three accounts of negation asymmetry. Discourse Processes 28(3). 237–256. https://doi.org/10.1080/01638539909545083.
Croft, William. 2003. Typology and universals, 2nd edn. Cambridge: Cambridge University Press.
Croft, William & D. Alan Cruse. 2004. Cognitive linguistics (Cambridge Textbooks in Linguistics). Cambridge: Cambridge University Press.
Cruse, D. Alan. 1986. Lexical semantics (Cambridge Textbooks in Linguistics). Cambridge: Cambridge University Press.
Cruse, D. Alan & Pagona Togia. 1995. Towards a cognitive model of antonymy. Journal of Lexicology 1. 113–141.
Dahl, Östen. 1979. Typology of sentence negation. Linguistics 17(1–2). 79–106. https://doi.org/10.1515/ling.1979.17.1-2.79.
Deese, James. 1966. The structure of associations in language and thought. Baltimore, MD: Johns Hopkins University Press.
Dixon, Robert M. W. 1977. Where have all the adjectives gone? Studies in Language 1. 19–80. https://doi.org/10.1075/sl.1.1.04dix.
Dixon, Robert M. W. & Alexandra Y. Aikhenvald. 2004. Adjective classes: A cross-linguistic typology. Oxford: Oxford University Press. https://doi.org/10.1093/oso/9780199270934.001.0001.
Dixon, Robert M. W. & Alexandra Y. Aikhenvald. 2006. Adjective classes: A cross-linguistic typology. Oxford: Oxford University Press.
Drechsel, Emanuel J. 1996. An integrated vocabulary of Mobilian jargon, a Native American pidgin of the Mississippi Valley. Anthropological Linguistics 38(2). 248–354.
Dryer, Matthew S. 2013a. Negative morphemes (v2020.3). In Matthew S. Dryer & Martin Haspelmath (eds.), The world atlas of language structures online. Leipzig: Max Planck Institute for Evolutionary Anthropology. Zenodo.
Dryer, Matthew S. 2013b. Order of negative morpheme and verb (v2020.3). In Matthew S. Dryer & Martin Haspelmath (eds.), The world atlas of language structures online. Leipzig: Max Planck Institute for Evolutionary Anthropology. Zenodo.
Dryer, Matthew S. 2013c. Position of negative morpheme with respect to subject, object, and verb (v2020.3). In Matthew S. Dryer & Martin Haspelmath (eds.), The world atlas of language structures online. Leipzig: Max Planck Institute for Evolutionary Anthropology. Zenodo.
England, Nora C. 2004. Adjectives in Mam. In Robert M. W. Dixon & Alexandra Y. Aikhenvald (eds.), Adjective classes: A cross-linguistic typology (Explorations in Linguistic Typology 1), 125–146. Oxford: Oxford University Press. https://doi.org/10.1093/oso/9780199270934.003.0005.
Eriksen, Pål Kristian. 2011. “To not be” or not “to not be”: The typology of negation of non-verbal predicates. Studies in Language 35(2). 275–310. https://doi.org/10.1075/sl.35.2.02eri.
Farshchi, Sara, Annika Andersson, Joost van de Weijer & Carita Paradis. 2021. Processing sentences with sentential and prefixal negation: An event-related potential study. Language, Cognition and Neuroscience 36(1). 84–98. https://doi.org/10.1080/23273798.2020.1781214.
Fellbaum, Christiane. 1998. WordNet: An electronic lexical database. Cambridge, MA: MIT Press. https://doi.org/10.7551/mitpress/7287.001.0001.
Genetti, Carol & Kristine Hildebrandt. 2004. The two adjective classes in Manange. In Robert M. W. Dixon & Alexandra Y. Aikhenvald (eds.), Adjective classes: A cross-linguistic typology (Explorations in Linguistic Typology 1), 74–96. Oxford: Oxford University Press. https://doi.org/10.1093/oso/9780199270934.003.0003.
Greenberg, Joseph H. 1966. Language universals, with special reference to feature hierarchies (Janua Linguarum, Series Minor 59). The Hague: Mouton.
Gross, Derek, Ute Fischer & George A. Miller. 1989. The organization of adjectival meanings. Journal of Memory and Language 28(1). 92–106. https://doi.org/10.1016/0749-596X(89)90030-2.
Hale, Kenneth L. 1971. A note on a Walbiri tradition of antonymy. In Danny D. Steinberg & Leon A. Jakobovits (eds.), Semantics: An interdisciplinary reader in philosophy, linguistics and psychology, 472–484. Cambridge: Cambridge University Press.
Hammarström, Harald, Robert Forkel, Martin Haspelmath & Sebastian Bank. 2022. Glottolog 4.7. Catalogue. Available at: http://glottolog.org.
Haspelmath, Martin. 1993. More on the typology of causative/inchoative verb alternations. In Bernard Comrie & Maria Polinsky (eds.), Causatives and transitivity, 87–120. Amsterdam: John Benjamins. https://doi.org/10.1075/slcs.23.05has.
Haspelmath, Martin. 2001. Indefinite pronouns (Oxford Studies in Typology and Linguistic Theory). Oxford: Oxford University Press. https://doi.org/10.1093/oso/9780198235606.001.0001.
Haspelmath, Martin. 2008. Frequency vs. iconicity in explaining grammatical asymmetries. Cognitive Linguistics 19(1). 1–33. https://doi.org/10.1515/COG.2008.001.
Hay, Jennifer. 2001. Lexical frequency in morphology: Is everything relative? Linguistics 39(6). 1041–1070. https://doi.org/10.1515/ling.2001.041.
Heim, Irene. 2019. Decomposing antonyms? Proceedings of Sinn und Bedeutung 12. 212–225. https://doi.org/10.18148/sub/2008.v12i0.586.
Herrmann, Douglas J., Roger Chaffin, Margaret P. Daniel & Robert S. Wool. 1986. The role of elements of relation definition in antonym and synonym comprehension. Zeitschrift für Psychologie mit Zeitschrift für angewandte Psychologie 194(2). 133–153. https://doi.org/10.1515/9783112469644-003.
Horn, Laurence R. 1989. A natural history of negation. Chicago: University of Chicago Press.
Ingram, Joanne, Christopher J. Hand & Greg Maciejewski. 2016. Exploring the measurement of markedness and its relationship with other linguistic variables. PLoS One 11(6). e0157141. https://doi.org/10.1371/journal.pone.0157141.
Jeon, Hyeon-Ae, Kyoung-Min Lee, Young-Bo Kim & Zang-Hee Cho. 2009. Neural substrates of semantic relationships: Common and distinct left-frontal activities for generation of synonyms vs. antonyms. NeuroImage 48(2). 449–457. https://doi.org/10.1016/j.neuroimage.2009.06.049.
Jespersen, Otto. 1917. Negation in English and other languages. Copenhagen: Andr. Fred. Høst & Søn.
Jones, Steven. 2002. Antonymy: A corpus-based perspective. Abingdon, Oxon: Taylor & Francis. https://doi.org/10.4324/9780203166253. Available at: http://ebookcentral.proquest.com/lib/sub/detail.action?docID=180672.
Jones, Steven, M. Lynne Murphy, Carita Paradis & Caroline Willners. 2012. Antonyms in English: Construals, constructions and canonicity (Studies in English Language). Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9781139032384.
Jones, Steven, Carita Paradis, M. Lynne Murphy & Caroline Willners. 2007. Googling for “opposites”: A web-based study of antonym canonicity. Corpora 2(2). 129–155. https://doi.org/10.3366/cor.2007.2.2.129.
Juvonen, Päivi. 2016. Making do with minimal lexica: Light verb constructions with make/do in pidgin lexica. In Päivi Juvonen & Maria Koptjevskaja-Tamm (eds.), The lexical typology of semantic shifts (Cognitive Linguistics Research 58), 223–248. Berlin & Boston: De Gruyter Mouton. https://doi.org/10.1515/9783110377675-008.
Kahrel, Peter. 1996. Aspects of negation. Amsterdam: University of Amsterdam thesis.
Kennedy, Christopher. 2001. Polar opposition and the ontology of “degrees”. Linguistics and Philosophy 24(1). 33–70. https://doi.org/10.1023/a:1005668525906.
Kennedy, Christopher & Louise McNally. 2005. Scale structure, degree modification and the semantics of gradable predicates. Language 81(2). 345–381. https://doi.org/10.1353/lan.2005.0071.
Kibrik, Andrej A. 2012. Toward a typology of verbal lexical systems: A case study in Northern Athabaskan. Linguistics 50(3). 495–532. https://doi.org/10.1515/ling-2012-0017.
Klima, Edward. 1964. Negation in English. In Jerry Fodor & Jerrold Katz (eds.), The structure of language: Readings in the philosophy of language, 246–323. Englewood Cliffs, NJ: Prentice-Hall.
Koch, Peter & Daniela Marzo. 2007. A two-dimensional approach to the study of motivation in lexical typology and its first application to French high-frequency vocabulary. Studies in Language 31(2). 259–291. https://doi.org/10.1075/sl.31.2.02koc.
Koptjevskaja-Tamm, Maria. 2008. Approaching lexical typology. In Martine Vanhove (ed.), From polysemy to semantic change: Towards a typology of lexical semantic associations (Studies in Language Companion Series 106), 3–54. Amsterdam: John Benjamins. https://doi.org/10.1075/slcs.106.03kop.
Koptjevskaja-Tamm, Maria. 2012. New directions in lexical typology. Linguistics 50(3). 373–394. https://doi.org/10.1515/ling-2012-0013.
Koptjevskaja-Tamm, Maria & Ljuba Veselinova. 2020. Lexical typology in morphology. In Oxford research encyclopedia of linguistics. Oxford: Oxford University Press. https://doi.org/10.1093/acrefore/9780199384655.013.522.
Koptjevskaja-Tamm, Maria, Ekaterina Rakhilina & Martine Vanhove. 2015. The semantics of lexical typology. In Nick Riemer (ed.), The Routledge handbook of semantics (Routledge Handbooks in Linguistics), 434–454. London: Routledge.
Kostić, Nataša. 2015. Antonym sequence in written discourse: A corpus-based study. Language Sciences 47. 18–31. https://doi.org/10.1016/j.langsci.2014.07.013.
Kotzor, Sandra. 2021. Antonyms in mind and brain: Evidence from English and German. London: Routledge. https://doi.org/10.4324/9781003026969.
Kyuseva, Maria, Elena Parina & Daria Ryzhova. 2022. Methodology at work: Semantic fields SHARP and BLUNT. In Ekaterina Rakhilina, Tatiana Reznikova & Daria Ryzhova (eds.), The typology of physical qualities (Typological Studies in Language 133), 29–56. Amsterdam: John Benjamins. https://doi.org/10.1075/tsl.133.02kyu.
Lehrer, Adrienne. 1985. Markedness and antonymy. Journal of Linguistics 21(2). 397–429. https://doi.org/10.1017/s002222670001032x.
Lehrer, Adrienne & Keith Lehrer. 1982. Antonymy. Linguistics and Philosophy 5(4). 483–501. https://doi.org/10.1007/bf00355584.
Lenth, Russell V. 2022. emmeans: Estimated marginal means, aka least-squares means. Available at: https://CRAN.R-project.org/package=emmeans.
Lieber, Rochelle. 2004. Morphology and lexical semantics. Cambridge: Cambridge University Press.
Lier, Eva van. 2016. Lexical flexibility in Oceanic languages. Linguistic Typology 20(2). 197–232. https://doi.org/10.1515/lingty-2016-0005.
List, Johann-Mattis, Michael Cysouw & Robert Forkel. 2016. Concepticon: A resource for the linking of concept lists. In Nicoletta Calzolari (Conference Chair), Khalid Choukri, Thierry Declerck, Marko Grobelnik, Bente Maegaard, Joseph Mariani, Asuncion Moreno, Jan Odijk & Stelios Piperidis (eds.), Proceedings of the Tenth International Conference on Language Resources and Evaluation, 2393–2400. European Language Resources Association (ELRA). Available at: http://www.lrec-conf.org/proceedings/lrec2016/summaries/127.html.
List, Johann Mattis, Annika Tjuka, Mathilda Van Zantwijk, Frederic Blum, Carlos Barrientos Ugarte, Christoph Rzymski, Simon Greenhill & Robert Forkel. 2023. CLLD Concepticon 3.1.0. Leipzig: Max Planck Institute for Evolutionary Anthropology. Zenodo.
Lobanova, Anna, Tom van der Kleij & Jennifer Spenader. 2010. Defining antonymy: A corpus-based study of opposites by lexico-syntactic patterns. International Journal of Lexicography 23(1). 19–53. https://doi.org/10.1093/ijl/ecp039.
Lüdecke, Daniel. 2023. sjPlot: Data visualization for statistics in social science. Available at: https://CRAN.R-project.org/package=sjPlot.
Miestamo, Matti. 2005. Standard negation: The negation of declarative verbal main clauses in a typological perspective (Empirical Approaches to Language Typology 31). Berlin & New York: Mouton de Gruyter. https://doi.org/10.1515/9783110197631.
Miestamo, Matti. 2017. Negation. In Alexandra Y. Aikhenvald & Robert M. W. Dixon (eds.), The Cambridge handbook of linguistic typology, 405–439. Cambridge: Cambridge University Press. https://doi.org/10.1017/9781316135716.013.
Miestamo, Matti, Dik Bakker & Antti Arppe. 2016. Sampling for variety. Linguistic Typology 20(2). 233–296. https://doi.org/10.1515/lingty-2016-0006.
Miller, George A., Richard Beckwith, Christiane Fellbaum, Derek Gross & Katherine J. Miller. 1990. Introduction to WordNet: An on-line lexical database. International Journal of Lexicography 3(4). 235–244. https://doi.org/10.1093/ijl/3.4.235.
Montserrat, Ruth Maria Fonini. 2010. A língua do povo Mỹky [The language of the Mỹky people]. Campinas: Curt Nimuendajú.
Morzycki, Marcin. 2015. Modification (Key Topics in Semantics and Pragmatics). Cambridge: Cambridge University Press.
Muehleisen, Victoria & Maho Isono. 2009. Antonymous adjectives in Japanese discourse. Journal of Pragmatics 41(11). 2185–2203. https://doi.org/10.1016/j.pragma.2008.09.037.
Müller, Peter O., Ingeborg Ohnheiser, Susan Olsen & Franz Rainer (eds.). 2015. Word formation: An international handbook of the languages of Europe (Handbooks of Linguistics and Communication Sciences [HSK] 40), vols. 1–5. Berlin & New York: De Gruyter Mouton. https://doi.org/10.1515/9783110375732.
Murphy, M. Lynne. 2003. Semantic relations and the lexicon: Antonymy, synonymy and other paradigms. Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9780511486494.
Nedjalkov, Vladimir P. 1969. Nekotorye verojatnostnye universalii v glagol’nom slovoobrazovanii [Some statistical universals in verb formation]. In Igor’ F. Vardul’ (ed.), Jazykovye universalii i lingvističeskaja tipologija [Language universals and linguistic typology], 106–114. Moscow: Nauka.
Nichols, Johanna. 2018. Non-linguistic conditions for causativization as a linguistic attractor. Frontiers in Psychology 8. 2356. https://doi.org/10.3389/fpsyg.2017.02356.
Nichols, Johanna, David A. Peterson & Jonathan Barnes. 2004. Transitivizing and detransitivizing languages. Linguistic Typology 8(2). 149–211. https://doi.org/10.1515/lity.2004.005.
Orwell, George. 2008. Nineteen eighty-four. London: Penguin Books in association with Martin Secker & Warburg.
Paradis, Carita, Caroline Willners & Steven Jones. 2009. Good and bad opposites: Using textual and experimental techniques to measure antonym canonicity. The Mental Lexicon 4(3). 380–429. https://doi.org/10.1075/ml.4.3.04par.
Payne, John. 1985. Negation. In Timothy Shopen (ed.), Language typology and syntactic description, vol. 1, Clause structure, 197–242. Cambridge: Cambridge University Press.
Pedersen, Thomas Lin. 2022. patchwork: The composer of plots. Available at: https://CRAN.R-project.org/package=patchwork.
R Core Team. 2023. R: A language and environment for statistical computing. Vienna: R Foundation for Statistical Computing. Available at: https://www.R-project.org/.
Rakhilina, Ekaterina & Tatiana Reznikova. 2016. A frame-based methodology for lexical typology. In Päivi Juvonen & Maria Koptjevskaja-Tamm (eds.), The lexical typology of semantic shifts, 95–129. Berlin: De Gruyter Mouton. https://doi.org/10.1515/9783110377675-004.
Roehm, Dietmar, Ina Bornkessel-Schlesewsky, Frank Rösler & Matthias Schlesewsky. 2007. To predict or not to predict: Influences of task and strategy on the processing of semantic relations. Journal of Cognitive Neuroscience 19(8). 1259–1274. https://doi.org/10.1162/jocn.2007.19.8.1259.
Slowikowski, Kamil. 2023. ggrepel: Automatically position non-overlapping text labels with “ggplot2”. Available at: https://CRAN.R-project.org/package=ggrepel.
Spencer, Andrew. 2014. Lexical relatedness. Oxford: Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199679928.001.0001.
Štekauer, Pavol, Salvador Valera & Lívia Kőrtvélyessy. 2012. Word-formation in the world’s languages: A typological survey. Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9780511895005.
Veselinova, Ljuba. 2013. Negative existentials: A cross-linguistic study. Rivista di linguistica 25(1). 107–145.
Veselinova, Ljuba. 2014. The negative existential cycle revisited. Linguistics 52(6). 1327–1389. https://doi.org/10.1515/ling-2014-0021.
Warriner, Amy Beth, Victor Kuperman & Marc Brysbaert. 2013. Norms of valence, arousal, and dominance for 13,915 English lemmas. Behavior Research Methods 45(4). 1191–1207. https://doi.org/10.3758/s13428-012-0314-x.
Wickham, Hadley & Jennifer Bryan. 2023. readxl: Read Excel files. Available at: https://CRAN.R-project.org/package=readxl.
Wickham, Hadley & Dana Seidel. 2022. scales: Scale functions for visualization. Available at: https://CRAN.R-project.org/package=scales.
Wickham, Hadley, Mara Averick, Jennifer Bryan, Winston Chang, Lucy McGowan, Romain François, Garrett Grolemund, Alex Hayes, Lionel Henry, Jim Hester, Max Kuhn, Thomas Lin Pedersen, Evan Miller, Stephan Milton Bache, Kirill Müller, Jeroen Ooms, David Robinson, Dana Paige Seidel, Vitalie Spinu, Kohske Takahashi, Davis Vaughan, Claus Wilke, Kara Woo & Hiroaki Yutani. 2019. Welcome to the Tidyverse. Journal of Open Source Software 4(43). 1686. https://doi.org/10.21105/joss.01686.
Willners, Caroline & Carita Paradis. 2010. Swedish opposites: A multi-method approach to antonym canonicity. In Petra Storjohann (ed.), Lexical-semantic relations: Theoretical and practical perspectives, 15–47. Amsterdam & Philadelphia: John Benjamins. https://doi.org/10.1075/lis.28.04wil.
Woodin, Greg, Bodo Winter, Marcus Perlman, Jeannette Littlemore & Teenie Matlock. 2020. “Tiny numbers” are actually tiny: Evidence from gestures in the TV News Archive. PLoS One 15(11). 1–21. https://doi.org/10.1371/journal.pone.0242142.
Wu, Shuqiong. 2017. Iconicity and viewpoint: Antonym order in Chinese four-character patterns. Language Sciences 59. 117–134. https://doi.org/10.1016/j.langsci.2016.09.005.
Yang, Jun Hui & Susan Fischer. 2002. Expressing negation in Chinese Sign Language. Sign Language and Linguistics 5(2). 167–202. https://doi.org/10.1075/sll.5.2.05yan.
Ye, Jingting. 2021. Property words and adjective subclasses in the world’s languages. Leipzig: Leipzig University PhD thesis.
Zeshan, Ulrike. 2006. Interrogative and negative constructions in sign language. Nijmegen: Ishara Press. https://doi.org/10.26530/OAPEN_453832.
Zimmer, Karl E. 1964. Affixal negation in English and other languages: An investigation of restricted productivity. New York: Linguistic Circle of New York.
Zipf, George Kingsley. 1949. Human behavior and the principle of least effort: An introduction to human ecology. Cambridge, MA: Addison-Wesley.
© 2024 the author(s), published by De Gruyter, Berlin/Boston
This work is licensed under the Creative Commons Attribution 4.0 International License.