Abstract
Conceptual metaphor theory (CMT) has been criticized for its emphasis on concepts rather than words and for its top-down direction of analysis. In response to these criticisms, this paper employs a new strategy, using established mathematical modeling methods to enable a systematic, quantitative analysis of the entire dataset produced by the Mapping Metaphor project at the University of Glasgow. This dataset consists of 9609 words performing 18916 metaphorical mappings between 414 domains. The data is represented as a network consisting of 414 nodes, the domains, connected by shared words. Words are represented by groups of directed mappings between all domains in which they occur. This is made possible by the use of a directed hypergraph representation, a tool commonly used in discrete mathematics and various areas of computer science but not previously applied to the metaphorical meanings of words. Examining the dataset as a whole, rather than focusing on individual words or metaphors, allows global patterns of behavior to emerge from the data without pre-filtering or selection by the authors. We discuss the outcomes of the analysis relating to the distributions of source and target domains within the network, the growth mechanisms at work in the spread of metaphorical meanings, and how these relate to existing concepts in CMT.
Acknowledgements
The authors would like to thank Aura Heidenreich and Klaus Mecke for organising the conference “Models, Metaphors and Simulations. Epistemic Transformations in Literature, Science and the Arts”, which gave rise to this interdisciplinary collaboration. Further thanks go to Giulio Zucal for productive discussions on the mathematical part and to Jürgen Jost for supporting the scientific innovations through his supervision.
References
Barsalou, Lawrence W. 2008. Grounded cognition. Annual Review of Psychology 59. 617–645. https://doi.org/10.1146/annurev.psych.59.103006.093639.
Bretto, Alain. 2013. Applications of hypergraph theory: A brief overview. In Hypergraph theory: An introduction, 111–116. https://doi.org/10.1007/978-3-319-00080-0_7.
Choi, Minjin, Sunkyung Lee, Eunseong Choi, Heesoo Park, Junhyuk Lee, Dongwon Lee & Jongwuk Lee. 2021. MelBERT: Metaphor detection via contextualized late interaction using metaphorical identification theories. In Kristina Toutanova, Anna Rumshisky, Luke Zettlemoyer, Dilek Hakkani-Tur, Iz Beltagy, Steven Bethard, Ryan Cotterell, Tanmoy Chakraborty & Yichao Zhou (eds.), Proceedings of the 2021 conference of the North American chapter of the Association for Computational Linguistics: Human language technologies, 1763–1773. Online: Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.naacl-main.141 (last checked 04/10/2024).
Crawford, L. Elizabeth. 2009. Conceptual metaphors of affect. Emotion Review 1(2). 129–139. https://doi.org/10.1177/1754073908100438.
Eisenberg, Eli & Erez Y. Levanon. 2003. Preferential attachment in the protein network evolution. Physical Review Letters 91(13). 138701. https://doi.org/10.1103/PhysRevLett.91.138701.
Gibbs, Raymond W. 2009. Why do some people dislike conceptual metaphor theory? Cognitive Semiotics 5(1–2). 14–36. https://doi.org/10.1515/cogsem.2013.5.12.14.
Hamilton, Rachael, Ellen Bramwell & Carole Hough. 2015. Mapping metaphor with the historical thesaurus. In Carole Hough & Daria Izdebska (eds.), Names and their environment. Proceedings of the 25th International Congress of Onomastic Sciences, Glasgow, 25–29 August 2014. Vol. 4. Theory and methodology. Socio-onomastics, 33–40. https://www.gla.ac.uk/media/Media_576598_smxx.pdf (last checked 04/10/2024).
Köper, Maximilian & Sabine Schulte im Walde. 2017. Improving verb metaphor detection by propagating abstractness to words, phrases and individual senses. In Proceedings of the 1st workshop on sense, concept and entity representations and their applications, 24–30. Valencia, Spain: Association for Computational Linguistics. https://doi.org/10.18653/v1/W17-1903.
Kövecses, Zoltán. 2008. Conceptual metaphor theory: Some criticisms and alternative proposals. Annual Review of Cognitive Linguistics 6(1). 168–184. https://doi.org/10.1075/arcl.6.08kov.
Lakoff, George & Mark Johnson. 1980. Metaphors we live by. Chicago, IL: University of Chicago Press.
Leal, Wilmer & Guillermo Restrepo. 2019. Formal structure of periodic system of elements. Proceedings of the Royal Society A 475(2254). 20180581. https://doi.org/10.1098/rspa.2018.0581.
Lee, David. 2017. Competing discourses: Perspective and ideology in language. Routledge.
Li, Yucheng, Shun Wang, Chenghua Lin, Frank Guerin & Loic Barrault. 2023. FrameBERT: Conceptual metaphor detection with frame embedding learning. In Andreas Vlachos & Isabelle Augenstein (eds.), Proceedings of the 17th conference of the European chapter of the Association for Computational Linguistics, 1558–1563. Dubrovnik, Croatia: Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.eacl-main.114.
Littlemore, Jeannette. 2019. Metaphors in the mind. Cambridge: Cambridge University Press. https://doi.org/10.1017/9781108241441.
Lugo, Igor. 2013. Spatial externalities approach to modelling the preferential attachment process in urban systems. In Proceedings of the European conference on complex systems 2012, 857–863. Springer. https://doi.org/10.1007/978-3-319-00395-5_104.
Majid, Asifa, Seán G. Roberts, Ludy Cilissen, Karen Emmorey, Brenda Nicodemus, Lucinda O’Grady, Bencie Woll, et al. 2018. Differential coding of perception in the world’s languages. Proceedings of the National Academy of Sciences 115(45). 11369–11376. https://doi.org/10.1073/pnas.1720419115.
Mapping metaphor. http://mappingmetaphor.arts.gla.ac.uk (last checked 04/10/2024).
Poncela, Julia, Jesús Gómez-Gardenes, Luis M. Floría, Angel Sánchez & Yamir Moreno. 2008. Complex cooperative networks from evolutionary preferential attachment. PLoS One 3(6). e2449. https://doi.org/10.1371/journal.pone.0002449.
Teich, Marie, Wilmer Leal & Juergen Jost. 2023. Diachronic data analysis supports and refines conceptual metaphor theory. arXiv. https://doi.org/10.48550/ARXIV.2209.12234.
Vinciguerra, Sandra, Koen Frenken & Marco Valente. 2010. The geography of internet infrastructure: An evolutionary simulation approach based on preferential attachment. Urban Studies 47(9). 1969–1984. https://doi.org/10.1177/0042098010372685.
Winter, Bodo & Jeff Yoshimi. 2020. Metaphor and the philosophical implications of embodied mathematics. Frontiers in Psychology 11. https://doi.org/10.3389/fpsyg.2020.569487.
Supplementary information
Hypergraph representation
This study utilizes a directed hypergraph to represent the data from the Mapping Metaphor project (MMP). A directed hypergraph is formally defined as follows:

Definition 4.1 (Directed hypergraph). A directed hypergraph HG = (V, H) consists of a set of vertices V and a set of directed hyperedges H. Each hyperedge e = (t, h) ∈ H consists of two non-empty subsets of V: its tail set t and its head set h. The cardinality of a hyperedge is the sum of the numbers of its tail and head vertices.
The MMP data is represented in this structure by denoting each domain by a vertex and each metaphorically used word by a hyperedge, with the tail set encompassing the word's source domains and the head set its target domains.
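As an illustration, the representation just described can be sketched in a few lines of Python. The domain names and the word "grasp" below are invented examples, not entries from the MMP dataset.

```python
class DirectedHypergraph:
    """Vertices are domains; each word is a directed hyperedge (tails, heads)."""

    def __init__(self):
        self.vertices = set()   # the domains
        self.edges = {}         # word -> (tail set of sources, head set of targets)

    def add_mapping(self, word, source, target):
        # Record one metaphorical mapping of `word` from `source` to `target`.
        self.vertices.update((source, target))
        tails, heads = self.edges.setdefault(word, (set(), set()))
        tails.add(source)
        heads.add(target)

    def cardinality(self, word):
        # Cardinality of a hyperedge: number of tail plus head vertices.
        tails, heads = self.edges[word]
        return len(tails) + len(heads)

hg = DirectedHypergraph()
hg.add_mapping("grasp", "Touch", "Understanding")       # invented example domains
hg.add_mapping("grasp", "Possession", "Understanding")
print(hg.cardinality("grasp"))  # 3: two source domains, one target domain
```

Storing each word as a pair of sets keeps repeated mappings between the same domains from inflating the cardinality, matching the definition above.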
Hypergraph growth models with hyperedge distributions
Random growth
The randomly growing hypergraph evolves according to the following principle: Initially, there are only the 414 vertices and no hyperedges. At each step of the growth process there are three possibilities:
With probability p_1 a new hyperedge with one vertex in the tail and one vertex in the head is created, h_i = (1, 1). This happens whenever the processed line in the data contains a metaphorical transfer of a word that did not appear in the previously processed data lines.
With probability p_2 one vertex is added to the tail of one existing hyperedge, h_i = (t_i + 1, h_i). This corresponds to a word that already took part in one or more mappings now also mapping from a previously unrelated source to an already related target. The probability is equal for all existing hyperedges (words) at each step.
With probability p_3 one vertex is added to the head of one existing hyperedge, h_i = (t_i, h_i + 1). This corresponds to a word that already took part in one or more mappings now also mapping from an established source of this word to a new target. The probability is equal for all existing hyperedges (words) at each step.
One of the above processes will occur at each step, meaning p1 + p2 + p3 = 1.
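The three steps above can be sketched as a minimal simulation; the probabilities used here are arbitrary illustrative values, not estimates from the MMP data.

```python
import random

def simulate_random_growth(steps, p1, p2, seed=0):
    """Grow hyperedges as [tail size, head size] pairs; p3 = 1 - p1 - p2."""
    rng = random.Random(seed)
    edges = []
    for _ in range(steps):
        u = rng.random()
        if u < p1 or not edges:
            edges.append([1, 1])        # new hyperedge h_i = (1, 1)
        elif u < p1 + p2:
            rng.choice(edges)[0] += 1   # uniformly chosen hyperedge: tail grows
        else:
            rng.choice(edges)[1] += 1   # uniformly chosen hyperedge: head grows
    return edges

edges = simulate_random_growth(10_000, p1=0.2, p2=0.3)
print(len(edges) / 10_000)  # fraction of steps creating a new edge, roughly p1
```

Since every step either creates a hyperedge with two vertices or adds one vertex, the total vertex count always equals the number of hyperedges plus the number of steps, a useful sanity check on the implementation.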
To derive the predicted distribution of tail numbers at each step, we describe the change in the hyperedge size distributions between steps:

$$\langle N(c,t+1)\rangle = \langle N(c,t)\rangle + p_1\,\delta_{c,1} + \frac{p_2}{\langle N(t)\rangle}\Bigl(\langle N(c-1,t)\rangle - \langle N(c,t)\rangle\Bigr),$$

where N(c, t) stands for the number of hyperedges with a tail number of c at step t. The first term represents the possibility of a new hyperedge with one tail being created. For c > 1, the predicted value of N(c, t) decreases if a hyperedge of tail number c grows by one additional vertex, which occurs with probability

$$\frac{p_2\,\langle N(c,t)\rangle}{\langle N(t)\rangle},$$

and increases correspondingly when a hyperedge of tail number c − 1 grows.
Considering the probability distribution rather than the predicted value using

$$P(c,t) = \frac{\langle N(c,t)\rangle}{\langle N(t)\rangle}.$$

As ⟨N(t)⟩ = p_1 · t, this transforms to:

$$p_1\,(t+1)\,P(c,t+1) - p_1\,t\,P(c,t) = p_1\,\delta_{c,1} + p_2\,\bigl(P(c-1,t) - P(c,t)\bigr).$$
Assuming that the cardinality distribution does not change at t → ∞, we insert the following approximations:

$$P(c,t+1) \approx P(c,t) \approx P(c)$$

and set the case c = 1 aside. This turns 4 into

$$p_1\,P(c) = p_2\,\bigl(P(c-1) - P(c)\bigr).$$
The smallest variation step of c is 1, so the left side corresponds to the probability change in c:

$$P(c) - P(c-1) \approx \frac{dP(c)}{dc} = -\frac{p_1}{p_2}\,P(c).$$

For any c ≠ 1 this is solved by an exponential function, and accordingly the probability distribution is

$$P(c) \propto e^{-\frac{p_1}{p_2}\,c}.$$
The derivation of the probability distributions of the head sizes is exactly analogous, with p_3 in place of p_2. Taking c to be the number of head vertices, the probability distribution of hyperedge head sizes is then described by

$$P(c) \propto e^{-\frac{p_1}{p_3}\,c}.$$
Growth with preferential attachment
For the preferential attachment model, there are also three options at each step. However, in contrast to random growth, the probability of an additional tail or head vertex is not evenly distributed across the hyperedges. Instead, it is proportional to the tail size for an additional tail and to the head size for an additional head. Because of this, the rate of change of the predicted number of hyperedges of tail or head size c will be proportional to

$$\frac{c\,\langle N(c,t)\rangle}{\sum_{c'} c'\,\langle N(c',t)\rangle}.$$

Explicitly, the three options at each step are:
Adding a new hyperedge with both tail and head number 1 with probability p1.
With probability p2 one existing hyperedge is selected. The selection occurs with a probability proportional to each hyperedge’s tail number. An additional vertex is added to this hyperedge’s tail.
With probability p3 one existing hyperedge is selected. The selection occurs with a probability proportional to each hyperedge’s head number. An additional vertex is added to this hyperedge’s head.
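The size-proportional selection in the steps above can be implemented by keeping one list entry per vertex, so that a uniform draw from the list automatically picks each hyperedge with probability proportional to its current size. The probabilities in this sketch are again illustrative only.

```python
import random

def simulate_preferential(steps, p1, p2, seed=0):
    """p3 = 1 - p1 - p2; returns parallel lists of tail and head sizes."""
    rng = random.Random(seed)
    tails, heads = [], []
    tail_pool, head_pool = [], []   # one entry per vertex -> proportional sampling
    for _ in range(steps):
        u = rng.random()
        if u < p1 or not tails:
            tails.append(1); heads.append(1)
            tail_pool.append(len(tails) - 1); head_pool.append(len(tails) - 1)
        elif u < p1 + p2:
            i = rng.choice(tail_pool)   # P(edge i) proportional to tails[i]
            tails[i] += 1; tail_pool.append(i)
        else:
            i = rng.choice(head_pool)   # P(edge i) proportional to heads[i]
            heads[i] += 1; head_pool.append(i)
    return tails, heads

tails, heads = simulate_preferential(20_000, p1=0.2, p2=0.3)
print(max(tails))  # largest tail set grows far beyond the mean under this rule
```

The pool-list trick gives O(1) proportional sampling at the cost of memory linear in the number of vertices, which is acceptable at this scale.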
The mean tail number follows from the fact that each step adds a tail vertex with probability p_1 + p_2 (through a new hyperedge or a tail growth event), while new hyperedges appear with probability p_1:

$$\langle c_{\mathrm{tail}}\rangle = \frac{\sum_{c'} c'\,\langle N(c',t)\rangle}{\langle N(t)\rangle} = \frac{(p_1+p_2)\,t}{p_1\,t} = \frac{p_1+p_2}{p_1}.$$
As such, the expected tail number distribution changes at each step according to

$$\langle N(c,t+1)\rangle = \langle N(c,t)\rangle + p_1\,\delta_{c,1} + \frac{p_2}{(p_1+p_2)\,t}\Bigl((c-1)\,\langle N(c-1,t)\rangle - c\,\langle N(c,t)\rangle\Bigr).$$

Considering probabilities ⟨N(c, t)⟩ = P(c, t) · ⟨N⟩ = P(c, t) · p_1 · t results in

$$p_1\,(t+1)\,P(c,t+1) - p_1\,t\,P(c,t) = p_1\,\delta_{c,1} + \frac{p_1\,p_2}{p_1+p_2}\Bigl((c-1)\,P(c-1,t) - c\,P(c,t)\Bigr).$$
Assuming also here that for t → ∞ the cardinality distribution becomes stable and thus time independent, we again insert 5 and omit the case of c = 1:

$$P(c) = \frac{p_2}{p_1+p_2}\Bigl((c-1)\,P(c-1) - c\,P(c)\Bigr) = -\frac{p_2}{p_1+p_2}\,\frac{d\bigl(c\,P(c)\bigr)}{dc},$$

where the latter reformulation results from the fact that the right-hand side contains the rate of change in c. This gives the differential equation:

$$\frac{d\bigl(c\,P(c)\bigr)}{dc} = -\frac{p_1+p_2}{p_2}\,P(c),$$

the solution to which is spanned by

$$P(c) \propto c^{-\left(2+\frac{p_1}{p_2}\right)}.$$
Interchanging p_2 and p_3, the probability distribution for the head sizes is analogously

$$P(c) \propto c^{-\left(2+\frac{p_1}{p_3}\right)}.$$
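The contrast between the two predictions (exponential decay for uniform growth, a power law for preferential attachment) can be illustrated by growing tail sizes under both rules with identical, purely illustrative parameters; the power-law regime produces a far larger maximum hyperedge.

```python
import random

def grow_tails(steps, p1, preferential, seed=0):
    """Tail sizes only: new edge with prob p1, else one existing tail grows."""
    rng = random.Random(seed)
    sizes, pool = [], []    # pool holds one entry per tail vertex
    for _ in range(steps):
        if rng.random() < p1 or not sizes:
            sizes.append(1)
            pool.append(len(sizes) - 1)
        else:
            # Uniform choice over edges vs choice proportional to tail size.
            i = rng.choice(pool) if preferential else rng.randrange(len(sizes))
            sizes[i] += 1
            pool.append(i)
    return sizes

uniform = grow_tails(20_000, 0.2, preferential=False)
pref = grow_tails(20_000, 0.2, preferential=True)
print(max(uniform), max(pref))  # the preferential maximum is much larger
```

This is only a qualitative check, not a reproduction of the paper's fits; estimating the exponent itself would require a maximum-likelihood fit over many runs.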
Source and target growth relation model
In part 3.6 the target to source ratio is plotted against the total domain number. The results from the MMP data are compared to a model of constant source and target growth probability. Explicitly, this model assumes two possible hyperedge growth mechanisms:
With probability p, both the tail set and the head set of a hyperedge grow by one new vertex. This process occurs whenever a previously encountered word maps from a new source to a new target.
With probability 1 − p a new vertex is added only to the head set of the hyperedge. This describes a metaphorical mapping by a word from an established source to a new target domain.
This model differs slightly from the random growth model in part 3.5, as it takes into account the evolution of the target-source ratio. The process of only a new source being added is sufficiently rare to be negligible.
According to this growth model, each domain number c grows depending on the number n of metaphorical mappings by the word according to

$$c(n) = 2 + (1+p)\,n,$$

since the hyperedge starts as (1, 1), every mapping adds a target vertex, and with probability p a source vertex is added as well. The head to tail ratio r plotted as target to source ratio in Figure 8 is

$$r(n) = \frac{1+n}{1+p\,n}.$$

Rewriting (25) as n = (c − 2)/(1 + p) and inserting this into r gives

$$r(c) = \frac{1 + \frac{c-2}{1+p}}{1 + p\,\frac{c-2}{1+p}},$$

which simplifies to

$$r(c) = \frac{c + p - 1}{p\,(c-1) + 1}.$$
This is the function fitted to the data in Figure 8.
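A quick Monte-Carlo check of the two-mechanism model (with an arbitrary illustrative value p = 0.4) shows the target-to-source ratio approaching 1/p for large hyperedges, since every mapping adds a target but only a fraction p of mappings add a source.

```python
import random

def ratio_after(n_mappings, p, rng):
    """Grow one hyperedge from (1, 1); return (sources, targets)."""
    sources, targets = 1, 1
    for _ in range(n_mappings):
        targets += 1              # every mapping reaches a new target
        if rng.random() < p:
            sources += 1          # with probability p, also from a new source
    return sources, targets

s, t = ratio_after(10_000, 0.4, random.Random(1))
print(round(t / s, 2))  # close to 1/p = 2.5
```

For small hyperedges the ratio stays close to 1, so the model predicts a ratio that rises with total domain number toward the asymptote 1/p.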
©2024 Walter de Gruyter GmbH, Berlin/Boston
Articles in the same Issue
- Frontmatter
- Editorial
- Jetzt hab ich voll die Panik: Prototype effects of NP-external intensifiers in German
- Metapragmatic markers and the instantiation of pragmatic frames: A cognitive-linguistic approach to the problem of current discourse
- Linguistic paradigms as cognitive entities: A domain-general approach
- Partial colexifications reveal directional tendencies in object naming
- The interplay of conceptualization and case marking in the directional cases of Udmurt
- Integrating approaches to the role of metaphor in the evolutionary dynamics of language
- Metaphorical meaning dynamics: Identifying patterns in the metaphorical evolution of English words using mathematical modeling techniques
- The language of gratitude: An empirical analysis of acknowledgments in German medical dissertations
- Cross- and multimodal anaphoric references in mystery movies: A cognitive perspective
- Language learners, chess champions, and piano prodigies – insights from research on language contact and expert behavior
- Adaptive language strategies of an older sibling in bilingual German-Russian acquisition: A case study