Abstract
Iconicity studies in the field of sign language linguistics, and in other disciplines, have predominantly been visuocentric, emphasising vision over other senses. This qualitative, experimental study investigates whether bodily or somatosensory senses contribute to the formation of iconicity. The research compares a group of five sighted signers with a group of two congenitally blind gesturers using elicitation and interview methods. The observed similarities in iconic descriptions suggest a role for somatosensory iconicity. Results indicate that both groups use their hands motivated by manual actions and exploratory procedures, which are essential for the haptic perception of objects. Moreover, because both the hands and the world are tangible, touchable, and sometimes touched, the hands iconically represent the world based on these experiences. In contrast, the sighted group also utilises visual feedback to adjust their articulators, making them visually iconic while exhibiting varying degrees of somatosensory iconicity. This suggests a sensory/semiotic ratio. The findings expand the concept of linguistic and semiotic signs to include somatosensory perception, encouraging recognition of the previously overlooked aspects of iconicity and semiotic signs.
1 Introduction
As widely recognised in studies of language and communication, an expression can resemble the object it refers to, thereby being iconic. A signed or gestured expression visually represents something (e.g. hands flap like wings), while a voiced word aurally represents something (e.g. meow). The iconic expressions mentioned are based on visual and auditory resemblances, respectively. Due to their physical resources or affordances, signed language (SL) is more directly iconic for visually perceptible actions and entities, whereas spoken language (SpL) is more directly iconic for auditory objects (e.g. Perniss et al. 2010, Dingemanse 2013, Slonimska 2022). This notion of iconicity generally aligns with the basic tenet of embodied cognition, which posits that language is grounded in our sensorimotor experiences and other cognitive processes (Zlatev and Blomberg 2019).
However, an often-overlooked aspect in iconicity studies is the role of somatosensory experiences. Beyond visual and auditory senses, we perceive our internal bodily positions and movements without looking at our body (proprioception, e.g. Taylor 2009) and discern the properties of external objects through touch (haptics, e.g. Lederman and Klatzky 2009), thanks to our somatosensory system, which encompasses general bodily sensation. Touch is holistic, as haptics includes proprioception (Ratcliffe 2013, Lederman and Klatzky 2009). Consequently, this article will explore whether an expression can feel like an object based on haptic-proprioceptive experience. To simplify terminology, this concept will be referred to as proprioceptive iconicity, following the terminology from my previous study (Keränen 2023). If necessary, the term ‘haptics’ will also be used to refer to this specific dimension.
To provide context for the current study, a visuocentric bias (Mitchell 2005, Prinz 2013) – emphasising vision over other senses – has been predominant since the inception of SL linguistics.[1] For example, more recently, Slonimska (2022, 10) states “Language can be also expressed and perceived through the visual modality, that is, by the visible articulators of the body,” highlighting aspects of seeing and being seen without mentioning other senses. Moreover, the term ‘visual modality’ is often ambiguous, as it may refer to visual perception, visually perceptible expression, or both.
Consequently, the notion of iconicity has also been predominantly visuocentric, often presupposing that signed iconicity is inherently visual. As Perniss et al. (2010, 5) stated, “The visual nature of the modality, however, results in an abundance of direct iconic, visual-to-visual mappings.” Interestingly, even when iconic action–action mapping (i.e. motor iconicity) or the grounding of iconicity in the sensorimotor system is mentioned (e.g. Emmorey 2014, Perniss and Vigliocco 2014, Mamus et al. 2024), somatosensory senses are not acknowledged. Additionally, the use of opposing concepts like visual iconicity and motor iconicity is conceptually inconsistent, as the term ‘visual’ pertains to sensory input, while ‘motor’ pertains to behavioural output. In fact, goal-oriented motor behaviours are guided by multisensory input, resulting in the sensorimotor loop (e.g. Lederman and Klatzky 1987, Emmorey et al. 2009, Huston and Jayaraman 2011). For the current study’s purposes, iconicity will be systematically categorised according to the senses, without diminishing the importance of the motor dimension.
Recently, experimental evidence has supported the role of proprioception in signing. Emmorey et al. (2009) demonstrated that signing (and gesturing), like any action, involves a sensorimotor loop, relying on both visual and proprioceptive feedback to monitor and correct misproduced signs. The production of gestural expressions by deafblind signers (Mesch et al. 2015) and congenitally blind hearing individuals who have never seen and used gestures (Iverson and Goldin-Meadow 1997) further indicates their reliance on proprioceptive feedback for production. Importantly, the role of proprioception becomes evident when distinguishing the articulatory perspective (i.e. the producer) from the observer’s perspective (i.e. the interlocutor), as only the former includes proprioceptive feedback (see also Keränen 2023). This study focuses on the articulatory perspective. Given that signing and gesturing also rely on proprioceptive feedback, it is relevant to investigate whether proprioception can play a role in the formation of iconicity in both sighted signing and blind gesturing.
In studies of iconicity in SpL, there are at least two accounts of how iconicity is formed. The acoustic account posits that iconicity is based on the auditory properties of a word (e.g. meow) (Ohala 1984, Hinton et al. 2006). In contrast, the articulatory account considers the articulatory properties of speech, such as mouth movements (Sapir 1929, Ramachandran and Hubbard 2001, Margiotoudi and Pulvermüller 2020, Vainio and Vainio 2021). These accounts are not necessarily mutually exclusive, as noted by Vainio and Vainio (2021). Similarly, while some SL linguists (Perlman et al. 2018, Taub 2011) acknowledge the role of proprioception, it often receives very little attention. Keränen (2023), however, explicitly describes that, in the sign for HAMMERING in Finnish SL (FinSL), the signer’s hand not only looks but also feels like the imagined actor’s hand from a first-person perspective.
However, the recognition of proprioceptive iconicity has primarily relied on first-person intuition. Thus, it is timely to utilise experimental methods to substantiate these first-person findings. This can be achieved through the pheno-methodological triangulation of cognitive semiotics, which integrates methods and concepts from linguistics, semiotics, and cognitive science (Zlatev 2015, Konderak 2018). In this approach, I employ three methodological perspectives: first-person (consciousness as an epistemological priority for studying meaning), second-person (e.g. interpersonal interviews and empathic interpretation), and third-person (indexical interpretation, e.g. experiment), to gain multifaceted insights.
Regarding the third-person method, I will apply the stimulus-based elicitation method used by Majid and colleagues (Majid 2011, Majid et al. 2018, Emmorey et al. forthcoming) to elicit iconic descriptions of physical 3D objects from both groups – five sighted deaf signers and two blind hearing gesturers – for further comparison.[2] This comparison will focus on the use of iconic strategies and possibly other iconic properties (Section 2.2). Due to the size of the data, the current study, while experimental, emphasises a qualitative approach over a quantitative one, although it will include descriptive statistics (Section 4). The study aims to consider nuanced details, deepen the understanding of iconicity, and provide further explanations for the formation and choice of iconic strategies (or ‘classifier handshapes’, as termed in the title of this special issue).
The selection of methods and participants is based on the following working hypotheses: 1) blind gesturers rely on proprioceptive feedback and therefore form proprioceptive iconicity; 2) if similarities are found in iconic descriptions across both sighted and blind groups, this will support the role of proprioceptive iconicity in the sighted group (more details in Section 3.1). Conversely, differences between the groups will prompt further investigation.
Evidence for proprioceptive iconicity also implies widening the notion of linguistic and therefore general semiotic signs (e.g. signs, words, and gestures) to include proprioception as part of their expression (Section 5). This perspective differs from the (strong) ‘acoustic’ tradition, which posits that the sound-image (Saussure 1916) or the visual image plays the primary role in linguistic signs, with the bodily image playing only an implied or subordinate role.
2 Conceptual backgrounds
2.1 Iconicity through the motivation and sedimentation model (MSM)
Describing perceptual experiences employs semiotic signs (henceforth, S-signs)[3] that may be signs, gestures, words, or pictures. According to the tradition(s) of semiotics (Jakobson 1965, Sonesson 2016, 2022), an S-sign, as an expression, stands for an object, interpreted by someone, based on one or more grounds with varying predominance: iconicity (resemblance), indexicality (contiguity), and symbolicity (conventionality or habit). When something is an expression, its object is differentiated from it and is more in focus than the expression from a subjective perspective. This differentiation makes the characteristics of the S-sign phenomenologically different from those of sensory perception (Sonesson 2016, 2022). In the standard FinSL sign for ‘sharp’ (symbolicity), the first finger moves as if a knife were cutting the finger of the other hand (iconicity), indicating the sharpness of the knife (indexicality); the aspect of sharpness is more in focus than the articulators themselves. Here, I will strictly focus on iconicity while acknowledging its sub-types (i.e. image, diagram, and metaphor) and the ubiquitous role of other grounds.
The condition where sensorily and linguistically different participants describe the stimuli represents a semiotically complex nexus of bodily and sociocultural factors. To manage and study this systematically, I apply a narrower version of the model that emerged within cognitive semiotics: the MSM (Zlatev and Blomberg 2019) (Table 1). This model is inspired by ideas from Coseriu’s integral linguistics (Coseriu 1985, Coseriu 2000) and phenomenology (Husserl 1901, Merleau-Ponty 1945). The MSM has been applied to study linguistic norms (Zlatev and Blomberg 2019), linguistic relativity (Blomberg and Zlatev 2021), metaphoricity (Stampoulidis et al. 2019, Devylder and Zlatev 2020, Zlatev et al. 2021, Moskaluk et al. 2022), and the origins of money (Oakley and Zlatev 2024). Unlike Coseriu’s approach, the MSM extends beyond the scope of language to encompass any systems of meaning-making and incorporates more tenets from phenomenology and cognitive linguistics. This model addresses both static and dynamic aspects, as well as individual and (potentially) more universal aspects of iconicity.
Table 1: The MSM (adapted from Devylder and Zlatev 2020, 273)

| Levels | Description |
|---|---|
| Situated | Meaning-making within an immediate situation |
| Sedimented | Socio-historical and symbolic conventions |
| Embodied | Pan-human, pre-signitive^a |
^a Some phenomenologists (e.g. Gallagher and Zahavi 2012) acknowledge that prior knowledge can influence perception (e.g. socially shared knowledge of the affordances of a car). Thus, the Situated and Sedimented levels may motivate the Embodied level, suggesting that the MSM may need theoretical revision. However, I leave this as a preliminary reflection outside the scope of the current study.
For the present study, the MSM distinguishes three interacting levels of meaning-making (Table 1). The situated level is the most dynamic and creative, involving on-going meaning-making within immediate contextual situations where communication occurs. The sedimented level encompasses historically, culturally, and socially shared conventions. The embodied level involves pre-signitive, perceptual, bodily, and cognitive processes and structures, such as analogy-making (Gentner and Markman 1997) and bodily schemas (i.e. individual or culture-laden habits; Donald 1998). These are pan-human features that recognise differences among populations, such as the sensory differences in the participants of the current study.
Regarding the terms motivation and sedimentation (see also Zlatev 2018), the former term, inspired by Fundierung (directly translated as foundation or grounding; Merleau-Ponty 1945, 146), describes a bidirectional relationship where a higher level of meaning-making is grounded in and sublimated by a lower level, without being reduced to one of the levels. Motivation is represented as an upward arrow (Table 1). Situated acts, such as innovative expressions, may become part of habits or conventions (somewhat comparable to symbolicity) through processes of sedimentation, shown as a downward dashed arrow (Table 1).
In the framework of semiotics, since metaphor itself is iconic (two-object resemblance), the findings of the metaphor studies using the MSM mentioned above can apply to other types of iconicity (expression–object resemblance) relatively straightforwardly. To elaborate on iconicity using the MSM, novel iconic expressions emerge through motivation by the Embodied level – especially analogy-making and perception – without being determined by it, as shown with an upward Embodied–Situated arrow (Table 1). This aligns with several SL studies on iconicity (Taub 2011, Perniss et al. 2010, Emmorey 2014).
In another direction, over the short or long term, these iconic expressions become more conventionalised and thus sedimented into interpersonal or communal norms, as depicted by a downward Situated–Sedimented arrow (Table 1), in parallel with several scholars mentioned above. As is well known, several lexical signs originate from spontaneous ones, often called gestures or pantomimes (Taub 2011, Ortega and Özyürek 2020b).
Signs do not solely become fully conventional through the process of lexicalisation (i.e. sedimentation), as some linguists have argued (e.g. Frishberg 1975). Lexical signs can also ‘come back to life’ through the process of de-lexicalisation (Cormier et al. 2012) or re-iconisation. According to the MSM, this illustrates how an expression can be doubly motivated by the levels of Sedimented and Embodied. To elaborate, the ratio of sedimentedness/motivatedness of an expression can vary in a situated context, as roughly categorised by Johnston (2013) into three groups: a) non-conventional, b) partly conventional, and c) fully conventional. [4] These groups refer to a) spontaneous, gradient iconic expressions where the iconicity is vivid and immediate, b) highly iconic signs that have some conventional features (e.g. handshape), and c) non-gradient signs – those typically listed in dictionaries – whose iconicity is not immediately vivid at the moment of production, whether they were originally arbitrary or iconic (i.e., de-iconised), respectively (see a footnote for clarification).[5]
Importantly, the Situated level, where one describes to another, involves intersubjectivity in many senses. For instance, individuals adjust their articulations to ensure they are suitably perceived by others (e.g. a sighted person, a deafblind person, or a large audience), balancing articulatory ease and perceptual clarity (Cutler et al. 1987, Emmorey 2005), or ensuring intelligibility for non-signers by gesturing slowly (Moriarty and Kusters 2021). The use (or lack thereof) of iconic descriptions in interactions can imply social power dynamics (e.g. acceptability, associations with certain linguistic communities, or social motivations for the choice of semiotic strategies; Hodge and Ferrara 2022). In the current study, the social dimension may manifest in the participant–researcher relationship within a certain research design (e.g. encouraging blind participants to use an unaccustomed gestural modality to produce intelligible iconic expressions for a sighted person).
As is known, iconicity is closely intertwined with indexicality, which is based on contiguity (e.g. Sonesson 2014, Keränen 2023). In this study, it is worth mentioning a type of abductive index: prior knowledge serves as a condition for establishing contiguity between an S-sign and an object. For example, an iconic hand that resembles holding indicates something being held (Keränen 2023; see Section 2.4).
2.2 Iconic strategies
Both SL and gesture share relatively universal types of iconic strategies (also known as depicting signs, classifiers, modes of representation, and so on). These iconic strategies have been categorised into two or more types with different labels and taxonomies (e.g. Schembri 2003, Hassemer 2016, 121).
Here, I categorise 12 main groups of iconic strategies (Figure 1, Table 2) based on their prototypical conceptualisations evoked by gestural articulators, drawing from existing literature and findings from the current study. In practice, if a certain strategy does not fit into any existing type, I either establish a new type for this finding or classify it as uncategorised (see Section 3.4). This categorisation is sensitive to unique iconic conceptualisations and potentially expands the taxonomy of iconic strategies. It differs from more traditional approaches that strictly adhere to only a few types. To briefly explain Figure 1, an arrow depicts motion, and the symbol ‘ø’ depicts non-motion.
Figure 1: Illustrative examples of types of iconic strategies (a–l).
Table 2: Taxonomy of iconic strategies with examples

| Main strategies | Substrategies | Respective examples |
|---|---|---|
| Acting | | |
| Representing | | |
| Instrument | | |
| Tracing | | |
| Measuring | | |
| Locating | | |
| POR | | |
| Being-pulled | | |
| Outlining | | |
| Dividing | | |
| Emptying | | |
| Assembling | | |
The types that form the most fundamental iconic conceptualisations are acting (articulators ‘miming’ themselves in bodily action) and representing (articulators ‘miming’ entities other than themselves) (Müller 2014, 1696). In acting, the hands and body move as if performing actions with or without an object (Cormier et al. 2012, Keränen 2023). This strategy can also involve the entire body to convey actions, thoughts, and emotions (constructed action, e.g. Cormier et al. 2015). In representing, a hand resembles entities other than itself, and the whole body can be represented as a non-human entity (e.g. an animal; also known as personification) (Hwang et al. 2017). Moreover, the acting and representing strategies can involve different body parts at the same time (see body partitioning by Dudis 2004, Cormier et al. 2012, 344), such as using an arm to represent a human arm and a first finger to represent a toothbrush, as seen in the type known as instrument (Padden et al. 2013).
There are other strategy types, but they may be emergent from the two fundamental iconic strategies, according to Müller (2014). For example, gestural drawing, or tracing, in the air may originate from the hand drawing on a surface. The most widely acknowledged types in SL linguistics are tracing (e.g. drawing and moulding by Müller 2014) and measuring (Mandel 1977, 69, Hassemer 2016, Hassemer and Winter 2016), which are often classified under the type Size and Shape Specifier, or SASS (e.g. Johnston and Schembri 2007, 170). Less acknowledged ones are locating (Liddell 2003), point of reference (POR) (Johnston 2019), static outlining, or just outlining (Calbris 1990, holding by Hassemer 2016), and being-pulled (Keränen 2023). Additionally, the types found in the current study are dividing, emptying, and assembling.
To explain each type, in tracing, a hand or some body part moves to dynamically ‘draw’ shapes in the air or on the surface; in measuring, two or more fingers or hands are opposite to each other to measure the distance between them. In locating, a hand uses a short movement in a particular direction, as if placing something on the ground to show its location. In POR, a non-dominant hand represents itself as a less or more abstract background (ground) to show the spatial relation between it and an active object (figure). In outlining, a hand or two hands with certain configurations stay in the air to show different geometric shapes (e.g. a circle or heart) using static handshapes and/or arms. In being-pulled, a hand pulls something to show that something is absorbed, magnetised, collected, and so on. In dividing, which is formally close to tracing, a hand moves over something to divide, cut, or split it into two pieces, instead of tracing shapes. In emptying, when a non-dominant hand holds an imaginary hollow object, a dominant one usually puts itself into the hollow object to show the spatial emptiness inside it.
The assembling strategy may be regarded as an intermediate form between the paradigm (i.e. selection of strategies) and the syntagm (i.e. combination of strategies). In this type, parts are assembled into a whole object in a tracing-like but discontinuous manner. For example, in the sign SQUARE, a signer quickly repeats a pinch-like L-handshape (i.e. measuring a certain length) at two different angles (e.g. vertical and horizontal lengths) to create a complete object (e.g. a square formed by the two lengths at different angles). Unlike dynamic tracing (e.g. Figure 1e), assembling leaves traces after each repetition of a non-dynamic strategy (e.g. measuring). Unlike a simple combination of different strategies (e.g. acting and tracing), assembling forms a unified whole object, similar to tracing.
The boundary between fundamental strategies and others may occasionally be fuzzy. For example, a signer can use either the acting or tracing strategy with the first finger to draw shapes. However, the most important difference between the acting for drawing and the tracing for drawing lies in the question of which aspect is in focus from a subjective view.[6] In the former, the actor’s body itself – sometimes also the shape – is highlighted, and in the latter, only the shape is highlighted.
Moreover, I also preliminarily categorise the iconic substrategies of each main type, and each substrategy may be either acting or representing – possibly also indexing (i.e. pointing finger; see Section 4.2.1). The main strategies are responsible for primary iconic conceptualisations, as shown above, but are somehow influenced or co-construed – lacking a better term – by different substrategies to create slightly different but unified iconic conceptualisations. A substrategy can often be identified by observing handshape. For example, a clock can be located on a wall with either sub-handling (i.e. grasp-like C-handshape; Figure 1g) or sub-representing (i.e. a hand as a clock; Figure 1h). More examples are found in Table 2. To speak metaphorically, the relationship between the main strategy and substrategy is like painting a picture (main strategy) with different instruments, such as a finger, a brush, or a roller (substrategies), in parallel to the notion of ‘depictive techniques’ by Müller (2014). Note that the grasp-like handshape in the non-acting strategies has also been mentioned elsewhere, albeit with little attention given to it (e.g. Johnston and Schembri 2007, 171, Perniss and Vigliocco 2014, 2–3, Hassemer and Winter 2016, 407–8).
2.3 Tendencies for the use of iconic strategies
Studies have shown that there may be shared and different tendencies across people due to various factors. Sighted signers and gesturers share a tendency to use certain iconic strategies consistently for certain semantic fields, which is commonly explained by shared bodily and visual affordances (Padden et al. 2013, Ortega and Özyürek 2020b, Keränen 2021) – that is, by Embodied motivation. According to these studies, the acting strategy is the most frequent, especially when describing bodily actions; the tracing strategy is generally less frequent but becomes more frequent when describing the shape of an object or a non-manipulatable object (e.g. a house). Moreover, in both gesture and SL, two hands can recruit different iconic strategies simultaneously (Hwang et al. 2017, Slonimska 2022) or sequentially (Ortega and Özyürek 2020a) to express a concept efficiently. Regarding differences, Padden et al. (2013) show that when describing a handling action (e.g. toothbrushing), signers tend to use the instrument strategy (an arm as a human arm and a hand as a toothbrush), whereas gesturers tend to use the acting strategy.
Regarding Embodied–Situated motivation, the understanding of how different sensory capabilities (e.g. deaf, hearing, and blind) influence iconicity or gestural expressions is still immature. First, congenitally blind children were reported to use notably fewer gestures and to rely more exclusively on speech in the Directions Task (describing paths, landmarks, and locations) compared to sighted children (Iverson and Goldin-Meadow 1997). The authors concluded that sighted and blind children rely on global (i.e. vision globally navigates a path) and segmented (i.e. a path broken into landmarks) representations, respectively; the latter representation is less suited to producing gestures. Second, the fact that blind signers cannot see another’s facial expressions can lead to the disappearance of facial expressions (Checchetto et al. 2018, 2) or even a preference for avoiding them (Edwards and Brentari 2020). Third, the lack of visual experience can affect the organisation of knowledge, resulting in different frequencies of iconic strategies across groups of blind and sighted gesturers (Mamus et al. 2024).
Regarding Sedimented–Situated motivation, Özçalışkan et al. (2016) reported that, in co-speech gestures, congenitally blind and sighted participants of the same language share language-specific gesture patterns but differ from those who speak a different language. That is, blind participants adhere to language-specific co-gesture patterns, even though they have never seen cultural gestures before. However, the language-specific patterns do not affect any groups when using only silent gestures, suggesting that silent gestures follow a natural semantic organisation rather than being influenced by vision and language (Özçalışkan et al. 2018). For this reason, the current study focuses on signing and silent gesturing without incorporating cross-linguistic comparison.
2.4 Embodied motivations by exploratory procedures (EPs) and grasps
Since iconic S-signs are ultimately motivated by the sensorimotor (and emotional) system (e.g. Perniss and Vigliocco 2014, Zlatev et al. 2008, Zlatev 2018), I now elaborate on iconic motivation by retrieving a few findings from the studies related to cognition and the body.
A physical object has many properties, such as texture, size, shape, and weight. According to Lederman and Klatzky (1987), human hands purposively move and touch the object in a patterned manner to optimally gain desired information about its haptic properties; they classified these prototypical movement patterns as types of exploratory procedures, or EPs (Table 3). Also, hands may haptically explore the functions of the object (e.g. pushing a press) (ibid.).[7]
Table 3: Taxonomy of EPs for exploring haptic properties (Lederman and Klatzky 1987)

| EPs | Properties being explored |
|---|---|
| Lateral motion | Texture |
| Pressure | Hardness or softness |
| Static contact | Temperature |
| Unsupported holding | Weight |
| Enclosure | Global shape, volume |
| Contour following | Exact shape, volume |
Moreover, reaching and grasping are performed in different manners, sensitive to the given task (i.e. goal) and the physical properties of the object (e.g. Ansuini et al. 2006). Grasps may be categorised into three main types: precision, intermediate, and power (Feix et al. 2016): prototypically, the heavier or bigger the object, the more fingers and palm surface are recruited to handle it. While pinching a needle recruits the two fingertips of the first finger and thumb (precision grasp), effortfully hammering a nail recruits all fingers and the palm (power grasp); the intermediate grasp combines elements of both precision and power grasps, as in rotating a key (thumb on the first finger). These show the hand–goal–object interdependency: manual movements and positions correlate with the goal and the properties of an object.
Regarding Embodied–Situated motivation, it is easy to find that many gestures and signs are motivated by EPs. For example, the FinSL sign for ‘soft’ – pressing an object with all ten fingers – is iconic for the pressure type of EP. Moreover, an iconic grasp-like handshape (i.e. the handling substrategy) abductively indicates the object being held, thanks to the motivation by the hand–goal–object interdependency (see the embodied nature of gesture by Hassemer 2016).
Furthermore, Prinz (2013, xi) puts it in the foreword: “Upon seeing one surface of a ball, we may use tactile imagery to image its sphericality.” Thus, seeing an object tends to evoke haptic imagery without actually touching it. Applied here, seeing the surface of the hands presupposes their haptic aspects. This seems to enable haptic iconicity (see Section 4.2.2).
In sum, based on the literature and phenomenological reflections, since blind and sighted people share bodily senses and functions, they are expected to share iconic strategies. On the other hand, blind (silent) gesturers’ iconicity is expected to be motivated by their sensorimotor experience, and thus by proprioception (Embodied) rather than SL conventions (Sedimented), due to their lack of SL skills. Thus, the groups may differ depending on the processes of motivation and sedimentation, including linguistic, sensory, and intersubjective dimensions, as shown in the MSM (Section 2.1). Now, I proceed from the first-person conceptual analysis to the third-person experiment.
3 Methodology
The present study applies the non-linguistic stimulus-based elicitation method (Majid 2011) to collect iconic gestural descriptions from participant groups. While others using this method aim, for example, to examine general linguistic codability (i.e., how expressible each language is for differing semantic fields; e.g. Majid et al. 2018, Emmorey et al. forthcoming), this study, with a specific research design, focuses primarily on eliciting iconic expressions to gain insights into proprioceptive iconicity.
3.1 Participants
The participants, all legal adults (male = 57%; female = 43%; average age = 45.7), comprised 5 sighted deaf L1 signers (SS1–5) and 2 congenitally blind hearing gesturers (BG1–2). The participants were invited either directly or indirectly through organisations, with general information about the study provided. Background information on the participants was collected on site through a pre-task questionnaire. Communication with the signers was conducted in FinSL throughout the sessions, and with the gesturers through FinSL interpreters – used primarily during pre-task communication and after-task interviews – who were prepared using my advance materials.
To provide context, in Finland, with a population of approximately 5.5 million, there are about 3,000 deaf signers (The Finnish Association of the Deaf, n.d.). According to the Finnish Register of Visual Impairment Annual Statistics 2021, 22% of the 55,000 visually impaired people are blind, based on the definition by the World Health Organisation that their activities primarily rely on senses other than sight (Tolkkinen 2022).
Regarding the blind participants in this study, the first gesturer, BG1, is congenitally blind but had very poor sight until adolescence, primarily perceiving objects close to the face with blurry vision. Gesturer BG2 has only limited light vision. While he could not see any surface at all or say whether the lights were on in the room where the tasks were conducted, he could sometimes experience some brightness (e.g. the sun). Therefore, the results from gesturer BG1 must be interpreted with caution because he has visual memories that may influence the formation of iconicity. It is known that human vision and visual experiences develop significantly during the first months of life; the ability to perceive the three-dimensional world is typically reached by 6 months of age (see a review by Siu and Murphy 2018).
3.2 Apparatus
The sets of stimuli are physical, three-dimensional objects. All of these are haptically accessible to the group of blind participants and visually accessible to the other group. In addition, a benefit of using physical objects as stimuli is that all participants describe the exact same objects. In contrast, for example, a word list can elicit various objects based on participants’ denotative and connotative associations. Thus, using physical objects as stimuli makes descriptions between the groups comparable. The sets of stimuli are divided into two groups (a and b) based on Tasks 1 and 2, respectively:
a) 18 common manual household items that are typically used with the hands;
b) 12 pairs of geometric objects, ranging from more typical (e.g. sphere, cone, etc.) to less typical (e.g. a pentagonal pen holder).
I expect that a) the set of household items with familiar manual functions will elicit the acting strategy, and b) the set of geometric objects will elicit shape- and size-related strategies. In the latter set, the pairs of objects are expected to motivate comparison and therefore more detailed iconic descriptions.
3.3 Procedure
In both Tasks 1 and 2, participants (Figure 2a) from both groups were seated on a chair, with the sets of stimuli located on a table to their left (Figure 2b), taken from behind a visual barrier (Figure 2c). Participants were instructed to 1) briefly observe each stimulus on the table (about 5 s) before moving it out of sight behind another visual barrier (Figure 2d),[8] and then 2) describe it to me, a sighted deaf person (Figure 2e), with the use of intelligible expressions encouraged. They were asked to provide concise yet detailed descriptions so that any deaf person could conceive of the objects without seeing them. If needed, the participants were asked – in FinSL or via the interpreters – to describe in more detail. In Task 1, the participants were asked to describe what the household items are and to show how they are used. In Task 2, they were asked to describe the size and shape of the pairs of geometric objects. After the two task scenarios, I asked the participants debriefing questions, in FinSL or via the interpreters, to better understand their descriptions (Section 4.2.4). The three sessions were recorded using two video cameras (Figure 2f), and then annotated and analysed with the time-aligned video annotation software ELAN (Crasborn and Sloetjes 2008).

Figure 2: Illustration of the session set-up. (a) participant, (b) stimuli presented on the table, (c) stimuli hidden under the table before presentation, (d) stimuli re-hidden behind the barrier, (e) a sighted deaf researcher (myself), and (f) two cameras.
The set-up differed slightly between the groups: the sighted signers were asked only to see the stimuli, and the blind gesturers only to touch them. This ensured that the results of the sighted signers were not based on fresh haptic memories of the stimuli. In addition, an interpreter was seated behind each blind participant, near the table.
3.4 Annotation
Applying and modifying the annotation conventions established in the corpus project of FinSL (Salonen et al. 2019), the annotation proceeded in the general-to-specific loop shown below, to systematise the data and minimise the number of errors. Exceptionally, with the blind gesturers, the annotation started at the second step due to the lack of lexical signs. In the annotation template (Figure 3), the tiers are as follows: ID (i.e., the tier for form-based glosses), Right hand, Left hand, Both hands (as a single unit), Substrategies as child tiers of the hand tiers, Groups of stimuli (grouping tokens for each stimulus), and personal notes made by me. The last two tiers function as support for my procedural work. The list of ID glosses comes from the controlled vocabulary of ELAN, which is connected to and updated by the lexical database of the Finnish Signbank. Moreover, I also established my own offline controlled vocabulary for annotating specific semiotic types.
1. While annotating all signs at the ID tier, I labelled fully conventional signs with an ID gloss, partly or fully iconic tokens with a general iconic gloss (i.e., without specifying subtypes), indices with a general index gloss, and unidentifiable tokens with an unidentifiable gloss. In doing so, it was possible to distinguish iconic tokens from others systematically.
2. After that, at the separate hand tiers (left and right), fully conventional tokens (see Section 2.1) were labelled as non-iconic (i.e. not evoking iconicity); indices remained as general indices; and iconic tokens were labelled according to the types of iconic strategies. Iconic tokens whose iconic strategy was unclear were labelled as uncategorised for further rechecking and for the possibility of creating new types for them. More concretely, unclear iconic strategies are those that, for example, do not fit any existing category or are simply ambiguous.
3. The third tier, for both hands, was created by copying the annotations from the two hand tiers. This tier describes the hands as a single unit (i.e., regardless of the different strategies used by each hand) to make descriptive statistics sensible. Single units were typically annotated according to the dominant hand, which often contributes more significantly to forming the iconic strategy (e.g. tracing versus POR) and tends to exhibit a higher frequency of production.
4. At the child tiers of the hand tiers, the iconic tokens were labelled according to types of substrategies: handling, representing, or uncategorised.
5. With the help of the search systems of ELAN, I systematically checked the annotation consistency across tiers to find possible errors (e.g. empty or exceptional annotations in the search results).
6. While thoroughly rechecking the uncategorised iconic strategies in the annotated data, steps 2–4 were repeated in a loop.
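The cross-tier consistency check described above, performed manually with ELAN’s search systems, can also be approximated programmatically. The following sketch is purely illustrative: the tier structures and values are hypothetical simplifications (tuples of start time, end time, and annotation value), not the actual ELAN (.eaf) export format.

```python
# Illustrative sketch of a cross-tier consistency check on annotation data.
# Tiers are modelled as lists of (start_ms, end_ms, value) tuples; this is a
# hypothetical simplification, not the real ELAN export format.

def find_inconsistencies(id_tier, hand_tier):
    """Flag hand-tier annotations that are empty or lack an overlapping
    ID-tier gloss, mirroring the manual search for errors in ELAN."""
    problems = []
    for start, end, value in hand_tier:
        if not value.strip():
            problems.append((start, end, "empty annotation"))
            continue
        # An annotation is consistent only if some ID gloss overlaps it in time.
        overlaps = any(s < end and e > start for s, e, _ in id_tier)
        if not overlaps:
            problems.append((start, end, "no matching ID gloss"))
    return problems

# Toy example: one empty token and one token outside any ID-gloss span.
id_tier = [(0, 500, "ICONIC-GENERAL"), (600, 900, "INDEX-GENERAL")]
hand_tier = [(0, 500, "tracing"), (600, 900, ""), (1000, 1200, "acting")]
print(find_inconsistencies(id_tier, hand_tier))
```

Such a script would only supplement, not replace, the manual rechecking loop, since judging whether an unusual annotation is a genuine error still requires a human annotator.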

Figure 3: Illustration of tier dependencies in the ELAN template.
4 Results and discussion
The research question was whether proprioceptive feedback plays a role in the formation of iconicity, especially within the group of sighted signers. Shared similarities in iconic descriptions between the sighted and blind groups are hypothesised to suggest the presence of proprioceptive iconicity in the former group. The results are analysed using descriptive statistics followed by qualitative insights and discussions.
4.1 Descriptive statistics
The descriptive statistics presented below provide familiarity with the results and support the arguments outlined in Section 4. For clarity, frequency tables are included in Appendices A (covering all semiotic types) and B (specifically focusing on iconic substrategies). While Appendix A encompasses all semiotic types, Sections 4.2 and 4.3 strictly focus on iconic (sub)strategies, with numeric results based on these categories.
Generally, the results reveal both similarities and differences across the participants and groups. To provide an overview, the total production frequency – regardless of handedness, strategies, and tasks – amounts to 3,242 tokens in the group of sighted signers (avg. 648.4 per signer) and 252 tokens in the group of blind gesturers (avg. 126 per gesturer). Unsurprisingly, fully conventional tokens constitute 61.9% of all tokens in the group of signers and 0% in the group of gesturers. As mentioned in Section 2.1, most iconic signs fall on a gradient between non-conventional and partly conventional. However, the consideration of conventionality, including indexicality and unidentifiable cases, is outside the scope of the present study.
4.1.1 Quantising main iconic strategies
Turning to the iconic strategies, as expected, Task 1 tends to involve strategies that depict actions or functionalities related to the items, while Task 2 focuses more on strategies that depict the shape and size of geometric objects. However, exceptions were observed. For example, in Task 2, the acting strategy was used unexpectedly: a stimulus like an ice cream stick elicited the acting strategy to represent its commonly recognised specific shape. Additionally, because both groups typically combined strategies sequentially (Ortega and Özyürek 2020a), the frequencies of strategies are not mutually independent; however, this aspect is beyond the scope of the present study. It is important to note that not all participants used all strategy types; rather, there was variation in how they employed different strategies.
Considering the sighted group in Task 1 (as illustrated in Figure 4), the average frequencies and percentages (a specific iconic strategy per total number of iconic strategies) illustrate the predominance order of the types, from most to least frequent, as follows: acting (31.2; 42.6%), tracing (20; 27.3%), instrument (7.2; 9.8%), representing (7; 9.6%), measuring (6; 8.2%), being-pulled (0.6; 0.8%), assembling (0.6; 0.8%), locating (0.4; 0.5%), dividing (0.2; 0.3%), and no tokens for the rest.

Figure 4: Frequency diagram of iconic strategy types across sighted signers.
Task 2 shows a different predominance order, as follows (Figure 4): tracing (68.6; 65%), measuring (21.8; 20.6%), outlining (5.8; 5.5%), acting (4.4; 4.2%), assembling (1.8; 1.7%), locating (1; 0.9%), representing (0.8; 0.8%), emptying (0.8; 0.8%), dividing (0.4; 0.4%), instrument (0.2; 0.2%), and no tokens for the rest.
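For transparency, each percentage reported here is a strategy’s average frequency divided by the summed average over all iconic strategies in the task. A minimal sketch using the Task 1 figures for the sighted group (Python is used purely for illustration):

```python
# Sketch of the percentage computation for the sighted group in Task 1.
# Average frequencies per signer are taken from Section 4.1.1.
avg_freq = {
    "acting": 31.2, "tracing": 20.0, "instrument": 7.2,
    "representing": 7.0, "measuring": 6.0, "being-pulled": 0.6,
    "assembling": 0.6, "locating": 0.4, "dividing": 0.2,
}

# Each percentage = strategy average / summed average over all strategies.
total = sum(avg_freq.values())  # 73.2 average iconic tokens per signer
percentages = {k: round(100 * v / total, 1) for k, v in avg_freq.items()}
print(percentages["acting"], percentages["tracing"])  # 42.6 27.3
```

The same computation, applied per gesturer rather than as a group average, yields the blind-group figures reported below.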
Considering the blind group in Task 1 (Figure 5), since gesturers BG1 and BG2 varied greatly in overall production frequency and the sample size of the group was small, it is more illustrative to describe them separately. The predominance order of iconic strategies is roughly as follows: acting (BG1: 30; 30.6%; BG2: 26; 78.8%), tracing (BG1: 48; 49%; BG2: 2; 6.1%), and representing (BG1: 2; 2%; BG2: 2; 6.1%). In sum, while BG1 uses more tracing than acting, BG2 uses overwhelmingly more acting than any other strategy.

Figure 5: Frequency diagram of iconic strategy types across blind gesturers.
Again, Task 2 shows a different predominance order, as follows (Figure 5): tracing (BG1: 52; 78.8%; BG2: 28; 73.7%), measuring (BG1: 14; 21.2%; BG2: 3; 7.9%), outlining (BG1: 0; 0%; BG2: 5; 13.2%), representing (BG1: 0; 0%; BG2: 2; 5.3%), and no tokens for the rest. Thus, the two gesturers resemble each other in the percentage of tracing but differ slightly in the other types.
Appendix A – which is based on single whole units (Section 3.4) – does not show the frequency of the POR strategy because it occurs only in a subordinate role, mostly in the non-dominant hand. However, when considering the POR strategy in the left-hand tier using the ELAN search, the results show 191 tokens in the sighted group and 7 tokens in the blind group.
In conclusion, I present two points. First, the differing frequencies between the two groups, especially in the overall tokens and the diversity of iconic strategies, can be partially explained by the different semiotic repertoires across the groups, such as the habitual use of bodily expressions and linguistic skills in the sighted group.
Second, shared strategies were found in both groups: acting, tracing, representing, and POR in Task 1, as well as tracing, measuring, outlining, representing, and POR in Task 2. Importantly, BG2, with light vision, used all these shared strategies. Thus, since the blind gesturers do not share visual and linguistic resources with the sighted signers, the evidence of the shared strategies (i.e. acting, tracing, measuring, outlining, representing, and POR) supports the assumption that proprioception plays a role in iconicity in both groups.
4.1.2 Quantising substrategies
This section considers the frequency of substrategies (Section 2.2), regardless of tasks, based on Appendix B. Please note that the results in Appendix B are based on the substrategies produced by the dominant hand. Table 4 shows that the two groups roughly share the percentages of substrategies (specific substrategy per total substrategies).
Table 4: Overall frequencies of substrategy types in the two groups

|  | Non-handling | Handling | Representing | Uncategorised | Total |
|---|---|---|---|---|---|
| SS1–5 |  |  |  |  |  |
| Frequency | 2 | 521 | 98 | 262 | 884 |
| Percentage | 0.2 | 58.9 | 11.1 | 29.8 | 100.0 |
| BG1–2 |  |  |  |  |  |
| Frequency | 2 | 82 | 6 | 100 | 182 |
| Percentage | 1.1 | 43.2 | 3.2 | 52.6 | 100.0 |
For both groups, the most frequent substrategy is clearly handling, about half of the total. It predominates in the main acting, tracing, and measuring strategies (Appendix B). The representing substrategy has only 98 tokens (11.1%) in the sighted group and 6 tokens (3.3%) in the blind group, and these are mostly found in the main representing and instrument strategies (Appendix B). The rarest substrategy is non-handling, with 2 tokens (0.2%) in the sighted group and 2 tokens (1.1%) in the blind group. This substrategy is mostly observed when the participants pretend to sleep to indicate the pillow.
However, there is a substantial number of uncategorised substrategies. This is largely explained by the fact that several substrategies are quite ambiguous (see also Section 4.2.1). For example, tracing a wide surface with a flat palm is one such case: the flat palm can be interpreted equally as a human hand sliding on the surface (handling) or as the surface itself (representing). This issue raises future questions, such as whether those uncategorised substrategies are indeed ambiguous (or freely interpretable), or whether specific types could be identified by more carefully considering external factors such as human grasping and EPs. For comparison, according to Müller (2014, 1692), tracing with the index finger is schematised from the reenactment of finger-drawing on a surface, meaning that finger-drawing is a handling type rather than a representing or indexical one (see also Section 4.2.1).
In sum, the results preliminarily show that action-based iconicity is prevalent not only in the aspect of the main strategy (also reported by Ortega and Özyürek 2020a, b) but also in that of the substrategy. This implies that proprioception motivates both the main strategies and substrategies, often manifesting as bodily actions and grasp-like handshapes, respectively.
4.2 Qualitative descriptions
While both groups purposively use iconic bodily expressions, a surprisingly salient difference between them is articulatory. Both blind gesturers use less diverse handshapes, often a somewhat curled, loose handshape in which the thumb, first, and middle fingers touch each other. Interestingly, this contrasts with the findings by Iverson and Goldin-Meadow (1997), according to which blind and sighted children gesture in similar ways, including motions and handshapes. To speculate, the articulatory difference may arise from the non-habitual use of bodily expressions in adult blind gesturers or from individual variation by chance. Despite the lack of explanation, it may be useful to bear the articulatory difference in mind throughout the text. The following sections consider more specific types of iconic strategies.
4.2.1 Grasp-tracing and line-tracing
Surprisingly, the qualitative and quantitative use of tracing differs between the two groups, especially in the selection of substrategies. To clarify, grasp-tracing involves mimicking the action of grasping a cylindrical object (i.e. handling substrategy) and moving upward to outline its height (Figure 6a). On the other hand, line-tracing refers to extending a line in the air, possibly using the first finger, to form shapes (Figure 6b), although it remains an uncategorised substrategy.

Figure 6: Two kinds of tracing: (a) grasp-tracing and (b) line-tracing.
The results from both Tasks 1 and 2 show that while the sighted group (SS1–5) tended to utilise the ‘grasp-tracing’ strategy more (57.3% of all tracing tokens), the blind group (BG1–2) utilised it less (only 27.6%) (Table 5). Instead, the latter tended to utilise the ‘line-tracing’ strategy. Importantly, since line-tracing can be equally interpreted as acting (e.g. finger-drawing), representing (e.g. a pencil), or index (finger-pointing) – termed a pointed implement by Mandel (1977, 67) – it was grouped into the uncategorised type (39.4% for SS1–5; 72.4% for BG1–2).
Table 5: Frequencies and percentages of substrategy types with the main tracing strategy

| Tracing + substrategies | Frequency, SS1–5 | Frequency, BG1–2 | Percentage, SS1–5 | Percentage, BG1–2 |
|---|---|---|---|---|
| Handling | 246 | 32 | 57.3% | 27.6% |
| Uncategorised | 169 | 84 | 39.4% | 72.4% |
| Representing | 14 | 0 | 3.3% | 0.0% |
| Total | 429 | 116 | 100.0% | 100.0% |
The significant difference between the two substrategies lies in how each highlights a specific aspect of an object. Unlike the first finger, the grasp-like handshape carries additional information on three-dimensional properties (Hassemer and Winter 2016). As observed in my data, while the grasp-like handshape is sufficient to trace a three-dimensional thick ring with one or a few movements (Figure 6a), the first finger requires more repetition to achieve a similar outcome. For example, gesturer BG1 moved his first fingers through opposite curved paths with a repeated spiral motion to trace the thick ring (Figure 6b).
In this context, the advanced semiotic repertoire of the signers potentially explains the selection of these substrategies. However, the role of vision in this selection remains unclear. One possible explanation for the increased use of line-tracing in the blind group could be that their tracing is more strongly motivated by a specific EP, such as contour following (Section 2.4).
Finally, although the blind group notably used the grasp-tracing substrategy less frequently, they still employed it. For instance, gesturer BG2 traced a thick ring with a C-handshape, which effectively mimics grasping the ring. This demonstrates a clear motivation influenced by the power grasp (Section 2.4) and suggests that proprioception indeed plays a role in the formation of at least the handling substrategy.
4.2.2 Haptically representing
Regarding the representing (sub)strategy, one might argue that the upright finger (Figure 1b) looks like an ice cream stick, and this is solely a matter of visual iconicity. However, a first counterargument is that the blind group also used this strategy (also evidenced by Mamus et al. 2024), albeit with low average frequency: three tokens in Task 1 and one in Task 2 (Section 4.1). Nonetheless, the sighted group of signers also had a relatively low average frequency: 7 tokens in Task 1 and 0.8 tokens in Task 2.
The second argument is that gesturer BG2, with light vision, used the representing strategy for the upright ice cream stick shown above. How is this possible? The answer, inspired by phenomenological insights (Merleau-Ponty 1945, Ratcliffe 2013), is that our first-person body is not only a sensing body but also a material body (Leib and Körper). Thus, through the lens of the MSM, we can perceive the haptic properties of our material body (Embodied), and these contribute to further iconic motivations (Situated). More illustratively, gesturer BG2 straightened his finger to create an iconic expression for the upright stick, thanks to the haptic resemblance between his upright finger and the upright stick.
A case from the data of the present study further supports this. In this case, when signer SS5 described the use of a hammer, his hand literally grasped the straight first and middle fingers of the non-dominant hand to show how the handle is held. This physical contact includes immediate haptic perception and therefore purposive, immediate haptic iconicity. In other words, straight fingers immediately haptically represent the handle of the hammer.
One may argue that signers do not typically touch their body parts. However, the bodies are still potentially touchable, presupposing the potential for haptic properties and thus haptic iconicity. As Ratcliffe (2013) phenomenologically argues, touch primarily contributes to our experience of the world as concrete and tangible, as well as to the sense of being within a world that has the potential to affect us and be affected by us; other senses presuppose these experiences, such as seeing a thing as being there and touchable. Thus, while we treat our bodies as touchable solids, we treat several iconic objects similarly as touchable. For example, hands do not physically collide to iconically show that a person does not crash into a wall. This lack of immediate touch contact is still haptically significant (Ratcliffe 2013). Therefore, haptics is much more ubiquitous in both sensory perception and iconic expressions than we may pre-reflectively think. We perceive the tangible world, and we iconically express it accordingly.
The findings also prompt us to reconsider our terminology. It could be conceptually clearer to distinguish between haptic and proprioceptive iconicity, emphasising different aspects of resemblance: external objects and internal body sensations. Therefore, the term ‘somatosensory iconicity’ could specifically denote iconicity based on general bodily sensations.
4.2.3 Sensory and semiotic ratios in articulation
After the previous section, one might enquire about the role of vision in iconicity. As mentioned earlier, articulation in signers depends on both visual and proprioceptive feedback (Emmorey et al. 2009). I now argue that both types of feedback likely contribute to the formation of iconicity in signers, as evidenced by the outlining strategy observed exclusively in Task 2. In this strategy, hands are configured into a static shape to resemble the shape and size of the object.
Regarding the blind group, only gesturer BG2, who has some light vision, used this outlining strategy. In all five instances where outlining was used, his handshape closely resembled the literal grasp of the object (i.e. handling substrategy). When BG2 described the palm-sized, roof-shaped object, he started with the line-tracing strategy to draw its contour and continued with the grasp-like outlining strategy to iconically show its palm-sized property (Figure 7a) while saying aloud ‘size’.[9] To conclude, the blind gesturer’s handshape must be motivated by a certain grasp type (see Section 2.4).

Figure 7: Different kinds of outlining. (a) simple grasp-like handshape, (b) rounded pinch-like F-handshape, (c) two joint grasp-like C-handshapes, and (d) oval shape shown by two curved arms.
Whereas the outlining strategy is similar across the two groups in many respects, the groups also differ from each other. In the first of three examples, the sighted signers used a simple grasp-like handshape identical to the one used by BG2 (Figure 7a). Second, although their handshapes are not visually identical to the simple grasp in many cases, they still form visually sharp figures, for example, showing a small circle with a rounded F-handshape (Figure 7b) or a big circle with two joint C-handshapes (Figure 7c). While the F- and C-handshapes resemble precision and power grasps, respectively (Section 2.4), the contours of the curved palms in these handshapes visually look like a clear circle. Third, in some cases the handshape does not resemble a grasping hand at all: when outlining the oval-shaped lid of a box, signer SS2 positioned his two hands in a way that the space between them visually resembled the oval shape (Figure 7d). Remarkably, the signers occasionally gazed at their hands (i.e., visual feedback) to adjust them and create visually sharp figures. In sum, there are four kinds of outlining (Figure 7a–d).
In sum, it can be safely concluded that there is a gradual ratio between proprioceptive and visual iconicity. On the one hand, the sighted signers outline shapes by relying on varying degrees of the two kinds of sensory feedback, ranging from more proprioceptive iconicity (the simple grasp) to less or no proprioceptive iconicity (the curved arms). On the other hand, congenitally blind gesturers probably rely solely on proprioceptive feedback and thus proprioceptive iconicity. This finding recalls Mitchell’s (2005) critique of the notion of ‘pure’ visual media as inexact and misleading. Instead, he argues that there is a sensory/semiotic ratio: visual media are mixed media, including other senses and semiotic grounds. While his emphasis was on the observer’s perspective (i.e. simply looking at media), my articulatory perspective (i.e. producing it) provides a complementary insight into Mitchell’s concept.
This conclusion also questions the dualist notion of sensory iconicity, in which (e.g. Mesch et al. 2015) tactile iconicity and visual iconicity are assigned exclusively to blind and sighted individuals, respectively. My findings support a non-dualist, gradual notion of visual and proprioceptive iconicity, at least for sighted signers.
4.2.4 What do interviews tell us?
After the two task scenarios, semi-structured interview questions were presented, aiming to better understand the participants’ minds. The first questions enquired about the participants’ overall experience. Their answers mostly concerned the difficulty level of the tasks and their attempts to invent ways to describe the stimuli (i.e. task-oriented). For both groups, the second task was more difficult than the first, requiring quite complex, detailed descriptions of objects that go beyond everyday communication. Interestingly, both blind participants reported that comparing the sizes of two objects was the most difficult part because the size ratio was completely confusing to them. This requires further research.
The second question was: On what basis did you describe the objects? What did you focus on when describing the objects? All participants focused on the functions and appearances (e.g. how it looks) of household items, as well as the size and shape of objects. Interestingly, they did not mention articulatory or iconic dimensions in their descriptions. Thus, their descriptions were quite object-oriented, supporting the Sonessonian notion of asymmetric expression–object focus (Sections 2.1 and 2.2).
In the third question, participants were asked to describe the size and shape of the ice cream stick and the bucket and then to observe the handshapes they used – usually resembling pinch and power grasps, respectively. They were then asked whether the handshapes could be used interchangeably for the stimuli (the pinch grasp for the bucket and the power grasp for the stick). The participants typically had a puzzled look and answered that it was just intuitively wrong because the big handshape fits the bucket and the small handshape fits the stick. However, they were unable to explain this further.
In the fourth question, I started with a brief introduction of ‘SL as visual language’ and its rich iconicity, and then I directly asked the participants: How is it possible that congenitally blind and sighted individuals iconically describe in a similar way?[10] Participants theorised variously as follows: 1) an object is accessible visually for sighted individuals and haptically for blind individuals, respectively and exclusively. 2) Blind individuals have been taught visual concepts by sighted society. 3) The blind gesturers also reported having practical experience with hand tools – usually guided by sighted individuals – and therefore knowing how to describe them.
After the fourth question, the research’s assumption – that the shared iconic descriptions may also arise from shared bodily feelings – was revealed to the participants. The participants responded variously, ranging from strong confirmation (e.g. ‘true!’) (SS4) to moderate confirmation (SS3; BG1–2) and uncertainty (SS2, SS5). Signer SS3 agreed and found it interesting that our hands touch objects in certain ways and that our hands describe them similarly. Gesturers BG1–2 acknowledged that whether one sees or not, manipulating things involves touch. Participants SS2 and SS5 were more uncertain, reporting that ‘I may feel it, but I do not know. I may be wrong.’ All the participants reported that they had never thought about or noticed this before.
To summarise, the participants typically had poor awareness of their bodily senses during and after the tasks, probably due to our object-oriented (and task-oriented) tendency. Their uncertain responses may stem from an incomplete awareness of bodily experience, which has remained largely implicit, or perhaps from difficulties in replacing their visuocentric beliefs with new information (i.e. cognitive dissonance), especially in the short term. Remarkably, while their responses varied, none of the participants denied the role of bodily senses.
4.3 Brief reflections on limitations
While the current study has been productive, there are limitations. First, due to space limitations, the current study focused exclusively on manual iconic strategies, while acknowledging that both groups also used non-manual articulators (e.g. bodily reenactments and iconic mouthings). Their contributions to (somatosensory) iconicity, both independently and in relation to manual articulators, deserve greater attention in the future.
Second, the categorisation of iconic strategies here is phenomenologically sensitive to different iconic conceptualisations, extending the taxonomy rather than bundling some types into one and missing their uniqueness. However, the taxonomy is still incomplete. In some cases it is unclear how tokens should be categorised, partially due to the fuzzy boundaries between the types, as well as the sample size of the study. Moreover, while the categorisation of substrategies provided a detailed explanation, especially for the handling substrategy, a large portion of the substrategies remained uncategorised due to their ambiguity (see also Section 4.2.1). On the other hand, such sensitive categorisation is the right way forward, as it provides a nuanced understanding of the diverse and dynamic nature of iconicity, despite the challenge of handling difficult and fuzzy cases. It is important to remember that the task of qualitative research is not only to deepen the understanding of phenomena but also to accommodate and tolerate the complexity inherent in these phenomena (Juhila 2021).
Third, the elicitation method in the current study is still quite visuocentric. In the tasks, the blind gesturers were asked to describe the stimuli to a sighted deaf individual, likely adjusting their descriptions to ‘fit’ a sighted individual’s understanding. We have not yet explored alternative set-ups in which participants with different sensory and linguistic resources are combined in various ways. In fact, a blind–blind situation with physical contact can enable unique co-formed patterns (Mesch et al. 2015, Edwards and Brentari 2020). The four hands and legs of two deafblind signers can be ‘combined’ into an iconic whole (e.g. the interlocutor’s hand as a tree). Exploring different set-ups could provide more nuanced and convergent findings, helping to untangle the complex interactions of sensory, linguistic, and intersubjective factors.
Fourth, the small sample of five signers and two blind gesturers precludes inferential statistics and hinders generalisation. Partly owing to Finland’s relatively small population (5.5 million), it is challenging to recruit a diverse range of voluntary participants, particularly those with disabilities. To address this issue, international studies with larger samples are needed. Nevertheless, the qualitative, comparative approach has allowed for a more nuanced understanding of iconicity, and the current results can be generalised to some extent, as they align with findings from several studies on similar topics.
5 Conclusions
The current research investigates whether proprioception contributes to the formation of iconicity by comparing the iconic expressions of sighted signers and congenitally blind gesturers. Similarities between these groups are assumed to indicate a role for proprioception in iconicity formation. The main results suggest that proprioception does play a role, although the two groups differ in certain respects.
Both groups share six types of iconic strategies (i.e. acting, tracing, measuring, outlining, representing, and POR), indicating the existence of proprioceptive iconicity. I have also shown that several iconic expressions are strongly motivated by the processes through which we grasp objects and haptically explore the world. In addition, even the representing (sub)strategy, which may seem primarily visual, necessarily involves (potential) haptic iconicity: because both the hands and the world are tangible, touchable, and sometimes touched, the hands iconically represent the world on the basis of these experiences. In sum, somatosensory iconicity has both proprioceptive and haptic aspects. To fully explain the formation and choice of iconic strategies, the somatosensory system must therefore be considered; otherwise, a gap remains in the explanation.
Moreover, since sighted signers utilise both visual and proprioceptive feedback to form iconicity, their iconicity can differ qualitatively from that of the blind group in some respects. Sighted people can thus adjust their iconicity on the basis of the two types of feedback at different sensory/semiotic ratios (Mitchell 2005). That is, sensory feedback can be regarded as a resource for meaning-making. This experimentally supports and complements my earlier semiotic-phenomenological findings (Keränen 2023).
Although outside the scope of this study, other factors, such as semiotic repertoires (e.g. habitual use of bodily expression and FinSL conventions), appear to contribute significantly to various aspects, including the diversity of iconic strategy types and their efficient use. Through the lens of the MSM, the process of iconic description is better understood as a complex nexus of bodily (Embodied) and socially conventional (Sedimented) dimensions, emerging within an immediate situation (Situated).
By adopting the articulatory perspective, my study has experimentally and phenomenologically revealed that haptics and proprioception are integral parts of semiotic signs. In contrast, most societies and studies on meaning-making tend to favour the observer’s perspective. Consequently, the linguistic or semiotic sign has predominantly been conceived in terms of discrete senses (sign as visual, word as auditory, tactile sign as haptic, etc.). By adopting this perspective alone, we also risk maintaining divisions based on sensory capacities (e.g. deaf, hearing, and blind) and semiotic systems (e.g. signed and spoken), rather than recognising the resources shared among them. Furthermore, computer vision, which identifies gestural expressions, may partially exemplify the observer’s bias in the context of technology.[11] In sum, adopting the articulatory perspective is more than just a marginal shift in perspective.
Biases – at least visuocentrism, observism, and dualism – that lead us to overlook the somatosensory dimension persist, even among scholars studying blind persons. For both blind and sighted gesturers (Özçalışkan et al. 2018, 2024), co-speech gestures tend to follow language-specific patterns, whereas silent gestures do not. It has been concluded that silent gestures adhere neither to language-specific patterns nor to vision, but rather to a ‘natural semantic organisation’ (Özçalışkan et al. 2018) or ‘language-general patterns’ (Özçalışkan et al. 2024), without any mention of the possible role of the somatosensory sense in silent gesture. Additionally, some scholars (Mesch et al. 2015) seem to reserve visual and tactile iconicity exclusively for sighted and blind signers, respectively. It seems that human beings are better at exclusion than at inclusion (Radman 2013). While such biases are understandable, we must attempt to overcome them to deepen our understanding of iconicity and meaning-making.
To conclude, this study marks just the beginning, raising many further questions in the fields of SL and gesture studies. As reviewed, the novel notion of iconicity and semiotic signs has a significant potential impact on both science and society. This should encourage both fields to recognise this overlooked aspect and to make theoretical and practical progress.
Abbreviations
- BG: Blind gesturer
- EPs: Exploratory procedures
- MSM: Motivation and sedimentation model
- POR: Point of reference
- SL: Sign language
- SS: Sighted signer
Acknowledgements
My deepest gratitude goes to my supervisors, Tommi Jantunen and Urho Määttä, for their guidance and support throughout this study. I also extend my heartfelt thanks to Annika Schiefner, Asifa Majid, Gerardo Ortega, Göran Sonesson (rest in peace), Jamin Pelkey, Johanna Mesch, Jordan Zlatev, Laura Kanto, Lari Vainio, and Terra Edwards for their invaluable thoughts and shared resources. Special thanks to Lauri Lehenkari, a deaf carpenter, for his advice on preparing the stimulus materials. My PhD study was generously funded by the Finnish Cultural Foundation (skr.fi), and the language checking was funded by the project (339268) of the Research Council of Finland (aka.fi).
- Funding information: The work was financed by the Finnish Cultural Foundation (www.skr.fi).
- Author contributions: The author confirms sole responsibility for the conception of the study, the presented results, and the preparation of the manuscript.
- Conflict of interest: The author states no conflict of interest.
- Data availability statement: Data sharing is not applicable to this study due to personal data protection. The author can nevertheless be contacted for inquiries concerning the data.
Appendix A: Table of frequency of main iconic strategies in Task 1 (describing household items) and Task 2 (describing geometric objects)
The first row of each task’s table lists the participants: sighted signers (SS1–5) and blind gesturers (BG1–2). The first column presents the fully conventional and indexical strategies, as well as the iconic strategies: acting, tracing, instrument, representing, measuring, being-pulled, assembling, locating, dividing, emptying, outlining, and POR (Section 2.2). The remaining rows consist of iconic signs with uncategorised strategies and fully unidentifiable types. Each row shows the frequency of a semiotic strategy for each participant, and the ‘Total’ row displays each participant’s total frequency. Additionally, the row ‘Tasks 1 & 2’ displays the total frequencies for both tasks.
Task 1 | SS1 | SS2 | SS3 | SS4 | SS5 | BG1 | BG2 |
---|---|---|---|---|---|---|---|
Fully conventional | 400 | 184 | 122 | 236 | 192 | 0 | 0 |
Indexical | 47 | 23 | 21 | 43 | 56 | 0 | 1 |
Acting | 33 | 37 | 21 | 37 | 28 | 30 | 26 |
Tracing | 26 | 32 | 8 | 8 | 26 | 48 | 2 |
Instrument | 2 | 0 | 10 | 17 | 7 | 0 | 0 |
Representing | 9 | 12 | 4 | 7 | 3 | 2 | 2 |
Measuring | 2 | 3 | 0 | 5 | 20 | 0 | 0 |
Being-pulled | 0 | 0 | 1 | 1 | 1 | 0 | 0 |
Assembling | 0 | 0 | 1 | 0 | 2 | 0 | 0 |
Locating | 1 | 0 | 0 | 0 | 1 | 0 | 0 |
Dividing | 0 | 0 | 0 | 1 | 0 | 0 | 0 |
Emptying | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Outlining | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
POR | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Uncategorised | 4 | 0 | 1 | 5 | 1 | 5 | 1 |
Unidentifiable | 10 | 6 | 0 | 17 | 9 | 13 | 1 |
Total | 534 | 297 | 189 | 377 | 346 | 98 | 33 |
Task 2 | SS1 | SS2 | SS3 | SS4 | SS5 | BG1 | BG2 |
---|---|---|---|---|---|---|---|
Fully conventional | 280 | 269 | 124 | 65 | 135 | 0 | 0 |
Indexical | 6 | 5 | 4 | 5 | 19 | 0 | 0 |
Acting | 1 | 4 | 4 | 6 | 7 | 0 | 0 |
Tracing | 65 | 99 | 72 | 48 | 59 | 52 | 28 |
Instrument | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
Representing | 2 | 0 | 0 | 2 | 0 | 0 | 2 |
Measuring | 14 | 31 | 7 | 31 | 26 | 14 | 3 |
Being-pulled | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Assembling | 5 | 2 | 0 | 1 | 1 | 0 | 0 |
Locating | 2 | 1 | 2 | 0 | 0 | 0 | 0 |
Dividing | 0 | 0 | 0 | 2 | 0 | 0 | 0 |
Emptying | 1 | 1 | 2 | 0 | 0 | 0 | 0 |
Outlining | 5 | 10 | 4 | 6 | 4 | 0 | 5 |
POR | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Uncategorised | 1 | 1 | 0 | 1 | 0 | 2 | 0 |
Unidentifiable | 22 | 9 | 12 | 4 | 9 | 12 | 3 |
Total | 404 | 432 | 231 | 171 | 261 | 80 | 41 |
Tasks 1 & 2 | SS1 | SS2 | SS3 | SS4 | SS5 | BG1 | BG2 |
---|---|---|---|---|---|---|---|
Total | 938 | 729 | 420 | 548 | 607 | 178 | 74 |
Appendix B: Table of frequency of iconic substrategies in Tasks 1 and 2
In both tasks, the table is divided into two groups of participants (SS1–5; BG1–2). The first column lists the types of main iconic strategies. The remaining columns give, for each group, the frequencies of the iconic substrategies: non-handling (nh), handling (ha), representing (re), and uncategorised (un). Thus, the table displays the frequencies of substrategies for each main iconic strategy. Additionally, the row ‘Tasks 1 & 2’ displays the total frequencies for both tasks.
Task 1 | SS1–5: nh | ha | re | un | BG1–2: nh | ha | re | un |
---|---|---|---|---|---|---|---|---|
Acting | 1 | 151 | 0 | 0 | 2 | 40 | 0 | 0 |
Tracing | 0 | 58 | 9 | 21 | 0 | 16 | 0 | 32 |
Instrument | 0 | 0 | 34 | 0 | 0 | 0 | 0 | 0 |
Representing | 0 | 0 | 36 | 0 | 0 | 0 | 4 | 0 |
Measuring | 0 | 14 | 0 | 16 | 0 | 0 | 0 | 0 |
Being-pulled | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 |
Assembling | 0 | 0 | 2 | 3 | 0 | 0 | 0 | 0 |
Locating | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 |
Dividing | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
Emptying | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Outlining | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
POR | 0 | 5 | 1 | 1 | 0 | 0 | 0 | 0 |
Uncategorised | 0 | 3 | 0 | 7 | 0 | 0 | 0 | 6 |
Total | 1 | 234 | 82 | 50 | 2 | 56 | 4 | 38 |
Task 2 | SS1–5: nh | ha | re | un | BG1–2: nh | ha | re | un |
---|---|---|---|---|---|---|---|---|
Acting | 0 | 22 | 0 | 0 | 0 | 0 | 0 | 0 |
Tracing | 0 | 188 | 5 | 148 | 0 | 16 | 0 | 52 |
Instrument | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 |
Representing | 0 | 0 | 3 | 0 | 0 | 0 | 2 | 0 |
Measuring | 0 | 52 | 0 | 48 | 0 | 6 | 0 | 7 |
Being-pulled | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Assembling | 0 | 3 | 5 | 1 | 0 | 0 | 0 | 0 |
Locating | 0 | 4 | 0 | 0 | 0 | 0 | 0 | 0 |
Dividing | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 0 |
Emptying | 1 | 0 | 0 | 2 | 0 | 0 | 0 | 0 |
Outlining | 0 | 15 | 1 | 11 | 0 | 3 | 0 | 1 |
POR | 0 | 2 | 0 | 0 | 0 | 1 | 0 | 0 |
Uncategorised | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 2 |
Total | 1 | 287 | 16 | 213 | 0 | 26 | 2 | 62 |
Tasks 1 & 2 | SS1–5: nh | ha | re | un | BG1–2: nh | ha | re | un |
---|---|---|---|---|---|---|---|---|
Total | 2 | 521 | 98 | 263 | 2 | 82 | 6 | 100 |
References
Ansuini, Caterina, Marco Santello, Stefano Massaccesi, and Umberto Castiello. 2006. “Effects of End-Goal on Hand Shaping.” Journal of Neurophysiology 95 (4): 2456–65. 10.1152/jn.01107.2005.
Blomberg, Johan and Jordan Zlatev. 2021. “Metalinguistic Relativity: Does One’s Ontology Determine One’s View on Linguistic Relativity?” Language & Communication 76 (January): 35–46. 10.1016/j.langcom.2020.09.007.
Calbris, Geneviève. 1990. The Semiotics of French Gestures. Advances in Semiotics. Bloomington: Indiana University Press.
Chandler, Daniel and Rod Munday. 2020. “Ocularcentrism.” In A Dictionary of Media and Communication, 3rd ed. Oxford: Oxford University Press. 10.1093/acref/9780198841838.001.0001.
Checchetto, Alessandra, Carlo Geraci, Carlo Cecchetto, and Sandro Zucchi. 2018. “The Language Instinct in Extreme Circumstances: The Transition to Tactile Italian Sign Language (LISt) by Deafblind Signers.” Glossa: A Journal of General Linguistics 3 (1): 66. 10.5334/gjgl.357.
Cormier, Kearsy, David Quinto-Pozos, Zed Sevcikova, and Adam Schembri. 2012. “Lexicalisation and De-Lexicalisation Processes in Sign Languages: Comparing Depicting Constructions and Viewpoint Gestures.” Language & Communication 32 (4): 329–48. 10.1016/j.langcom.2012.09.004.
Cormier, Kearsy, Sandra Smith, and Zed Sevcikova-Sehyr. 2015. “Rethinking Constructed Action.” Sign Language & Linguistics 18 (2): 167–204. 10.1075/sll.18.2.01cor.
Coseriu, Eugenio. 1985. “Linguistic Competence: What Is It Really?” The Modern Language Review 80 (4): xxv–xxxv. 10.2307/3729050.
Coseriu, Eugenio. 2000. “The Principles of Linguistics as a Cultural Science.” Transylvanian Review 1 (9): 108–15.
Crasborn, Onno and Han Sloetjes. 2008. “Enhanced ELAN Functionality for Sign Language Corpora.” In Proceedings of LREC 2008, Sixth International Conference on Language Resources and Evaluation. Nijmegen, Netherlands: Max Planck Institute for Psycholinguistics. https://archive.mpi.nl/tla/elan.
Cutler, Anne, Alan Allport, Wolfgang Prinz, and Eckart Scheerer. 1987. “Speaking for Listening.” In Language Perception and Production: Relationships between Listening, Speaking, Reading and Writing, 23–40. London: Academic Press. https://repository.ubn.ru.nl/handle/2066/15704.
Devylder, Simon and Jordan Zlatev. 2020. “Cutting and Breaking Metaphors of the Self and the Motivation & Sedimentation Model.” In Figurative Meaning Construction in Thought and Language, edited by Annalisa Baicchi, Vol. 9, 254–81. Figurative Thought and Language. Amsterdam: John Benjamins Publishing Company. 10.1075/ftl.9.11dev.
Dingemanse, Mark. 2013. “Ideophones and Gesture in Everyday Speech.” Gesture 13 (2): 143–65. 10.1075/gest.13.2.02din.
Donald, Merlin. 1998. “Mimesis and the Executive Suite: Missing Links in Language Evolution.” In Approaches to the Evolution of Language: Social and Cognitive Biases, edited by James R. Hurford, Michael Studdert-Kennedy, and Chris Knight, 44–67. Cambridge, UK: Cambridge University Press.
Dudis, Paul G. 2004. “Body Partitioning and Real-Space Blends.” Cognitive Linguistics 15 (2): 223–38. 10.1515/cogl.2004.009.
Edwards, Terra and Diane Brentari. 2020. “Feeling Phonology: The Conventionalization of Phonology in Protactile Communities in the United States.” Language 96 (4): 819–40. 10.1353/lan.0.0248.
Emmorey, Karen. 2005. “Signing for Viewing: Some Relations between the Production and Comprehension of Sign Language.” In Twenty-First Century Psycholinguistics: Four Cornerstones, edited by Anne Cutler, 293–309. Mahwah, NJ: Lawrence Erlbaum Associates, Inc. Publishers.
Emmorey, Karen. 2014. “Iconicity as Structure Mapping.” Philosophical Transactions of the Royal Society B: Biological Sciences 369 (1651): 20130301. 10.1098/rstb.2013.0301.
Emmorey, Karen, Rain Bosworth, and Tanya Kraljic. 2009. “Visual Feedback and Self-Monitoring of Sign Language.” Journal of Memory and Language 61 (3): 398–411. 10.1016/j.jml.2009.06.001.
Emmorey, Karen, Brenda Nicodemus, and Lucinda O’Grady. Forthcoming. “The Language of Perception in American Sign Language.” In Oxford Handbook of the Languages of Perception, edited by Asifa Majid and Stephen C. Levinson. Oxford, UK: Oxford University Press. https://psyarxiv.com/ed9bf/.
Feix, Thomas, Javier Romero, Heinz-Bodo Schmiedmayer, Aaron M. Dollar, and Danica Kragic. 2016. “The GRASP Taxonomy of Human Grasp Types.” IEEE Transactions on Human-Machine Systems 46 (1): 66–77. 10.1109/THMS.2015.2470657.
Frishberg, Nancy. 1975. “Arbitrariness and Iconicity: Historical Change in American Sign Language.” Language 51 (3): 696. 10.2307/412894.
Gallagher, Shaun and Dan Zahavi. 2012. The Phenomenological Mind, 2nd ed. London: Routledge. 10.4324/9780203126752.
Gentner, Dedre and Arthur B. Markman. 1997. “Structure Mapping in Analogy and Similarity.” American Psychologist 52 (1): 45–56. 10.1037/0003-066X.52.1.45.
Hassemer, Julius. 2016. “Towards a Theory of Gesture Form Analysis. Imaginary Forms as Part of Gesture Conceptualisation, with Empirical Support from Motion-Capture Data.” PhD diss., RWTH Aachen University.
Hassemer, Julius and Bodo Winter. 2016. “Producing and Perceiving Gestures Conveying Height or Shape.” Gesture 15 (3): 404–24. 10.1075/gest.15.3.07has.
Hinton, Leanne, Johanna Nichols, and John J. Ohala, eds. 2006. Sound Symbolism. Digitally printed 1st paperback version. Cambridge, UK: Cambridge University Press.
Hodge, Gabrielle and Lindsay Ferrara. 2022. “Iconicity as Multimodal, Polysemiotic, and Plurifunctional.” Frontiers in Psychology 13 (June): 808896. 10.3389/fpsyg.2022.808896.
Husserl, Edmund. 1901. Logical Investigations, edited by Dermot Moran, translated by John N. Findlay. Reprinted. Vol. 1. International Library of Philosophy. London: Routledge.
Huston, Stephen J. and Vivek Jayaraman. 2011. “Studying Sensorimotor Integration in Insects.” Current Opinion in Neurobiology 21 (4): 527–34. 10.1016/j.conb.2011.05.030.
Hwang, So-One, Nozomi Tomita, Hope Morgan, Rabia Ergin, Deniz İlkbaşaran, Sharon Seegers, Ryan Lepic, and Carol Padden. 2017. “Of the Body and the Hands: Patterned Iconicity for Semantic Categories.” Language and Cognition 9 (4): 573–602. 10.1017/langcog.2016.28.
Iverson, Jana M. and Susan Goldin-Meadow. 1997. “What’s Communication Got to Do with It? Gesture in Children Blind from Birth.” Developmental Psychology 33 (3): 453–67. 10.1037/0012-1649.33.3.453.
Jakobson, Roman. 1965. “Quest for the Essence of Language.” Diogenes 13 (51): 21–37. 10.1177/039219216501305103.
Johnston, Trevor. 2013. “Towards a Comparative Semiotics of Pointing Actions in Signed and Spoken Languages.” Gesture 13 (2): 109–42. 10.1075/gest.13.2.01joh.
Johnston, Trevor. 2019. “Auslan Corpus Annotation Guidelines.” Centre for Language Sciences, Department of Linguistics, Macquarie University. https://media.auslan.org.au/django-summernote/2020-09-04/69f180d0-04bf-4831-b59c-0b0b8f07cbb5.pdf.
Johnston, Trevor A. and Adam Schembri. 2007. Australian Sign Language (Auslan): An Introduction to Sign Language Linguistics. Cambridge, UK: Cambridge University Press. 10.1017/CBO9780511607479.
Juhila, Kirsi. 2021. “Laadullisen Tutkimuksen Ominaispiirteet [Characteristics of Qualitative Research].” In Laadullisen Tutkimuksen Verkkokäsikirja [Online Handbook for Qualitative Research], edited by Jaana Vuori. Tampere: Vastapaino. https://www.fsd.tuni.fi/fi/palvelut/menetelmaopetus/kvali/mita-on-laadullinen-tutkimus/laadullisen-tutkimuksen-ominaispiirteet/.
Keränen, Jarkko. 2021. “Iconic Strategies in Lexical Sensory Signs in Finnish Sign Language.” Cognitive Semiotics 14 (2): 163–87. 10.1515/cogsem-2021-2042.
Keränen, Jarkko. 2023. “Cross-Modal Iconicity and Indexicality in the Production of Lexical Sensory and Emotional Signs in Finnish Sign Language.” Cognitive Linguistics 34 (3–4): 333–69. 10.1515/cog-2022-0070.
Konderak, Piotr. 2018. Mind, Cognition, Semiosis: Ways to Cognitive Semiotics. Lublin: Maria Curie-Skłodowska University Press.
Lederman, Susan J. and Roberta L. Klatzky. 1987. “Hand Movements: A Window into Haptic Object Recognition.” Cognitive Psychology 19 (3): 342–68. 10.1016/0010-0285(87)90008-9.
Lederman, Susan J. and Roberta L. Klatzky. 2009. “Haptic Perception: A Tutorial.” Attention, Perception & Psychophysics 71 (7): 1439–59. 10.3758/APP.71.7.1439.
Liddell, Scott K. 2003. Grammar, Gesture, and Meaning in American Sign Language. Cambridge, UK: Cambridge University Press. 10.1017/CBO9780511615054.
Majid, Asifa. 2011. A Guide to Stimulus‐Based Elicitation for Semantic Categories. Oxford, UK: Oxford University Press. 10.1093/oxfordhb/9780199571888.013.0003.
Majid, Asifa, Seán G. Roberts, Ludy Cilissen, Karen Emmorey, Brenda Nicodemus, Lucinda O’Grady, Bencie Woll, et al. 2018. “Differential Coding of Perception in the World’s Languages.” Proceedings of the National Academy of Sciences 115 (45): 11369–76. 10.1073/pnas.1720419115.
Mamus, Ezgi, Laura J. Speed, Gerardo Ortega, Asifa Majid, and Asli Özyürek. 2024. “Gestures Reveal How Visual Experience Shapes Concepts in Blind and Sighted Individuals.” In 14th International Conference on Iconicity in Language and Literature (ILL14). Catania, Italy, May 30–June 1, 2024.
Mandel, Mark. 1977. “Iconic Devices in American Sign Language.” In On the Other Hand: New Perspectives on American Sign Language, edited by Lynn A. Friedman. New York: Academic Press.
Margiotoudi, Konstantina and Friedemann Pulvermüller. 2020. “Action Sound–Shape Congruencies Explain Sound Symbolism.” Scientific Reports 10 (1): 12706. 10.1038/s41598-020-69528-4.
Merleau-Ponty, Maurice. 1945. Phenomenology of Perception, translated by Donald A. Landes. Abingdon, Oxon: Routledge.
Mesch, Johanna, Eli Raanes, and Lindsay Ferrara. 2015. “Co-Forming Real Space Blends in Tactile Signed Language Dialogues.” Cognitive Linguistics 26 (2): 261–87. 10.1515/cog-2014-0066.
Mitchell, William J. T. 2005. “There Are No Visual Media.” Journal of Visual Culture 4 (2): 257–66. 10.1177/1470412905054673.
Moriarty, Erin and Annelies Kusters. 2021. “Deaf Cosmopolitanism: Calibrating as a Moral Process.” International Journal of Multilingualism 18 (2): 285–302. 10.1080/14790718.2021.1889561.
Moskaluk, Kalina, Jordan Zlatev, and Joost Van De Weijer. 2022. “‘Dizziness of Freedom’: Anxiety Disorders and Metaphorical Meaning-Making.” Metaphor and Symbol 37 (4): 303–22. 10.1080/10926488.2021.2006045.
Müller, Cornelia. 2014. “Gestural Modes of Representation as Techniques of Depiction.” In Handbücher zur Sprach- und Kommunikationswissenschaft/Handbooks of Linguistics and Communication Science (HSK) 38/2, edited by Cornelia Müller, Alan Cienki, Ellen Fricke, Silva Ladewig, David McNeill, and Jana Bressem, 1687–702. Berlin: De Gruyter. 10.1515/9783110302028.1687.
Oakley, Todd and Jordan Zlatev. 2024. “Origins of Money: A Motivation & Sedimentation Model (MSM) Analysis.” Semiotica 2024 (257): 1–27. 10.1515/sem-2023-0031.
O’Brien, Dai and Annelies Kusters. 2017. “Visual Methods in Deaf Studies: Using Photography and Filmmaking in Research with Deaf People.” In Innovations in Deaf Studies: The Role of Deaf Scholars, edited by Annelies Kusters, Maartje De Meulder, and Dai O’Brien, 265–96. Oxford: Oxford University Press.
Ohala, John J. 1984. “An Ethological Perspective on Common Cross-Language Utilization of F₀ of Voice.” Phonetica 41 (1): 1–16. 10.1159/000261706.
Ortega, Gerardo and Asli Özyürek. 2020a. “Types of Iconicity and Combinatorial Strategies Distinguish Semantic Categories in Silent Gesture across Cultures.” Language and Cognition 12 (1): 84–113. 10.1017/langcog.2019.28.
Ortega, Gerardo and Aslı Özyürek. 2020b. “Systematic Mappings between Semantic Categories and Types of Iconic Representations in the Manual Modality: A Normed Database of Silent Gesture.” Behavior Research Methods 52 (1): 51–67. 10.3758/s13428-019-01204-6.
Özçalışkan, Şeyda, Ché Lucero, and Susan Goldin-Meadow. 2016. “Is Seeing Gesture Necessary to Gesture Like a Native Speaker?” Psychological Science 27 (5): 737–47. 10.1177/0956797616629931.
Özçalışkan, Şeyda, Ché Lucero, and Susan Goldin‐Meadow. 2018. “Blind Speakers Show Language‐Specific Patterns in Co‐Speech Gesture but Not Silent Gesture.” Cognitive Science 42 (3): 1001–14. 10.1111/cogs.12502.
Özçalışkan, Şeyda, Ché Lucero, and Susan Goldin‐Meadow. 2024. “Is Vision Necessary for the Timely Acquisition of Language‐specific Patterns in Co‐speech Gesture and Their Lack in Silent Gesture?” Developmental Science 27: e13507. 10.1111/desc.13507.
Padden, Carol A., Irit Meir, So-One Hwang, Ryan Lepic, Sharon Seegers, and Tory Sampson. 2013. “Patterned Iconicity in Sign Language Lexicons.” Gesture 13 (3): 287–308. 10.1075/gest.13.3.03pad.
Perlman, Marcus, Hannah Little, Bill Thompson, and Robin L. Thompson. 2018. “Iconicity in Signed and Spoken Vocabulary: A Comparison between American Sign Language, British Sign Language, English, and Spanish.” Frontiers in Psychology 9 (August): 1433. 10.3389/fpsyg.2018.01433.
Perniss, Pamela, Robin L. Thompson, and Gabriella Vigliocco. 2010. “Iconicity as a General Property of Language: Evidence from Spoken and Signed Languages.” Frontiers in Psychology 1: 227. 10.3389/fpsyg.2010.00227.
Perniss, Pamela and Gabriella Vigliocco. 2014. “The Bridge of Iconicity: From a World of Experience to the Experience of Language.” Philosophical Transactions of the Royal Society B: Biological Sciences 369 (1651): 20130300. 10.1098/rstb.2013.0300.
Prinz, Jesse J. 2013. “Foreword: Hand Manifesto.” In The Hand, an Organ of the Mind, edited by Zdravko Radman, 9–18. Cambridge: The MIT Press. 10.7551/mitpress/9083.003.0001.
Radman, Zdravko. 2013. “Beforehand.” In The Hand, an Organ of the Mind, edited by Zdravko Radman, 19–22. Cambridge: The MIT Press. 10.7551/mitpress/9083.003.0002.
Ramachandran, Vilayanur S. and Edward M. Hubbard. 2001. “Synaesthesia – A Window into Perception, Thought and Language.” Journal of Consciousness Studies 8 (12): 3–34.
Ratcliffe, Matthew. 2013. “Touch and the Sense of Reality.” In The Hand, an Organ of the Mind, edited by Zdravko Radman, 161–88. Cambridge: The MIT Press. 10.7551/mitpress/9083.003.0012.
Salonen, Juhana, Tuija Wainio, Antti Kronqvist, and Jarkko Keränen. 2019. “Suomen Viittomakielten Korpusprojektin Annotointiohjeet [Annotation Instructions of the Corpus Project of Finland’s Sign Languages (CFINSL)].” Jyväskylän yliopisto, Kieli- ja viestintätieteiden laitos [University of Jyväskylä, Department of Language and Communication Studies]. https://www.jyu.fi/hytk/fi/laitokset/kivi/opiskelu/oppiaineet/viittomakieli/copy_of_menossa-olevat-projektit/suomen-viittomakielten-korpusprojekti/cfinsl_annotointiohjeet_2019_2versio.pdf.
Sapir, Edward. 1929. “A Study in Phonetic Symbolism.” Journal of Experimental Psychology 12 (3): 225–39. 10.1037/h0070931.
Saussure, Ferdinand de. 1916. Course in General Linguistics, edited by Perry Meisel and Haun Saussy, translated by Wade Baskin. New York: Columbia University Press.
Schembri, Adam. 2003. “Rethinking ‘Classifiers’ in Signed Languages.” In Classifier Constructions in Signed Languages, edited by Karen Emmorey, 3–34. Mahwah, NJ: Lawrence Erlbaum Associates.
Siu, Caitlin and Kathryn Murphy. 2018. “The Development of Human Visual Cortex and Clinical Implications.” Eye and Brain 10: 25–36. 10.2147/EB.S130893.
Slonimska, Anita. 2022. The Role of Iconicity and Simultaneity in Efficient Communication in the Visual Modality. Amsterdam: Landelijke Onderzoekschool Taalwetenschap. 10.48273/LOT0630.
Sonesson, Göran. 2014. “The Cognitive Semiotics of the Picture Sign.” In Visual Communication, edited by David Machin, 23–50. Berlin: De Gruyter. 10.1515/9783110255492.23.
Sonesson, Göran. 2016. “The Phenomenological Semiotics of Iconicity and Pictoriality—Including Some Replies to My Critics.” Language and Semiotic Studies 2 (2): 1–73. 10.1515/lass-2016-020201.
Sonesson, Göran. 2022. “Iconicity and Semiosis.” In Bloomsbury Semiotics Volume 1: History and Semiosis, edited by Jamin Pelkey, 193–214. Bloomsbury Semiotics. London: Bloomsbury Academic. 10.5040/9781350139312.ch-9.
Stampoulidis, Georgios, Marianna Bolognesi, and Jordan Zlatev. 2019. “A Cognitive Semiotic Exploration of Metaphors in Greek Street Art.” Cognitive Semiotics 12 (1): 20192008. 10.1515/cogsem-2019-2008.
Taub, Sarah F. 2011. Language from the Body: Iconicity and Metaphor in American Sign Language. 1st paperback ed. Cambridge, UK: Cambridge University Press.
Taylor, J. L. 2009. “Proprioception.” In Encyclopedia of Neuroscience, 1143–49. Oxford: Academic Press. 10.1016/B978-008045046-9.01907-0.
The Finnish Association of the Deaf. n.d. “Viittomakieliset [Sign Language People].” Accessed June 6, 2024. https://kuurojenliitto.fi/viittomakieliset/.
Tolkkinen, Laura. 2022. “The Finnish Register of Visual Impairment – Annual Statistics 2022.” Helsinki: The Finnish Register of Visual Impairment. https://cms.nkl.fi/sites/default/files/2023-12/VALMIS%20Annual%20Statistics%202022.pdf?_ga=2.97232281.768724368.1717663451-155928487.1717663451 (English; accessed September 25, 2025). https://cms.nkl.fi/sites/default/files/2024-02/Na%CC%88ko%CC%88vammarekisterin%20vuosikirja%202022.pdf?_ga=2.171523363.2019379169.1727259279-155928487.1717663451 (Finnish; accessed September 25, 2025).
Vainio, Lari and Martti Vainio. 2021. “Sound-Action Symbolism.” Frontiers in Psychology 12: 718700. 10.3389/fpsyg.2021.718700.
Zlatev, Jordan. 2015. “Cognitive Semiotics.” In International Handbook of Semiotics, edited by Peter Pericles Trifonas, 1043–67. Dordrecht: Springer Netherlands. 10.1007/978-94-017-9404-6_47.
Zlatev, Jordan. 2018. “Meaning Making from Life to Language: The Semiotic Hierarchy and Phenomenology.” Cognitive Semiotics 11 (1): 20180001. 10.1515/cogsem-2018-0001.
Zlatev, Jordan and Johan Blomberg. 2019. “Norms of Language: What Kinds and Where from? Insights from Phenomenology.” In Normativity in Language and Linguistics, edited by Aleksi Mäkilähde, Ville Leppänen, and Esa Itkonen, Vol. 209, 69–101. Studies in Language Companion Series. Amsterdam: John Benjamins Publishing Company. 10.1075/slcs.209.03zla.
Zlatev, Jordan, Göran Jacobsson, and Liina Paju. 2021. “Desiderata for Metaphor Theory, the Motivation & Sedimentation Model and Motion-Emotion Metaphoremes.” In Figurative Thought and Language, edited by Augusto Soares Da Silva, Vol. 11, 41–74. Amsterdam: John Benjamins Publishing Company. 10.1075/ftl.11.02zla.
Zlatev, Jordan, Timothy P. Racine, Chris Sinha, and Esa Itkonen, eds. 2008. The Shared Mind: Perspectives on Intersubjectivity. Vol. 12. Converging Evidence in Language and Communication Research. Amsterdam: John Benjamins Publishing Company. 10.1075/celcr.12.
© 2025 the author(s), published by De Gruyter
This work is licensed under the Creative Commons Attribution 4.0 International License.