Abstract
Sign languages make use of the full expressive power of the visual-gestural modality to report the utterances and/or actions of another person. A signer can shift into the perspective of one or more persons and reproduce the utterances or actions from the perspective of these persons. This modality-specific device of utterance and action report is called role shift or constructed action. Especially in sign language narration, role shift is a productive and expressive means that can be used to demonstrate linguistic and non-linguistic actions. Recent developments in sign language linguistics put forth new formal semantic analyses of role shift at the interface between sign language and gesture, integrating insights from classical cognitive and formal analyses of quotation, demonstration and perspective/context shift in spoken and sign languages. In this article, I build on recent accounts of role shift as a modality-specific device of demonstration and show that a modified version of this theory even accounts for cases of complex demonstrations including hybrid demonstrations, multiple demonstrations and demonstrations involving a complex interaction of gestural and linguistic components.
1 Introduction
In their seminal article, Clark and Gerrig (1990) highlighted the importance of gestural demonstration for a comprehensive theory of spoken language quotation. Still, many formal semantic and pragmatic theories of quotation mainly investigate either semantic and pragmatic properties of different kinds of reported speech in spoken language such as, for instance, direct and indirect speech and free indirect discourse, or they derive the variety of interpretations of the orthographic device of quotation marks used in the written modality of spoken languages such as pure quotation, direct quotation, mixed quotation, scare quotation and emphatic quotation (Brendel et al. 2011). Only with the advent of formal semantic and pragmatic analyses of role shift in sign languages, as well as an increasing interest in gestural demonstrations used in both natural linguistic modalities, has the classical idea of ‘quotation as demonstration’ come back on the table in formal linguistic analyses of role shift.
Especially Davidson’s (2015) unified analysis of English ‘be-like’ constructions and role shift in American Sign Language (ASL) can be seen as substantial progress: it not only offers a smart extension of Donald Davidson’s (1979) traditional analysis of written language quotations as semantic demonstrations and Clark and Gerrig’s (1990) analysis of spoken language quotations as (gestural) demonstrations but also paves the way for a new formal analysis of demonstration across modalities at the interface between semantics and pragmatics. However, as it stands, this unified analysis of demonstrations in both modalities cannot account for the full complexity of demonstrations used in sign language role shift, which has been discussed for a long time in cognitive-oriented analyses of role shift (Cormier et al. 2013, 2015; Dudis 2004; Liddell and Metzger 1998; Metzger 1995). Therefore, Davidson’s theory requires certain modifications and extensions to account for the full range of complex demonstrations in sign languages including the interaction of gestural demonstrations with linguistic material and the division of the body to represent different protagonists in simultaneous multiple demonstrations.
The aim of this article is to illustrate that some obvious and straightforward extensions of Davidson’s theory integrating insights from cognitive linguistic analyses provide a solid basis for a formal analysis of more complex examples of demonstrations in role shift. On the one hand, I discuss some shortcomings that become apparent when analyzing cases of role shift that involve a more complex interaction of gestural demonstrations and linguistic descriptions as well as multiple demonstrations depicting more than one protagonist simultaneously. On the other hand, I propose some modifications and illustrate how an extended version of Davidson’s theory can account for such complex cases. Since these more complex examples are typically used in sign language narratives, I provide an in-depth analysis of two short sequences taken from two fables re-narrated by a native signer of German Sign Language (DGS). Given the complexity of these examples, I can only take the first step and sketch what a unified formal analysis of more complex interactions of gestural demonstrations and linguistic descriptions in sign language role shift would look like. A full-fledged formal analysis that provides a compositional interpretation of such complex examples is beyond the scope of this article.
In the next section, I set the stage for the analysis of different kinds of complex demonstrations in role shift. I first discuss three modality-specific properties relevant to the analysis of multiple demonstrations. Then I introduce two proto-typical kinds of role shift – attitude role shift and action role shift – and briefly describe some properties relevant to the discussion of the examples in the subsequent sections. Section 3 discusses recent developments in the formal semantic analysis of role shift. The starting point is Davidson’s (2015) analysis of role shift as demonstration. I also discuss some modifications necessary to account for more complex cases of demonstrations, including mixed examples of attitude and action role shift, linguistic descriptions used in action role shift and multiple demonstrations depicting two protagonists simultaneously. Section 4 is the core of this article. In this section, I illustrate what a formal analysis of complex demonstrations would look like. Based on two sequences of DGS versions of classical fables, I sketch an analysis of multiple demonstrations and the interaction of linguistic and gestural components in different kinds of role shift. Section 5 discusses some general findings and concludes the article.
2 Setting the stage: role shift in the visual-gestural modality
It is well-known that sign languages have some modality-specific properties that are not attested in spoken languages (without co-speech gesture, see Goldin-Meadow and Brentari 2017). The following three properties are especially relevant for the analysis of role shift:
Sign languages can use various independent articulators (the hands, the arms, the upper part of the body, the head and the face) to express different meaning aspects simultaneously (Aronoff et al. 2005; Meier 2002).
Sign languages have a gestural origin and use the same modality as manual and non-manual gestures. As a consequence, sign languages can interact with and integrate gestures at various levels of communication (Goldin-Meadow and Brentari 2017; Pfau and Steinbach 2011).
Sign languages use a three-dimensional signing space to express different grammatical and pragmatic features such as, for instance, agreement, topographic relations, coordination, discourse referents or contrast (Engberg-Pedersen 1993; Perniss 2012).
The first property opens the possibility to use different articulators to demonstrate different aspects of an event or the actions of different protagonists simultaneously. This is illustrated by Figure 1 taken from one of the fables discussed in the next two sections in more detail. The signer can in principle use the dominant and non-dominant hand, the upper part of the body, the head and the face for multiple simultaneous demonstrations depicting different actions of different protagonists.

Figure 1: Articulators in sign language that can be used simultaneously for multiple demonstrations – © SignLab Göttingen.
Likewise, all articulators illustrated in Figure 1 can be used for linguistic descriptions as well as for gestural demonstrations (second property). While in spoken languages, manual and non-manual co-speech gestures use a completely different modality than language, in sign languages, gestures and signs use the same modality. Therefore, gestural demonstrations can be easily integrated into and interact with linguistic descriptions. In this article, I am mainly interested in gestural demonstrations and linguistic descriptions used in role shift. The examples discussed in the next section illustrate that in sign language narration, signers make systematic use of this powerful option provided by the transparent interface between gesture and sign.
The third property, the spatial nature of sign languages, is also relevant for the analysis of role shift. On the one hand, signers exploit the signing space to represent topographic relations of entities and movement of entities in space. The examples discussed below illustrate that the topographic use of space in role shift involves not only the signing space in front of the signer’s upper part of the body but also the signer’s body itself. On the other hand, role shift forces the signer to adapt spatial locations used to express grammatical or topographic features to the shifted perspective typical for demonstrations in role shift. Moreover, at least in attitude role shift, signers make use of the signing space to overtly mark the perspective shift by shifting the midsagittal axis either to the left or to the right (see Section 3).
Before I discuss how these three properties affect the interpretation of role shift in sign languages, I briefly introduce the notion of role shift (for a more general overview, see Cormier et al. 2013, 2015; Lillo-Martin 2012; Schlenker 2017a, 2017b; Steinbach 2021). Research on role shift typically distinguishes between two kinds of role shift, attitude or quotational role shift and action role shift or constructed action. Attitude role shift is used as a modality-specific means of reported speech combining properties of direct and indirect speech (Hübl et al. 2019; Quer 2016; Schlenker 2017a). Attitude role shift is thus used to report linguistic actions, i.e., speech acts, and comprises linguistic material (signs, sentences and utterances). By contrast, action role shift typically involves gestural demonstrations of what someone did, i.e., it is used to report non-linguistic actions and comprises non-linguistic gestural material. Both kinds of role shift are illustrated in Figure 2 showing five screenshots taken from a DGS version of the fable The tortoise and the hare (Herrmann and Pendzich 2018).

Figure 2: Action and attitude role shift in DGS – © SignLab Göttingen.
In this example, the action role shift in (1b), illustrated by the second and third screenshots, is embedded between two sequences of attitude role shift in (1a) and (1c), illustrated by the first and the last two screenshots (the notational conventions can be found at the end of this article).
(1) a. Attitude role shift: [glossed example given as an image; not reproduced here]
    b. Action role shift: [glossed example given as an image; not reproduced here]
    c. Attitude role shift: [glossed example given as an image; not reproduced here]
In the attitude role shifts in (1a) and (1c), the signer, who reports the utterance of the hare, aligns the upper part of the body, the head and the eye gaze to the locus of the addressee, the tortoise. In addition, the face of the signer adopts the facial expression of the reported signer, i.e., the hare mocking the tortoise. Herrmann and Steinbach (2012) argue that the first three non-manual markers (upper part of the body, head and eye gaze) overtly express grammatical agreement with the reported addressee (see also Steinbach 2021). By contrast, the change in facial expression is a gestural marker depicting the facial expression of the reported signer. As indicated in the translation of the example, the speech report in (1a) and (1c) shares properties with direct speech in English. Consequently, the second person indexical ‘ix2’ used in the first example shifts and does not refer to the actual addressee but to the addressee of the speech report, the tortoise (for shifted indexicals in role shift, see Hübl 2014; Hübl et al. 2019; Lillo-Martin 1995; Maier 2018; Quer 2005; Schlenker 2017a).
By contrast, the action role shift in (1b) is indicated by a change in the facial expression and body posture, that is, action role shift only receives gestural marking. As opposed to the attitude role shift in (1a) and (1c), the action role shift in (1b) is not used to reproduce linguistic actions (i.e., utterances or thoughts of another person) but to demonstrate non-linguistic actions. In (1b), the signer does not report a linguistic description of the walking of the tortoise but performs a gestural demonstration of the same event.
Pure examples of attitude and action role shift are rare. In the next section, I argue that both kinds of role shift use similar markers and typically combine linguistic and gestural elements. The attitude role shifts in (1a) and (1c) are, for instance, accompanied by a gestural demonstration of the facial expression of the hare. Similarly, the action role shift in (1b) contains a linguistic expression, the body part classifier ‘bp-cl:walk’, which is used by the signer to describe certain aspects of the event. Hence, while attitude role shift can be modified by gestural components used, for example, to demonstrate the reported signer’s attitude towards the addressee or specific physical and mental features of the reported signer, action role shift can include linguistic elements such as classifiers or lexical signs that describe the action demonstrated. Pure examples of attitude and action role shift are thus two endpoints of a continuum. Since attitude role shift mainly involves linguistic material (i.e., the signs produced by another person), attitude role shift is typically more on the linguistic side of the continuum. In addition, many sign languages have grammaticalized non-manual markers (i.e., the upper part of the body, the head and the eye gaze, cf. Examples [1a] and [1c]) that are typically used to mark attitude role shift. By contrast, action role shift mainly involves non-linguistic gestures used to demonstrate an action. Therefore, it is typically more on the gestural side of the continuum.
In the next section, I follow Davidson (2015) and Maier (2017, 2018) and argue for a unified theory of both kinds of role shift that distinguishes linguistic material, which is subject to grammatical constraints, from gestural components, which are represented as additional demonstrational modifications. This hybrid theory facilitates a unified analysis of both kinds of role shift that, at the same time, accounts for differences in the interpretation (demonstration of a speech event or demonstration of a non-linguistic action) and the complex interaction of gestural and linguistic components in attitude and action role shift discussed in Section 4.
3 Behind the scenes: formal analyses of role shift
It’s fair to say that role shift is one of the best-investigated topics in sign language linguistics. Quite different cognitive and formal syntactic and semantic analyses have been developed in the last 30 years to account for the morphosyntactic, semantic and pragmatic aspects of role shift as well as for its narrative functions. In this article, I focus on recent formal semantic analyses that integrate aspects of cognitive theories and sketch a formal account that offers a unified analysis of both attitude and action role shift (for cognitive analyses of role shift, see Cormier et al. 2013, 2015; Dudis 2004; Liddell and Metzger 1998; Metzger 1995).
In her seminal article, Davidson (2015) builds on traditional analyses of quotation in spoken (and written) language as demonstrations (Clark and Gerrig 1990; Davidson 1979; see also Schlenker 2017b; Steinbach 2021). As a starting point, Davidson compares role shift in American Sign Language (ASL) to ‘be like’ constructions in English and argues that both constructions introduce a demonstration as defined in (2a).
(2) a. Definition: a demonstration d is a demonstration of e (i.e., demonstration(d, e) holds) if d reproduces properties of e and those properties are relevant in the context of speech.
    b. Properties of a speech event include, but are not limited to, words, intonation, facial expressions, sentiment and/or gestures.
Following Davidson’s analysis, attitude role shift and action role shift are simply two different instances of a demonstration. While in attitude role shift, the properties of a speech event that signers mainly demonstrate are words and intonation, in action role shift these properties are non-linguistic facial expressions, sentiment and gestures. This unified treatment of both kinds of role shift is one big advantage of Davidson’s account. A second advantage is that this account provides an instrument to integrate (gestural) demonstrations into a formal semantic theory.
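In this event semantics, a role shift report thus has roughly the following shape (a simplified schema of my own that abstracts over the concrete examples; ‘x’ stands for the agent of the reported event):

```latex
\exists e\,[\mathrm{agent}(e, x) \wedge \mathrm{demonstration}(d, e)]
```

Here, d is the overt linguistic or gestural performance produced by the signer, and e is the reported event whose contextually relevant properties d reproduces in the sense of the definition in (2a).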
The event semantic representations of the attitude and action role shifts in (1a)–(1c) discussed in the previous section are given in (3a′)–(3c′). In all three cases, the role shift introduces a demonstration of an event. In the first and third case, which are two instances of attitude role shift, the role shift scopes over linguistic material (e.g., ‘ix2 funny so we-cl:move’). Consequently, the relevant properties of the speech event that are reproduced by the demonstration are signs. By contrast, in the second case, the role shift scopes over non-linguistic gestures (we come back to body part classifiers used in this example below). Here, the relevant properties are manual and non-manual gestures demonstrating the walking of the tortoise.
(3) a. [glossed example given as an image; not reproduced here]
    a′. ∃e [agent(e, hare) ∧ patient(e, tortoise) ∧ demonstration(d1, e)]
        d1 is the signer’s reproduction of the hare’s signing
    b. [glossed example given as an image; not reproduced here]
    b′. ∃e [agent(e, tortoise) ∧ demonstration(d2, e)]
        d2 is the signer’s reproduction of the hare’s demonstration of the tortoise’s walking
    c. [glossed example given as an image; not reproduced here]
    c′. ∃e [agent(e, hare) ∧ patient(e, tortoise) ∧ demonstration(d3, e)]
        d3 is the signer’s reproduction of the hare’s signing
The demonstration analysis of role shift provides a promising basis for the analysis of more complex examples. Before I turn to more complex demonstrations in the next section, let me briefly outline some possible modifications and extensions of Davidson’s account. First of all, Maier (2017, 2018) argues that linguistic and non-linguistic demonstrations should be kept apart. On the one hand, linguistic material typically demonstrated in attitude role shift is subject to the linguistic constraints of the language quoted, that is, as opposed to pure gestural demonstrations, linguistic material follows language-specific grammatical rules and has a conventionalized meaning. In addition, the specific non-manual grammatical marking of attitude role shift described in the previous section clearly identifies the subject (the signer) and the object (the addressee) of the mocking event in attitude role shift.
On the other hand, even in attitude role shift, signers use gestural elements as additional modifiers of the speech act demonstrated. In (3a) and (3c), for instance, the demonstration involves not only signs and prosodic marking but also a specific facial expression and a specific way of signing that demonstrate the hare’s mocking of the tortoise. However, these aspects are clearly gestural modifications of the reported utterance. These differences between attitude and action role shift are not implemented in Davidson’s original account (see also Hübl et al. 2019; Maier and Steinbach 2022). Therefore, Maier argues for a hybrid extension of Davidson’s demonstration analysis which combines the advantages of Davidson’s more pragmatic-oriented approach with the advantages of a semantic representation of the linguistic units used in both kinds of role shift. The corresponding hybrid semantic representation of Example (3a) is given in (4). As opposed to (3a′), Example (4) integrates the grammatical (agreement) marking identifying subject and object and separates the form of the reported utterance from the gestural components, which are represented as additional modifications. In addition, the hybrid analysis can account for the shifted second person indexical ‘ix2’, which is part of the original utterance ‘ix2 funny so we-cl:move’ produced by the hare in a different context.
(4) ∃e [mock(e) ∧ agent(e, hare) ∧ patient(e, tortoise) ∧ form(e, ‘ix2 funny so we-cl:move’) ∧ demonstration(d1, e)]
    d1 is the signer’s reproduction of the hare’s facial expression, body posture and way of signing
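Generalizing over (4), the hybrid representation separates three components (a sketch of my own, not Maier’s exact formalization): the descriptive event predicate P, the linguistic form u of the reported utterance and the gestural demonstration d:

```latex
\exists e\,[P(e) \wedge \mathrm{agent}(e, x) \wedge \mathrm{patient}(e, y) \wedge \mathrm{form}(e, u) \wedge \mathrm{demonstration}(d, e)]
```

Here, u is the quoted linguistic form, which is subject to the grammar of the quoted language and is interpreted relative to the reported context (accounting for shifted indexicals such as ‘ix2’), while d covers the gestural modifications of the reported utterance.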
A second extension relevant to the analysis of complex demonstrations is the interaction of linguistic and non-linguistic material in action role shift. Since gestural demonstrations in sign language narrations are integrated into linguistic descriptions, they are typically restricted to the signing space and thus subject to certain articulatory limitations. Let’s have a closer look at Example (3b) – similar examples are discussed in the next section. For the gestural demonstration of the tortoise’s way of walking, the signer does not use the whole body crawling on the floor with four legs, that is, the signer does not give a full pantomimic depiction of the event. Instead, she uses only the linguistic articulators illustrated in Figure 1 above, i.e., the hands, the upper part of the body and the face, to demonstrate the slow and clumsy movement of the tortoise. This is illustrated by the second and third screenshots in Figure 2.
Typically, whole entity classifiers or body part classifiers are used in contexts where the signer demonstrates the movement or location of an entity in space, that is, the signer makes use of a linguistic unit (a classifier) that can be modified gesturally to demonstrate the motion and location of a protagonist by adapting the phonological parameters movement and location (Goldin-Meadow and Brentari 2017; Schembri et al. 2005; Zwitserlood 2012). Since classifiers are linguistic elements equipped with a gestural component, they are ideal candidates for gestural demonstration in action role shift. This means that in action role shift, different phonological features of a whole entity or body part classifier receive different interpretations. This is formulated in the following condition on classifiers in action role shift. While the handshape is a linguistic description, the movement and location are typically gestural demonstrations.
(5) Condition on classifiers in action role shift: if a whole entity or body part classifier is used in action role shift, then
    (a) the handshape features of the classifier are interpreted as a linguistic description, and
    (b) the movement and location features of the classifier are interpreted as a gestural demonstration.
Note that in attitude role shift, the classifier is typically part of the quotation and is thus interpreted as a linguistic element as can be seen in Examples (3a) and (3c) above, where the whole entity classifier ‘we-cl:move’ is part of the reported utterance.
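Applied to Example (3b), the condition in (5) splits the contribution of the whole entity classifier into a descriptive and a demonstrational component (a sketch in the notation of (3b′) and (4); the predicate label ‘move’ is my shorthand for the descriptive content of the handshape):

```latex
\exists e\,[\mathrm{move}(e) \wedge \mathrm{agent}(e, \mathit{tortoise}) \wedge \mathrm{demonstration}(d_2, e)]
```

The handshape contributes the linguistic description that a whole entity moves, whereas the movement and location features of the classifier, together with the facial expression, constitute the gestural demonstration d2 of the slow and clumsy walking of the tortoise.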
A third extension concerns linguistic descriptions in action role shift. In certain contexts, a sequence of action role shift may contain linguistic material other than a classifier (typically a lexical sign) that provides an additional linguistic description of the gestural demonstration (corresponding examples are discussed in the next section). Usually, the meaning of these signs directly corresponds to the gestural demonstration or at least to a prominent aspect of the demonstration. While demonstrating a laughing person, the signer may simultaneously use the sign for ‘laugh’ to describe the same event. Consequently, linguistic material can receive two different interpretations, depending on whether it is the content of the speech event demonstrated by the signer or whether it is used as an additional linguistic description of the gestural demonstration. The first case in (6a) is an instance of attitude role shift where the signer reproduces (mentions) signs another person used in a different context. In this case, the sign ‘xxx’ is not used by the signer to describe a demonstration but is the content of the speech event demonstrated by the signer. The second case in (6b) is an example of a complex gestural demonstration accompanied by a linguistic description. Here, the sign ‘xxx’ is used by the signer and provides additional (linguistic) descriptions of the event demonstrated.
(6) Interpretation of a linguistic element ‘xxx’ in role shift:
    (a) If ‘xxx’ is part of a role shift reproducing a linguistic action (attitude role shift), ‘form(e, ‘xxx’)’ represents the content of the speech event demonstrated by the signer.
    (b) If ‘xxx’ is part of a role shift reproducing a non-linguistic action (action role shift), ‘form(e, ‘xxx’)’ is a linguistic description of the event demonstrated by the signer.
The last extension necessary for the analysis of complex demonstrations is based on the fundamental observation made by Meir et al. (2007: 543) that the body of the signer, “constituting one of the formational components of the sign, represents one particular argument in the event, the agent.” In a demonstration analysis of role shift, the concept of body as subject can be extended to the concept of body as actor. Signers can use their bodies or different parts of their bodies to demonstrate different aspects of an event. In combination with the first modality-specific property of sign languages described in the previous section, we can even go one step further. Recall from Section 2 that sign languages use various independent articulators to express different meaning aspects simultaneously. In role shift, this property enables multiple demonstrations, that is, signers can use different articulators to demonstrate linguistic and non-linguistic actions of two or more protagonists or different aspects of the same event simultaneously (Barberà and Quer 2018; Dudis 2004; Herrmann and Pendzich 2018; Steinbach 2021). The body of the signer can not only be used to express the grammatical role of the subject; the body can also be used to express the actor of an event, and different body parts can be used to express multiple demonstrations involving more than one actor (for multiple demonstrations in co-speech gestures, see Parrill 2009).
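Under these assumptions, multiple demonstrations can be given a first formal sketch by letting two demonstrations with disjoint articulator sets apply to two simultaneous (sub)events (my own extension of the notation introduced above; p1 and p2 stand for the two protagonists):

```latex
\exists e_1 \exists e_2\,[\mathrm{agent}(e_1, p_1) \wedge \mathrm{demonstration}(d_1, e_1) \wedge \mathrm{agent}(e_2, p_2) \wedge \mathrm{demonstration}(d_2, e_2)]
```

where d1 is realized, for instance, by the face and the upper part of the body and d2 by the hands, so that both demonstrations can be articulated at the same time.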
In the next section, I discuss two representative examples in more detail. The examples illustrate multiple demonstrations, the use of linguistic descriptions in action role shift, the two meaning dimensions of classifiers in action role shift and the interaction of linguistic and gestural components in hybrid quotations.
4 On stage: analyzing complex demonstrations in sign language narratives
Complex demonstrations in role shift in DGS have been described in different studies in the context of quotation, indexical shift, perspective taking, expressive meaning and reference tracking (Fischer and Kollien 2010; Herrmann and Pendzich 2018; Herrmann and Steinbach 2018; Maier and Steinbach 2022; Steinbach 2021). However, the main focus of these studies is on a description of complex demonstrations. A unified analysis of complex demonstrations within recent formal semantic accounts of role shift and demonstration is still missing. In this section, I illustrate how the different approaches discussed in the previous section can be combined to account for different aspects of complex demonstrations. Given the complexity of the examples, a full definition of a comprehensive formal model is beyond the scope of this article. Instead, this article takes the first step and sketches the requirements for a unified formal analysis of the complex interaction of gestural demonstrations and linguistic descriptions in sign language role shift. For the semantic analysis in this section, I adapt Davidson’s (2015) and Maier’s (2017, 2018) event semantic representations introduced in the previous section for simple and hybrid demonstrations.
The following three aspects of complex demonstrations are the focus of the analysis:
The interaction of linguistic descriptions and gestural demonstrations in action role shift,
the use of different parts of the body to realize multiple demonstrations and
the expressive power of demonstrations.
All three aspects can be nicely illustrated by sign language versions of classical narratives (Cormier et al. 2013; Crasborn et al. 2007). For this purpose, I selected two annotated sequences taken from two classic Aesop fables from the Göttingen fable corpus (for a more detailed description of the fables, see Herrmann and Pendzich 2018). The specific use of complex demonstrations attested in these two sequences is representative of narrative strategies found in the DGS versions of fables and other stories. However, since the focus of this article is on the investigation of the three aspects mentioned in (a), (b) and (c), I only provide an in-depth analysis of these two sequences and neither compare the strategies different signers used for the same sequence nor make any claims about general narrative strategies used in sign languages.
The first sequence is taken from a DGS version of the fable The lion and the mouse. This sequence comprises the following part:
The mouse climbed over the head of the sleeping lion. The lion woke up and was very angry. The surprised mouse fell down from the lion’s head and the lion furiously caught the mouse in front of its body.
The second sequence is taken from a DGS version of the fable The shepherd boy and the wolf. This sequence consists of the following part:
During his boring job, the pondering shepherd boy had the idea to tease the neighbors in the village. He called for help: “There is a wolf! There is a wolf!” The scared neighbors ran to the shepherd boy and asked him, “Can we help you?”
Both sequences are ideal for our endeavor since they involve physical and spatial interactions of two protagonists (e.g., one protagonist climbs over the head of the other protagonist, one protagonist catches the other protagonist, a group of protagonists runs towards the other protagonist), linguistic interactions (shouting and asking), non-linguistic actions of the protagonists (e.g., climbing and running), mental activities (pondering and having an idea) as well as expressive components (e.g., being angry, being scared). The physical and spatial interaction of the protagonists suggests that the signers use their bodies as well as the physical properties of the signing space to express these interactions. The linguistic and non-linguistic actions of the protagonists suggest that the signers use gestural demonstrations and linguistic descriptions to specify the non-linguistic actions as well as linguistic demonstrations to report the form and content of an utterance (i.e., what someone thought or signed). The expressive components suggest that the signers use various manual and non-manual markers to demonstrate the expressive attitude and behavior of the protagonists.
The interpretation of role shift obviously involves various decisions: Who are the actors? Which actions are demonstrated by the signer? Which components are linguistic and which are gestural? Is linguistic material in role shift used for description (action role shift) or part of the demonstration (attitude role shift)? Which parts of the body are used for demonstrations? Do different body parts demonstrate different actions of different protagonists? The analysis of the two sequences is based on the following five principles:
a. The facial expression marks the main actor (or protagonist) of the role shift, that is, the facial expression is the most important marker of role shift.
b. Linguistic material in role shift is either the content of the speech event demonstrated (attitude role shift) or a linguistic description supporting the demonstration (action role shift), cf. principle (6) above.
c. Whole entity and body part classifiers combine linguistic descriptions (handshape) and gestural demonstrations (location and movement), cf. principle (5) above.
d. Different body parts can be used to represent different protagonists or to demonstrate different aspects of an event.
e. (Parts of) the body of the signer can be used to represent physical locations.
In the following discussion of the examples, I always give the glosses and the translations of the examples first. Then, I provide screenshots for a better illustration of aspects important for the analysis. Finally, I give the semantic representation(s) of the example. I discuss the first example in more detail to illustrate the general idea behind the endeavor. The discussion of the subsequent examples focuses on new aspects of complex demonstrations in role shift, which are specific to these examples.
Let us start with the first sequence of the fable The lion and the mouse. As can be seen in (10), this sequence starts with two signs (suddenly and mouse). The first sign is accompanied by a lexical mouth gesture and the second one by the lexically specified mouthing of (parts of) the corresponding German word (Boyes Braem and Sutton-Spence 2001). The mouth activities (together with the neutral facial expression and body posture) indicate that these two signs are part of a linguistic description preparing the role shift in the second part of this example, in which the signer changes the facial expression to mark the role shift demonstrating the mouse’s actions (running and climbing).
The role shift in (10) involves two events, the running of the mouse illustrated by the left picture in Figure 3 and the climbing of the mouse over the lion’s nose illustrated by the two pictures in the middle and on the right. As indicated in the glosses and illustrated by all three pictures in Figure 3, the two events are combined by a weak handhold (the right hand of the left-handed signer) that spreads over the whole sequence, thus marking topic continuity (in addition to the facial expression marking role shift). For both events, the signer uses a classifier, a body part classifier representing the mouse’s limbs for the first event and a whole entity classifier representing the mouse’s body for the second. Since both classifiers have a lexically specified handshape, parts of the classifier provide a linguistic description of the event. In addition to this description, the specific facial expression, the mouth gesture and the movement of the hands gesturally demonstrate the two events, as can be seen in Figure 3. The examples thus involve a combination of linguistic description and gestural demonstration (for mouth gestures, see Pendzich 2020).

Figure 3: Left: mouse running; middle and right: mouse climbing over the lion’s face – © SignLab Göttingen.
The use of the two classifiers in (10) is motivated by the fact that the complex movements of the mouse (especially the climbing over the lion’s head) cannot be demonstrated within the limits of the signing space (the signer cannot easily climb over her own face). In such contexts, classifiers are practical hybrid (linguistic and gestural) devices that permit the demonstration of entities moving in space. As already mentioned above, the linguistic description of a classifier can be combined with gestural components demonstrating the manner, the location and the path of the movement. A particularly interesting feature of this example is that the signer uses her own head to refer to the location of the movement of the mouse. This is indicated by the subscript ‘1’ in the glosses. This is possible because the moving entity is represented by the dominant hand. The body thus represents two different aspects of this complex demonstration. On the one hand, the dominant hand describes and demonstrates the climbing mouse. On the other hand, the head is used to demonstrate the location of this climbing, i.e., the head of the lion. As a consequence, the signer embodies two different protagonists simultaneously with different articulators: one is the figure (the mouse), the other the ground (the lion). At the same time, following principle (9a), the facial expression still demonstrates the main actor of the demonstration, i.e., the mouse. This means that face and head represent two different protagonists.
The semantic representation of this sequence is given in (11a)–(11b). The first line gives a simplified representation of the propositional content of the sequence. The second line describes the gestural demonstration introduced by the role shift. And finally, the third line lists the articulators that are involved in these demonstrations. (11a) is the analysis of the first role shift, and (11b) is the analysis of the second. The second role shift involves a complex demonstration with two protagonists, which appear in bold. The signer uses different parts of the body for the two protagonists.
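All of the representations discussed in this section instantiate one general pattern, which can be stated schematically as follows (this schema is my abstraction over the concrete representations, not a formula from the original account): an existentially bound Davidsonian event variable, one or more thematic role assignments, an optional form predicate for the (classifier or lexical) material, and a demonstration predicate relating the event to the signer’s demonstration:

```latex
\exists e\, [\, P(e) \wedge \theta_1(e, x_1) \wedge \dots \wedge \theta_n(e, x_n)
  \wedge \mathit{form}(e, \varphi) \wedge \mathit{demonstration}(d, e)\, ]
```

Here P is the event predicate (e.g., move), the θᵢ are thematic roles such as agent, location or goal, φ is the glossed form, and d is the demonstration whose content and articulators are then specified in the subsequent lines of each representation.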
a. ∃e [move(e) ∧ agent(e, mouse) ∧ form(e, ‘bp-cl:run’) ∧ demonstration(d4, e)]
d4 is the signer’s reproduction of the mouse’s fast running
d4 involves the facial expression, the head, the upper part of the body and the hands
b. ∃e [move(e) ∧ agent(e, mouse) ∧ location(e, nose) ∧ form(e, ‘we-cl:climb1’) ∧ demonstration(d5, e)]
d5 is the signer’s reproduction of the mouse’s fast climbing across the lion’s nose
d5 involves the facial expression and the dominant hand (mouse) and the head and the nose (lion)
In both parts of the role shift, a classifier introduces the semantic description that a small animal (in this case the mouse) is moving. The kind of movement and the location are added to the classifier by gestural demonstrations. As stated in (6) above, classifiers combine linguistic descriptions with gestural demonstrations. On the one hand, body part and whole entity classifiers are anaphoric expressions that pick up a salient discourse referent introduced in the previous context (Barberà and Quer 2018; Benedicto and Brentari 2004; Zwitserlood 2012). The specific handshape of the classifier is controlled by the antecedent. In our example, the two classifiers thus refer to the mouse. This is supported by the facial expression of the role shift, which also depicts the mouse as the main actor. On the other hand, both classifiers are used for gestural demonstrations (Goldin-Meadow and Brentari 2017; Schembri et al. 2005). The first part of the role shift in (10) includes a simple demonstration of the mouse running (by the facial expression, mouth gesture and movement of the hands). The demonstration in the second part is more complex. Here, the signer uses her body not only to demonstrate the climbing of the mouse but also to demonstrate the location of the climbing, over the lion’s head and nose. Both parts of the role shift receive a maximally iconic interpretation (Schlenker 2017b). In addition, this second demonstration is accompanied by a linguistic weak handhold marking topic continuity.
The second example in (12) is the continuation of the first one. This example includes three different role shifts, two from the perspective of the lion, and one from the perspective of the mouse. Again, the facial expression is the main marker of role shift, which is in accordance with principle (9a).
The first role shift, which demonstrates the lion’s awakening, is illustrated in the left picture in Figure 4. The second picture illustrates the second role shift demonstrating the surprised mouse falling down from the head of the lion. In this sequence, the signer again embodies both protagonists simultaneously: While the dominant hand and the facial expression depict the mouse, the head and the upper part of the body are again used to demonstrate the lion. The last two pictures illustrate the third role shift. Here, the furious lion catches the mouse, which has ended up in front of the lion’s body. All three role shifts are accompanied by facial expressions gesturally marking the main actor of the role shift and the mental states of the respective protagonists (‘angry’, ‘surprised’ and ‘furious’, cf. Figure 4).

Figure 4: Left: lion waking up; middle left: mouse falling down from the face of the lion; middle right and right: angry lion catching mouse – © SignLab Göttingen.
Three aspects are especially interesting in this sequence: First, after introducing the second protagonist with the lexical sign lion at the beginning of this sequence, the signer shifts perspective only by a change of facial expression. These changes are enough to indicate the main protagonist of the role shift. Second, in addition to the whole entity classifier in the second role shift (i.e., ‘we-cl:fall3’) – a strategy which we have already seen in Example (10) – the signer uses two lexical signs to describe the action she demonstrates in the first and third role shift. In the first role shift, she uses the sign ‘awake’ in addition to the corresponding non-manual demonstration of the lion who wakes up. In the third role shift, she uses the agreement verb 1catch3 to describe the action the lion performs. These linguistic descriptions of the demonstration belong to the perspective of the narrator. And third, the locus ‘3’ changes its interpretation between the second and third role shift. In the second role shift, the signer introduces a topographic locus representing the endpoint of the movement of the mouse. In the third role shift, ‘1catch3’ agrees, however, with a referential locus of the object. Since the locus used for object agreement is an anaphoric expression, the topographic locus ‘3’ of the second role shift turns into a referential locus in the third role shift (Steinbach and Onea 2016). This means that in this example, a topographic locus is reinterpreted as an anaphoric locus controlling object agreement. Note, finally, that the signer again uses a manual demonstration to modify a linguistic item. The modification of the movement component adds an additional gestural demonstration to the agreement verb ‘1catch3’. This demonstration is additionally supported by the corresponding facial expression (‘furious’) and the movement of the upper part of the body. The semantic representations of all three role shifts are given in (13a)–(13c).
a. ∃e [awake(e) ∧ agent(e, lion) ∧ form(e, ‘awake’) ∧ demonstration(d6, e)]
d6 is the signer’s reproduction of the lion’s angry awakening
d6 involves the facial expression and the head
b. ∃e [fall(e) ∧ agent(e, mouse) ∧ source(e, nose) ∧ goal(e, loc3) ∧ form(e, ‘1we-cl:fall3’) ∧ demonstration(d7, e)]
d7 is the signer’s reproduction of the mouse’s surprised plunge from the lion’s nose onto the floor in front of the lion
d7 involves the facial expression and the dominant hand (mouse) and the head and body (lion)
c. ∃e [catch(e) ∧ agent(e, lion) ∧ patient(e, mouse) ∧ form(e, ‘1catch:furiously3’) ∧ demonstration(d8, e)]
d8 is the signer’s reproduction of the lion’s furious catching
d8 involves the facial expression, the head, the dominant hand and the upper part of the body
The next three examples constitute one sequence of the second fable, The shepherd boy and the wolf. The first part of this sequence is an action role shift with the boy as the sole actor. In (14), the signer demonstrates the pondering shepherd boy who is bored of tending the sheep, see also Figure 5.

Figure 5: Left: shepherd boy pondering; middle and right: shepherd boy having an idea – © SignLab Göttingen.
The first two pictures in Figure 5 illustrate the purely gestural demonstration of the first two mental activities (i.e., ‘pondering’ and ‘have-an-idea’) of the shepherd boy. The second activity is then supported by a linguistic description (i.e., ‘have-idea’) stating that something comes to the boy’s mind.
Again, the facial expression is the main marker of the action role shift. Consequently, the shepherd boy is the actor, as is indicated in the corresponding semantic representations in (15). The first mental activity is only expressed by a gestural demonstration. One reason for this might be that in the preceding sequence the signer reports that the shepherd boy thinks about how to tease the villagers. Therefore, an additional linguistic description of the mental activity is not necessary in this context. The transition from the first mental activity (i.e., ‘pondering’) to the second one (i.e., ‘have-an-idea’) is part of the gestural demonstration, that is, the signer initially decided to provide a demonstration of the whole sequence. Only at the end of the example does the signer include a linguistic description of the second mental activity. However, the role shift marked by the facial expression scopes over the linguistic description, as can be seen in the third picture in Figure 5. Hence, the sign ‘have-idea’ adds an additional linguistic description of the narrator to the event demonstrated in this sequence.
a. ∃e [agent(e, shepherd boy) ∧ demonstration(d9, e)]
d9 is the signer’s reproduction of the shepherd boy’s pondering
d9 involves the facial expression, the head, the upper part of the body and the hands
b. ∃e [have-idea(e) ∧ agent(e, shepherd boy) ∧ form(e, ‘have-idea’) ∧ demonstration(d10, e)]
d10 is the signer’s reproduction of the shepherd boy who has an idea
d10 involves the facial expression, the upper part of the body and the hands
The subsequent part of this sequence is a typical example of attitude role shift accompanied by an expressive gestural demonstration of the shouting shepherd boy. Even though the main function of this part is the quotation of the boy’s cry for help, this role shift also has a strong tendency towards action role shift, which gives the whole narration a strong expressive component.
The quotation at the beginning of this part (i.e., ‘wolf there wolf there’) is illustrated in the two pictures on the left and in the middle. The signer here clearly demonstrates linguistic material (including the corresponding lexically specified mouthing ‘/wolf da wolf da/’), which is again accompanied by a non-manual gestural demonstration highlighting the expressive component of this demonstration. In addition, the attitude role shift is completed by a linguistic description of the shepherd boy’s utterance illustrated in the right picture of Figure 6. Like the description in the previous example (i.e., ‘have-idea’), the description in this example (i.e., ‘shout’) is in the scope of the role shift. The facial expression in the second picture, which marks the role shift, scopes over the linguistic description in the third picture. Again, the sign ‘shout’ does not belong to the demonstration (the signer does not demonstrate the shepherd boy signing ‘shout’) but is a description of the shepherd boy’s behavior.

Figure 6: Left and middle: shepherd boy uttering ‘wolf’ and ‘there’; right: ‘shout’ with accompanying gestural demonstration – © SignLab Göttingen.
Consequently, the semantic representation in (17) again contains a linguistic description of the event, which has to be interpreted outside the scope of the role shift. In addition, it contains the (quotational) demonstration of linguistic material which, according to the mixed approach introduced in the previous section, specifies the linguistic form of the boy’s utterance. The linguistic material is thus used for two different purposes: While the first part (‘wolf there wolf there’) expresses the content of the reported speech, the second part (‘shout’) is a linguistic description of the action demonstrated in the role shift.
∃e [shout(e) ∧ agent(e, shepherd boy) ∧ patient(e, neighbors) ∧ form1(e, ‘wolf there wolf there’) ∧ form2(e, ‘shout’) ∧ demonstration(d11, e)]
d11 is the signer’s reproduction of the shepherd boy’s shouting
d11 involves the facial expression, the head, the hands (reproducing the linguistic content) and the upper part of the body
The third part of this sequence, which is our last example, again involves an interesting instance of a multiple demonstration of two interacting protagonists, the shepherd boy and the villagers (here referred to as ‘the neighbors’).
At the beginning of this sequence, the signer demonstrates the running neighbors from two different perspectives. First, she uses the whole upper part of the body (including the hands, the head and the facial expression) to demonstrate the scared neighbors running to the place where the boy is tending the sheep and has called for help. This typical gestural demonstration of running people is illustrated in the first picture. Interestingly, the lexically specified mouthing of the sign ‘neighbor’, i.e., ‘/nachbar/’, spreads over ‘run’, the gestural demonstration of the running neighbors. This example shows that non-manual linguistic descriptions, just like manual linguistic descriptions, can occur in the scope of action role shift, thereby describing the demonstration, in this case, the running neighbors. Then, the signer shifts perspective and uses a whole entity classifier denoting groups of people. As can be seen in the second picture and indicated in the glosses by the subscript ‘1’, the hands move towards the body of the signer, which now represents the shepherd boy, who is the goal of the neighbors’ movement. However, the facial expression remains the same and still represents the running neighbors, who are the main actors in this part (Figure 7).

Figure 7: Left and middle left: running neighbors from two different perspectives; middle right and right: neighbors uttering 3help1 what – © SignLab Göttingen.
The multiple demonstration illustrated in the second picture is kept constant for the rest of this sequence, which is now an instance of attitude role shift again with a strong expressive component. The signer quotes the neighbors’ question by reproducing the corresponding manual (i.e., ‘3help1 what’) and non-manual (mouthing and brow furrow) components. In addition, she still uses scared and exhausted facial expressions to demonstrate the mental and physical state of the neighbors. At the same time, the body of the signer still represents the shepherd boy, who is the addressee of the neighbors’ question. This is illustrated in the third picture: The endpoint of the path movement of the inflected agreement verb ‘3help1’ is the body of the signer, i.e., ‘1’. Literally, this inflected verbal form means ‘someone helps me’. However, since the body of the role shift represents the shepherd boy in this context, the shepherd boy is the object controlling the endpoint of the path movement of the agreement verb. At the same time, he is the addressee of the question. Since the acting protagonists in this complex role shift are obviously the neighbors, which is indicated by the facial expression, the neighbors are the signers of the utterance and most likely the subject controlling the beginning of the path movement of the agreement verb. Consequently, in this context, the most likely interpretation of the inflected agreement verb ‘3help1’ is not ‘can someone help me’ but ‘can we help you’. The corresponding semantic representations of this complex demonstration are given in (19):
a. ∃e [move(e) ∧ agent(e, neighbors) ∧ goal(e, shepherd boy) ∧ demonstration(d12, e)]
d12 is the signer’s demonstration of the neighbors’ running
d12 involves the facial expression, the head, the upper part of the body and the hands
b. ∃e [move(e) ∧ agent(e, neighbors) ∧ goal(e, shepherd boy) ∧ form(e, ‘3we-cl:move1’) ∧ demonstration(d13, e)]
d13 is the signer’s demonstration of the neighbors’ running
d13 involves the facial expression, the mouth and the hands (neighbors) and the body (shepherd boy)
c. ∃e [ask(e) ∧ agent(e, neighbors) ∧ patient(e, shepherd boy) ∧ form(e, ‘3help1 what’) ∧ demonstration(d14, e)]
d14 is the signer’s reproduction of the neighbors asking the shepherd boy
d14 involves the facial expression, the head and the hands (neighbors) and the body (shepherd boy)
The examples from the two fables not only illustrate the complexity of multiple demonstrations in which the body of the signer can be used to represent two different protagonists simultaneously, but they also show the complex interaction of action and attitude role shift. In the first fable, The lion and the mouse, the body of the signer is split into an acting agent (the climbing mouse) and a location of the mouse’s action (the head of the lion). While the mouse is represented by the signer’s facial expression and the dominant hand, the lion is represented by the signer’s head and nose, that is, while the head demonstrates the location of the climbing, the facial expression demonstrates the climbing actor.
By contrast, the multiple demonstration in the second fable The shepherd boy and the wolf is more complex. The body of the signer is again split into an acting agent (the neighbors) and the goal of their action (the shepherd boy). At the same time, both protagonists are the signer and addressee of two attitude role shifts contained in this sequence. In the second attitude role shift, the neighbors are the signers and the shepherd boy is the addressee. This gives rise to an interesting conflict in the use of the signing space and the body of the signer. On the one hand, the body of the signer represents the running and signing neighbors. This embodiment is consistently marked by the facial expression that scopes over the whole role shift. On the other hand, it represents the goal of the movement and the addressee of the utterance. This second demonstration is problematic since the referential locus ‘1’, which is typically used as an indexical referring to the reported signer in attitude role shift, cannot be used anymore in this way in Example (18). The standard interpretation of ‘1’ is blocked because the body of the signer already represents the shepherd boy in the second part of the action role shift. Just as the locus ‘3’ in Example (12) above, the locus ‘1’ in this example is first introduced as a topographic location (endpoint of the movement) by a whole entity classifier (‘3we-cl:move1’) and is then reinterpreted as a linguistic anaphoric locus controlling object agreement. As a consequence, the agreement verb ‘help’ in the attitude role shift is adapted to this specific situation by using a default location for the subject and first person for the object. Since the body of the signer continues to represent the shepherd boy, the first-person object agreement unambiguously refers to the shepherd boy in this context.
Note, finally, that the gestural demonstrations used in these two examples have a highly expressive component. Without the demonstrations, the narration would be considerably less vivid and expressive. (Action) role shift is thus a powerful tool to embody protagonists and make stories come alive by letting the audience watch the protagonists act.
5 Conclusion
In this article, I argued that a uniform analysis of attitude and action role shift as demonstrations of linguistic and non-linguistic actions as proposed by Davidson (2015) offers a promising basis even for the analysis of complex demonstrations found in sign language narratives. However, given the complexity of the examples discussed in this article, certain modifications to Davidson’s original account are necessary. First of all, mixed cases of attitude and action role shift show that a hybrid account distinguishing between linguistic and non-linguistic demonstrations along the lines of Maier (2017, 2018) is a promising path to correctly analyze the specific linguistic properties of speech reports. Second, the original demonstration theory needs more flexibility to account correctly for linguistic material in the scope of action role shift. Gestural demonstrations are often accompanied by corresponding linguistic descriptions which do not belong to the gestural demonstration but directly enter the semantic representation of the sentence from the perspective of the narrator. And finally, signers can use different articulators to demonstrate linguistic and non-linguistic actions of two or more protagonists or different aspects of the same event simultaneously. Such multiple demonstrations are typically used for events that involve the spatial, social or linguistic interaction of two (or more) protagonists or entities. In such cases, different body parts can be used to represent different protagonists.
I already mentioned in the beginning that in this article, I can only sketch a unified formal analysis, which accounts for more complex interactions of gestural demonstrations and linguistic descriptions in sign language role shift. Obviously, we still have a long way to go before we will be able to define a more formal algorithm that allows for a more systematic interpretation of complex demonstrations. In addition to the aspects discussed in this article, such a theory needs to integrate the iconic potential of gestural demonstrations and clearly identify the iconic aspects relevant for the interpretation of a demonstration (Clark and Gerrig 1990; Perniss et al. 2010; Schlenker 2017b). And finally, such a theory should be integrated in a uniform theory of demonstrations in different modalities that accounts for modality-specific and modality-independent aspects of (gestural) demonstrations in different contexts (Dingemanse and Akita 2017; Ebert et al. 2020; Gawne and McCulloch 2019; Parrill 2009, 2010; Schlenker 2018a, 2018b).
Funding source: Horizon 2020 Framework Programme
Award Identifier / Grant number: 693349
Funding source: German Research Foundation (DFG)
Award Identifier / Grant number: DFG Priority Programme 2329 "Visual Communication"
Acknowledgments
This research project was partly funded by the German Science Foundation (DFG Priority Program 2329 “Visual Communication”). Many thanks to all deaf participants who contributed to the Göttingen fable corpus. Furthermore, I would like to thank my colleagues Diane Brentari, Cornelia Ebert, Thomas Finkbeiner, Hans-Martin Gärtner, Annika Herrmann, Emar Maier, Nina-Kristin Meister, Josep Quer, Philippe Schlenker and the audiences of the workshops “Iconicity in Language” and “Expressing the Use-Mention Distinction: An Empirical Perspective” for helpful discussions. And last, but not least, I would like to thank two anonymous reviewers for their valuable comments.
Notational conventions: Signs are glossed in small caps. Manual gestures are glossed in italics. Subscripts represent loci in the signing space that either represent topographic locations or discourse referents. ix is a pronominal pointing sign and cl a classifier. Whole entity classifiers are glossed as we-cl and body part classifiers as bp-cl. The (gestural) movement component is added in italics, i.e., we-cl:climb represents a climbing entity. We do not gloss the handshape of the classifiers since they are illustrated by the corresponding pictures. Non-manual markers such as facial expressions are represented by lines above the glosses. ‘bf’ stands for furrowed eyebrows used in wh-questions, ‘fe-x’ for a specific facial expression depicting the facial expression of one of the protagonists (i.e., ‘x’) and ‘mg’ for a specific mouth gesture. A mouthing is indicated by ‘/mouthing/’. The length of the line of a non-manual marker indicates the scope of the corresponding non-manual. Note that in Sections 2 and 3, role shift, like other non-manual markers, is indicated by a line above the glosses with the abbreviation ‘rs’. In Section 4, I use brackets with the subscript ‘rs’, i.e., ‘[…]rs’, to mark the scope of the role shift.
References
Aronoff, Mark, Irit Meir & Wendy Sandler. 2005. The paradox of sign language morphology. Language 81. 301–344. https://doi.org/10.1353/lan.2005.0043.Search in Google Scholar
Barberà, Gemma & Josep Quer. 2018. Nominal referential values of semantic classifiers and role shift in signed narratives. In Annika Hübl & Markus Steinbach (eds.), Linguistic foundations of narration in spoken and sign languages, 251–274. Amsterdam & Philadelphia: John Benjamins.10.1075/la.247.11barSearch in Google Scholar
Benedicto, Elena & Diane Brentari. 2004. Where did all the arguments go? Argument-changing properties of classifiers in ASL. Natural Language & Linguistic Theory 22. 743–810. https://doi.org/10.1007/s11049-003-4698-2.Search in Google Scholar
Boyes Braem, Penny & Rachel Sutton-Spence (eds.). 2001. The hands are the head of the mouth: The mouth as articulator in sign languages. Hamburg: Signum.Search in Google Scholar
Brendel, Elke, Jörg Meibauer & Markus Steinbach. 2011. Exploring the meaning of quotation. In Elke Brendel, Jörg Meibauer & Markus Steinbach (eds.), Quotation and meaning, 1–33. Berlin & Boston: De Gruyter Mouton.10.1515/9783110240085.1Search in Google Scholar
Clark, Herbert H. & Richard J. Gerrig. 1990. Quotations as demonstrations. Language 66. 764–805. https://doi.org/10.2307/414729.Search in Google Scholar
Cormier, Kearsy, Sandra Smith & Zed Sevcikova-Sehyr. 2015. Rethinking constructed action. Sign Language and Linguistics 18(2). 167–204. https://doi.org/10.1075/sll.18.2.01cor.Search in Google Scholar
Cormier, Kearsy, Sandra Smith & Martine Zwets. 2013. Framing constructed action in British Sign Language narratives. Journal of Pragmatics 55. 119–139. https://doi.org/10.1016/j.pragma.2013.06.002.Search in Google Scholar
Crasborn, Onno, Johanna Mesch, Dafydd Waters, Annika Nonhebel, Els van der Kooij, Bencie Woll & Britta Bergmann. 2007. Sharing sign language data online: Experiences from the ECHO project. International Journal of Corpus Linguistics 12(4). 535–562. https://doi.org/10.1075/ijcl.12.4.06cra.Search in Google Scholar
Davidson, Donald. 1979. Quotation. Theory and Decision 11(1). 27–40. https://doi.org/10.1007/bf00126690.Search in Google Scholar
Davidson, Kathryn. 2015. Quotation, demonstration, and iconicity. Linguistics and Philosophy 38. 477–520. https://doi.org/10.1007/s10988-015-9180-1.Search in Google Scholar
Dingemanse, Mark & Kimi Akita. 2017. An inverse relation between expressiveness and grammatical integration: On the morphosyntactic typology of ideophones, with special reference to Japanese. Journal of Linguistics 53. 501–532. https://doi.org/10.1017/s002222671600030x.Search in Google Scholar
Dudis, Paul G. 2004. Body partitioning and real space blends. Cognitive Linguistics 15(2). 223–238.10.1515/cogl.2004.009Search in Google Scholar
Ebert, Christian, Cornelia Ebert & Robin Hörnig. 2020. Demonstratives as dimension shifters. Proceedings of Sinn und Bedeutung 24. 161–178.Search in Google Scholar
Engberg-Pedersen, Elisabeth. 1993. Space in Danish Sign Language. The meaning and morphosynthax of the use of space in a visual language. Hamburg: Signum.Search in Google Scholar
Fischer, Renate & Simon Kollien. 2010. Gibt es Constructed Action in Deutscher Gebärdensprache und in Deutsch (in der Textsorte Bedeutungserklärung)? Das Zeichen 86. 502–510.Search in Google Scholar
Gawne, Lauren & Gretchen McCulloch. 2019. Emoji as digital gestures. Language@Internet 17. 1–21. article 2.Search in Google Scholar
Goldin-Meadow, Susan & Diane Brentari. 2017. Gesture, sign and language: The coming of age of sign language and gesture studies. Behavioral and Brain Sciences 39. 1–17. https://doi.org/10.1017/s0140525x15001247.Search in Google Scholar
Herrmann, Annika & Nina-Kristin Pendzich. 2018. Between narrator and protagonist in fables of German Sign Language. In Annika Hübl & Markus Steinbach (eds.), Linguistic foundations of narration in spoken and sign languages, 275–308. Amsterdam & Philadelphia: John Benjamins.10.1075/la.247.12herSearch in Google Scholar
Herrmann, Annika & Markus Steinbach. 2012. Quotation in sign languages – a visible context shift. In Ingrid van Alphen & Ingrid Buchstaller (eds.), Quotatives: Cross-linguistic and crossdisciplinary perspectives, 203–228. Amsterdam & Philadelphia: John Benjamins.10.1075/celcr.15.12herSearch in Google Scholar
Herrmann, Annika & Markus Steinbach. 2018. Expressive Gesten – expressive Bedeutungen. Expressivität in gebärdensprachlichen Erzählungen. In Franz d’Avis & Rita Finkbeiner (eds.), Expressivität im Deutschen, 313–337. Berlin & Boston: De Gruyter. https://doi.org/10.1515/9783110630190-013.
Hübl, Annika. 2014. Context shift (im)possible: Indexicals in German Sign Language. In Martin Kohlberger, Kate Bellamy & Eleanor Dutton (eds.), Proceedings of the 21st Conference of the Student Organization of Linguistics of Europe (ConSOLE), vol. 21, 171–183. Leiden: Leiden University Centre for Linguistics.
Hübl, Annika, Emar Maier & Markus Steinbach. 2019. To shift or not to shift: Quotation and attraction in DGS. Sign Language and Linguistics 22(2). 171–209. https://doi.org/10.1075/sll.18004.hub.
Liddell, Scott & Melanie Metzger. 1998. Gesture in sign language discourse. Journal of Pragmatics 30(6). 657–697. https://doi.org/10.1016/s0378-2166(98)00061-7.
Lillo-Martin, Diane. 1995. The point of view predicate in American Sign Language. In Karen Emmorey & Judy Reilly (eds.), Language, gesture, and space, 155–170. Hillsdale, NJ: Lawrence Erlbaum.
Lillo-Martin, Diane. 2012. Utterance reports and constructed action. In Roland Pfau, Markus Steinbach & Bencie Woll (eds.), Sign language: An international handbook, 365–387. Berlin & Boston: De Gruyter Mouton. https://doi.org/10.1515/9783110261325.365.
Maier, Emar. 2017. The pragmatics of attraction: Explaining unquotation in direct and free indirect discourse. In Paul Saka & Michael Johnson (eds.), The semantics and pragmatics of quotation, 259–280. Berlin: Springer. https://doi.org/10.1007/978-3-319-68747-6_9.
Maier, Emar. 2018. Quotation, demonstration, and attraction in sign language role shift. Theoretical Linguistics 44(3/4). 165–176. https://doi.org/10.1515/tl-2018-0019.
Maier, Emar & Markus Steinbach. 2022. Perspective shift across modalities. Annual Review of Linguistics 8. 59–76. https://doi.org/10.1146/annurev-linguistics-031120-021042.
Meier, Richard P. 2002. Why different, why the same? Explaining effects and non-effects of modality upon linguistic structure in sign and speech. In Richard P. Meier, Kearsy Cormier & David Quinto-Pozos (eds.), Modality and structure in signed and spoken languages, 1–25. Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9780511486777.001.
Meir, Irit, Carol A. Padden, Mark Aronoff & Wendy Sandler. 2007. Body as subject. Journal of Linguistics 43. 531–563. https://doi.org/10.1017/s0022226707004768.
Metzger, Melanie. 1995. Constructed dialogue and constructed action in ASL. In Ceil Lucas (ed.), Sociolinguistics in deaf communities, 255–271. Washington, DC: Gallaudet University Press.
Parrill, Fey. 2009. Dual viewpoint gestures. Gesture 9(3). 271–289. https://doi.org/10.1075/gest.9.3.01par.
Parrill, Fey. 2010. Viewpoint in speech–gesture integration: Linguistic structure, discourse structure, and event structure. Language and Cognitive Processes 25(5). 650–668. https://doi.org/10.1080/01690960903424248.
Pendzich, Nina-Kristin. 2020. Lexical nonmanuals in German Sign Language (DGS): Empirical studies and theoretical implications. Berlin & Boston: De Gruyter Mouton and Ishara Press. https://doi.org/10.1515/9783110671667.
Perniss, Pamela. 2012. Use of sign space. In Roland Pfau, Markus Steinbach & Bencie Woll (eds.), Sign language: An international handbook, 412–431. Berlin: De Gruyter Mouton. https://doi.org/10.1515/9783110261325.412.
Perniss, Pamela, Robin L. Thompson & Gabriella Vigliocco. 2010. Iconicity as a general property of language: Evidence from spoken and signed languages. Frontiers in Psychology 1, article 227. 1–15. https://doi.org/10.3389/fpsyg.2010.00227.
Pfau, Roland & Markus Steinbach. 2011. Grammaticalization in sign languages. In Bernd Heine & Heiko Narrog (eds.), Handbook of grammaticalization, 681–693. Oxford: Oxford University Press. https://doi.org/10.1093/oxfordhb/9780199586783.013.0056.
Quer, Josep. 2005. Context shift and indexical variables in sign languages. Proceedings of Semantics and Linguistic Theory 15. 152–168. https://doi.org/10.3765/salt.v0i0.2923.
Quer, Josep. 2016. Reporting with and without role shift: Sign language strategies of complementation. In Roland Pfau, Markus Steinbach & Annika Herrmann (eds.), A matter of complexity: Subordination in sign languages, 204–230. Berlin & Boston: De Gruyter Mouton and Ishara Press. https://doi.org/10.1515/9781501503238-009.
Schembri, Adam, Caroline Jones & Denis Burnham. 2005. Comparing action gestures and classifier verbs of motion: Evidence from Australian Sign Language, Taiwan Sign Language, and nonsigners’ gestures without speech. Journal of Deaf Studies and Deaf Education 10. 272–290. https://doi.org/10.1093/deafed/eni029.
Schlenker, Philippe. 2017a. Super monsters I: Attitude and action role shift in sign language. Semantics and Pragmatics 10(9). 1–65. https://doi.org/10.3765/sp.10.9.
Schlenker, Philippe. 2017b. Super monsters II: Role shift, iconicity and quotation in sign language. Semantics and Pragmatics 10(12). 1–67. https://doi.org/10.3765/sp.10.12.
Schlenker, Philippe. 2018a. Visible meaning: Sign language and the foundations of semantics. Theoretical Linguistics 44(3/4). 123–208. https://doi.org/10.1515/tl-2018-0012.
Schlenker, Philippe. 2018b. Gesture projection and cosuppositions. Linguistics and Philosophy 41. 295–365. https://doi.org/10.1007/s10988-017-9225-8.
Steinbach, Markus. 2021. Role shift – theoretical perspectives. In Josep Quer, Roland Pfau & Annika Herrmann (eds.), Theoretical and experimental sign language research, 351–377. London: Routledge. https://doi.org/10.4324/9781315754499-16.
Steinbach, Markus & Edgar Onea. 2016. A DRT analysis of discourse referents and anaphora resolution in sign language. Journal of Semantics 33. 409–448. https://doi.org/10.1093/jos/ffv002.
Zwitserlood, Inge. 2012. Classifiers. In Roland Pfau, Markus Steinbach & Bencie Woll (eds.), Sign language: An international handbook, 158–186. Berlin & Boston: De Gruyter Mouton. https://doi.org/10.1515/9783110261325.158.
© 2023 the author(s), published by De Gruyter, Berlin/Boston
This work is licensed under the Creative Commons Attribution 4.0 International License.
Articles in the same Issue
- Frontmatter
- Quotation as an interface phenomenon
- Quotation does not need marks of quotation
- Quotational nicknames in German at the interface between syntax, punctuation, and pragmatics
- Quotation marks and the processing of irony in English: evidence from a reading time study
- Angry lions and scared neighbors: Complex demonstrations in sign language role shift at the sign-gesture interface
- Scare quotes as deontic modals