Constructed dialogue in signed-to-spoken interpreting: renditions as a reconfiguration of utterance semiotics

Vibeke Bø
Published: August 25, 2025
From the journal Semiotica

Abstract

This study investigates how two interpreters mobilize different language resources when faced with a semiotically complex construction in Norwegian Sign Language (NTS): constructed dialogue. Drawing on a semiotic approach to languages and multimodal conversation analysis, I analyze four sequences of interpreted semi-naturalistic conversation. Within this semiotic approach, three main types of resources serve as analytical categories for source utterances and renditions: descriptions, depictions, and indications. Part of the interpreters’ task entails either maintaining or changing the depictive character of the source utterance’s constructed dialogue sequences, but this is only one of many choices. Fine-grained analysis reveals that interpreters recruit from a semiotically diverse repertoire to reconfigure the semiotics of the source utterance. The interpreter’s task of situating the complex semiotics of source utterances is highlighted. When the interpreter renders another participant’s utterance, they reconfigure its semiotic resources to ensure that the rendition fits appropriately within its target language ecology, thus securing common ground between participants. This study contributes to existing knowledge of interpreting as a discourse process by analyzing the semiotics of signed-to-spoken interpreting.

1 Introduction

Constructed dialogue[1] is a semiotically complex linguistic device: it represents, or depicts, someone’s dialogue with conventional lexical items (Dudis 2007; Hodge and Ferrara 2014; Tannen 1986). The semiotic complexity of constructed dialogue may pose a challenge for interpreters faced with differing language ecologies, as is the case for Norwegian Sign Language and Norwegian. If the semiotic strategies of utterances in the source language are not suitable for the target language ecology, the interpreter has several options. This paper presents examples of how two interpreters handle renditions in such scenarios.

The complexity of interpreted interaction has been addressed in several studies. Within the context of interpreting between a signed and a spoken language, there is a tendency to focus on the difference in modality (Halley 2020; Napier 2016; Padden 2000; Petitta et al. 2018). Meanwhile, researchers of interpreting between two spoken languages are increasingly concerned with the multimodality of all languages (Davitti 2019; Davitti and Pasquandrea 2017; Vranjes 2021; Vranjes and Brône 2021). Regarding spoken languages as speech-gesture systems rather than verbal communication with some gestural additions (Holler 2022; Holler and Wilkin 2011; Kendon 2017) reduces the suitability of modality as a criterion for categorizing languages, because it allows the visual resources of spoken languages to be recognized as having linguistic status. Within interpreting studies, interpreters’ multimodal pragmatic resources are acknowledged as a crucial part of renditions (Blakemore and Gallai 2014), particularly in achieving interactional goals (Davitti 2019; Davitti and Pasquandrea 2017). However, the semiotic choices of renditions as interactional phenomena remain underexplored in dialogue interpreting research. The novelty of the current approach lies in analyzing the semiotics of source utterances and renditions with the tools of multimodal interactional analysis. In doing so, this study puts forward the view that the semiotics of languages are simultaneously linguistic and interactional in nature (Holler 2022).

Given its semiotic complexity, constructed dialogue lends itself well to exploring interpreters’ language practices concerning the semiotics of interpreting. The view put forward here is that the interpreting process entails not only finding lexical equivalents of the source utterance, but also making choices regarding style and genre, and specifically about which semiotic strategies are appropriate in a given context. These considerations allow utterances to be properly situated in different language environments, or language ecologies (Haugen 1971; Hodge 2014). Further, language ecologies are considered influential with respect to a language’s semiotic norms. Specifically, when dealing with signed and spoken languages, the semiotic strategy of depiction is found to be exploited to different degrees in each language ecology (Cormier et al. 2012b). Consequently, the primary research question of this paper is: How do two signed language interpreters navigate differing norms for the semiotically complex practice of constructed dialogue within the ecologies of Norwegian and Norwegian Sign Language (NTS)?

A secondary aim of this study is to align interpreting studies with a semiotic approach to languages, enabling discussions of spoken/spoken and signed/spoken language interpreting to proceed on common ground. To address the research questions, a qualitative paradigm was chosen: the analyses are based on two video-recorded interpreted lunch conversations. From these conversations, four sequences containing constructed dialogue in the signed source utterance were examined by means of multimodal interaction analysis (Goodwin 2000, 2010; Keevallik 2018; Mondada 2014). These four sequences are open non-anonymized data, available for review (link in Method section).

In what follows, I present an overview of research on a semiotic approach to language and how it applies to interpreting studies. I then discuss the data and method used for this study. The remainder of this paper examines selected source utterances and renditions of interpreted interaction that represent different mechanisms: in the first two extracts, the interpreter maintains considerable parts of the semiotic strategies from the source utterance by including constructed dialogue in the rendition, while in the last two extracts the semiotics are altered to a large degree and constructed dialogue is not part of the rendition. In all cases, multimodal resources contribute to properly situating renditions in their new ecological environments.

2 The semiotic complexity of constructed dialogue

The post-Peircean semiotic approach to languages (Clark 1996; Ferrara and Hodge 2018; Hodge and Ferrara 2022; Peirce 1965) offers terminology for discussing semiotically diverse language practices, as it does not privilege speech over other means of meaning-making (Ferrara and Hodge 2018; Hodge and Ferrara 2022). A growing body of literature acknowledges the multimodality and embodiment of all languages, spoken and signed (e.g., Allwood 2008; Bargiela-Chiappini 2013; Deppermann and Streeck 2018; Ferrara and Hodge 2018; Goodwin 2000; Mondada 2013).

Adopting the view that languages are fundamentally embodied (e.g., Holler 2022; Kendon 2004; McNeill 2000; Shaw 2019; Sweetser 2023), and semiotically complex, the starting point for the current analysis will be the semiotics of utterances, zooming in on the semiotic character of resources. Depictions, descriptions, and indications (Clark 1996; Ferrara and Hodge 2018; Kendon 2017; Peirce 1965) are the three main semiotic strategies deployed in meaning-making and serve as the main categories of the current analysis. From the vantage point of constructed dialogue, I will explain the nature of each semiotic strategy.

2.1 Depiction, description, and indication as semiotic discourse strategies

Depictions are linguistic expressions that in some way depict the intended referent (Clark 1996; Clark and Gerrig 1990; Ferrara and Hodge 2018). In a broad sense, depictions are linguistic utterances in which information is presented in a “show-you” format (Beal-Alvarez and Trussell 2015; Johnston 2019). Constructed dialogue is depictive because it shows dialogue (Dudis 2011): it prompts the signer or speaker to enter the role of the person they are talking about, mimicking some historical or fictional part of a conversation. Depictions are typically dependent on context for correct interpretation, contrasting with descriptions, discussed next.

Depiction represents but one facet of the resources of constructed action. The lexical words or signs of these utterances represent descriptive resources. These are highly conventional parts of discourse, without an apparent motivated link between form and meaning (Dingemanse 2015; Ferrara and Hodge 2018). In constructed dialogue sequences, conventional lexical items function semiotically as descriptive resources. Historically, descriptive resources have been the easiest to account for and categorize and have therefore been the focus of most research on language (Linell 2005). In discourse, descriptions are linguistic utterances where meaning is conveyed in a “tell-you” format (Beal-Alvarez and Trussell 2015; Dingemanse 2015; Johnston 2019). Thus, while constructed dialogue depicts discourse, this depiction is achieved through the use of descriptive resources.

Indicative acts entail directing the interlocutor’s attention towards something. This joint attention can be achieved by physical pointing, or gazing towards something, but we can also indicate with lexical items like deictic expressions. The deictic expressions (like “this” or “she” in English) represent a hybrid concerning conventionality: they are highly conventional lexical items that require context to make meaning. Further, indication is a strategy for anchoring discourse in time and place (Clark 1996; Ferrara and Hodge 2018). In discourse containing constructed dialogue, gaze has been reported as an important resource for indicating the (imagined) addressee of the utterances (Metzger 1995).

Importantly, the descriptive, depictive, and indicative resources often co-occur, demonstrated by the resource of constructed dialogue: a linguistic strategy that depicts discourse with the descriptive resources of conventional grammar and lexical items, often indicating an imagined addressee with body posture and gaze.

2.2 Semiotic complexity in a visual language ecology

Janzen (2017) investigates how the composite nature of languages might apply to signed languages, where everything occurs in the visual modality: “For a group of languages that share the same bodily medium of expression with gesture, understanding how utterances might be composite in this way seems a more difficult task” (Janzen 2017: 512). He continues to establish the presence of gesture in signed languages and how the concept of multimodality might apply in a visual language ecology:

In a strict sense, signed language use may not be considered multimodal because it typically is not a combination of speech and gesture, and because signs and gestures are thought of as expression in the same medium or modality. But perhaps such a conclusion is misleading … if we consider “modality” to stretch beyond a simple distinction between two articulatory systems – speech and gesture/signing – and treat multimodality as true multiplicity and not duplicity, then both spoken and signed language can be analyzed under the same rubric. (Janzen 2017: 519–520)

Thus, investigations of multimodality in spoken languages often focus on combining different resources, such as speech and gesture. Janzen (2017) suggests the topic-comment structure as a representation of the multimodal and composite nature of signed language utterances. This discourse structure involves interpreting lexical items in sequences with varying semiotic signals, embodying both compositeness and multimodality. In topic-comment structures, the topic is understood as indicating relevance to the subsequent comment. Constructed dialogue is another discourse structure where participants must interpret lexical items through different semiotic lenses. In the following, I demonstrate how the semiotic approach to languages can be used to treat constructed dialogue as a multiplicity of semiotic resources.

2.3 Constructed dialogue depicts discourse

The term constructed dialogue was coined by Deborah Tannen (1986), working with spoken languages. It is a linguistic device for representing dialogue, extensively explored in both spoken and signed languages (e.g., Dudis 2011; Ferrara and Bell 1995; Hodge and Cormier 2019; Metzger 1995; Mohammad and Vásquez 2015; Tannen 1986; Thumann 2011; Young et al. 2012). This resource of communication is also referred to as reported speech in the literature (e.g., Holt and Clift 2006). However, Tannen (1986) observed that this term obscures the fact that we can never report truthfully exactly what has been said by someone (or oneself) in the past; rather, we construct dialogue according to our memory. While Tannen reported on spoken languages, Metzger (1995), working with American Sign Language (ASL), added the term constructed action, as signed language utterances often include the construction of people’s actions and not only their dialogue. Notably, although the term “constructed action” originates from observations of a signed language, spoken languages also employ action-like modes of expression (Kendon 2014).

As previously stated, constructed dialogue utilizes an important depictive resource by means of its “show-you” format (Beal-Alvarez and Trussell 2015; Dudis 2011). An important backdrop for the current study is that language ecologies make different use of semiotic strategies. The notion of a language ecology considers languages as part of larger ecological systems (Garner 2004; Haugen 1971). In this study, an ecological lens of inquiry is useful to help us understand how a visual language ecology (like that of NTS) differs from a spoken language ecology in its sophisticated use of depictive resources (e.g., Brennan 1999; Hodge 2014). When a language is perceived and produced in a visual space in front of the signer, the ecological surroundings allow depictive resources to develop into a sophisticated system of meaning-making. The interpreted interaction investigated here demonstrates how the “show-you” format of constructed dialogue exists with different ramifications in a spoken and a visual language ecology. Thus, the transition between language ecologies requires a nuanced semiotic adjustment. In this study, to highlight the semiotic complexity of source utterances, as well as the semiotic work carried out by the interpreter, I refer to this adjustment as the “reconfigured semiotics of utterances.”

Having established a semiotic and ecological approach to language practices, I now turn to the specific ecology surrounding the explored events of this study: the ecology of an interpreted event.

2.4 A semiotic approach applied to interpreting studies

The acknowledgement that languages are multimodal and semiotically complex has implications for how we understand the task of an interpreter. There is a growing body of work that considers the different functions of multimodal resources in interpreted interaction (Arbona et al. 2022; Davitti 2019; Davitti and Pasquandrea 2017; Janzen et al. 2023; Janzen and Shaffer 2013; Mason 2012; Poignant 2021; Tiselius 2022). Recognizing that the multimodal resources of language practices are also part of a specific language ecology presents the possibility of viewing the act of simultaneous interpreting as replacing a specific set of semiotic resources and strategies so that renditions fit into the ecology of another language. This semiotic approach highlights that lexical choices are only part of the interpreting process.

A few studies have applied a semiotic approach to the analysis of interpreted material. Without explicit reference to semiotics, Halley (2020) explores whether there is a directionality effect in the rendering of depiction strategies between ASL and English. In an experimental context, he investigates the work of one interpreter in both directions and concludes that the interpreter needs to “render a wide variety of forms of depiction in both directions, which further complicates the interpreting process” (Halley 2020: 34). Drawing on the (post-)Peircean semiotics also applied in the current study, Meurant and colleagues (2022) explore reformulations in signed and spoken discourse, wherein signers or speakers repeat their discourse using different terms to clarify an expression. They find that reformulations may play a key role in the interpreters’ process. They also argue that descriptions and depictions are combined in reformulations, as “signers frequently produce reformulations by combining conventional signs and structures with signs and structures whose meaning is constructed depictively” (Meurant et al. 2022: 327). In addition, they note that the time constraints of simultaneous interpreting pose a specific challenge to the interpreted event and may result in extra clarity requirements.

While varieties of multimodal resources are highlighted in several studies on interpreting, few studies on dialogue interpreting have explicitly differentiated between semiotic strategies (although, see Stone and Hughes 2020). The current article explores resources in renditions through the lens of a semiotic approach to language practices: the strategies of depiction, description, and indication.

3 Methods

3.1 Participants and data

The data for this study consist of two interpreted informal lunch conversations (ca. 84 min in total) that have been recorded and annotated. The sites for recording were chosen based on my professional network and availability. It was important that the hearing non-signing participant was in the minority, for two reasons: (1) Directionality: This study primarily focuses on spoken language renditions. Therefore, it was important to ensure that there were ample instances of signed-to-spoken renditions. (2) Adaptation: Deaf individuals are typically experts at calibrating their language to meet communication needs (Moriarty and Kusters 2021), particularly in interpreted interaction (Haug et al. 2017). This expertise may result in adaptation towards slower, clearer discourse, thereby facilitating comprehension for (hearing) individuals with different levels of proficiency in NTS. By ensuring a majority of deaf, signing individuals, I aimed to reduce such accommodation to L2 users, which could potentially alter the character of the signing discourse. Finding workplaces where deaf professionals are in the majority limits the potential objects of study. However, while 84 min of recorded material is a small sample, it provides sufficient example sequences for a multimodal in-depth analysis (Davitti and Pasquandrea 2017; Goodwin 2017; Halley 2020; Mondada 2018).

All interpreters in this study are familiar with the workplace and most of the participants. The data are semi-naturalistic: even informal settings require an appointment to film participants, and there were two or three cameras and a researcher in the room. On arrival, all participants were provided with the necessary information to give their consent to participate in the project. The project description was sent to the workplace for participants to read beforehand and, in addition, information was given in Norwegian Sign Language and Norwegian to deaf and hearing participants, respectively. The current project was approved by the Norwegian Centre for Research Data (NSD; #600573). All participants have signed consent forms, stating their willingness to participate in the project, to be video-recorded, and for the data to be used in research and teaching outputs. The data were annotated in ELAN, a software tool widely used for signed and spoken language analyses (Crasborn and Sloetjes 2008).

Open non-anonymized data is an important contribution of the current study, as it provides the field with opportunities for extended discussion of real-life interpreted interaction. For an overview of the data, see Table 1.

Table 1: Overview of participants and data.

Workplace 1: 4 participants in total (4 women), of whom 2 were deaf, 1 hearing non-signing, 0 hearing signing, and 1 an interpreter; 42:35 minutes of recording.
Workplace 2: 6 participants in total (3 women, 3 men), of whom 3 were deaf, 1 hearing non-signing, 1 hearing signing, and 1 an interpreter; 41:52 minutes of recording.

The interpreters are L2 learners of NTS. Both are trained interpreters with more than ten years of interpreting experience. Both interpreters frequently interpret in workplaces 1 and 2, respectively, so they are familiar with the other participants in the study. As the topics discussed were not assigned but occurred naturally, the interpreters had limited possibilities for preparation.

3.2 Annotation and analysis

The transcription of multimodal, embodied discourse is necessarily a selective activity that depends on relevance to the research questions (Mondada 2018) and theoretical standpoints (Ochs 1979). To answer the research question of how constructed dialogue in the signed source utterance is situated in a spoken language ecology, it was important to establish what semiotic strategies were used in both the original (signed language) utterance and the Norwegian rendition. Because the study involves both a spoken and a signed language, the annotations bring together two markedly different transcription traditions.

Annotation systems of signed language data have developed since the beginning of signed language research (Stokoe 1960, 1996), and have been refined through the efforts of linguists working with signed language corpora for different signed languages (Cormier et al. 2012a; Johnston 2003). Multimodal interaction analysis considers linguistic and pragmatic resources (e.g., Goodwin 2000, 2010; Mondada 2013) and how all these different resources interact and contribute to joint meaning-making (Clark 1996). In what follows, annotation conventions used for signed and spoken discourse in this study are presented.

3.2.1 Annotating a signed language

Signed languages do not have a written form, or a conventional notational system (Hochgesang 2022). The conventional way of representing signed language discourse is to use glossing, with a written language as the auxiliary language (Johnston 2003; Skedsmo 2021). It is important to bear in mind that a gloss is not an exact translation of the signed language sign; it is merely a method of representing discourse. This method is arguably problematic because it might reinforce the very robust myth that there is one sign for each word of a spoken language (Boyes Braem 2012), which is not the case. Also, glossing might make signed languages look like ungrammatical (in this case) English, since the conventional method of glossing is to use the citation form of a written word (Rosenthal 2009). For this study, I use this glossing method for the purpose of readability. However, readers are strongly encouraged to review the video files to get a proper impression of the NTS utterances used for the analyses.[2]

I adopt annotation conventions from a large, signed language corpus (Johnston 2019),[3] because this is the model used in the current development of annotation conventions of the NTS (Norwegian Sign Language) corpus (Ferrara et al. 2022). Also, this study is informed by studies on signed language data conducted within the conversation analysis tradition (Skedsmo 2020, 2021).

3.2.2 Transcribing an embodied spoken language

The spoken language renditions of source utterances are orthographically transcribed. Embodied conduct for both languages was transcribed following Mondada’s (2018) conventions, including systematically displaying screenshots of data. Multimodality reaches beyond the exploitation of our own bodily resources to include, for example, coupling with the material world and sensory perception (Bargiela-Chiappini 2013). However, the focus on multimodal resources in this study is restricted to the embodied resources of the participants. The principles of multimodality include considering the ecology of the activity (Mondada 2018: 86), which is adhered to by accounting for the characteristics of an interpreted event, and specifically an informal lunch conversation at a workplace. An English translation is provided for both the signed language source utterance and the Norwegian rendition.

3.2.3 Annotation procedure

Because constructed dialogue is a clear example of semiotically complex utterances, I initially searched the material for constructed dialogue sequences in the signed discourse. This search yielded n = 75 instances across the two conversations, distributed nearly evenly between them (Conversation 1: n = 37; Conversation 2: n = 38).
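To make this search step concrete, the sketch below shows one way such counts could be derived from exported ELAN (.eaf) files using only Python’s standard library. It is a minimal illustration rather than the workflow used in this study; the file names, the tier name, and the exact form in which the CD prefix is stored are hypothetical assumptions.

    # Minimal sketch (not the study's actual pipeline): count constructed
    # dialogue glosses in an ELAN .eaf file. File and tier names are assumed.
    import xml.etree.ElementTree as ET

    def count_cd_annotations(eaf_path: str, tier_id: str = "P1 GLOSS") -> int:
        """Count annotations on one ELAN tier whose value is marked as constructed dialogue."""
        root = ET.parse(eaf_path).getroot()
        count = 0
        for tier in root.iter("TIER"):
            if tier.get("TIER_ID") != tier_id:
                continue  # only inspect the gloss tier of the source utterances
            for value in tier.iter("ANNOTATION_VALUE"):
                text = (value.text or "").strip()
                # Annotation guideline (see Appendix): CD sequences are bracketed as <CD: ...>
                if text.startswith("<CD"):
                    count += 1
        return count

    if __name__ == "__main__":
        # Hypothetical file names for the two recorded lunch conversations
        for eaf_file in ["workplace1.eaf", "workplace2.eaf"]:
            print(eaf_file, count_cd_annotations(eaf_file))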

This study employs a fundamentally qualitative approach to investigate signed-to-spoken interpreting renditions. To demonstrate how renditions result from semiotic work, four example sequences have been selected for in-depth analysis in this paper. The selection was based on three criteria: (1) The sequences needed to contain the depictive resource of constructed dialogue in the NTS source text; (2) The sequences could not contain sensitive issues, like religion, politics, or the mention of a third person, as open data cannot be labelled sensitive; (3) Sequences were validated by the interpreter and the deaf participant as acceptable renditions and appropriate for display. Once sequences containing sensitive issues were eliminated, sequences were chosen for in-depth analysis based on the main resource (depictive or descriptive) of the rendition. The purpose was to demonstrate how interpreters may or may not adopt the main semiotics from the source utterance: (1) Interpreters include constructed dialogue as part of their rendition; (2) Interpreters do not include constructed dialogue as part of their rendition. Conducting a multimodal interaction analysis is notably time-consuming and requires considerable space for analysis. Thus, performing this work for four sequences in this paper may serve as a starting point for exploring interpreting practice through a semiotic lens.

3.3 Presenting data

Framing interpreting as reconfiguring the semiotics of utterances requires a comparison of the distribution of semiotic resources and strategies in the source utterances with resources and strategies of the renditions. Consequently, each extract contains images both from the source utterance and from the interpreters’ renditions. For readability purposes, the interpreters’ renditions are highlighted in gray. The constructed dialogue sequences are identified with the prefix CD: <CD: WITHIN BRACKETS> on both the GLOSS tier and the Norwegian tier. For a complete list of annotation guidelines, see the Appendix. In what follows, four sequences will be analyzed and discussed.
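Because the extracts compare the timing of source utterances and renditions (the "[" convention in the Appendix marks points of simultaneity), it may help to see how such temporal relations can be read off time-aligned ELAN tiers. The following sketch is again only an illustration under assumed file and tier names (mirroring the Appendix tier labels), not the procedure used for the analyses in this paper.

    # Minimal sketch under assumed file and tier names: read time-aligned
    # annotations and report whether a source CD span overlaps in time with
    # annotations on the interpreter's rendition tier.
    import xml.etree.ElementTree as ET

    def read_tier(eaf_path: str, tier_id: str):
        """Return (start_ms, end_ms, value) for every time-aligned annotation on one tier."""
        root = ET.parse(eaf_path).getroot()
        # Map TIME_SLOT_ID -> TIME_VALUE in milliseconds (unaligned slots default to 0)
        slots = {ts.get("TIME_SLOT_ID"): int(ts.get("TIME_VALUE", 0))
                 for ts in root.iter("TIME_SLOT")}
        spans = []
        for tier in root.iter("TIER"):
            if tier.get("TIER_ID") != tier_id:
                continue
            for ann in tier.iter("ALIGNABLE_ANNOTATION"):
                start = slots[ann.get("TIME_SLOT_REF1")]
                end = slots[ann.get("TIME_SLOT_REF2")]
                value = (ann.findtext("ANNOTATION_VALUE") or "").strip()
                spans.append((start, end, value))
        return spans

    def overlap_ms(a, b):
        """Temporal overlap in milliseconds between two (start, end, value) spans."""
        return max(0, min(a[1], b[1]) - max(a[0], b[0]))

    if __name__ == "__main__":
        # Hypothetical tier IDs following the Appendix conventions
        gloss = read_tier("workplace1.eaf", "P1 GLOSS")           # signed source utterance
        rendition = read_tier("workplace1.eaf", "INT Norwegian")  # interpreter's spoken rendition
        for cd in (g for g in gloss if g[2].startswith("<CD")):
            overlapping = [r for r in rendition if overlap_ms(cd, r) > 0]
            print(cd[2], "overlaps", len(overlapping), "rendition annotation(s)")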

4 Analysis: situating constructed dialogue in a spoken language ecology

In the data set, spoken language renditions of constructed dialogue in the source utterance take on different forms. I present an analysis of four different sequences, representing two examples in which the semiotics are mainly maintained and two examples in which the semiotics are mainly altered.

4.1 Maintaining the semiotics

In the first example (Extract 1), the interpreter renders constructed dialogue in NTS as constructed dialogue in Norwegian. Image #1 shows gaze as co-establishing the enactment of constructed dialogue (Beukeleers and Vermeerbergen 2019; Ferrara 2019) and is thus an example of a depictive strategy. Gaze is often described in signed language research as constitutive of constructed dialogue and constructed action (Cormier et al. 2013; Dudis 2011; Metzger 1995; Shaffer 2012; Young et al. 2012). In the source utterance, a participant exemplifies how it would be typical in the deaf community to describe a person visually. The sequence is introduced with a conventional discourse marker, for example (line 01), after which she shifts into the depictive constructed dialogue sequence. In this sequence, she constructs the dialogue of an imagined person (lines 01 and 03, Figure 1).

Figure 1: Extract 1.

The constructed dialogue sequence is realized by means of gaze direction (indicating an imagined person in signing space), a forward head gesture, and a squint (image #1). The squint contributes to depicting the facial expression the imagined deaf person would have in the effort of describing someone. The forward head gesture may be an embodied signal: placing the head in a slightly different position signals that the dialogue stems from someone other than the utterer. After the depictive constructed dialogue sequence, there is a descriptive sequence: we know who that is (line 03). This sequence is descriptive due to the semiotic character of the resources deployed: interlocutors are no longer required to “imagine what it is like to see the thing depicted” (Dingemanse 2015: 950); rather, highly conventional lexical items with corresponding grammar are the main source of meaning-making. The shift between the depictive constructed dialogue and this descriptive sequence is evidenced by the facial expression and head position returning to neutral, while making eye contact with one of the other deaf interlocutors (indicated by the arrow, image #2).

Working into spoken Norwegian, the interpreter maintains the descriptive character of the discourse marker that introduces this sequence (glossed ONE EXAMPLE, line 01), rendering it as for example (line 02). In addition to maintaining the depictive device of constructed dialogue, the interpreter also incorporates some of the embodied resources from the source utterance: she squints and displays a slightly side-tilted head position (image #3). The squint and head tilt accompany the whole constructed dialogue sequence (you know the large one with glasses, right, the one with the big curly hair, you know who that is?) in the spoken language rendition. The interpreter also produces a gesture that copies elements from the source utterance and thus might be labelled an alignment gesture (Rasenberg et al. 2020) or gestural mirroring (Shaw 2019). Note the simultaneous interactional work done by the interpreter: signaling alignment with the deaf participant while rendering an utterance directed to the hearing participant (Dittmann and Llewellyn 1968; Haug et al. 2017; Llewellyn-Jones and Lee 2014).

The depictive constructed dialogue source utterance is preceded by a descriptive discourse marker (glossed ONE EXAMPLE) and followed by a descriptive utterance then you know who that is, we are visual, you know (glossed KNOW WHO PERSON VISUAL). This way of preparing and concluding the constructed dialogue sequence is also found in the rendition, making the semiotic character of the source and target language similar (see images #1–#4).

While the referential content of then you know who that is, we are very visual, you know is quite similar to the lexical parts of the NTS source utterance (glossed KNOW WHO PERSON VISUAL), some discourse markers are added in the rendition: jo (‘yes’) twice, and ikke sant (‘right’). The pragmatic particle jo (‘yes’) has been described as suggesting that something is a given, or mutually manifest, and as reminding the interlocutor of their “shared conclusion about the world” (Berthelin and Borthen 2019). Thus, two different discourse markers (and an accompanying hand gesture) mutually work to establish common ground (Brennan and Clark 1996; Clark and Schaefer 1989; Clark 2005). This might be seen as “securing the orientation of a hearer” (Goodwin 2000: 1499) as part of the process of properly situating an utterance in a different ecological environment.

4.2 Maintaining the semiotics with expanded descriptive sequence

In the second example (Extract 2), Participant 2 (P2) gives an example of what audism (discrimination on the basis of hearing status) can look like in practice: an imagined employer would come up with reasons not to hire a deaf applicant. Resources of the constructed dialogue sequence include gaze towards the signing space, facial expression (furrowed eyebrows), and leaning to the right (Extract 2, line 01). Participant 2 displays readiness to yield the floor towards the end of this sequence, evidenced by resuming eye contact with her interlocutor (line 03). I suggest she simultaneously signals a shift of main semiotic strategy, from depiction to description; she uses a conventional discourse marker that is glossed SO-ON (line 03), and the facial expression and body lean that were used as depictive resources are now neutral (line 03). See images #1 and #2 for demonstrations of the depictive and the descriptive semiotic strategies, respectively (Figure 2).

Figure 2: Extract 2.

Again, we find gaze as a constitutive resource of the constructed dialogue sequence. However, note the gaze going back and forth between the interlocutor and the signing space (lines 01 and 03) before resting on the interlocutor when the sequence of constructed dialogue is finished (line 03). This gaze pattern could indicate certain interactional processes: P2 is explaining to P3 (the hearing participant) how a hearing employer is likely to treat a deaf applicant. Thus, the pattern of gaze behavior closely aligns with Janzen’s (2019) description of viewpointed spaces in ASL narrative discourse. Janzen argues this is an intersubjective choice “in that the signer in effect asks the addressee to see the story as she is seeing it” (Janzen 2019: 256). Seeing linguistic choices as contributing to intersubjectivity is in line with treating semiotic strategies as interactional. In this specific situation, it is difficult to conclude whether the deaf participant’s gaze is directed towards the interpreter or the hearing interlocutor. Thus, the gaze behavior could also reflect an increased need to monitor interpreters, previously reported in a study on deaf leaders’ strategies when working with interpreters (Haug et al. 2017: 120).

In the rendition, the depictive resource of constructed dialogue from the source text is rendered as constructed dialogue, i.e., the semiotic strategy of depiction is maintained (lines 02, 04, and 05). The interpreter’s forward lean reflects a similar body lean in the source utterance (lines 02 and 04, image #3). The shift of main semiotic strategy from the source utterance is also present in the rendition following the constructed dialogue sequence, evidenced by (at least) three visual resources: a hand gesture (line 05, image #4), a shift of facial expression (raised eyebrows to neutral, see images #3 and #4), and a backwards movement of the head (lines 05 and 06). The shift of eyebrow position corresponds with that of the NTS source utterance, with a slight difference: in the source utterance the eyebrows are not raised to neutral, as in the rendition, but furrowed to neutral (images #1 and #2).

The observed differences in head position, eyebrow position, and the gesture are framed in this study as contributing to an embodied shift of semiotic strategy, from depictive constructed dialogue to a descriptive sequence.

Note that the rendition of the single sign concluding the source utterance (glossed SO-ON, line 03) is substantially longer: right, one has such attitudes to it (line 06). While the verbal rendition has undergone significant changes compared to the source utterance (SO-ON), the descriptive character of the discourse marker is maintained. SO-ON may indicate a given, that further elaboration is unnecessary. In the rendition, the Norwegian discourse marker does similar work: ikke sant (‘right’) has been described as establishing common ground with the interlocutor (Svennevig 2008). However, the continuing sequence has an explanatory character: one has such attitudes to it. One might say this prolonged rendition signals the opposite: that more work needs to be done until sufficient common ground is attained. I suggest the interpreter detects an ecological mismatch, as the depictive strategy of constructed dialogue in spoken Norwegian discourse is more prevalent among adolescents (Opsahl and Svennevig 2012). Hence, she might have incorporated the extended descriptive rendition to situate the utterance more appropriately. The descriptive part of the rendition is accompanied by a neutral body position (from a forward lean), and then a salient backwards head movement (line 06).

Summing up, the reconfiguring of embodied resources from the source utterance reflects the semiotic shift between a depictive and a descriptive main strategy, with slightly different realizations in form. The verbal component of the concluding descriptive sequence is extended compared to the source utterance, possibly as an attempt to situate a depictive resource in a language ecology with differing norms for depictive strategies. While constructed dialogue is unmarked in adult NTS discourse, it may signal more adolescent discourse in spoken Norwegian. The interpreter must take this difference into account when making her choices.

4.3 Altering the semiotics: mainly descriptive discourse

In this third example (Figure 3), the constructed dialogue sequence in the source utterance is short; it consists of only three signs. Again, this sequence is preceded by a descriptive utterance, in which Participant 1 explains what it was like for her parents to find out that she was deaf. She introduces the sequence with: At the time I was born (…) (line 01). This descriptive sequence is mostly accompanied by eye contact with the hearing interlocutor. She describes how the field of deafness was completely unfamiliar and new to her parents, enacting how her parents would ask what deafness is: what is this deafness all about? (NTS utterance glossed WHAT DEAF WHAT) (line 03).

Figure 3: Extract 3.

In line 03 (image #2), the gaze is directed towards the signing space, where she has earlier defined an area that represents deafness (glossed AREA). The repeated head shake (line 03) also contributes to expressing the parents’ stance towards deafness: they were not familiar with it. This sequence exemplifies how, in discourse, one might swiftly shift into and out of constructed dialogue. Although two semiotic strategies can be identified in this sequence (mainly descriptive or mainly depictive), the shift into the depictive strategy seems to be preceded by a preparation phase: the gaze towards the signing space and the tilting head movement start a few signs earlier than the actual constructed dialogue sequence, as both emerge with a negating sign (glossed NEG) (line 03).

Turning to the interpreter’s rendition, the constructed dialogue sequence is not rendered as constructed dialogue, but altered into a descriptive sequence, again substantially longer than the source utterance: my parents were totally unfamiliar with the situation, they knew nothing about deafness and such (lines 04–06). The interpreter’s visual cues are consistent throughout this utterance, not reflecting the semiotic shift that occurs in the source utterance.

The only reflection of a depictive device deployed by the interpreter might be read into her slightly furrowed eyebrows (image #3), which can be interpreted as an expression of the unfamiliar stance (lines 02, 04, 05, and 06). Throughout the utterance, there is no real shift to be detected regarding eyebrow position or other facial expressions, and the verbal rendition is consistently representative of descriptive resources. Thus, the depictive and descriptive resources from the source utterance are reconfigured into a mainly descriptive rendition with some depictive visual resources co-occurring.

4.4 Altering the semiotics: joint utterance

The last example is also short: Participant 3 demonstrates how he would subtly puff his cheeks to refer to someone who was overweight (Figure 4). This mouth gesture [puffed cheeks] is conventionally part of a lexical adjective with a manual part, referring to the size of someone or something (image #1). Wishing to remain subtle about this characterization of an individual, he explains that the manual part of the sign will in some situations be omitted. Thus, the constructed dialogue consists of two units: [puffed cheeks] and one sign glossed UNDERSTAND, asking an imagined interlocutor if they understand who is being referred to. Again, gaze is an important resource, fixed towards a point in signing space slightly before the puffed cheeks (see Figure 4).

Figure 4: Extract 4.

After the constructed dialogue sequence, eye contact with the interlocutor is resumed, accompanied by a descriptive explanation: then they understand (line 03).

Rendering this last sequence, the interpreter alters the depictive resource into descriptive and indicative resources. If I just do like this with my face contains highly conventional lexical items, which are descriptive in their semiotic character. Note, however, the deictic expression like this, which needs an indicative resource to be meaningful. The interpreter indicates the mouth gesture with a hand gesture towards her own face, without copying the mouth gesture. Since this hand gesture is completely invisible to the hearing non-signing participant (her gaze and head are directed towards the signing participant, leaving the interpreter behind her, see image #3), it seems the interpreter combines the visual resource from the source utterance [puffed cheeks] with her own indicative resource, a deictic expression (like this). The interpreter incorporates resources (a depictive facial expression) from the source utterance and indicates them with a deictic expression, making the rendition a joint utterance between them. This joint utterance is created in the specific ecology of the interpreted event. Note that the spoken Norwegian rendition does not overlap with the mouth gesture [puffed cheeks], as reflected in the annotation (see lines 01 and 04). How the interpreter ensures that the deictic expression like this is indicative of [puffed cheeks] thus remains unclear. The only evidence suggesting that the hearing participant makes the connection is the absence of any orientation towards it as problematic.

The ecology of an interpreted event includes the physical conditions of seating arrangements: in this case, sitting parallel to the signing participant. Thus, this example demonstrates that the choice of rendering constructed dialogue as a descriptive utterance with an indicative resource is based on a mix of linguistic, pragmatic, and physical conditions, i.e., the ecology and semiotic affordances of the interpreted event.

5 Discussion

This study has sought to investigate signed-to-spoken interpreting with a semiotic approach to languages. The benefit of a semiotic approach to interpreted discourse is that it offers a way of investigating the language practices of interpreting beyond lexical choices. Constructed dialogue, with its semiotic complexity, served to demonstrate the semiotic work conducted by the interpreter. While research within interpreting studies is increasingly conducted within the realm of multimodal interactional analysis, few studies have dealt with the interpreting process in terms of depictive, descriptive, and indicative resources (although, see Halley 2020; Meurant et al. 2022). I suggest that the acknowledgement of semiotic complexity in languages has implications for how we investigate, discuss, and teach the profession of interpreting.

In discussing the task of situating renditions, I employed the metaphor of language ecology. The simultaneous application of ecological and semiotic approaches to languages presents a potential paradox in the argument: on the one hand, the ecological approach is utilized to elucidate the differences between signed and spoken languages, particularly in how the visual ecology facilitates depictive strategies to a greater degree. On the other hand, the semiotic approach highlights the similarities; it proves useful in demonstrating how both signed and spoken languages employ depictive, descriptive, and indicative resources, suggesting they should not be viewed as fundamentally different from one another. While these might appear as conflicting approaches on the surface, their combination provides important insight. It is evident that the languages are not alike, representing distinct properties and ecologies. However, the semiotic categories of depiction, description, and indication are framed in this study as present in both languages but realized through different language practices. Sometimes, the differences are easily detected, such as when one lexical sign from NTS needs to be replaced with a lexical word from Norwegian. However, in discourse involving constructed dialogue, the interpreter has other semiotic choices to make, beyond the choice of lexical items for the rendition. A key semiotic decision is whether or not to adopt constructed dialogue. In this study’s data, we have observed examples of both adopting and altering this strategy.

In the context of two interpreted informal lunch conversations, this study used the semiotically complex device of constructed dialogue as its point of departure. Investigations into how constructed dialogue is situated within a spoken language ecology have led to several observations. In the first two extracts, the renditions were situated by mainly maintaining the semiotics. However, in the second extract, a communicative need to better frame the constructed dialogue sequence seemed to emerge. Consequently, an expanded descriptive part of the rendition appeared to contribute to the semiotic framing of the highly depictive constructed dialogue resource. In the last two extracts, the semiotics were largely altered due to differing norms concerning depictive devices. Nevertheless, some facial expressions still exhibited depictive resources. Furthermore, I propose that the degree of alteration is partly a result of accommodating the target language’s norms of utterance semiotics concerning depictive modes of communication. While the semiotic strategy of renditions is based on linguistic choices, I suggest that these choices are also inherently interactional. This is because properly situating renditions is tied to maintaining shared understanding or common ground, aligning with previous claims that linguistic choices are simultaneously subjective and intersubjective (Janzen 2019). Moreover, in the last extract, the interpreter incorporated depictive resources from the source text into her own rendition, creating a joint utterance. Perhaps the most significant finding is how the interpreter utilizes a variety of resources, a process that appears to depend on a mixture of her linguistic and pragmatic competence.

In all examples, discourse markers seem to perform crucial semiotic work in situating utterances, supporting claims that discourse markers in interpreted discourse create an illusion of direct communication (Blakemore and Gallai 2014).

Employing the metaphor of renditions as reconfiguring the semiotics of utterances is not intended to oversimplify the interpreting process as merely re-arranging building blocks. This paper uses the metaphor to underscore how semiotic strategies may or may not be compatible with a new language ecology. Further, investigations using multimodal interactional analysis reveal more details about this choice. An interpreter’s ability to align resources with the norms of utterance semiotics within a language ecology is crucial to their task. Consequently, this study emphasizes that interpreters must consider whether utterances should be semiotically reconfigured to achieve a successful and acceptable rendition. This is important because it has implications for how we discuss and teach the interpreting profession. Discussing interpreters’ practice as recruiting from a semiotically diverse repertoire (Ferrara and Halvorsen 2017) underscores the importance of possessing such a repertoire.

The novelty of this approach lies in combining a semiotic and ecological view of languages with multimodal interactional analysis in interpreting studies. This allows for the treatment of semiotic strategies as both interactional and linguistic assets, reflecting my alignment with scholars who challenge a strict division between linguistics and pragmatics (e.g., Couper-Kuhlen and Selting 2018; Goodwin and Duranti 1992; Halliday and Matthiessen 2013). Viewing interpreting as the process of situating the semiotics of utterances enables a more nuanced discussion of interpreting practices, as it furthers our understanding of how interpreting is a discourse process (Roy 2000).

Finally, applying a semiotic approach to languages to interpreted discourse contributes to broadening the field of interpreting theory. As such, this study should be regarded as an invitation for further investigations into the semiotic work involved in situating utterances within new ecologies.


Corresponding author: Vibeke Bø, Oslo Metropolitan University, Oslo, Norway, E-mail:

Research funding: This study was funded by OsloMet. The funding source was not involved in the study design; the collection, analysis, and interpretation of data; the writing of the article; or the decision to submit the article for publication.

Appendix

Main tiers:
P1 GLOSS: Identifies tokens of lexical signs that are part of source utterances (P1 = Participant 1).
INT Norwegian: An orthographic transcription of the interpreter’s verbal rendition into Norwegian.
Trans: A translation into English. Source utterances and renditions are both provided with an English translation.

Transcript conventions:
<CD---->: Identifies the glosses within the brackets as constructed dialogue.
<DM:example>: Identifies a discourse marker.
PRO-1P: Identifies a first person pronoun.
*Raised eyebrow------*: Descriptions of embodied actions are delimited between * *. Transcriptions of embodied actions are based on Mondada (2018).
+Gaze il---------: Identifies gaze towards the interlocutor; + indicates the point where gaze shifts.
+Gaze ss---------: Identifies gaze towards the signing space; + indicates the point where gaze shifts.
[: Identifies the point of simultaneity between source utterance and rendition.
*gesture--*: Identifies an uncategorized hand gesture.
*---->: The action described continues across subsequent lines.
*----->>: The action described continues until and after the extract’s end.
#: Indicates the exact moment at which the screenshot has been recorded.

References

Allwood, Jens. 2008. Dimensions of embodied communication – towards a typology of embodied communication. In Ipke Wachsmuth, Manuela Lenzen & Günther Knoblich (eds.), Embodied communication in humans and machines, 257–284. Oxford: Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199231751.003.0012.

Arbona, Eléonore, Kilian G. Seeber & Marianne Gullberg. 2022. Semantically related gestures facilitate language comprehension during simultaneous interpreting. In Jubin Abutalebi & Harald Clahsen (eds.), Bilingualism: Language and cognition, 1–15. Cambridge: Cambridge University Press. https://doi.org/10.1017/S136672892200058X.

Bargiela-Chiappini, Francesca. 2013. Embodied discursivity: Introducing sensory pragmatics. Journal of Pragmatics 58. 39–42. https://doi.org/10.1016/j.pragma.2013.09.016.

Beal-Alvarez, Jennifer S. & Jessica W. Trussell. 2015. Depicting verbs and constructed action: Necessary narrative components in deaf adults’ storybook renditions. Sign Language Studies 16(1). 5–29. https://doi.org/10.1353/sls.2015.0023.

Berthelin, Signe R. & Kaja Borthen. 2019. The semantics and pragmatics of Norwegian sentence-internal jo. Nordic Journal of Linguistics 42(1). 3–30. https://doi.org/10.1017/s0332586519000052.

Beukeleers, Inez & Myriam Vermeerbergen. 2019. On the role of eye gaze in depicting and enacting in Flemish Sign Language: Some methodological considerations. https://lirias.kuleuven.be/retrieve/536860 (accessed 10 August 2025).

Blakemore, Diane & Fabrizio Gallai. 2014. Discourse markers in free indirect style and interpreting. Journal of Pragmatics 60. 106–120. https://doi.org/10.1016/j.pragma.2013.11.003.

Boyes Braem, Penny. 2012. Evolving methods for written representations of signed languages of the Deaf. In Andrea Ender, Adrian Leemann & Bernhard Wälchli (eds.), Methods in contemporary linguistics, 411–438. Berlin: De Gruyter. https://doi.org/10.1515/9783110275681.411.

Brennan, Mary. 1999. Signs of injustice. The Translator 5(2). 221–246. https://doi.org/10.1080/13556509.1999.10799042.

Brennan, Susan E. & Herbert H. Clark. 1996. Conceptual pacts and lexical choice in conversation. Journal of Experimental Psychology: Learning, Memory, and Cognition 22(6). 1482–1493. https://doi.org/10.1037/0278-7393.22.6.1482.

Clark, Herbert H. 1996. Using language. Cambridge: Cambridge University Press.

Clark, Herbert H. 2005. Coordinating with each other in a material world. Discourse Studies 7(4/5). 507–525. https://doi.org/10.1177/1461445605054404.

Clark, Herbert H. & Richard J. Gerrig. 1990. Quotations as demonstrations. Language 66(4). 764–805. https://doi.org/10.2307/414729.

Clark, Herbert H. & Edward F. Schaefer. 1989. Contributing to discourse. Cognitive Science 13(2). 259–294. https://doi.org/10.1207/s15516709cog1302_7.

Cormier, Kearsy, Jordan Fenlon, Trevor Johnston, Ramas Rentelis, Adam Schembri, Katherine Rowley, Robert Adam & Bencie Woll. 2012a. From corpus to lexical database to online dictionary: Issues in annotation of the BSL Corpus and the development of BSL SignBank. Poster presented at the Workshop on the representation and processing of sign languages: Interactions between corpus and lexicon, Istanbul.

Cormier, Kearsy, David Quinto-Pozos, Zed Sevcikova & Adam Schembri. 2012b. Lexicalisation and de-lexicalisation processes in sign languages: Comparing depicting constructions and viewpoint gestures. Language & Communication 32(4). 329–348. https://doi.org/10.1016/j.langcom.2012.09.004.

Cormier, Kearsy, Sandra Smith & Zed Sevcikova. 2013. Predicate structures, gesture, and simultaneity in the representation of action in British Sign Language: Evidence from deaf children and adults. Journal of Deaf Studies and Deaf Education 18(3). 370–390. https://doi.org/10.1093/deafed/ent020.

Couper-Kuhlen, Elizabeth & Margaret Selting. 2018. Interactional linguistics: Studying language in social interaction. Cambridge: Cambridge University Press. https://doi.org/10.1017/9781139507318.

Crasborn, Onno & Han Sloetjes. 2008. Enhanced ELAN functionality for sign language corpora. In Sixth international conference on Language Resources and Evaluation (LREC 2008)/Third workshop on the representation and processing of sign languages: Construction and exploitation of sign language corpora, 39–43.

Davitti, Elena & Sergio Pasquandrea. 2017. Embodied participation: What multimodal analysis can tell us about interpreter-mediated encounters in pedagogical settings. Journal of Pragmatics 107. 105–128. https://doi.org/10.1016/j.pragma.2016.04.008.Search in Google Scholar

Davitti, Elena. 2019. Methodological explorations of interpreter-mediated interaction: Novel insights from multimodal analysis. Qualitative Research 19(1). 7–29. https://doi.org/10.1177/1468794118761492.Search in Google Scholar

Deppermann, Arnulf & Jürgen Streeck. 2018. Time in embodied interaction: Synchronicity and sequentiality of multimodal resources. Amsterdam & Philadelphia: John Benjamins.10.1075/pbns.293Search in Google Scholar

Dingemanse, Mark. 2015. Ideophones and reduplication: Depiction, description, and the interpretation of repeated talk in discourse. Studies in Language 39(4). 946–970. https://doi.org/10.1075/sl.39.4.05din.Search in Google Scholar

Dittmann, Allen T. & Lynn G. Llewellyn. 1968. Relationship between vocalizations and head nods as listener responses. Journal of Personality and Social Psychology 9(1). 79–84. https://doi.org/10.1037/h0025722.Search in Google Scholar

Dudis, Paul. 2007. Types of depiction in ASL. https://studylib.net/doc/8432142/1-types-of-depiction-in-asl-paul-dudis-1.-introduction-th… (accessed 27 November 2023).Search in Google Scholar

Dudis, Paul. 2011. The body in scene depictions. In Cynthia B. Roy (ed.), Discourse in signed languages, 3–45. Washington, DC: Gallaudet University Press.10.2307/j.ctv2rh28s4.7Search in Google Scholar

Ferrara, Lindsay Nicole. 2019. Coordinating signs and eye gaze in the depiction of directions and spatial scenes by fluent and L2 signers of Norwegian Sign Language. Spatial Cognition & Computation 19(3). 220–251. https://doi-org.ezproxy.oslomet.no/10.1080/13875868.2019.1572151.10.1080/13875868.2019.1572151Search in Google Scholar

Ferrara, Kathleen Warden & Barbara Bell. 1995. Sociolinguistic variation and discourse function of constructed dialogue introducers: The case of be + like. American Speech 70(3). 265–290. https://doi.org/10.2307/455900.

Ferrara, Lindsay & Rolf Piene Halvorsen. 2017. Depicting and describing meanings with iconic signs in Norwegian Sign Language. Gesture 16(3). 371–395. https://doi.org/10.1075/gest.00001.fer.

Ferrara, Lindsay & Gabrielle Hodge. 2018. Language as description, indication, and depiction. Frontiers in Psychology 9. 1–15. https://doi.org/10.3389/fpsyg.2018.00716.

Ferrara, Lindsay & Vibeke Bø. 2022. Norwegian Sign Language Corpus – Pilot Corpus (Conversations). Common Language Resources and Technology Infrastructure Norway (CLARINO) Bergen Repository. http://hdl.handle.net/11509/147 (accessed 20 August 2025).

Garner, Mark. 2004. Language: An ecological view. Bern: Peter Lang.

Goodwin, Charles. 2000. Action and embodiment within situated human interaction. Journal of Pragmatics 32(10). 1489–1522. https://doi.org/10.1016/s0378-2166(99)00096-x.

Goodwin, Charles. 2010. Multimodality in human interaction. Calidoscopio 8(2). 85–98. https://doi.org/10.4013/cld.2010.82.01.

Goodwin, Charles. 2017. Co-operative action. Cambridge: Cambridge University Press. https://doi.org/10.1017/9781139016735.

Goodwin, Charles & Alessandro Duranti. 1992. Rethinking context: An introduction. In Charles Goodwin & Alessandro Duranti (eds.), Rethinking context: Language as an interactive phenomenon, 1–42. Cambridge: Cambridge University Press.

Halley, Mark. 2020. Rendering depiction: A case study of an American Sign Language/English interpreter. Journal of Interpretation 28(2). 1–40.

Halliday, Michael A. K. & Christian M. I. M. Matthiessen. 2013. Halliday’s introduction to functional grammar. London: Routledge. https://doi.org/10.4324/9780203431269.

Haug, Tobias, Karen Bontempo, Lorraine Leeson, Jemina Napier, Brenda Nicodemus, Beppie Van den Bogaerde & Myriam Vermeerbergen. 2017. Deaf leaders’ strategies for working with signed language interpreters: An examination across seven countries. Across Languages and Cultures 18(1). 107–131. https://doi.org/10.1556/084.2017.18.1.5.

Haugen, Einar. 1971. The ecology of language. Linguistic Reporter 13(1). 19–26.

Hochgesang, Julie. 2022. Managing sign language acquisition video data: A personal journey in the organization and representation of signed data. In Andrea L. Berez-Kroeker, Bradley McDonnell, Eve Koller & Lauren B. Collister (eds.), The open handbook of linguistic data management, 367–383. Cambridge: MIT Press. https://doi.org/10.7551/mitpress/12200.003.0035.

Hodge, Gabrielle. 2014. Patterns from a signed language corpus: Clause-like units in Auslan (Australian sign language). Sydney: Macquarie University.

Hodge, Gabrielle & Kearsy Cormier. 2019. Reported speech as enactment. Linguistic Typology 23(1). 185–196. https://doi.org/10.1515/lingty-2019-0008.

Hodge, Gabrielle & Lindsay Ferrara. 2014. Showing the story: Enactment as performance in Auslan narratives. In Proceedings of the Conference of the Australian Linguistic Society, Melbourne, 372–397. Melbourne: University of Melbourne. http://minerva-access.unimelb.edu.au/handle/11343/40973 (accessed 3 November 2020).

Hodge, Gabrielle & Lindsay Ferrara. 2022. Iconicity as multimodal, polysemiotic, and plurifunctional. Frontiers in Psychology 13. 808896. https://doi.org/10.3389/fpsyg.2022.808896.

Holler, Judith. 2022. Visual bodily signals as core devices for coordinating minds in interaction. Philosophical Transactions of the Royal Society B: Biological Sciences 377(1859). 20210094. https://doi.org/10.1098/rstb.2021.0094.

Holler, Judith & Katie Wilkin. 2011. Co-speech gesture mimicry in the process of collaborative referring during face-to-face dialogue. Journal of Nonverbal Behavior 35(2). 133–153. https://doi.org/10.1007/s10919-011-0105-6.

Holt, Elizabeth & Rebecca Clift (eds.). 2006. Reporting talk: Reported speech in interaction. Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9780511486654.

Janzen, Terry. 2017. Composite utterances in a signed language: Topic constructions and perspective-taking in ASL. Cognitive Linguistics 28(3). 511–538. https://doi.org/10.1515/cog-2016-0121.

Janzen, Terry. 2019. Shared spaces, shared mind: Connecting past and present viewpoints in American Sign Language narratives. Cognitive Linguistics 30(2). 253–279. https://doi.org/10.1515/cog-2018-0045.

Janzen, Terry & Barbara Shaffer. 2013. The interpreter’s stance in intersubjective discourse. In Aurélie Sinte, Laurence Meurant, Mieke Van Herreweghe & Myriam Vermeerbergen (eds.), Sign language research, uses, and practices, 63–84. Berlin & Boston: De Gruyter Mouton.

Janzen, Terry, Barbara Shaffer & Lorraine Leeson. 2023. What I know is here; what I don’t know is somewhere else: Deixis and gesture spaces in American Sign Language and Irish Sign Language. In Terry Janzen & Barbara Shaffer (eds.), Signed language and gesture research in cognitive linguistics, 211–242. Berlin: De Gruyter. https://doi.org/10.1515/9783110703788-009.

Johnston, Trevor. 2003. Language standardization and signed language dictionaries. Sign Language Studies 3(4). 431–468. https://doi.org/10.1353/sls.2003.0012.

Johnston, Trevor. 2019. Auslan corpus annotation guidelines. Sydney: Macquarie University.

Keevallik, Leelo. 2018. What does embodied interaction tell us about grammar? Research on Language and Social Interaction 51(1). 1–21. https://doi.org/10.1080/08351813.2018.1413887.

Kendon, Adam. 2004. Gesture: Visible action as utterance. Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9780511807572.

Kendon, Adam. 2014. Semiotic diversity in utterance production and the concept of “language”. Philosophical Transactions of the Royal Society B: Biological Sciences 369(1651). 20130293. https://doi.org/10.1098/rstb.2013.0293.

Kendon, Adam. 2017. Languages as semiotically heterogenous systems. Behavioral and Brain Sciences 40. E59. https://doi.org/10.1017/s0140525x15002940.

Linell, Per. 2005. The written language bias in linguistics: Its nature, origins and transformations, 2nd edn. London: Routledge. https://doi.org/10.4324/9780203342763.

Llewellyn-Jones, Peter & Robert G. Lee. 2014. Redefining the role of the community interpreter: The concept of role-space. Lincoln, NB: SLI Press.

Mason, Ian. 2012. Gaze, positioning, and identity in interpreter-mediated dialogues. In Claudio Baraldi & Laura Gavioli (eds.), Coordinating participation in dialogue interpreting, 177–199. Amsterdam & Philadelphia: John Benjamins. https://doi.org/10.1075/btl.102.08mas.

McNeill, David. 2000. Introduction. In David McNeill (ed.), Language and gesture, 1–10. Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9780511620850.001.

Metzger, Melanie. 1995. Constructed dialogue and constructed action in American Sign Language. In Ceil Lucas (ed.), Sociolinguistics in deaf communities, 255–271. Washington, DC: Gallaudet University Press. https://doi.org/10.2307/jj.28174273.13.

Meurant, Laurence, Aurélie Sinte & Sílvia Gabarró-López. 2022. A multimodal approach to reformulation: Contrastive study of French and French Belgian Sign Language through the productions of speakers, signers and interpreters. Languages in Contrast 22(2). 322–360. https://doi.org/10.1075/lic.00025.meu.

Mohammad, Abeer & Camilla Vásquez. 2015. ‘Rachel’s not here’: Constructed dialogue in gossip. Journal of Sociolinguistics 19(3). 351–371. https://doi.org/10.1111/josl.12125.

Mondada, Lorenza. 2013. Interactional space and the study of embodied talk-in-interaction. In Peter Auer, Martin Hilpert, Anja Stukenbrock & Benedikt Szmrecsanyi (eds.), Space in language and linguistics: Geographical, interactional, and cognitive perspectives, 247–275. Berlin & Boston: De Gruyter. https://doi.org/10.1515/9783110312027.247.

Mondada, Lorenza. 2014. The local constitution of multimodal resources for social interaction. Journal of Pragmatics 65. 137–156. https://doi.org/10.1016/j.pragma.2014.04.004.

Mondada, Lorenza. 2018. Multiple temporalities of language and body in interaction: Challenges for transcribing multimodality. Research on Language and Social Interaction 51(1). 85–106. https://doi.org/10.1080/08351813.2018.1413878.

Moriarty, Erin & Annelies Kusters. 2021. Deaf cosmopolitanism: Calibrating as a moral process. International Journal of Multilingualism 18(2). 285–302. https://doi.org/10.1080/14790718.2021.1889561.

Napier, Jemina. 2016. Linguistic coping strategies in sign language interpreting. Washington, DC: Gallaudet University Press. https://doi.org/10.2307/j.ctv2rcnffb.

Ochs, Elinor. 1979. Planned and unplanned discourse. In Talmy Givon (ed.), Syntax and semantics, vol. 12, 51–80. London: Brill. https://doi.org/10.1163/9789004368897_004.

Opsahl, Toril & Jan Svennevig. 2012. -Og vi bare sånn ‘grattis!’ Sitatmarkørene bare og sånn i talespråk [And we're just like ‘congrats!’: The quotative markers bare and sånn in spoken language]. In Hans-Olav Enger & Jan Terje Faarlund (eds.), Grammatikk, bruk og norm: Festskrift til Svein Lie på 70-årsdagen 15. Oslo: Novus. https://urn.nb.no/URN:NBN:no-nb_digibok_2019091977118 (accessed 12 December 2023).

Padden, Carol. 2000. Simultaneous interpreting across modalities. Interpreting 5(2). 169–185. https://doi.org/10.1075/intp.5.2.07pad.

Peirce, Charles Sanders. 1965. Basic concepts of Peircean sign theory. In Mark Gottdiener, Karin Boklund-Lagopoulou & Alexandros Ph. Lagopoulos (eds.), Semiotics, 105–135. London: SAGE.

Petitta, Giulia, Mark Halley & Brenda Nicodemus. 2018. “What’s the sign for nitty gritty?”: Managing metalinguistic references in ASL-English dialogue interpreting. Translation and Interpreting Studies 13(1). 49–70. https://doi.org/10.1075/tis.00004.pet.

Poignant, Elisabeth. 2021. The cross-lingual shaping of narrative landscapes: Involvement in interpreted storytelling. Perspectives 29(6). 814. https://doi.org/10.1080/0907676x.2020.1846571.

Rasenberg, Marlou, Asli Özyürek & Mark Dingemanse. 2020. Alignment in multimodal interaction: An integrative framework. Cognitive Science 44(11). e12911. https://doi.org/10.1111/cogs.12911.

Rosenthal, Abigail. 2009. Lost in transcription: The problematics of commensurability in academic representations of American Sign Language. Text & Talk 29(5). 595–614. https://doi.org/10.1515/text.2009.031.

Roy, Cynthia B. 2000. Interpreting as a discourse process. New York: Oxford University Press. https://doi.org/10.1093/oso/9780195119480.003.0002.

Shaffer, Barbara. 2012. Reported speech as an evidentiality strategy in American Sign Language. In Barbara Dancygier & Eve Sweetser (eds.), Viewpoint in language: A multimodal perspective, 139–155. Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9781139084727.011.

Shaw, Emily. 2019. Gesture in multiparty interaction. Washington, DC: Gallaudet University Press. https://doi.org/10.2307/j.ctv2rh2917.

Skedsmo, Kristian. 2020. Other-initiations of repair in Norwegian Sign Language. Social Interaction 3(2). 1–43. https://doi.org/10.7146/si.v3i2.117723.

Skedsmo, Kristian. 2021. How to use comic-strip graphics to represent signed conversation. Research on Language and Social Interaction 54(3). 241–260. https://doi.org/10.1080/08351813.2021.1936801.

Stokoe, William C. 1960. Sign language structure: An outline of the visual communication systems of the American Deaf. Journal of Deaf Studies and Deaf Education 10(1). 3–37. https://doi.org/10.1093/deafed/eni001.

Stokoe, William C. 1996. The once new field: Sign language research, or breaking sod in the back forty. Sign Language Studies 93(1). 379–392. https://doi.org/10.1353/sls.1996.0005.

Stone, Christopher & Thaïsa Hughes. 2020. Facilitating legitimate peripheral participation for student sign language interpreters in medical settings. In Izabel E. T. de V. Souza & Effrossyni Fragkou (eds.), Handbook of research on medical interpreting, 355–374. Hershey, PA: IGI Global. https://doi.org/10.4018/978-1-5225-9308-9.ch015.

Svennevig, Jan. 2008. Ikke sant som respons i samtale [Ikke sant (‘right?’) as a response in conversation]. In Språk i Oslo: Ny forskning omkring talespråk [Language in Oslo: New research on spoken language], 127–138. Oslo: Novus.

Sweetser, Eve. 2023. Gestural meaning is in the body(-space) as much as in the hands. In Terry Janzen & Barbara Shaffer (eds.), Signed language and gesture research in cognitive linguistics, 157–180. Berlin: De Gruyter. https://doi.org/10.1515/9783110703788-007.

Tannen, Deborah. 1986. Introducing constructed dialogue in Greek and American conversational and literary narrative. In Florian Coulmas (ed.), Direct and indirect speech, 311–360. Berlin & New York: De Gruyter.

Thumann, Mary. 2011. Identifying depiction: Constructed action and constructed dialogue in ASL presentations. In Cynthia B. Roy (ed.), Discourse in signed languages, 46–66. Washington, DC: Gallaudet University Press. https://doi.org/10.2307/j.ctv2rh28s4.8.

Tiselius, Elisabet. 2022. Tolkar du med knoppen eller med kroppen? Om förkroppsligad kognition i dialogtolkning [Do you interpret with your head or with your body? On embodied cognition in dialogue interpreting]. In Magnus Dahnberg & Yvonne Lindqvist (eds.), Tango för tre: En dansant festskrift till Cecilia Wadensjö [Tango for three: A dancing festschrift for Cecilia Wadensjö]. Stockholm: Stockholms universitet.

Vranjes, Jelena. 2021. Interpreter’s use of gestures in interpreter-mediated psychotherapy. http://hdl.handle.net/1854/LU-8695769 (accessed 14 December 2022).

Vranjes, Jelena & Geert Brône. 2021. Interpreters as laminated speakers: Gaze and gesture as interpersonal deixis in consecutive dialogue interpreting. Journal of Pragmatics 181. 83–99. https://doi.org/10.1016/j.pragma.2021.05.008.

Young, Lesa, Carla Morris & Clifton Langdon. 2012. “He said what?!”: Constructed dialogue in various interface modes. Sign Language Studies 12(3). 398–413. https://doi.org/10.1353/sls.2012.0000.

Received: 2024-02-14
Accepted: 2025-06-10
Published Online: 2025-08-25

© 2025 the author(s), published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.
