Abstract
In this paper, I use methods from corpus linguistics and computer vision to find candidates for continuers – that is, conversational markers that signal comprehension and encouragement to the primary speaker/signer to continue – in a corpus of Swedish Sign Language (STS). Using different methods based on distributional patterns in conversational turns, I identify a small set of manual signs – particularly the sign JA@ub ‘yes’ – that exhibit the characteristics associated with continuers, such as occurring frequently in repeated sequences of overlapping but noncompetitive turns. The identified signs correspond to those found in previous research on manual backchannels in STS, demonstrating that quantitative, distribution-based approaches are successful in identifying continuers. In a second step, I employ methods from computer vision to analyze a subset of the corpus videos, and find that the continuer candidates show interesting form characteristics: they are small in visible articulation, often being articulated low and with little movement in signing space, which makes them conversationally unobtrusive. The results show that distribution-based approaches can be used successfully with sign language corpus data, and that the nature of continuers exhibits similarities across modalities of human language.
1 Introduction
Languages arise from the need to communicate and they form an indispensable tool for building and maintaining social bonds. One of the most central aspects of language is thus its use in interaction with others. Humans have developed many linguistic strategies to achieve efficient and successful communication, such as ways of signaling understanding with backchannels, serving as feedback signals from the addressee, as well as signaling a lack of understanding with repair, flagging misunderstanding or need for clarification (e.g., Dingemanse 2024; Dingemanse and Enfield 2024; Micklos and Woensdregt 2023; Schegloff et al. 1977). Such communicative strategies illustrate how conversation is not one-directional, but rather a continuous and simultaneous negotiation between the interlocutor who is currently (more) active in producing an utterance (the speaker/signer) and the interlocutor to whom it is directed (the addressee), a negotiation which renders both interlocutors active participants in the communication taking place (Bavelas and Gerwing 2011; Bavelas et al. 2000; Goodwin 1986a, 1986b). In this conversational negotiation between interlocutors, the feedback signal expressing comprehension and encouragement for the speaker/signer to go on (e.g., the English interjection uh-huh) is called a continuer (Goodwin 1986b). Continuers are an important tool at the addressee’s disposal to maintain the communicative flow, expressing that “one unit has been received and that another is now awaited” (Goodwin 1986b: 208). Naturally, this function could be expressed also with signals other than sounds, such as a head nod (see Lutzenberger et al. 2024). Using corpus data from a sample of genealogically diverse spoken languages, Dingemanse et al. (2022) illustrated that distributional patterns of words in a conversational corpus can be used to identify communicative devices, such as continuers, in a noncircular and language-independent way. 
By looking at sequences of repeated, identical turns by the addressee, interspersed with the primary speaker’s utterance(s), the authors could identify words, in the individual languages, that functioned as continuers. Thus, the words used in this core communicative function were shown to be identifiable through distributional patterns alone, in the annotated corpus data from spoken language interactions.
Languages arise through natural interaction between humans, wherever and whenever needed, and in whatever modality is available. For spoken languages, the primary channel of transmission is the auditory one, where spoken words are uttered by the speaker and perceived as sound waves and interpreted as language by the addressee. Spoken language interaction naturally involves other modalities, too, such as the visual channel (e.g., through bodily gestures and facial expressions) and the tactile channel (i.e., physical touch). Sign languages make use of the gestural-visual modality, being articulated by the signer with the hands, face, and body and perceived visually (or, alternatively, tactilely) by the addressee (also known as the “signee”). The modality is one of the reasons why it has been challenging to represent sign language data for linguistic analysis: whereas video is the only format that truly represents the language production, most research has been limited to written transcriptions that do not capture all aspects of production (see Crasborn 2015; Frishberg et al. 2012; Johnston 2014; Miller 2006). With the increased availability of video recording and storage technology, but also trends in linguistic research (cf. Ferrara and Hodge 2018; Vermeerbergen and Nilsson 2018), sign language researchers in a variety of countries started building sign language corpora, using software suitable for the synchronized display of multiple video files and machine-readable annotations (Fenlon and Hochgesang 2022). The possibility of creating and utilizing sign language corpora has revolutionized sign language linguistics, giving researchers the data and tools to investigate many linguistic phenomena based on, for example, distributional patterns (Börstell 2022a), and to research naturalistic, dyadic, conversational signing. 
As such, we now have the resources to utilize tools and methods from corpus linguistics and answer questions about language use in conversation, much like what has been done in the linguistic study of spoken languages for decades. In addition, even more recent technological advances have enabled computer vision-based approaches to analyzing sign language data (e.g., Börstell 2023; Kimmelman and Anželika 2023). These methods allow for some features of sign language production to be extracted directly from the video files (i.e., the recorded signing), without the need for time-consuming, manual annotation work.
The use of language in human interaction has been studied extensively within several subfields of linguistics, but like most linguistic research, there is a heavy skew towards spoken languages as the object of study. With the recent development in corpus building within sign language linguistics, there have been some advances in interaction studies on sign languages as well, although this is still sorely under-explored on the whole. Many of the existing studies on sign language conversation analysis have looked at the coordination of turn-taking in visual and tactile signed conversations (e.g., Baker 1977; Berge and Raanes 2013; Bjørgul 2022; Casillas et al. 2015; Coates and Sutton-Spence 2001; de Vos et al. 2015; Groeber and Pochon-Berger 2014; Iwasaki et al. 2022; McCleary and De Arantes Leite 2012; Mesch 1998; Van Herreweghe 2002), whereas other studies have focused on conversational repair (e.g., Byun et al. 2018; Lutzenberger et al. 2024; Manrique 2016; Manrique and Enfield 2015; Mesch and Schönström 2023; Safar and de Vos 2022; Skedsmo 2020, 2023). A number of studies have identified specific signs that are used extensively in conversation-regulating and discourse functions, among them pointing signs (Ferrara 2020, 2022; Lepeut and Shaw 2022) and the gesture-like palms-up (flat hands, palms facing up; see Figure 1). The latter has been shown to exhibit many discourse functions across sign languages, such as pause-filling, backchanneling, expressing questions, and politeness (e.g., Engberg-Pedersen 2002; Gabarró-López 2020, 2024; Hoza 2011; Lepeut and Shaw 2022; McKee and Wallingford 2011; Roush 2007; Ryttervik 2015), and also across many spoken languages (e.g., Cooperrider et al. 2018; Kendon 2004; Müller 2004). While there has been some debate around the division between signs and gestures in sign languages (e.g., Ferrara and Hodge 2018; Kendon 2008), I do not make any categorical distinction between them and consider items like palms-up to be signs.

Palms-up PU@g in STS (Svenskt teckenspråkslexikon 2023: 18717).
Both pointing signs and palms-up have been shown to be highly frequent items in conversational signing across sign languages (e.g., Börstell et al. 2016; Fenlon et al. 2014; Johnston 2012; McKee and Kennedy 2006), and Mesch (2016) found that these signs occur as manual backchannel responses in conversations in Swedish Sign Language (STS; svenskt teckenspråk). In her study, Mesch (2016) annotated 35 min of STS conversations from the STS Corpus (Öqvist et al. 2020) to identify the most common types of backchannel responses in the language. The study focused on manual signs since there is very limited annotation of nonmanual activity in the STS Corpus. However, mouthings, mouth activity reminiscent of silently articulated spoken language words (see Bisnath 2024), and mouth gestures, mouth activity not associated with spoken language (see Crasborn and Bank 2014), are frequently used in STS, including as backchannels (cf. Mesch and Schönström 2021; Mesch et al. 2021). Mesch (2016: 39) found that palms-up (glossed PU@g in the STS Corpus; see Figure 1) is one of the most frequent backchannels in STS, and Ryttervik (2015) found that it can be used to acknowledge or question the content uttered. However, the most frequent manual backchannel sign in STS was found to be the two variants for signing ‘yes’ (see Figure 2): originally JA@b (‘yes’; “@b” referring to bokstavering ‘fingerspelling’), derived from a clear fingerspelling of the letters J and A (for the Swedish word ja ‘yes’), but reduced as a backchannel into JA@ub (the “u” referring to uppbackning ‘backchannel, support’). Mesch (2016) found that the reduced backchannel JA@ub is often articulated with a different orientation (palm facing inwards), movement (repeated opening and closing of the fingers), and/or location (held at a lower location, e.g., closer to a resting position in the lap). 
While manual backchannels were frequent, Mesch (2016) found that most backchannel responses in her dataset consisted of nonmanual markers, similar to recent findings for British Sign Language (BSL), where head nods were found to be the most common expression (Lutzenberger et al. 2024).

Signs JA@b and JA@ub in STS (Svenskt teckenspråkslexikon 2023: 9027, 11810).
Mesch (2016) looked at backchannel responses in a wider sense, as in any utterance produced by the addressee in the conversation. However, there is a distinction to be made between continuer (generic) and assessment (specific) responses, in that the former is not commenting on the conversational content, but simply signals understanding and that the primary speaker/signer should continue their turn (Bavelas et al. 2000; Goodwin 1986b). In this paper, I look at continuers as a subset of backchannels, following the approach by Dingemanse et al. (2022) in attempting to identify them by distributional patterns alone. In their study, they searched for continuer candidates in a crosslinguistic sample based on their distribution in conversational corpus data, identifying “streaks of non-unique conversational turns that occur in frequent alternation with unique turns by other participants [… with] a minimum streak length of 3” (Dingemanse et al. 2022: 161). In a second step, the authors compared the form properties of the identified continuers against other high-frequency words and found that continuers have minimal form (albeit not shorter than top-frequency items) and often contain nasals and partially reduplicated form (e.g., English mhm; Dingemanse et al. 2022: 164).
The aim of this paper is (i) to test whether we can employ distributional patterns to identify continuer candidates in sign language corpus data from Swedish Sign Language (STS) in a similar fashion to what Dingemanse et al. (2022) did for spoken languages, and (ii) to make use of computer vision methods to analyze form properties of identified candidates from video data.
2 Methodology
The data for this study comes from the STS Corpus (Mesch et al. 2012; Öqvist et al. 2020) as stored in The Language Archive[1] online repository in July 2023, consisting of 298 corpus files (dyadic interactions of narrative and conversational texts) comprising 189,679 sign glosses across 42 signers and over 23 h in total duration. The interactions are video-recorded from multiple angles and annotated with ELAN (Wittenburg et al. 2006) for individual signs articulated by each signer’s two hands (see Figure 3 for an illustration of the corpus interface layout; note that two-handed signs are only annotated on a single tier in the STS Corpus, rendering the nondominant hand tiers empty in this excerpt). The main data includes ELAN annotation files and corresponding metadata files for the entire STS Corpus.

Illustration of the STS Corpus video player and annotation tiers (Öqvist et al. 2020): https://teckensprakskorpus.su.se/video/sslc01_241.eaf?t=358880.
In order to find continuers according to the distribution-based method used by Dingemanse et al. (2022), the sign annotations in the corpus data had to be grouped into units corresponding to turns. Since the STS Corpus does not have any such type of unit in its annotations (see Börstell 2024), turns had to be inferred. Three methods for inferring “turns” were applied, named interval, quantity, and sequential. The interval method uses pauses to infer turns: anytime a pause between sign annotations is longer than a set threshold (here: 500 ms), a segmentation of turns is made – it is thus possible for a single signer to have multiple consecutive turns. The quantity method calculates which of the two signers has the most signs in each window of three chronologically ordered signs, assigning this signer the turn for that stretch of signing, regardless of any interspersed sign by the other signer. The sequential method simply assigns a new turn as soon as there is a change of signer in the temporal sequence of chronologically ordered signs, even if it is only a single sign by one signer within a longer stretch of signs by the other. These three methods are applied separately to the corpus data and continuer candidates are identified for each method based on the definition of a streak of three nonunique (i.e., identical) turns, following Dingemanse et al. (2022: 161).[2]
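The three turn-inference heuristics can be sketched as follows. This is an illustrative simplification, not the actual processing pipeline: the (signer, gloss, start_ms, end_ms) record layout is an assumption, and the non-overlapping windowing in the quantity method is one possible reading of that heuristic.

```python
# Sketch of the three turn-inference heuristics, assuming a chronologically
# sorted list of (signer, gloss, start_ms, end_ms) tuples. This record layout
# is a simplification for illustration, not the STS Corpus ELAN format.

def interval_turns(signs, threshold_ms=500):
    """Interval method: start a new turn whenever the pause since the
    previous sign exceeds the threshold; a single signer can thus hold
    several consecutive turns."""
    turns, current = [], [signs[0]]
    for prev, sign in zip(signs, signs[1:]):
        if sign[2] - prev[3] > threshold_ms:  # gap between end and next start
            turns.append(current)
            current = []
        current.append(sign)
    turns.append(current)
    return turns

def sequential_turns(signs):
    """Sequential method: start a new turn at every change of signer,
    even for a single interspersed sign."""
    turns, current = [], [signs[0]]
    for sign in signs[1:]:
        if sign[0] != current[-1][0]:  # signer change
            turns.append(current)
            current = []
        current.append(sign)
    turns.append(current)
    return turns

def quantity_turns(signs, window=3):
    """Quantity method: assign each window of three chronologically
    ordered signs to whichever signer produced the majority of them
    (shown here with non-overlapping windows as a simplification)."""
    holders = []
    for i in range(0, len(signs), window):
        signers = [s[0] for s in signs[i:i + window]]
        holders.append(max(set(signers), key=signers.count))
    return holders
```

On top of the inferred turns, continuer candidates would then be searched for as streaks of at least three identical single-sign turns by one participant, alternating with unique turns by the other.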
Additionally, a fourth method was used, simply looking at the frequency distribution of signs in the context of overlapping and nonoverlapping signing, and whether or not there is a signer change at that location. Here, the frequency distribution based on these two variables (overlap and signer change) was used to calculate weighted log odds (Schnoebelen et al. 2022) of signs occurring in these contexts. This method thus identifies signs that are statistically overrepresented in a particular context – for example, whether a sign often occurs during signing that overlaps with the interlocutor’s signing, but does not result in a change of primary signer (i.e., an expected context for continuers).[3]
A subset of the data is used for the computer vision analysis. This subset consists of 15 of the corpus files covering 30 of the 42 signers, with 30 video files showing each of the signers from a front-facing angle (see Figure 3), comprising 13,507 sign glosses and about 2 h and 45 min in total duration. The subset video data was processed with the Python implementation of computer vision software MediaPipe (Lugaresi et al. 2019). MediaPipe is used to extract landmarks on the body of the signers for body-pose estimation, such that the coordinates of these different points on the body can be tracked frame by frame throughout a video. In this study, I focus on the landmarks representing the articulating hands, effectively extracting manual activity over time in the videos (see, e.g., Börstell 2023 on its application to sign language articulation). This method yields data on where the hands are in the horizontal (x) and vertical (y) dimensions in each frame of the videos (i.e., location), and consequently how far the hands have moved (i.e., distance) between frames.
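Once per-frame landmark coordinates have been extracted (e.g., the wrist landmark from MediaPipe's pose model), the location and distance measures reduce to simple geometry, as in the sketch below. Note that in normalized image coordinates the y-axis grows downward, so a larger mean y value corresponds to a lower hand position in signing space.

```python
import math

def path_length(coords):
    """Total distance traveled by a landmark across frames, computed as
    the sum of frame-to-frame Euclidean distances between (x, y)
    positions."""
    return sum(math.dist(p, q) for p, q in zip(coords, coords[1:]))

def mean_height(coords):
    """Mean vertical position of a landmark across frames; in image
    coordinates, larger values correspond to lower articulation."""
    return sum(y for _, y in coords) / len(coords)
```

In the study itself these measures are aggregated per sign over the frames time-aligned with each sign annotation; the helpers here show only the core per-landmark computation.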
The corpus data and the coordinates from the video analysis were processed, analyzed, and visualized using R v4.3.2 (R Core Team 2023) and the packages glue (Hester and Bryan 2022), ggrepel (Slowikowski 2023), ggtext (Wilke and Wiernik 2022), here (Müller 2020), patchwork (Pedersen 2024), scales (Wickham et al. 2023b), signglossR (Börstell 2022b), slider (Vaughan 2022), tidylo (Schnoebelen et al. 2022), tidyverse (Wickham et al. 2019), and xml2 (Wickham et al. 2023a).
The data and scripts used for this study can be found at https://osf.io/hyxrp.
3 Results
3.1 Finding continuer candidates
Using the three methods for estimating turns (or utterance units) in the corpus data, there are only two signs that appear in the intersection of all three methods, namely JA@ub (Figure 2) and PU@g (Figure 1). For each of the three methods, JA@ub occurs in six to seven times more streaks than PU@g. These results directly corroborate the findings of Mesch (2016), who also identified these two signs as the most frequent individual manual backchannels in a subset of the STS Corpus. This also shows the crosslinguistic and crossmodal validity of the approach of Dingemanse et al. (2022), illustrating how this distribution-based method for identifying continuer candidates appears to be successful also when applied to the STS Corpus data.
The second approach to identifying continuers in the STS Corpus data was based on the distributional frequency patterns in contexts of overlapping/nonoverlapping signing and possible signer change, calculated with weighted log odds (Schnoebelen et al. 2022) in order to see overrepresentation of signs in a particular context. Figure 4 shows the results of this calculation, across the contexts of (i) overlapping signing without signer change, (ii) overlapping signing with signer change, and (iii) no overlapping signing with signer change – only signs with a token frequency above 20 for each context are included, in order to remove spurious low-frequency items. The first (left-most) context in Figure 4 would represent the prototypical context of a continuer: the addressee inserts a single response without attempting to take over the signing. The other two contexts are where there is a signer change.

Weighted log odds of sign token frequency in three distributional contexts based on signing overlap and signer change. Only signs with a context token frequency >20 are included.
As Figure 4 shows, the two continuer candidates are represented in the top 10 signs for most of these contexts. However, JA@ub is the only sign to stand out from the rest in any of the contexts, namely in the one expected for continuers (i.e., signing overlap without signer change), a context for which PU@g ranks only number 12, with low log odds. Based on the distribution in both continuer streaks and overlap/signer-change contexts, JA@ub appears to be the more prototypical continuer among these two candidates, with PU@g being more prominent in other contexts. Several other common backchannels are also found among the top signs in the context of signing overlap without signer change, such as EXACTLY (originally glossed PRECIS), ACCURATE (originally STÄMMA), and OH-REALLY (originally JASÅ), but these are assessment-type backchannels rather than generic continuers. Interestingly, even though the data generally concerns manual signs, and only very few nonmanual expressions have been annotated, one mouthing is included in the continuer context in Figure 4. This is the annotation “mtyp:ja”, which is a mouthing of the Swedish word ja (‘yes’) without any accompanying manual sign (Mesch and Wallin 2021: 38–39; see also Mesch et al. 2021). Since the weighted log odds calculation accounts for distribution relative to frequency, “mtyp:ja” can appear in the top 10 despite having relatively few tokens in total (n = 75), compared to JA@ub (n = 2,054) and PU@g (n = 3,283), because it is particularly associated with this context – compare this to the most frequent signs in the corpus: PRO1 (first person point; n = 10,846) and IX (originally PEK ‘point’; n = 6,925). Pointing signs are among the most frequent lexical items in STS discourse (Börstell et al. 2016) and Figure 4 shows that they are among the top signs occurring in contexts with signer change, but less so in the expected continuer context – that is, they appear to be frequent at the beginning and end of (extended) turns, but not as stand-alone turns.
Based on these two approaches to identifying continuers, both JA@ub and PU@g appear to be relevant candidates. However, whereas the streak method identified both of the signs as continuer candidates, JA@ub appears to be significantly more frequent in the expected continuer contexts in terms of overlap and signer changes, compared to PU@g and other high-frequency signs. Figure 5 shows an example of the distribution of the two candidates across three corpus files.

Location of JA@ub (yellow) and PU@g (blue) in three corpus files. Bands show the stretches of signing for each signer and triangles show the location of continuer candidates.
As Figure 5 shows, both JA@ub and PU@g occur frequently, sometimes as short responses during a longer stretch of signing by the primary signer, but also at end points of longer turns. Dingemanse and Liesenfeld (2022) compare the frequency of continuer responses in corpora across languages and find that 8 % of turns are continuers in English and 21 % of turns are continuers in Korean. In the STS Corpus data, about 6.5 % of (sequential) turns consist of a single JA@ub response, and about 2.5 % of a single PU@g response, putting STS closer to English than Korean when looking mainly at manual responses and disregarding nonmanual ones (cf. Lutzenberger et al. 2024; Mesch 2016).
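The single-response rates reported here follow directly from the inferred sequential turns. A minimal sketch, assuming each turn is represented simply as a list of glosses:

```python
def continuer_share(turns, gloss):
    """Share of turns consisting of exactly one token of `gloss`
    (e.g., a lone JA@ub response inserted between the primary
    signer's stretches of signing)."""
    single = sum(1 for turn in turns if len(turn) == 1 and turn[0] == gloss)
    return single / len(turns)
```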
3.2 Form properties of the continuer candidates
Börstell et al. (2016) showed that STS patterns as expected, based on spoken language corpora, in that more frequent lexical items are shorter (based on annotation durations). Figure 6 shows the correlation between token frequency and mean duration in the whole STS Corpus. Similar to what Dingemanse et al. (2022) found across spoken language corpora, Figure 6 shows that the most frequent items are almost an order of magnitude more frequent than the identified continuer candidates. However, both PU@g and – even more so – JA@ub appear to have longer durations than expected based on their frequency. This discrepancy was discussed by Börstell et al. (2016) as well as Mesch (2016) with regard to JA@ub, with its longer duration attributed to the fact that it is often articulated with multiple sequential repetitions. While Dingemanse et al. (2022) note that spoken language continuers are somewhat longer than top-frequency lexical items, Figure 6 seems to point to longer-than-expected durations for the two candidates, especially JA@ub.

Token frequency and mean duration of all signs in the STS Corpus, with the continuer candidates highlighted. Dashed lines show means of token frequency and duration across all signs. Signs with a mean duration ±3 standard deviations from the global mean are removed.
In order to look at form properties of the continuer candidates in more detail, we can turn to the subsample that was analyzed using computer vision, estimating articulatory features of the signing. Figure 7 shows the estimated hand height (vertical position of the wrist) and the total distance traveled by wrist and index finger combined for the articulating hand during signing. The values have been z-scored relative to the signer, and only across video frames time-aligned with an annotated sign (i.e., stretches of rest and transport movements are removed).

Mean estimated height and distance traveled by the active hand for each sign. Continuer candidates are highlighted. Signs with mean values ±3 standard deviations from the global mean (dashed lines) are removed.
What Figure 7 shows is that both JA@ub and PU@g have lower vertical articulation and generally travel a slightly shorter distance throughout the sign than most signs in the subsample. Mesch (2016) noted that many manual backchannel responses in the STS Corpus are articulated lower down, close to the signer’s lap (where the hands are often placed in a rest position when seated), attributed to a desire “to not direct attention away from the primary signer” (Mesch 2016: 32). Thus, it is unsurprising that the vertical articulation of JA@ub and PU@g is generally lower on average. However, the distance traveled is only slightly shorter than the average across signs, which may be linked to Figure 6 and their longer-than-expected durations. Thus, Figure 8 shows the same values as Figure 7, but the distance traveled has been recalculated to show the relative distance traveled by dividing the total distance by the duration (i.e., distance traveled per time unit). From Figure 8, we can see that JA@ub shows a shorter relative distance traveled than most signs, whereas PU@g now falls slightly above the global mean. In fact, among signs with more than five occurrences in the subsample, thus removing low-frequency items, there are only four signs that have a lower articulation height and shorter relative distance traveled than JA@ub, and they all happen to be so-called buoys. Buoys are signs articulated on the more passive hand, often spanning multiple signs by the other hand, used for discourse reference and cohesion (see Liddell et al. 2007; Nilsson 2007; Vogt-Svendsen and Bergman 2007). This indicates that JA@ub is among the bottom-most signs in terms of visual prominence (low and stationary over time).
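The per-signer normalization and the recalculation of movement as distance per time unit can be sketched as follows; grouping the raw measurements by signer is assumed here for illustration.

```python
from statistics import mean, stdev

def z_by_signer(values_by_signer):
    """Z-score each measurement relative to its own signer's mean and
    standard deviation, making height and movement values comparable
    across signers with different builds and recording setups."""
    out = {}
    for signer, vals in values_by_signer.items():
        m, s = mean(vals), stdev(vals)
        out[signer] = [(v - m) / s for v in vals]
    return out

def distance_per_time(total_distance, duration):
    """Relative distance traveled: the total path length of the hand
    during a sign divided by the sign's duration."""
    return total_distance / duration
```

Normalizing within signers before comparing signs is the step that lets a long-but-slow sign like JA@ub separate from signs that are long because they move a lot.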

Mean estimated height and distance traveled per time unit by the active hand for each sign. Continuer candidates and signs with values lower than the candidates’ (and with >5 occurrences) are highlighted. Signs with mean values ±3 standard deviations from the global mean (dashed lines) are removed.
4 Discussion
The goals of this study were (i) to see whether distributional patterns (cf. Dingemanse et al. 2022) could identify continuer candidates in STS, and (ii) to make use of computer vision to analyze form properties of identified candidates.
With regard to the first goal, it was shown that distributional patterns can be used to successfully identify continuers in the STS Corpus, even when turns are inferred from the data rather than manually annotated. The streak method based on Dingemanse et al. (2022) singled out the two signs that Mesch (2016) identified as the top manual backchannels in a manually annotated subsample of the STS Corpus, namely JA@ub and PU@g. A second approach calculated weighted log odds (Schnoebelen et al. 2022) for the distribution of signs in different conversational contexts: whether there was overlap between interlocutors’ signing and whether there was a signer change around the occurrence. Here, the results point to JA@ub being the more dedicated continuer, whereas PU@g occurred relatively more frequently in the other contexts. Based on previous work, PU@g has been shown to be used in a multitude of functions in both STS (Ryttervik 2015) and other languages (e.g., Cooperrider et al. 2018; Gabarró-López 2024; Lepeut and Shaw 2022), and should be categorized as a wider backchannel rather than a dedicated continuer. Thus, this combined approach of using both sequences of signs (streaks) and contextual frequencies (occurring with or without overlap or signer change) may be a way to validate potential candidates from multiple angles and spot nuances in their distribution. The context approach would also have other applications, such as identifying turn-regulating signs from signing context alone. For instance, Figure 4 showed that pointing signs are more prominent when there is a signer change, which mirrors findings from Norwegian Sign Language, where index points can be used to coordinate turn-beginnings (Ferrara 2020, 2022). A more comprehensive annotation of the nonmanual expression of backchannels and conversation-regulating devices (e.g., head nods, mouthings, eye gaze) should be a fruitful future direction for investigating continuers in STS (cf. Lutzenberger et al. 2024). 
In principle, with annotations of such expressions in a corpus, the methods for identifying continuers employed in this study should be applicable. The current results should thus be seen as an incomplete picture, as the data is almost entirely focused on the manual channel.
Several form properties were identified through sign durations as well as the articulatory features extracted with computer vision. While preliminary, these results point to similarities with the patterns found across spoken languages, where optimal continuers are described as “(i) easy to plan and produce, (ii) unobtrusive, and (iii) sufficiently distinct from regular words to be seen as ceding the conversational floor” (Dingemanse et al. 2022: 161). Based on the results of this study, the sign JA@ub is articulated lower in height with the hand traveling a shorter distance (i.e., moving less) than in most other signs. These properties make the sign easy to produce and simultaneously less visually prominent by having a lower articulation activity, hence rendering the sign less obtrusive when overlapping in conversation. Although a more detailed phonetic analysis is needed to measure distinctiveness of form across more aspects of the sign than just height and movement (e.g., handshape and palm orientation in combination with repeated local movements), these preliminary form characteristics may potentially distinguish JA@ub from other signs.
In conclusion, this study has shown that the distributional patterns and form-based properties used to identify and compare continuers across spoken languages can be transferred successfully to sign language data. The identified candidates align with those signs observed in previous research based on manual annotation of turn-taking and backchanneling in STS, thereby validating the search methods. The form (e.g., unobtrusive) and distributional properties (e.g., frequent, repeated sequences) of the STS continuer candidates, particularly JA@ub, are similar to continuers identified in various spoken languages. The signed and spoken modalities may come with slightly different affordances. For example, extended production overlap between interlocutors may have lower signal interference in the visual modality compared to the auditory one, thus potentially allowing for longer continuers in sign languages. Nonetheless, the articulation of JA@ub, while often longer than expected based on spoken language counterparts, seems to adhere to the idea of unobtrusiveness by exhibiting lower manual activity. This highlights the modality-independence of continuers as a linguistic concept and illustrates their central role in communication, which expands on our understanding and definitions of continuers as a conversation-regulating device across individual languages and different modalities.
Acknowledgments
I am grateful for comments and suggestions from Kristian Skedsmo and a second, anonymous reviewer, which improved this paper.
References
Baker, Charlotte. 1977. Regulators and turn-taking in American Sign Language discourse. In Lynn A. Friedman (ed.), On the other hand: New perspectives on American Sign Language, 215–236. New York: Academic Press.
Bavelas, Janet B. & Jennifer Gerwing. 2011. The listener as addressee in face-to-face dialogue. International Journal of Listening 25(3). 178–198. https://doi.org/10.1080/10904018.2010.508675.
Bavelas, Janet B., Linda Coates & Trudy Johnson. 2000. Listeners as co-narrators. Journal of Personality and Social Psychology 79(6). 941–952. https://doi.org/10.1037/0022-3514.79.6.941.
Berge, Sigrid Slettebakk & Eli Raanes. 2013. Coordinating the chain of utterances: An analysis of communicative flow and turn taking in an interpreted group dialogue for deaf-blind persons. Sign Language Studies 13(3). 350–371. https://doi.org/10.1353/sls.2013.0007.
Bisnath, Felicia. 2024. Mouthing constructions in 37 signed languages: Typology, ecology and ideology. Journal of Language Contact 16(4). 565–614. https://doi.org/10.1163/19552629-01604004.
Bjørgul, Astrid Næss. 2022. Turn-timing in Norwegian Sign Language: A study of transition durations. Bergen: University of Bergen master’s thesis. Available at: https://hdl.handle.net/11250/2999123.
Börstell, Carl. 2022a. Searching and utilizing corpora. In Jordan Fenlon & Julie A. Hochgesang (eds.), Signed language corpora (Sociolinguistics in deaf communities 25), 90–127. Washington, DC: Gallaudet University Press. https://doi.org/10.2307/j.ctv2rcnfhc.9.
Börstell, Carl. 2022b. Introducing the signglossR package. In Eleni Efthimiou, Stavroula-Evita Fotinea, Thomas Hanke, Julie A. Hochgesang, Jette Kristoffersen, Johanna Mesch & Marc Schulder (eds.), Proceedings of the LREC2022 10th workshop on the representation and processing of sign languages: Multilingual sign language resources, 16–23. Marseille: European Language Resources Association (ELRA). [Version 2.2.6 of package used here.] https://www.sign-lang.uni-hamburg.de/lrec/pub/22006.pdf (accessed 26 June 2024).
Börstell, Carl. 2023. Extracting sign language articulation from videos with MediaPipe. In Proceedings of the 24th Nordic conference on computational linguistics (NoDaLiDa) (NEALT proceedings series, No. 52), 169–178. Tórshavn, Faroe Islands: University of Tartu Library. Available at: https://aclanthology.org/2023.nodalida-1.18.
Börstell, Carl. 2024. Evaluating the alignment of utterances in the Swedish Sign Language corpus. In Eleni Efthimiou, Stavroula-Evita Fotinea, Thomas Hanke, Julie A. Hochgesang, Johanna Mesch & Marc Schulder (eds.), Proceedings of the LREC2024 11th workshop on the representation and processing of sign languages: Evaluation of sign language resources, 36–45. Turin: European Language Resources Association (ELRA).
Börstell, Carl, Thomas Hörberg & Robert Östling. 2016. Distribution and duration of signs and parts of speech in Swedish Sign Language. Sign Language and Linguistics 19(2). 143–196. https://doi.org/10.1075/sll.19.2.01bor.
Byun, Kang-Suk, Connie de Vos, Anastasia Bradford, Ulrike Zeshan & Stephen C. Levinson. 2018. First encounters: Repair sequences in cross-signing. Topics in Cognitive Science 10(2). 314–334. https://doi.org/10.1111/tops.12303.
Casillas, Marisa, Connie de Vos, Onno Crasborn & Stephen C. Levinson. 2015. The perception of stroke-to-stroke turn boundaries in signed conversation. In Proceedings of the 37th annual meeting of the cognitive science society (CogSci 2015), 315–320. Cognitive Science Society.
Coates, Jennifer & Rachel Sutton-Spence. 2001. Turn-taking patterns in deaf conversation. Journal of Sociolinguistics 5(4). 507–529. https://doi.org/10.1111/1467-9481.00162.
Cooperrider, Kensy, Natasha Abner & Susan Goldin-Meadow. 2018. The palm-up puzzle: Meanings and origins of a widespread form in gesture and sign. Frontiers in Communication 3. https://doi.org/10.3389/fcomm.2018.00023.
Crasborn, Onno. 2015. Transcription and notation methods. In Eleni Orfanidou, Bencie Woll & Gary Morgan (eds.), Research methods in sign language studies, 74–88. Chichester: John Wiley. https://doi.org/10.1002/9781118346013.ch5.
Crasborn, Onno & Richard Bank. 2014. An annotation scheme for mouth actions in sign languages. In Onno Crasborn, Eleni Efthimiou, Stavroula-Evita Fotinea, Thomas Hanke, Jette Kristoffersen & Johanna Mesch (eds.), Proceedings of the LREC2014 6th workshop on the representation and processing of sign languages: Beyond the manual channel, 23–28. Reykjavik, Iceland: European Language Resources Association (ELRA). https://www.sign-lang.uni-hamburg.de/lrec/pub/14034.pdf (accessed 26 June 2024).
de Vos, Connie, Francisco Torreira & Stephen C. Levinson. 2015. Turn-timing in signed conversations: Coordinating stroke-to-stroke turn boundaries. Frontiers in Psychology 6. https://doi.org/10.3389/fpsyg.2015.00268.
Dingemanse, Mark. 2024. Interjections at the heart of language. Annual Review of Linguistics 10(1). 257–277. https://doi.org/10.1146/annurev-linguistics-031422-124743.
Dingemanse, Mark & N. J. Enfield. 2024. Interactive repair and the foundations of language. Trends in Cognitive Sciences 28(1). 30–42. https://doi.org/10.1016/j.tics.2023.09.003.
Dingemanse, Mark & Andreas Liesenfeld. 2022. From text to talk: Harnessing conversational corpora for humane and diversity-aware language technology. In Proceedings of the 60th annual meeting of the association for computational linguistics (vol. 1: Long papers), 5614–5633. Dublin: Association for Computational Linguistics. https://doi.org/10.18653/v1/2022.acl-long.385.
Dingemanse, Mark, Andreas Liesenfeld & Marieke Woensdregt. 2022. Convergent cultural evolution of continuers (mhmm). In Andrea Ravignani, Rie Asano, Daria Valente, Francesco Ferretti, Stefan Hartmann, Misato Hayashi, Yannick Jadoul, Mauricio Martins, Yoshei Oseki, Evelina Daniela Rodrigues, Olga Vasileva & Slawomir Wacewicz (eds.), Proceedings of the joint conference on language evolution (JCoLE), 160–167. Kanazawa, Japan. https://doi.org/10.31234/osf.io/65c79.
Engberg-Pedersen, Elisabeth. 2002. Gestures in signing: The presentation gesture in Danish Sign Language. In Rolf Schulmeister & Heimo Reinitzer (eds.), Progress in sign language research: In honor of Siegmund Prillwitz, 143–162. Hamburg: Signum.
Fenlon, Jordan & Julie A. Hochgesang (eds.). 2022. Signed language corpora (Sociolinguistics in deaf communities 25). Washington, DC: Gallaudet University Press. https://doi.org/10.2307/j.ctv2rcnfhc.
Fenlon, Jordan, Adam Schembri, Ramas Rentelis, David Vinson & Kearsy Cormier. 2014. Using conversational data to determine lexical frequency in British Sign Language: The influence of text type. Lingua 143. 187–202. https://doi.org/10.1016/j.lingua.2014.02.003.
Ferrara, Lindsay. 2020. Some interactional functions of finger pointing in signed language conversations. Glossa: A Journal of General Linguistics 5(1). https://doi.org/10.5334/gjgl.993.
Ferrara, Lindsay. 2022. Indexing turn-beginnings in Norwegian Sign Language conversation. Gesture 21(1). 1–27. https://doi.org/10.1075/gest.21004.fer.
Ferrara, Lindsay & Gabrielle Hodge. 2018. Language as description, indication, and depiction. Frontiers in Psychology 9. https://doi.org/10.3389/fpsyg.2018.00716.
Frishberg, Nancy, Nini Hoiting & Dan I. Slobin. 2012. Transcription. In Roland Pfau, Markus Steinbach & Bencie Woll (eds.), Sign language: An international handbook, 1045–1075. Berlin: De Gruyter Mouton. https://doi.org/10.1515/9783110261325.1045.
Gabarró-López, Sílvia. 2020. Are discourse markers related to age and educational background? A comparative account between two sign languages. Journal of Pragmatics 156. 68–82. https://doi.org/10.1016/j.pragma.2018.12.019.
Gabarró-López, Sílvia. 2024. Towards a description of palm-up in bidirectional signed language interpreting. Lingua 300. 103646. https://doi.org/10.1016/j.lingua.2023.103646.
Goodwin, Charles. 1986a. Audience diversity, participation and interpretation. Text: Interdisciplinary Journal for the Study of Discourse 6(3). 283–316. https://doi.org/10.1515/text.1.1986.6.3.283.
Goodwin, Charles. 1986b. Between and within: Alternative sequential treatments of continuers and assessments. Human Studies 9(2–3). 205–217. https://doi.org/10.1007/BF00148127.
Groeber, Simone & Evelyne Pochon-Berger. 2014. Turns and turn-taking in sign language interaction: A study of turn-final holds. Journal of Pragmatics 65. 121–136. https://doi.org/10.1016/j.pragma.2013.08.012.
Hester, Jim & Jennifer Bryan. 2022. Glue: Interpreted string literals, version 1.6.2 [R package]. Available at: https://CRAN.R-project.org/package=glue.
Hoza, Jack. 2011. The discourse and politeness function of HEY and WELL in American Sign Language. In Cynthia B. Roy (ed.), Discourse in signed languages, 69–95. Washington, DC: Gallaudet University Press. Available at: https://muse.jhu.edu/pub/18/monograph/chapter/450822. https://doi.org/10.2307/j.ctv2rh28s4.9.
Iwasaki, Shimako, Meredith Bartlett, Louisa Willoughby & Howard Manns. 2022. Handling turn transitions in Australian tactile signed conversations. Research on Language and Social Interaction 55(3). 222–240. https://doi.org/10.1080/08351813.2022.2101293.
Johnston, Trevor. 2012. Lexical frequency in sign languages. Journal of Deaf Studies and Deaf Education 17(2). 163–193. https://doi.org/10.1093/deafed/enr036.
Johnston, Trevor. 2014. The reluctant oracle: Adding value to, and extracting of value from, a signed language corpus through strategic annotations. Corpora 9(2). 155–189. https://doi.org/10.3366/cor.2014.0056.
Kendon, Adam. 2004. Gesture: Visible action as utterance. Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9780511807572.
Kendon, Adam. 2008. Some reflections on the relationship between “gesture” and “sign”. Gesture 8(3). 348–366. https://doi.org/10.1075/gest.8.3.05ken.
Kimmelman, Vadim & Anželika Teresė. 2023. Analyzing literary texts in Lithuanian Sign Language with computer vision: A proof of concept. In NAIS 2023: The 2023 symposium of the Norwegian AI society (CEUR workshop proceedings), Vol. 3431. Bergen: Technical University of Aachen. https://ceur-ws.org/Vol-3431/paper5.pdf (accessed 26 June 2024).
Lepeut, Alysson & Emily Shaw. 2022. Time is ripe to make interactional moves: Bringing evidence from four languages across modalities. Frontiers in Communication 7. 780124. https://doi.org/10.3389/fcomm.2022.780124.
Liddell, Scott K., Marit Vogt-Svendsen & Brita Bergman. 2007. A crosslinguistic comparison on buoys: Evidence from American, Norwegian, and Swedish Sign Language. In Myriam Vermeerbergen, Lorraine Leeson & Onno Crasborn (eds.), Simultaneity in signed languages: Form and function (Current issues in linguistic theory 281), 187–215. Amsterdam: John Benjamins. https://doi.org/10.1075/cilt.281.09lid.
Lugaresi, Camillo, Jiuqiang Tang, Hadon Nash, Chris McClanahan, Esha Uboweja, Michael Hays, Fan Zhang, Chuo-Ling Chang, Ming Guang Yong, Juhyun Lee, Wan-Teh Chang, Wei Hua, Manfred Georg & Matthias Grundmann. 2019. MediaPipe: A framework for building perception pipelines. https://arxiv.org/abs/1906.08172.
Lutzenberger, Hannah, Lierin de Wael, Rehana Omardeen & Mark Dingemanse. 2024. Interactional infrastructure across modalities: A comparison of repair initiators and continuers in British Sign Language and British English. Sign Language Studies 24(3). 548–581. https://doi.org/10.1353/sls.2024.a928056.
Manrique, Elizabeth. 2016. Other-initiated repair in Argentine Sign Language. Open Linguistics 2(1). https://doi.org/10.1515/opli-2016-0001.
Manrique, Elizabeth & N. J. Enfield. 2015. Suspending the next turn as a form of repair initiation: Evidence from Argentine Sign Language. Frontiers in Psychology 6. https://doi.org/10.3389/fpsyg.2015.01326.
McCleary, Leland Emerson & Tarcísio De Arantes Leite. 2012. Turn-taking in Brazilian Sign Language: Evidence from overlap. Journal of Interactional Research in Communication Disorders 4(1). 123–154. https://doi.org/10.1558/jircd.v4i1.123.
McKee, David & Graeme Kennedy. 2006. The distribution of signs in New Zealand Sign Language. Sign Language Studies 6(4). 372–391. https://doi.org/10.1353/sls.2006.0027.
McKee, Rachel Locker & Sophia Wallingford. 2011. ‘So, well, whatever’: Discourse functions of palm-up in New Zealand Sign Language. Sign Language and Linguistics 14(2). 213–247. https://doi.org/10.1075/sll.14.2.01mck.
Mesch, Johanna. 1998. Teckenspråk i taktil form: Turtagning och frågor i dövblindas samtal på teckenspråk [Sign language in tactile form: Turn-taking and questions in deafblind people’s signed conversations]. Edsbruk: Akademitryck.
Mesch, Johanna. 2016. Manual backchannel responses in signers’ conversations in Swedish Sign Language. Language & Communication 50. 22–41. https://doi.org/10.1016/j.langcom.2016.08.011.
Mesch, Johanna & Krister Schönström. 2021. Use and acquisition of mouth actions in L2 sign language learners: A corpus-based approach. Sign Language and Linguistics 24(1). 36–62. https://doi.org/10.1075/sll.19003.mes.
Mesch, Johanna & Krister Schönström. 2023. Self-repair in hearing L2 learners’ spontaneous signing: A developmental study. Language Learning 73(S1). 136–163. https://doi.org/10.1111/lang.12612.
Mesch, Johanna & Lars Wallin. 2021. Annoteringskonventioner för teckenspråkstexter. Version 8. [Annotation guidelines for sign language texts]. Available at: https://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-193356.
Mesch, Johanna, Lars Wallin, Anna-Lena Nilsson & Brita Bergman. 2012. Swedish Sign Language corpus project 2009–2011 (version 1) [Dataset]. Sign Language Section, Department of Linguistics, Stockholm University. https://teckensprakskorpus.su.se (accessed 26 June 2024).
Mesch, Johanna, Krister Schönström & Sebastian Embacher. 2021. Mouthings in Swedish Sign Language: An exploratory study. Grazer Linguistische Studien 93. 107–135. https://doi.org/10.25364/04.48:2021.93.4.
Micklos, Ashley & Marieke Woensdregt. 2023. Cognitive and interactive mechanisms for mutual understanding in conversation. In Jon Nussbaum (ed.), Oxford research encyclopedia of communication. Oxford: Oxford University Press. https://doi.org/10.1093/acrefore/9780190228613.013.134.
Miller, Christopher. 2006. Sign language: Transcription, notation, and writing. In Keith Brown (ed.), Encyclopedia of language and linguistics, 353–354. Oxford: Elsevier. https://doi.org/10.1016/B0-08-044854-2/00242-X.
Müller, Cornelia. 2004. Forms and uses of the palm up open hand: A case of a gesture family. In Cornelia Müller & Roland Posner (eds.), The semantics and pragmatics of everyday gestures, Vol. 9, 233–256. Berlin: Weidler.
Müller, Kirill. 2020. Here: A simpler way to find your files, version 1.0.1 [R package]. Available at: https://CRAN.R-project.org/package=here.
Nilsson, Anna-Lena. 2007. The non-dominant hand in a Swedish Sign Language discourse. In Myriam Vermeerbergen, Lorraine Leeson & Onno Crasborn (eds.), Simultaneity in signed languages: Form and function (Current issues in linguistic theory 281), 163–185. Amsterdam: John Benjamins. https://doi.org/10.1075/cilt.281.08nil.
Öqvist, Zrajm, Nikolaus Riemer Kankkonen & Johanna Mesch. 2020. STS-korpus: A sign language web corpus tool for teaching and public use. In Eleni Efthimiou, Stavroula-Evita Fotinea, Thomas Hanke, Julie A. Hochgesang, Jette Kristoffersen & Johanna Mesch (eds.), Proceedings of the LREC2020 9th workshop on the representation and processing of sign languages: Sign language resources in the service of the language community, technological challenges and application perspectives, 177–180. Marseille: European Language Resources Association (ELRA). https://www.sign-lang.uni-hamburg.de/lrec/pub/20014.pdf (accessed 26 June 2024).
Pedersen, Thomas Lin. 2024. Patchwork: The composer of plots, version 1.2.0 [R package]. Available at: https://CRAN.R-project.org/package=patchwork.
R Core Team. 2023. R: A language and environment for statistical computing. Vienna: R Foundation for Statistical Computing. Available at: https://www.R-project.org/.
Roush, Daniel. 2007. Indirectness strategies in American Sign Language requests and refusals: Deconstructing the deaf-as-direct stereotype. In Melanie Metzger & Earl Fleetwood (eds.), Translation, sociolinguistic, and consumer issues in interpreting, 103–156. Washington, DC: Gallaudet University Press. https://doi.org/10.2307/j.ctv2rcnnz0.6.
Ryttervik, Magnus. 2015. Gesten PU i svenskt teckenspråk: En studie i dess form och funktion [The gesture PU in Swedish Sign Language: A study of its form and function]. Stockholm: Stockholm University master’s thesis. https://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-122055 (accessed 26 June 2024).
Safar, Josefina & Connie de Vos. 2022. Pragmatic competence without a language model: Other-initiated repair in Balinese homesign. Journal of Pragmatics 202. 105–125. https://doi.org/10.1016/j.pragma.2022.10.017.
Schegloff, Emanuel A., Gail Jefferson & Harvey Sacks. 1977. The preference for self-correction in the organization of repair in conversation. Language 53(2). 361–382. https://doi.org/10.2307/413107.
Schnoebelen, Tyler, Julia Silge & Alex Hayes. 2022. Tidylo: Weighted tidy log odds ratio, version 0.2.0 [R package]. Available at: https://CRAN.R-project.org/package=tidylo.
Skedsmo, Kristian. 2020. Multiple other-initiations of repair in Norwegian Sign Language. Open Linguistics 6(1). 532–566. https://doi.org/10.1515/opli-2020-0030.
Skedsmo, Kristian. 2023. Repair receipts in Norwegian Sign Language multiperson conversation. Journal of Pragmatics 215. 189–212. https://doi.org/10.1016/j.pragma.2023.07.015.
Slowikowski, Kamil. 2023. Ggrepel: Automatically position non-overlapping text labels with “ggplot2”, version 0.9.3 [R package]. Available at: https://CRAN.R-project.org/package=ggrepel.
Svenskt teckenspråkslexikon. 2023. Swedish Sign Language dictionary online. Stockholm: Sign Language Section, Department of Linguistics, Stockholm University. Available at: https://teckensprakslexikon.su.se.
Van Herreweghe, Mieke. 2002. Turn-taking mechanisms and active participation in meetings with Deaf and hearing participants in Flanders. In Ceil Lucas (ed.), Turn-taking, fingerspelling and contact in signed languages (Sociolinguistics in deaf communities 8), 73–103. Washington, DC: Gallaudet University Press.
Vaughan, Davis. 2022. Slider: Sliding window functions, version 0.3.0 [R package]. Available at: https://CRAN.R-project.org/package=slider.
Vermeerbergen, Myriam & Anna-Lena Nilsson. 2018. Introduction. In Anne Aarssen, René Genis & Eline van der Veken (eds.), A bibliography of sign languages, 2008–2017, ix–xxxi. Leiden: Brill.
Vogt-Svendsen, Marit & Brita Bergman. 2007. Point buoys: The weak hand as a point of reference for time and space. In Myriam Vermeerbergen, Lorraine Leeson & Onno Crasborn (eds.), Simultaneity in signed languages: Form and function (Current issues in linguistic theory 281), 217–236. Amsterdam: John Benjamins. https://doi.org/10.1075/cilt.281.10vog.
Wickham, Hadley, Mara Averick, Jennifer Bryan, Winston Chang, Lucy McGowan, Romain François, Garrett Grolemund, Alex Hayes, Lionel Henry, Jim Hester, Max Kuhn, Thomas Pedersen, Evan Miller, Stephan Bache, Kirill Müller, Jeroen Ooms, David Robinson, Dana Seidel, Vitalie Spinu, Kohske Takahashi, Davis Vaughan, Claus Wilke, Kara Woo & Hiroaki Yutani. 2019. Welcome to the tidyverse. Journal of Open Source Software 4(43). 1686. https://doi.org/10.21105/joss.01686.
Wickham, Hadley, Jim Hester & Jeroen Ooms. 2023a. xml2: Parse XML, version 1.3.4 [R package]. Available at: https://CRAN.R-project.org/package=xml2.
Wickham, Hadley, Thomas Lin Pedersen & Dana Seidel. 2023b. Scales: Scale functions for visualization, version 1.3.0 [R package]. Available at: https://CRAN.R-project.org/package=scales.
Wilke, Claus O. & Brenton M. Wiernik. 2022. Ggtext: Improved text rendering support for “ggplot2”, version 0.1.2 [R package]. Available at: https://CRAN.R-project.org/package=ggtext.
Wittenburg, Peter, Hennie Brugman, Albert Russel, Alex Klassmann & Han Sloetjes. 2006. ELAN: A professional framework for multimodality research. In Nicoletta Calzolari, Khalid Choukri, Aldo Gangemi, Bente Maegaard, Joseph Mariani, Jan Odijk & Daniel Tapias (eds.), Proceedings of the 5th international conference on language resources and evaluation (LREC 2006), 1556–1559. Genoa: European Language Resources Association (ELRA). Available at: https://aclanthology.org/L06-1082/.
© 2024 the author(s), published by De Gruyter, Berlin/Boston
This work is licensed under the Creative Commons Attribution 4.0 International License.