Abstract
This experimental study examines the relationship between accented speech and community interpreters’ stress. While previous studies have examined the potential impact of accent on interpreter performance, the interplay between stress and accent has received limited attention. The present study combines both in a single experiment to investigate their potential relationship along with other participant-level variables. The results were inconclusive regarding differences in the level of stress experienced by community interpreters when working with standard versus regional US accents, which may suggest that accented language within a country is not as strong an influence on stress levels in community interpreting. Participant-level variables, including interpreter self-efficacy and trainee versus professional status, also showed non-significant results. However, an effect of task order was observed in the experimental setting, with the first interpreting task triggering a higher level of stress than the second task. This result suggests a potential practice effect on interpreting performance and that stress results from the onset of the interpreting task rather than from accented speech. In general, this study sheds new light on accent as a variable in interpreting, suggesting that the previously observed impact of regional accentual varieties in simultaneous interpreting may not neatly extrapolate to other interpreting contexts or settings.
1 Introduction
Affective aspects of community interpreting are a growing area of inquiry in the field, with much of this scholarship focusing on the potential psychological burden triggered by interpreting emotionally charged content (e.g., Mehus and Becher 2016; Ndongo-Keller 2015; Sultanić 2021) and interpreters’ experience of stress (e.g., Moser-Mercer et al. 1998; Rojo et al. 2021; Roziner and Shlesinger 2010). Understood as “a real or anticipated threat to homeostasis or an anticipated threat to well-being” (Herman 2011: 117), stress has both physiological and psychological manifestations. It entails not only a bodily reaction to a stimulus (Selye 1936) but also a subjective appraisal of a stressor, which may depend on the individual’s available resources (Lazarus and Folkman 1984). Previous research has empirically tested potential stress factors in conference interpreting, including different interpreting modalities such as remote interpreting (Roziner and Shlesinger 2010), as well as task-specific variables such as prolonged interpreting turns (Moser-Mercer et al. 1998) and high source language delivery rate (Korpal 2017). In addition, emotionally charged content has been linked to increased levels of stress and anxiety in both conference and community interpreting (e.g., Rojo and Foulquié 2025), and researchers have begun to examine other potential outcomes of such content as well as ways to mitigate its influence in the settings in which interpreting occurs. Such research includes questions surrounding ethical stress resulting from the complex dynamics of interaction in public settings (Hubscher-Davidson 2021), physical exhaustion (Holmgren et al. 2003), and self-care strategies (Korpal and Mellinger 2022) that may be linked to an interpreter’s resilience (Crezee and Lai 2022; Crezee and Major 2021). A feature that has been shown to impact interpreting performance is self-efficacy (Lee 2018), understood as “beliefs in one’s capabilities to organize and execute the courses of action required to produce given attainments” (Bandura 1997: 3). However, little is known about the potential relationship between interpreters’ resilience and self-efficacy. This study explores whether community interpreters’ self-reported stress and self-efficacy may be related.
While previous research establishes a connection between emotionally charged content and stress, other characteristics of a speaker’s language have yet to be sufficiently explored as potential elicitors of interpreter stress. In this study, we examine a common feature anecdotally reported by interpreters and training materials as a problem trigger for working interpreters – namely, a speaker’s accent – to determine the extent to which an accent that is unfamiliar to interpreters may elicit stress. While accented speech is common to the interpreting task, recent scholarship has pointed to the potential for phonetic and phonological features of a source utterance to impact interpreter performance, thereby necessitating additional scholarship on these variables (e.g., Colina 2025). Moreover, researchers focusing on simultaneous interpreting have also described these features from the perspective of English as a lingua franca, to the extent that linguistic features may be perceived as non-standard linguistic input to the interpreting task (e.g., Albl-Mikasa and Gieshoff 2025). This study is an initial inquiry into the extent to which source language features can elicit stress due to an interpreter’s lack of familiarity with the speaker’s accent. To do so, we conducted an experimental study that elicited self-reported stress data using a psychometrically validated instrument administered as a baseline measure and after each of two experimental tasks. More specifically, we focus on the impact of potentially unfamiliar regional accentual varieties on the stress experienced by professional community interpreters and interpreting trainees, triangulating the observation of latent constructs with semi-structured interview data.
2 Literature review
Broadly speaking, accent can be defined as “the cumulative auditory effect of those features of pronunciation that identify where a person is from, regionally or socially” (Crystal 2003: 3). Given the range of speakers with whom interpreters interact, it is perhaps unsurprising that practicing interpreters and trainers comment on the importance of being able to understand a range of accents. For instance, Setton and Dawrant (2016) note that the ability to understand a range of accents is a prerequisite to admission into a conference interpreting program. Similarly, Rudvin and Tomassini (2011) explicitly mention language varieties as part of public service and community training programs.
Research on the question of accent has taken several forms in interpreting studies, primarily in simultaneous interpreting research. On the one hand, accent has been examined from the perspective of an interpreter’s output influencing the perceived quality of the interpreter’s rendition (e.g., Cheung 2013; Chevalier and Gile 2015). In some cases, these studies point to prosodic and intonation parameters of an interpreter’s voice, which are implicit in overarching discussions of accent (Chevalier and Gile 2015), while in others, these are addressed from the perspective of a perceived nativeness in an interpreter’s rendition (Cheung 2020, 2022). This scholarship figures into discussions of interpreting quality more broadly, such that an interpreter’s actual performance and rendition may be impacted by the perceived quality and listener expectations (García Becerra and Collados Aís 2019) as well as the elusive concept of voice (Wang 2022).
On the other hand, some scholars have been interested in the relationship between accent and cognitive load, focusing on strategies interpreters may use to mitigate an increase in cognitive load precipitated by source language features (e.g., increased speed or unusual accents) (Pöchhacker 2009). For instance, Gile (2009: 193) refers to “strong accents” that may increase the processing requirements for listening and analysis and, as a result, deplete the interpreter’s cognitive resources required to provide an accurate interpretation. Similarly, McAllister (2000: 61) concludes that interpreters may suffer from the effects of “perceptual foreign accent.” In reviewing this work, Colina (2025: 138) notes that “even very proficient interpreters perceive non-native source text differently from natives,” which may affect interpreter comprehension. Research has identified differences in interpreter comprehension across different levels of interpreting experience (see Díaz-Galaz 2020), such that it is plausible that accent and language varieties warrant further inquiry.
Work on accent and cognitive load has subsequently been linked to the quality of an interpreter’s performance, particularly when these linguistic features are considered in tandem (e.g., Han and Riazi 2017). The potential of task-specific influences on an interpreter’s cognitive load has been conceptualized in the interpreting studies literature as problem triggers (e.g., Gile 1995, 2009; Mankauskienė 2018), allowing researchers to examine the influence of specific task features on interpreting. These triggers have been further classified in several ways. For instance, Gile (2009) identifies cognitive problem triggers, language-specific problem triggers, and speaker-factor problem triggers. Mankauskienė (2018: 17) divided the challenging aspects of interpreting into sender-related problem triggers, problem triggers associated with the speech in the source language, problem triggers related to an interpreter, and technical problem triggers. Both classifications incorporate the speaker’s accent as an example of a problem trigger: a cognitive problem trigger in Gile’s classification (2009: 193) and a sender-related problem trigger in Mankauskienė’s typology (2018: 17).
Empirical research on the impact of these source language triggers, such as accent, has adopted a range of terminology to refer to these source utterance-specific problem triggers. In the case of Mankauskienė (2018: 14), “a speaker’s strong or unfamiliar accent” is linked to the idea of signal distortion. Albl-Mikasa and Gieshoff (2025) approach this influence from the perspective of non-standard input, wherein interpreters are regularly confronted with English as a lingua franca in interpreting contexts. Consequently, interpreters may work with source language input characterized by linguistic features arising from the use of English as a second language that may not coincide with those exhibited by L1 or native speakers of English. In a survey of 32 conference interpreters, Albl-Mikasa (2010) investigated how the growing popularity of English as a lingua franca may impact the interpreter profession. When asked about their preference for either native or non-native source language speakers, 69 % of interpreters responded that they preferred speeches delivered by a native speaker. The following challenges of speeches in non-native English were identified:
“unorthodox grammatico-syntactic structures
elliptical structures
unusual ways of putting things
imprecision, unclear wording and phrases
wrong intonation (overshadowing the overall line of argumentation)
generally reduced language” (Albl-Mikasa 2010: 134).
Similarly, a speaker’s accent has also been identified as a potential stressor in a survey conducted with 127 Polish court translators and interpreters (Korpal 2021). In the group of stress factors related to interpreting, “speaker’s accent” ranked fourth, following “fast delivery rate”, “no materials available before an assignment”, and “poor working conditions – room acoustics” (Korpal 2021: 560). Both accent and a fast delivery rate are source utterance linguistic characteristics, illustrating an affective dimension or sensitivity to these features as potential problem triggers.
The potential role of accent in interpreting quality has also been studied in experimental research in which participants interpreted non-native speakers or those using non-standard and regional accents. For example, Mazzetti (1999) investigated the impact of segmental and prosodic errors in the source speech on interpreting performance among interpreting trainees and showed that such deviations from native-like pronunciation may compromise interpreting quality. Elsewhere, Sabatini (2000) tested interpreting students’ performance in three language tasks: listening comprehension, shadowing, and simultaneous interpreting from English into Italian. Participants were presented with accented speeches by an Indian speaker with English as L2 and by an American speaking native English “with a strong accent” (Sabatini 2000: 25). The results point to higher performance for listening comprehension relative to shadowing and simultaneous interpreting, with no significant differences between the two latter tasks. Kurz (2008) tested the impact of non-native English accent on the interpreting performance of 10 student interpreters, who were asked to interpret a source speech divided into two parts: one delivered by a native speaker of English and the other by a non-native speaker of English. The analysis of interpreting accuracy pointed to a greater loss of information for the part of the speech delivered by the non-native speaker.
Still other scholarship has studied the effect of phonemic and prosodic deviations in English on simultaneous interpreting accuracy. Lin et al. (2013) demonstrated that prosodic deviations impacted interpreting accuracy more than phonemic deviations in a group of interpreting students, identifying “deviated North American English post-vowel /r/, intonation and rhythm” (Lin et al. 2013: 30) as the major problem triggers. The detrimental effect of strong accent on interpreting quality was also confirmed in a sample of 32 professional interpreters in a study by Han and Riazi (2017). Strong accent negatively affected interpreting quality in all three measures of performance adopted in the study: information completeness, fluency of delivery, and target language quality. A negative impact of non-native accent on interpreting accuracy was also observed by Staszewska (2020), in whose study 10 Polish interpreting trainees interpreted English speeches delivered by a native English speaker and a native French speaker. The speech by the French speaker was also rated as more difficult in a post-experiment survey.
While this growing body of scholarship has suggested a potential negative effect of non-native accents on interpreting performance (e.g., Kurz 2008; Han and Riazi 2017; Lin et al. 2013), there remains limited scholarship on linguistic varieties or regional variation within a specific language community. In this study, we focus on regional varieties of English – i.e., local varieties of English that may be unfamiliar to interpreters as a result of limited contact with speakers of these varieties. Of particular interest in this study are language varieties found within the United States, as opposed to variation across larger geographic distances (e.g., Australian English vs. South African English), particularly since community interpreters may encounter a range of accents in their primary place of work. We test whether interpreting a speaker using a potentially unfamiliar accent elicits self-reported stress in community interpreters. Although accent and stress have both been extensively studied, there remain relatively limited experimental data to corroborate or refute observational data collected via interviews and surveys. Moreover, accented speech and interpreter stress are rarely combined in a single study. Therefore, this study seeks to examine experimentally how regional accent varieties may modulate community interpreters’ stress. To do so, we adopt a psychometrically validated self-report tool, the Short Stress State Questionnaire (SSSQ; Helton 2004), to provide a quantitative measure of stress experienced by community interpreters in response to the interpreting tasks performed. This measure was complemented by semi-structured post-task and post-experiment interviews that aimed to collect data on participants’ perceptions of task difficulty, the challenges involved, and the stress experienced.
3 Methods
3.1 Materials
The materials comprised two job interviews (interview A and interview B) between an interviewer speaking English and an interviewee speaking Spanish. Job interviews were selected as an encounter type well suited to community interpreting, since such interviews involve prototypical community interpreting interactions that follow a question-and-answer discourse with relatively short exchanges between interlocutors. Moreover, these encounters are likely to involve two parties who have not previously met or interacted, such that the interpreter is unlikely to draw on previous knowledge of the speakers when completing the tasks.
The interviews were of equal length (interview A – 899 words, including 433 words in the interviewer’s part, 6 min and 9 s; interview B – 899 words, including 431 words in the interviewer’s part, 6 min and 7 s). The materials contained no specialized terminology, affect-laden language, or other problem triggers such as complex numbers. The interviews also followed the same structure: an opening, a series of interview questions, and a closing part including information on how feedback would be provided to the interviewee. For comparability, both interviews were for the position of Senior Manager in the banking industry.
Two versions of each interview were prepared: one with a male English speaker using Midwestern American English (sometimes referred to as General American English or Standard American English), and the other with a male English speaker using Southern American English, a regional accent variety of English with linguistic features of Southern Midland English.[1] Midwestern American English is often considered the mainstream variety of English spoken by most Americans and is not conceptualized as a single, uniform accent (Kövecses 2000), while Southern Midland English is more characteristic of speakers inland in the Missouri, Arkansas, and Tennessee regions. The latter shares vowel patterns and sonorant consonant features with Southern accents in the United States (Labov et al. 2006). Both speakers are native speakers of English, were educated in the United States, and hold doctoral degrees. Each has lived for extended periods of time in the region where the respective language variety is spoken. The same female speaker recorded the Spanish-language interviewee’s parts for both interviews. Speakers were told to speak at their regular rate of speed and were not instructed to change their manner of speech for the recording.
3.2 Participants
A total of 18 interpreters took part in the study via purposive sampling. They were recruited through professional interpreting listservs and invitations sent by email in order to specifically identify interpreters working in community settings. Interpreting students were also recruited via email by instructors currently or previously teaching community interpreting. To participate in the study, they needed to be at least 18 years old, be either an interpreting student or a professional community interpreter, and have some experience interpreting in the English-Spanish language pair. Experience was not controlled as an inclusion criterion in order to provide a sufficiently broad range of participants in the study – so long as the participants had interpreting experience in the Spanish-English language pair and met the other inclusion criteria, they were not disqualified from the study. In the final sample, 9 were US-based interpreting trainees and the other 9 were US-based professional community interpreters. The sample included 3 men and 15 women, ranging in age from 20 to 69.[2] At the time of participation, interpreting trainees had 1–2 years of translation/interpreting education. All professional interpreters were certified (either court or medical interpreters). All participants had English and Spanish as their primary interpreting language combination. 14 reported formal education in Spanish, while 16 had experience living in a Spanish-speaking country. Given the challenges associated with identifying ‘native language’ or ‘L1’ in relation to language proficiency in interpreting and using these as potential demographic indicators (for a discussion, see Tiselius 2025), participants were instead asked to self-assess their language proficiency in English and Spanish on a 1–5 scale (1 = ‘none’; 5 = ‘native-like or near-native’) in four language-related abilities: speaking, listening, reading, and writing. Participants rated their language proficiency in English as follows: speaking: M = 4.94, SD = 0.24; listening: M = 4.88, SD = 0.33; reading: M = 4.94, SD = 0.24; writing: M = 4.94, SD = 0.24, while in Spanish the ratings were as follows: speaking: M = 4.61, SD = 0.5; listening: M = 4.82, SD = 0.39; reading: M = 4.65, SD = 0.49; writing: M = 4.53, SD = 0.62. This more granular information provides greater insight since phonological awareness and perception across accents cannot be fully accounted for with native-language indicators alone (see Colina 2025). Since one participant did not provide answers for listening, reading, and writing, the results in these categories are based on 17 responses, while the results for speaking were calculated for all 18 participants.
3.3 Measurement
The construct of self-efficacy was considered in this study so as to look for potential relationships with interpreters’ self-reported stress. As self-efficacy has been shown to modulate a physiological stress response (e.g., O’Leary 1992), with this study we aimed to test whether a similar relationship can be observed for self-reported stress. To this end, the latent construct of interpreter self-efficacy was measured using the Interpreting Self-Efficacy (ISE) scale (Lee 2014). The ISE is a 21-item scale with three sub-dimensions of Self-Confidence (SC), Self-Regulatory Efficacy (SR), and Preference for Task Difficulty (TD; Lee 2014), which aligns with Kim and Park’s (2001) model of academic self-efficacy. Responses to the scale’s items are provided on a 6-point Likert-type scale in line with its original development and implementation. We calculated values for both general self-efficacy and separately for the three sub-dimensions using the following scale: “definitely true of me” = 1, “true of me” = 2, “somewhat true of me” = 3, “somewhat untrue of me” = 4, “untrue of me” = 5, and “very untrue of me” = 6; because these linguistic markers are the reverse of Lee’s (2014) original response scale, the scores were reversed to allow interpretability and comparability with previous research. Additionally, reverse scoring was applied to 5 items, following Lee’s (2014) instructions. Since the original scale was designed to measure student interpreters’ self-efficacy, the language for professional interpreters taking part in this study was slightly edited to reflect the participants’ professional status. For example, “other students” was replaced with “other professional interpreters” and “In an interpretation classroom” was changed to “When interpreting.”
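To illustrate this scoring procedure, the following minimal R sketch shows the response-label reversal, the reverse scoring of negatively worded items, and the computation of subscale scores. The item data, the identity of the reverse-scored items, and the item-to-subscale keys are hypothetical placeholders rather than the actual ISE items.

```r
# Minimal sketch of the ISE scoring procedure (simulated data, hypothetical keys).
set.seed(1)
n <- 18
ise_raw <- as.data.frame(matrix(sample(1:6, n * 21, replace = TRUE), nrow = n,
                                dimnames = list(NULL, paste0("item", 1:21))))

# Step 1: flip all responses because the administered labels ran in the
# opposite direction to Lee's (2014) original scale (1 <-> 6, 2 <-> 5, ...).
ise <- 7 - ise_raw

# Step 2: reverse-score the negatively worded items; the five item numbers
# below are placeholders, not the actual items identified by Lee (2014).
neg_items <- c("item3", "item8", "item12", "item16", "item20")
ise[neg_items] <- 7 - ise[neg_items]

# Step 3: subscale and total scores (item-to-subscale keys are illustrative).
sc_items <- paste0("item", 1:9)    # Self-Confidence (9 items)
sr_items <- paste0("item", 10:13)  # Self-Regulatory Efficacy (4 items)
td_items <- paste0("item", 14:21)  # Preference for Task Difficulty (8 items)
scores <- data.frame(SC    = rowMeans(ise[sc_items]),
                     SR    = rowMeans(ise[sr_items]),
                     TD    = rowMeans(ise[td_items]),
                     total = rowMeans(ise))
head(scores)
```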
Stress experienced during the task was measured using the Short Stress State Questionnaire (SSSQ; Helton 2004). The SSSQ is a 24-item tool used to assess subjective state stress as a result of a performed task; it comprises three main factors: task engagement, distress, and worry, and answers are provided on a 5-point Likert scale. The tool features two versions: one to measure perceived stress before the task (State Pre-Questionnaire) and after it is completed (State Post-Questionnaire).
3.4 Procedure
The study was approved by the Institutional Review Board of UNC Charlotte [approval no. 21-0332]. It involved interpreting two audio recordings of job interviews consecutively between an English speaker and a Spanish speaker. The study was conducted via Zoom, and each experimental session took approximately 60 min. All survey data were collected using Qualtrics. Informed consent was obtained electronically via DocuSign before participation in the experiment. Participants could withdraw from the study at any time, and no incentive was provided for participation in the study.
Prior to the start of the experimental tasks, participants completed three online surveys: a demographic questionnaire, Interpreting Self-Efficacy (ISE) scale, and the Short Stress State Pre-Questionnaire (SSSQ). Each participant was assigned a participant number to allow the data from each of the experimental tasks to be matched throughout the experiment. No identifiable data were collected so participant identities could not be determined after the experimental tasks were completed.
In the main part of the experiment, each participant interpreted two job interviews differing in the accent of the English speaker. The order of presentation (interview A vs. interview B) and the version provided (general accent vs. regional accentual variety) were counterbalanced across the participants. Interviews were interpreted consecutively in short fragments – 33 interpreting chunks in both interviews. The average number of words per chunk equaled 27.2 for both interviews. The recording was managed by the experimenter, who stopped the recording after each fragment so that an interpretation could be provided. Participants were instructed that they could take notes if they wished.
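One way such a counterbalanced allocation could be set up is sketched below; the assignment scheme is illustrative only, as the exact allocation procedure used in the study is not reproduced here.

```r
# Illustrative counterbalancing: cross interview order with accent version
# and cycle the four cells across participants (allocation is hypothetical).
design <- expand.grid(first_interview = c("A", "B"),
                      first_accent    = c("general", "regional"))
assignments <- data.frame(participant = 1:18,
                          design[rep(seq_len(nrow(design)), length.out = 18), ],
                          row.names = NULL)
table(assignments$first_interview, assignments$first_accent)
```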
After each interpreting task the Short Stress State Post-Questionnaire (SSSQ; Helton 2004) was completed by the participant to test the impact of the experimental condition on perceived stress. Short semi-structured interviews were also conducted to collect qualitative data. After each task, participants were asked about their impressions of the task, potential challenges, and perceived stress. After both interpretations were completed, participants were also asked to compare the tasks regarding difficulty, the speakers’ accent, the level of perceived stress, and physical manifestations of the stress experienced (see Appendix for the list of post-task and post-experiment questions). At the end of the experiment, participants were debriefed about the study’s aims and hypotheses.
3.5 Data analysis
The scores for the Interpreting Self-Efficacy (ISE) scale (Lee 2014) were analyzed for three factors, i.e., Self-Confidence, Self-Regulatory Efficacy, and Preference for Task Difficulty (Lee 2014). Scores were calculated for each subscale (Self-Confidence – 9 items; Self-Regulatory Efficacy – 4 items; Preference for Task Difficulty – 8 items), as well as for all 21 items as a measure of overall self-efficacy.
Participants’ stress level was measured with the Short Stress State Questionnaire (SSSQ; Helton 2004). The pre-task levels were treated as a point of reference, and for both interpreting tasks, the measured SSSQ score was divided by the baseline level. Thus, the calculated values measured how the level of perceived stress changed as a function of the interpreting task completed. Scores were calculated separately for the three SSSQ subscales: task engagement, distress, and worry. As a robustness check, all analyses were repeated using the difference (rather than the ratio) between baseline and experimental conditions; results were qualitatively similar.
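As an illustration, the sketch below shows how such baseline-normalized scores might be computed in R for one subscale; the data frame, variable names, and values are hypothetical.

```r
# Hypothetical per-participant SSSQ distress scores (baseline and two tasks).
sssq <- data.frame(participant  = 1:4,
                   distress_pre = c(2.0, 1.8, 2.4, 2.1),
                   distress_t1  = c(2.6, 2.0, 2.9, 2.3),
                   distress_t2  = c(2.2, 1.9, 2.5, 2.2))

# Ratio scores: post-task score relative to the pre-task baseline.
sssq$ratio_t1 <- sssq$distress_t1 / sssq$distress_pre
sssq$ratio_t2 <- sssq$distress_t2 / sssq$distress_pre

# Robustness check: difference scores instead of ratios.
sssq$diff_t1 <- sssq$distress_t1 - sssq$distress_pre
sssq$diff_t2 <- sssq$distress_t2 - sssq$distress_pre
sssq
```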
Correlation analysis was employed to test the relationship between self-efficacy and self-reported stress. The effect of accent on stress was tested with repeated measures ANOVA and paired t-tests. Data analysis was conducted using R software.
4 Results
4.1 Stress and self-efficacy correlation
When analyzing responses to the ISE, participants generally reported moderate to strong self-efficacy, with the following summary statistics: self-confidence (SC): M = 4.94, SD = 0.57; self-regulatory efficacy (SR): M = 5.07, SD = 0.59; preference for task difficulty (TD): M = 4.12, SD = 0.84; total ISE: M = 4.65, SD = 0.60. The scale’s reliability was strong for SC (Cronbach’s α = 0.89, 95 % CI [0.82, 0.97]) and TD (Cronbach’s α = 0.86, 95 % CI [0.77, 0.96]). Reliability for SR was somewhat weaker, with a wide confidence interval (Cronbach’s α = 0.63, 95 % CI [0.32, 0.94]), likely due in part to the smaller number of items in this subscale.
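For reference, Cronbach’s alpha for a subscale can be computed directly from its standard formula, as in the sketch below; the item responses are simulated placeholders, and confidence intervals such as those reported above would additionally require a bootstrap or analytic interval.

```r
# Cronbach's alpha from its standard formula: (k/(k-1)) * (1 - sum(item variances) / variance of total score).
cronbach_alpha <- function(items) {
  items <- as.matrix(items)
  k <- ncol(items)
  item_vars <- apply(items, 2, var)
  total_var <- var(rowSums(items))
  (k / (k - 1)) * (1 - sum(item_vars) / total_var)
}

# Simulated responses for a 9-item subscale from 18 participants; with random
# data the resulting alpha will naturally be low.
set.seed(2)
items_sim <- as.data.frame(matrix(sample(1:6, 18 * 9, replace = TRUE), nrow = 18))
cronbach_alpha(items_sim)
```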
Pearson’s correlations were computed between the ISE and SSSQ measures in both the accented and unaccented conditions, and the results appear in Table 1. The magnitude of all correlations is small, and none is statistically significant at the 5 % level. Therefore, the observed relationship between stress and self-efficacy was weak for the participants in this study (Mellinger and Hanson 2017).
Table 1: Pearson correlation coefficients and p-values (in brackets) between ISE and SSSQ scores.

| Accented condition | SC | SR | TD |
|---|---|---|---|
| Distress | 0.109 (0.667) | –0.010 (0.969) | –0.305 (0.218) |
| Engagement | 0.164 (0.516) | –0.058 (0.819) | –0.008 (0.975) |
| Worry | 0.047 (0.853) | –0.307 (0.215) | 0.236 (0.346) |

| Unaccented condition | SC | SR | TD |
|---|---|---|---|
| Distress | 0.004 (0.987) | 0.028 (0.912) | –0.057 (0.822) |
| Engagement | 0.273 (0.273) | –0.156 (0.536) | –0.068 (0.789) |
| Worry | –0.044 (0.862) | –0.218 (0.385) | 0.440 (0.068) |
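The correlations reported in Table 1 could be obtained along the lines of the following sketch; the ISE and SSSQ data frames, their column names, and their values are hypothetical stand-ins for the subscale scores described above.

```r
# Hypothetical ISE subscale scores and baseline-normalized SSSQ scores for one
# condition; values are simulated placeholders.
set.seed(3)
n <- 18
ise  <- data.frame(SC = runif(n, 3, 6), SR = runif(n, 3, 6), TD = runif(n, 2, 6))
sssq <- data.frame(distress   = runif(n, 0.8, 2.0),
                   engagement = runif(n, 0.7, 1.2),
                   worry      = runif(n, 0.5, 2.0))

# Pearson correlation and p-value for every SSSQ x ISE pairing, as in Table 1.
pairs <- expand.grid(sssq_scale = names(sssq), ise_scale = names(ise),
                     stringsAsFactors = FALSE)
pairs$r <- NA_real_
pairs$p <- NA_real_
for (i in seq_len(nrow(pairs))) {
  ct <- cor.test(sssq[[pairs$sssq_scale[i]]], ise[[pairs$ise_scale[i]]])
  pairs$r[i] <- round(ct$estimate, 3)
  pairs$p[i] <- round(ct$p.value, 3)
}
pairs
```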
4.2 Self-reported stress
The potential effect of accent on self-reported stress was tested with three repeated measures ANOVA models, one for each subscale of the SSSQ; this omnibus approach helps control for Type I error. The primary factor of interest was accent, but each model also included professional status (professional or student), experimental order (accented or unaccented version first), and the interaction between accent and experimental order.
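A minimal sketch of how such a model could be specified in base R is given below; the long-format data frame, variable names, and values are hypothetical, and dedicated packages such as afex offer alternative interfaces for repeated measures ANOVA.

```r
# Hypothetical long-format data: one distress ratio per participant per condition.
set.seed(4)
dat <- expand.grid(participant = factor(1:18),
                   accent      = c("accented", "unaccented"))
dat$order    <- ifelse(as.integer(dat$participant) %% 2 == 0,
                       "accented_first", "unaccented_first")
dat$status   <- ifelse(as.integer(dat$participant) <= 9, "trainee", "professional")
dat$distress <- runif(nrow(dat), 0.8, 2.0)

# Repeated measures ANOVA: accent within subjects, status and order between
# subjects, plus the accent x order interaction.
model <- aov(distress ~ accent * order + status + Error(participant / accent),
             data = dat)
summary(model)
```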
Accent was not statistically significant at the 5 % level in any of the three models: worry, F[1] = 0.003, p = 0.957; engagement, F[1] = 0.201, p = 0.657; distress, F[1] = 3.58, p = 0.068. Neither professional status nor experimental order was statistically significant in any of the three models. However, there was a statistically significant interaction between task order and accent for the distress dimension of the SSSQ (F[1] = 4.80, p = 0.037).
While the omnibus tests were not statistically significant, results of paired t-tests for accent and order effect allow for clearer presentation of the descriptive statistics and effect sizes. For the distress subscale, there was no statistically significant difference between the accented (M = 1.34, SD = 0.46) and non-accented (M = 1.27, SD = 0.34) conditions with a small effect size (t[17] = 0.56, p = 0.582, Cohen’s d = 0.173). Results were similarly not statistically significant for the engagement subscale when comparing the accented (M = 0.94, SD = 0.16) and non-accented (M = 0.95, SD = 0.15) conditions with a small effect size (t[17] = 0.21, p = 0.836, Cohen’s d = 0.06). Finally, there was no statistically significant difference observed between the accented (M = 1.06, SD = 0.47) and non-accented (M = 1.07, SD = 0.48) conditions for the worry subscale of the SSSQ with a small effect size (t[17] = 0.26, p = 0.801, Cohen’s d = 0.02). Overall, the change in self-reported stress was negligible when comparing the accented and non-accented versions.
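The pairwise comparisons above could be run along the lines of the following sketch; the data are simulated, and the Cohen’s d shown (mean difference divided by the pooled standard deviation of the two conditions) is one common formulation, not necessarily the exact variant used for every comparison reported here.

```r
# Hypothetical baseline-normalized distress scores for the two conditions.
set.seed(5)
accented     <- runif(18, 0.8, 2.2)
non_accented <- runif(18, 0.8, 2.0)

# Paired t-test across the 18 participants.
t.test(accented, non_accented, paired = TRUE)

# Cohen's d as the mean difference over the pooled SD of the two conditions.
cohens_d <- (mean(accented) - mean(non_accented)) /
  sqrt((var(accented) + var(non_accented)) / 2)
cohens_d
```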
In order to determine whether a task effect was present, we also incorporated task order as part of the overall analytical model. Experimental order did not have a strong influence on self-reported stress either. The t-test results are reported here with descriptive statistics and effect sizes. For the distress subscale of the SSSQ, there was no statistically significant difference between the accented-first (M = 1.42, SD = 0.50) and non-accented-first (M = 1.22, SD = 0.29) conditions, with a medium effect size (t[22.8] = 1.41, p = 0.171, Cohen’s d = 0.49). There was also no statistically significant difference for the engagement subscale between the accented-first (M = 0.90, SD = 0.16) and non-accented-first (M = 0.98, SD = 0.14) conditions, with a medium effect size (t[29.3] = 0.21, p = 0.129, Cohen’s d = 0.53). Finally, no statistically significant difference was observed between the accented-first (M = 1.14, SD = 0.62) and non-accented-first (M = 1.01, SD = 0.30) conditions for the worry subscale, with a small effect size (t[20.5] = 0.77, p = 0.452, Cohen’s d = 0.27).
These one-dimensional analyses confirm the omnibus ANOVA results that neither accent nor experimental order is independently associated with statistically significant differences in self-reported stress levels, though the effect sizes related to task order are larger. However, the repeated measures ANOVA did reveal an interaction effect between these two factors that suggests experimental order plays at least some role. The interaction effect was statistically significant for the distress scale. Descriptive statistics, partitioned by both accent and experimental order, provide further insight, as seen in Table 2. Whether participants began with the accented or the non-accented version, the second task was reported as less stressful in both cases, which accounts for the observed interaction effect. The pattern is similar for the engagement and worry subdimensions of the SSSQ, though the results are not statistically significant. These results reinforce that accent is not a primary driver of participant stress, but there may be some evidence of a warm-up effect such that participants become more comfortable with the second task. The difference could also arise from increasing comfort with the experimental setup, or from a combination of the two effects.
Table 2: Descriptive statistics for the distress subscale of the SSSQ, reported as mean (standard deviation).

| | Accented first | Non-accented first |
|---|---|---|
| First task | 1.60 (0.58) | 1.31 (0.36) |
| Second task | 1.23 (0.34) | 1.13 (0.17) |
4.3 Semi-structured interviews
Semi-structured interviews were conducted after each task to collect qualitative data about participants’ impressions of the tasks, the potential difficulties involved, and perceived stress. At the end of the experiment, participants were also asked questions that aimed to compare the tasks in terms of perceived difficulty, the speakers’ accent, and the level of perceived stress, including its physical manifestations. Transcripts of the interviews were prepared and then annotated for recurring topics. These topics referred to participants’ perception of the tasks, the challenges involved, and the stress experienced while performing the tasks.
One of the recurrent topics in the participants’ answers was task immersion, that is, how involved participants were in an interpreting task. Given the nature of the interpreted events (i.e., job interviews), participants reported feeling responsible for the interpreted content. Even though they were aware that they were involved in a research project, some of them said that the task was stressful because making errors could result in the candidate not securing the position she was applying for.
Actually, yes, I found it very stressful in the moment because I thought of it as I’m interpreting for these people in real life. (Participant 24)
I would say difficult because the stakes are so high (…). This is hiring a senior manager. And like, you know, I don’t want to blow it for her, and I don’t want him to get a wrong impression there. (Participant 2)
So I don’t think it’s possible to interpret without any stress at all, because we’re always in important settings where things matter to people. (Participant 4)
Participants were also asked questions related to stress while interpreting. In terms of stress factors, several participants referred to cognitive challenges that triggered a stress reaction, such as the need to memorize the content of the chunks (Participants 5, 24, and 35), information overload (Participant 25), and a fast delivery rate (Participant 5). When asked about bodily manifestations of stress, participants provided the following examples:
I found myself playing with a pen or trying to take notes and just moving around a lot with my hands. (Participant 24)
I noticed, especially at the beginning, my heart rate was really fast. I really noticed that. (Participant 2)
There is more tension in the body when stress increases. (Participant 3)
When I began I crossed my legs underneath me, which I sometimes do when I’m feeling nervous. (Participant 20)
I just found myself like shaking my head like feeling a little frustrated with myself. (Participant 21)
Psychological literature recognizes both negative (distress) and positive (eustress) aspects of stress (e.g., Selye 1974). This distinction was also visible in the answers to interview questions provided by participants. While some participants recognized that the experience of stress may be perceived as unpleasant and trigger anxiety, they also appreciated the positive aspects of stress, as exemplified by the following two passages:
I feel invigorated when I have to try to kind of work around things. And I also felt like I needed to make sure I wasn’t falling into any of the traps of false cognates, because I felt like there were several. (Participant 6)
It’s just the typical adrenaline rush. So being put on the spot. (Participant 3)
Accented speech was hypothesized to serve as a potential stress factor in this study. However, a quantitative analysis of the Short Stress State Questionnaire (SSSQ; Helton 2004) scores showed that the accented condition was not significantly more stressful than interpreting a non-accented interview. This finding may stem from US interpreters’ exposure to various accents of both English and Spanish, a factor that was also mentioned in some interviews. Some participants reported that since they have experience hearing and interpreting many accentual varieties of English or Spanish, the accented speech they interpreted was not a problematic factor that could affect their performance.
It was American English and I’ve been in the US for 10 years and that’s the American, that’s the English that I’m used to. So it wasn’t challenging at all in that sense. (Participant 23)
I live in that, in like the southern US. I’m kind of already accustomed to that. Some of my co-workers talk like that and sometimes I catch myself speaking like that. So that was easier. (Participant 17)
I lived in Spain for, for six years so I’m very used to hearing Spanish accents. (Participant 2)
Analysis of interviews pointed to two pertinent methodological considerations. First, when studying stress in the context of interpreting, there may be a confluence of stress resulting from experimental manipulation (independent variable) and stress triggered by a test situation. In other words, what might have increased the level of stress experienced by participants in this study is not only accented speech, but also the fact that participants might have been anxious about their performance being thoroughly examined and quantified in a research study setting. This potential observer effect is noted by Participant 1:
Yes, I found it stressful. Just, you know, like the testing situation. (Participant 1)
Second, in their answers many participants discussed the effects of repeated exposure to an interpreting task. For the sake of experimental control, the interviews were carefully matched for the type of interpreted event, topic, length, and level of complexity. Having interpreted the first interview, participants could have anticipated the content of the second interpreted event and the potential challenges involved. These observations prompted us to test the potential impact of task order on the stress experienced by participants, an effect that is supported by the analysis of the SSSQ scores. The potential role of task order can be seen in the following excerpts from the interviews:
I feel like I was more confident in my interpretation and kind of could anticipate more what was coming. (Participant 8)
I feel like the first exercise was more stressful and I don’t know if it’s because it was totally unfamiliar. And the second one was similar, and so I had some basis of knowledge for it. It wasn’t the first time that I had heard it. So I had time to think a little bit about some of the terms. (Participant 9)
I believe the first one felt more difficult because it was my first encounter, and I got a bit more comfortable after the first one. And I was a bit less nervous so yeah the first one did feel a bit more difficult. (Participant 22)
However, Participant 20 suggested that the second task might have been more stressful as a result of accumulated stress from both tasks:
It felt like a compounding effect from what little stress I had accrued from the first task. (Participant 20)
In summary, semi-structured interviews were conducted to triangulate the quantitative data from psychometric tools with a qualitative approach. The answers provided by the participants offered a more nuanced insight into the stress experienced by community interpreters, along with its physiological manifestations, during the experimental tasks. They also offered a plausible explanation of the obtained results: the lack of statistically significant differences between the accented and non-accented conditions may result from US interpreters’ extensive exposure to accents in English and Spanish. The answers also appear to support the SSSQ results suggesting a possible practice effect (stress experienced in response to the first vs. the second task).
5 Discussion
As noted, research on interpreters has pointed to the challenging nature of non-native and strong accents for interpreting performance (e.g., Kurz 2008; Han and Riazi 2017; Lin et al. 2013). However, experimental research on regional accent as a potential stress factor in interpreting has to date been scarce. The present study aimed to elucidate the potential relationship between regional accentual varieties and community interpreters’ self-reported stress. To this end, we presented participants with two varieties of American English accent, one considered General American English (GA) and the other regional, and adopted the Short Stress State Questionnaire (SSSQ; Helton 2004) to measure differences in the level of stress experienced as a result of the interpreting task performed. In general, no statistically significant differences were found between the two conditions, suggesting that regional American accents might not pose a significant challenge to US community interpreters. Moreover, this finding may also suggest that the role of accent in interpreting, as discussed in previous research, may be overly simplified and requires further consideration of various factors that may moderate this relationship, including the mode of interpreting.
Study results suggested a possible practice effect, in which interpreters were less stressed after the second task relative to the first. Qualitative data from the semi-structured interviews suggest a high level of task immersion among study participants. In addition, these data support the idea that interpreters may be most stressed when starting to interpret during an event, perhaps as a result of not knowing the details of the content of a conversation. As soon as the participants had a better sense of the task, the level of stress appeared to decrease. In other words, the challenging aspect does not appear to be the content or linguistic characteristics of the interpreting tasks, but the interpreting task itself. Given the observed task order effect on self-reported stress in an experimental setting, participants’ stress may result from the task rather than from the experimental variable tested.
This research serves as a starting point to discuss the interrelationship between accent and stress in interpreting, taking into account potential individual difference variables such as self-efficacy as a moderating variable. While the statistical testing is inconclusive at this stage – and a lack of statistically significant differences cannot be understood to be a lack of relationship between these variables – the omnibus statistical model that accounts for both participant-level variables alongside experimentally-controlled variables provides a framework to account for the multi-dimensional nature of this type of work. Moreover, the qualitative data aligns with a perceived influence of specific source-language characteristics on the interpreting task.
Moving forward, additional research could elucidate whether stress experienced by community interpreters is related to interpreting quality. A negative correlation between stress and interpreting accuracy was observed in a study by Korpal (2017) involving conference interpreters and interpreting trainees as participants and the speaker’s delivery rate as an independent variable. However, a similar relationship has yet to be examined in a community interpreting setting involving regional accentual varieties or accents more generally. Future research could also address different aspects of accent and their role in interpreting. Previous studies appear to have concentrated on non-native speakers of English who used English in a source speech to be interpreted (e.g., Kurz 2008; Lin et al. 2013; Staszewska 2020). In this study, we instead focused on native speakers who may use either an accent that is considered standard or a more regional variety, the latter being potentially less familiar to speakers of that language. In general, the notion of accent, its conceptualizations, and its role in interpreting should be addressed in further research involving various language pairs and interpreting settings. Given the broad range of speakers with whom community interpreters regularly interact, interpreting in social services, immigration and asylum, medical, and legal settings seems particularly well suited for future investigation. Finally, more research is required to empirically test potential stress factors in community interpreting. Most experimental research on stress factors in interpreting has involved conference interpreters (e.g., AIIC 2002; Korpal 2017; Moser-Mercer et al. 1998; Moser-Mercer 2005), while research on stress and the psychological effects of community interpreting has mainly relied on survey-based and ethnographic methods (e.g., Hetherington 2011; Mehus and Becher 2016).
There are inherent trade-offs in experimental research that ought to be recognized as part of this study’s limitations. As with many cognitive studies on interpreting, the sample size is relatively small compared with other psychological or psycholinguistic studies that can more readily recruit participants. This study prioritized the recruitment of practicing community interpreters and interpreter trainees in order to isolate accent as a potential variable in the interpreting task; however, subsequent studies might be expanded to include non-professional interpreters as another participant-level variable to augment the participant pool. Moreover, the study was conducted online to enable a wider participant pool. Whereas this experimental setup allows a more heterogeneous group and moves away from a narrow convenience sample, there may be difficulty with task immersion when working with pre-recorded materials. Nevertheless, the qualitative data suggest substantial immersion during the interpreting tasks, such that there is less concern associated with participant engagement with the experimental task.
The experiment was designed to mimic the working conditions of community interpreters insofar as the participants were not given a warm-up task to get used to the experimental setup. As Mellinger and Hanson (2022) note, experimental research in cognitive translation and interpreting studies can approach questions of ecological validity on at least three levels: the tasks, the materials, and the behavioral responses of participants. In this particular experiment, the goal was to provide participants with an interpreting task resembling real assignments, which typically afford interpreters only general information about the task before interpreting, in an effort to isolate accent as a potential problem trigger. The task itself did not feature any affect-laden language or other complex language that may occur in community interpreting settings. That said, the decision not to include a warm-up interpreting task could have resulted in elevated stress during the interpreting task and led to the observed order effect. Order effects have been observed in previous experimental studies and are inherent to translation and interpreting tasks (e.g., Mellinger and Hanson 2018), particularly since these tasks, by their very nature, can only be performed in sequence rather than concurrently. This task order interaction ultimately raises new questions about stress and interpreting and the extent to which the onset of the interpreting task may result in elevated stress, such that experimental designs may need to include a warm-up period to offset this task-related stress trigger.
6 Conclusions
This study examined the relationship between accent and stress among US-based professional and student interpreters. Although both accent and stress have been extensively researched in interpreting studies, these variables are rarely combined in the same experimental study in a way that allows the role of accented speech in community interpreting to be empirically tested. The results suggest that accented language within a country may not be a considerable influence on interpreter stress, which in turn suggests that findings from previous research on simultaneous and conference interpreting, in which accented speech has typically been defined as speech produced by a non-native speaker, may not extrapolate neatly to other settings or contexts. Further research is needed to elucidate various conceptualizations of accent in interpreting and the impact of accent on interpreting performance. Given the inconclusive nature of the results, the study provides avenues for additional research and application within interpreter education, particularly in relation to accent-related factors. Moreover, the study provides a methodological framework for incorporating individual-level differences into analytical models so that participant variables can be accounted for alongside experimental task variables within a single study.
Funding source: Fulbright Program
Award Identifier / Grant number: Fulbright Senior Award
Acknowledgements
This research was conducted as part of Paweł Korpal’s Fulbright scholarship (Fulbright Senior Award) at the University of North Carolina at Charlotte.
Appendix

Post-task questions:
Do you have any general thoughts or impressions about this task?
Have you interpreted anything like this before?
Did you find this task to be easy or difficult?
What did you find difficult about the task?
Did the language used by either the English or the Spanish speaker cause any problems? This can be grammar, terminology, accent, speed, or any other features.
Did you find this task to be stressful? If so, why? What did you find stressful about the task?
Post-experiment questions:
Did you find any of the tasks more difficult?
Did you find any of the speakers’ accents challenging and if yes, why?
Was one of the tasks more stressful to you? If so, why? Did you physically react in some way to any of the tasks? It can be, for example, increased heart rate, sweating, trembling or a shaky voice.
References
AIIC (International Association of Conference Interpreters). 2002. Workload study – full report. (http://aiic.net/page/657/interpreter-workload-study-full-report/lang/1).Search in Google Scholar
Albl-Mikasa, Michaela. 2010. Global English and English as a lingua franca (ELF): Implications for the interpreting profession. Trans-Kom 3(2). 126–148.Search in Google Scholar
Albl-Mikasa, Michaela & Anne Catherine Gieshoff. 2025. Non-standard input in interpreting (research). In Christopher D. Mellinger (ed.), The Routledge handbook of interpreting and cognition, 205–223. New York: Routledge.10.4324/9780429297533-16Search in Google Scholar
Bandura, Albert. 1997. Self-efficacy: The exercise of control. New York: W.H. Freeman and Company.Search in Google Scholar
Cheung, Andrew K. F. 2013. Non-native accents and simultaneous interpreting quality perceptions. Interpreting 15(1). 25–47. https://doi.org/10.1075/intp.15.1.02che.Search in Google Scholar
Cheung, Andrew K. F. 2020. Interpreters’ perceived characteristics and perception of quality in interpreting. Interpreting 22(1). 35–55. https://doi.org/10.1075/intp.00033.che.Search in Google Scholar
Cheung, Andrew K. F. 2022. Listeners’ perception of the quality of simultaneous interpreting and perceived dependence on simultaneous interpreting. Interpreting 24(1). 38–58. https://doi.org/10.1075/intp.00070.che.Search in Google Scholar
Chevalier, Lucille & Daniel Gile. 2015. Interpreting quality: A case study of spontaneous reactions. Forum 13(1). 1–26. https://doi.org/10.1075/forum.13.1.01che.Search in Google Scholar
Colina, Sonia. 2025. Interpreting, phonetics, and phonology. In Christopher D. Mellinger (ed.), The Routledge handbook of interpreting and cognition, 135–150. New York: Routledge.10.4324/9780429297533-11Search in Google Scholar
Crezee, Ineke & Miranda Lai. 2022. Interpreters’ resilience and self-care during pandemic restrictions in Australia and New Zealand. New Voices in Translation Studies 27. 90–118.Search in Google Scholar
Crezee, Ineke & George Major. 2021. Maintaining our resilience as interpreters. International Journal of Interpreter Education 13(1). 1–3. Article 2 https://doi.org/10.34068/ijie.13.01.02.Search in Google Scholar
Crystal, David. 2003. A dictionary of linguistics and phonetics. Oxford: Blackwell.Search in Google Scholar
Díaz-Galaz, Stephanie. 2020. Listening and comprehension in interpreting: Questions that remain open. Translation and Interpreting Studies 15(2). 304–323. https://doi.org/10.1075/tis.20074.dia.Search in Google Scholar
García Becerra, Olalla & Ángela Collados Aís. 2019. Quality, interpreting. In Mona Baker & Gabriela Saldanha (eds.), The Routledge encyclopedia of translation studies, 3rd edn. 454–458. New York: Routledge.10.4324/9781315678627-97Search in Google Scholar
Gile, Daniel. 1995. Basic concepts and models for interpreter and translator training. Amsterdam: John Benjamins.10.1075/btl.8(1st)Search in Google Scholar
Gile, Daniel. 2009. Basic concepts and models for interpreter and translator training (revised edition). Amsterdam: John Benjamins.10.1075/btl.8Search in Google Scholar
Han, Chao & Mehdi Riazi. 2017. Investigating the effects of speech rate and accent on simultaneous interpretation: A mixed-methods approach. Across Languages and Cultures 18(2). 237–259. https://doi.org/10.1556/084.2017.18.2.4.Search in Google Scholar
Helton, William S. 2004. Validation of a short stress state questionnaire. In Proceedings of the human factors and ergonomics society annual meeting, vol. 48, 1238–1242.10.1177/154193120404801107Search in Google Scholar
Herman, James P. 2011. Central nervous system regulation of the hypothalamic–pituitary–adrenal axis stress response. In Cheryl D. Conrad (ed.), The handbook of stress: Neuropsychological effects on the brain, 29–46. Chichester: Blackwell Publishing Ltd.10.1002/9781118083222.ch2Search in Google Scholar
Hetherington, Ali. 2011. A magical profession? Causes and management of occupational stress in the signed language interpreting profession. In Lorraine Leeson, Svenja Wurm & Myriam Vermeerbergen (eds.), Signed language interpreting: Preparation, practice and performance, 138–159. Manchester: St. Jerome.
Holmgren, Helle, Hanne Søndergaard & Ask Elklit. 2003. Stress and coping in traumatised interpreters: A pilot study of refugee interpreters working for a humanitarian organization. Intervention 1(3). 22–27.
Hubscher-Davidson, Séverine. 2021. Ethical stress in translation and interpreting. In Kaisa Koskinen & Nike K. Pokorn (eds.), The Routledge handbook of translation and ethics, 415–430. New York: Routledge. https://doi.org/10.4324/9781003127970-31.
Kim, Ah-Young & In-Young Park. 2001. Construction and validation of academic self-efficacy scale. The Journal of Educational Research 39(1). 95–123.
Korpal, Paweł. 2017. Linguistic and psychological indicators of stress in simultaneous interpreting. Poznań: Wydawnictwo Naukowe UAM.
Korpal, Paweł. 2021. Stress experienced by Polish sworn translators and interpreters. Perspectives 29(4). 554–571. https://doi.org/10.1080/0907676X.2021.1889004.
Korpal, Paweł & Christopher D. Mellinger. 2022. Self-care strategies of professional community interpreters: An interview-based study. Translation, Cognition & Behavior 5(2). 275–299. https://doi.org/10.1075/tcb.00069.kor.
Kövecses, Zoltán. 2000. American English: An introduction. Ontario: Broadview Press.
Kurz, Ingrid. 2008. The impact of non-native English on students’ interpreting performance. In Gyde Hansen, Andrew Chesterman & Heidrun Gerzymisch-Arbogast (eds.), Efforts and models in interpreting and translation research: A tribute to Daniel Gile, 179–192. Amsterdam: John Benjamins. https://doi.org/10.1075/btl.80.15kur.
Labov, William, Sharon Ash & Charles Boberg. 2006. The atlas of North American English: Phonetics, phonology and sound change: A multimedia reference tool. Berlin: Mouton de Gruyter. https://doi.org/10.1515/9783110167467.
Lazarus, Richard S. & Susan Folkman. 1984. Stress, appraisal, and coping. New York: Springer Publishing Company.
Lee, Sang-Bin. 2014. An interpreting self-efficacy (ISE) scale for undergraduate students majoring in consecutive interpreting: Construction and preliminary validation. The Interpreter and Translator Trainer 8(2). 183–203. https://doi.org/10.1080/1750399X.2014.929372.
Lee, Sang-Bin. 2018. Exploring a relationship between students’ interpreting self-efficacy and performance: Triangulating data on interpreter performance assessment. The Interpreter and Translator Trainer 12(2). 166–187. https://doi.org/10.1080/1750399X.2017.1359763.
Lin, I-hsin Iris, Feng-lan Ann Chang & Feng-lan Kuo. 2013. The impact of non-native accented English on rendition accuracy in simultaneous interpreting. Translation & Interpreting 5(2). 30–44. https://doi.org/10.12807/ti.105202.2013.a03.
Mankauskienė, Dalia. 2018. Problem triggers in simultaneous interpreting from English into Lithuanian. Vilnius: Vilnius University doctoral dissertation.
Mazzetti, Andrea. 1999. The influence of segmental and prosodic deviations on source-text comprehension in simultaneous interpretation. The Interpreters’ Newsletter 9. 125–147.
McAllister, Robert. 2000. Perceptual foreign accent and its relevance for simultaneous interpreting. In Birgitta Englund Dimitrova & Kenneth Hyltenstam (eds.), Language processing and simultaneous interpreting: Interdisciplinary perspectives, 45–64. Amsterdam: John Benjamins. https://doi.org/10.1075/btl.40.05mca.
Mehus, Christopher J. & Emily H. Becher. 2016. Secondary traumatic stress, burnout and compassion satisfaction in a sample of spoken-language interpreters. Traumatology 22(4). 249–254. https://doi.org/10.1037/trm0000023.
Mellinger, Christopher D. & Thomas A. Hanson. 2017. Quantitative research methods in translation and interpreting studies. New York: Routledge. https://doi.org/10.4324/9781315647845.
Mellinger, Christopher D. & Thomas A. Hanson. 2018. Order effects in the translation process. Translation, Cognition & Behavior 1(1). 1–20. https://doi.org/10.1075/tcb.00001.mel.
Mellinger, Christopher D. & Thomas A. Hanson. 2022. Considerations of ecological validity in cognitive translation and interpreting studies. Translation, Cognition & Behavior 5(1). 1–26. https://doi.org/10.1075/tcb.00061.mel.
Moser-Mercer, Barbara. 2005. Remote interpreting: The crucial role of presence. Bulletin VALS-ASLA (Swiss Association of Applied Linguistics) 81. 73–97.
Moser-Mercer, Barbara, Alexander Künzli & Marina Korac. 1998. Prolonged turns in interpreting: Effects on quality, physiological and psychological stress (pilot study). Interpreting 3(1). 47–64. https://doi.org/10.1075/intp.3.1.03mos.
Ndongo-Keller, Justine. 2015. Vicarious trauma and stress management. In Holly Mikkelson & Renée Jourdenais (eds.), The Routledge handbook of interpreting, 337–351. London: Routledge.
O’Leary, Ann. 1992. Self-efficacy and health: Behavioral and stress-physiological mediation. Cognitive Therapy and Research 16(2). 229–245. https://doi.org/10.1007/BF01173490.
Pöchhacker, Franz. 2009. Issues in interpreting studies. In Jeremy Munday (ed.), The Routledge companion to translation studies, 128–140. New York: Routledge.
Rojo López, Ana María & Ana Isabel Foulquié Rubio. 2025. Interpreting, affect, and emotion. In Christopher D. Mellinger (ed.), The Routledge handbook of interpreting and cognition, 307–323. New York: Routledge. https://doi.org/10.4324/9780429297533-23.
Rojo López, Ana María, Ana Isabel Foulquié Rubio, Laura Espín López & Francisco Martínez Sánchez. 2021. Analysis of speech rhythm and heart rate as indicators of stress on student interpreters. Perspectives 29(4). 591–607. https://doi.org/10.1080/0907676X.2021.1900305.
Roziner, Ilan & Miriam Shlesinger. 2010. Much ado about something remote: Stress and performance in remote interpreting. Interpreting 12(2). 214–247. https://doi.org/10.1075/intp.12.2.05roz.
Rudvin, Mette & Elena Tomassini. 2011. Interpreting in the community and workplace: A practical teaching guide. Basingstoke: Palgrave. https://doi.org/10.1057/9780230307469.
Sabatini, Elisabetta. 2000. Listening comprehension, shadowing and simultaneous interpretation of two ‘non-standard’ English speeches. Interpreting 5(1). 25–48. https://doi.org/10.1075/intp.5.1.03sab.
Selye, Hans. 1936. A syndrome produced by diverse nocuous agents. Nature 138. 32. https://doi.org/10.1038/138032a0.
Selye, Hans. 1974. Stress without distress. Philadelphia: J. B. Lippincott.
Setton, Robin & Andrew Dawrant. 2016. Conference interpreting: A complete course. Amsterdam: John Benjamins. https://doi.org/10.1075/btl.120.
Staszewska, Ewa. 2020. The influence of non-native English source text on rendition accuracy in simultaneous interpreting. Poznań: Adam Mickiewicz University unpublished MA dissertation.
Sultanić, Indira. 2021. Interpreting traumatic narratives of unaccompanied child migrants in the United States: Effects, challenges and strategies. Linguistica Antverpiensia, New Series: Themes in Translation Studies 20. 227–247. https://doi.org/10.52034/lanstts.v20i.601.
Tiselius, Elisabet. 2025. Interpreting and language proficiency. In Christopher D. Mellinger (ed.), The Routledge handbook of interpreting and cognition, 238–253. New York: Routledge. https://doi.org/10.4324/9780429297533-18.
Wang, Caiwen. 2022. A theoretical model to elucidate the elusive concept ‘voice’ for interpreters. Perspectives 30(4). 569–584. https://doi.org/10.1080/0907676X.2021.1922472.
© 2025 the author(s), published by De Gruyter, Berlin/Boston
This work is licensed under the Creative Commons Attribution 4.0 International License.