The effects of recalling and imagining prompts on writing engagement, syntactic and lexical complexity, accuracy, and fluency: A partial replication of Cho (2019)

Open Access

  • Syed Muhammad Mujtaba, Barry Lee Reynolds, Yang Gao, Rakesh Parkash, and Xuan Van Ha
Published/Copyright: 22 December 2023

Abstract

This replication study examined the effects of writing prompt type on second language (L2) learners’ writing performance. Fifty undergraduate students enrolled in an academic and professional writing course wrote narrative essays about a past event (recalling group/high formulation demand condition) or a future event (imagining group/high conceptualization demand condition). Writers completed a freewriting draft and were then given unlimited opportunities to revise. The writing was subjected to syntactic complexity, fluency, accuracy, and lexical complexity analyses. Writer engagement was computed as the time spent revising drafts. The previous study’s results were confirmed in that the recalling group exhibited more complexity and less accuracy in their writing than the imagining group. The recalling group also exhibited a higher level of writing fluency and a higher level of engagement. Furthermore, the results of our study showed that the imagining group produced writing that was slightly more lexically complex than the recalling group. The pedagogical importance of writing prompts and their potential for affecting writing performance and writing engagement is discussed.

1 Introduction

Second language (L2) writing – a multidimensional and complex activity (Cumming 1990) – has received extensive research attention. Studies in the field of L2 writing are varied and have taken into consideration contexts, genres, purposes, teaching, and learning (Hirvela et al. 2016). Within this body of research, the growth ‘of writing as a literacy skill’ is being given due attention (Cho 2019, 2); however, less emphasis has been placed on the psycholinguistic facets of L2 writing development and production (Schoonen et al. 2009). Echoing the concept of writing-to-learn put forth by Manchón (2011), Ortega (2012) argued that L2 writing development can play a role in general language development. Moreover, apart from acting as a means ‘for L2 production and practice’ (Cho 2019, 2), writing can foster L2 learning by enabling learners to actively involve themselves in cognitive processes such as noticing, monitoring, and hypothesis testing – all essential for L2 development (Schoonen et al. 2009, Kormos 2011). The current study aimed to continue this direction of research by providing more empirical evidence for the potential of L2 writing as an avenue for language learning.

Many researchers have considered how L2 acquisition theories can inform L2 writing teaching and learning (e.g. Ruiz-Funes 2015, Johnson 2017, Kormos 2011). Robinson’s (2001) cognition hypothesis and Skehan’s (1998) limited attentional capacity model have been adopted to explain how task complexity affects the cognitive processes of writing, yet the results of the studies framed by these theories have been inconclusive (see Johnson 2017 for a discussion). How L2 writing paves the way for L2 learning remains elusive, mainly because studies have been conducted in rather constrained contexts. More specifically, the creativity of learners can potentially be stifled by writing prompts (Kuiken and Vedder 2007, Kormos and Trebits 2012, Frear and Bitchener 2015). Tasks with restrictive prompts lack the power to motivate L2 writers to make meaning or synthesize L2 knowledge (Kormos 2011). This increases the chances of limiting the creative process of writing and reducing individualized meaning-making (Cho 2019).

One of the initial attempts to address these issues was made by Cho (2019) in a US English as a second language (ESL) context. In the current study, we conducted a partial replication of Cho’s (2019) research to provide more cumulative empirical evidence of how writing prompts affect L2 learners’ writing production. There are several differences between Cho’s (2019) study and ours. First, Cho’s study was conducted with international students in the United States, while our study was conducted with Pakistani students learning English as an L2 (ESL). Second, Cho’s participants’ English proficiency ranged from mid-intermediate to advanced level, while our participants’ English proficiency was at an upper-intermediate level (B2 on the CEFR). Prompts have been defined as “the stimulus for the students to respond to” (Kroll and Reid 1994, 231); writing prompts can either allow creative liberty of interpretation or impose bounds and limits. In addition, prompts may include guidelines on the topics, structure, and strategies that can be used to frame the written content (Chlarson 2011, Cho 2019). Previous studies on L2 development and L2 writing have emphasized cognitive or linguistic aspects of L2 performance; however, L2 writing tasks used in classes can have unintentional effects on other aspects of task performance. Therefore, it is imperative to take into consideration both linguistic and nonlinguistic features of writing performance to comprehend the roles task features and prompts play in the quality of the writing produced by L2 learners.

Replication studies play a critical role in validating and building upon existing findings. They help verify the robustness and generalizability of prior research results, which is crucial for the advancement of scientific knowledge. By replicating Cho’s research, the current study aimed to contribute to the issue of complexity, accuracy, and fluency (CAF) in L2 development in several ways. First, we aimed to test the robustness of Cho’s findings by examining whether similar patterns emerge in a different linguistic context and with a different participant group. Second, replication allows us to extend Cho’s study by investigating potential boundary conditions, such as the impact of age, proficiency, or language background on CAF effects. Third, replicating Cho’s study allows for a comparative analysis of the phenomenon across multiple studies, enabling us to identify consistencies, discrepancies, or unique factors that contribute to CAF. Finally, by closely reproducing the original methodology, we can gain deeper insights into the underlying mechanisms of CAF and potentially uncover factors that were not fully explored in the initial study.

Given this rationale, replicating Cho’s study is valuable for advancing our understanding of CAF in L2 development. It helps validate, extend, and refine the original findings while contributing to the broader scientific discourse on the topic. The current study aims to provide additional layers of insight and context to the complex question of how writing prompts shape L2 writing performance and engagement.

2 Review of the literature

2.1 Prompts and writing performance

Writing production refers to how the writer creates meaning in their work using linguistic knowledge and experience (Smagorinsky 1991); written language production is affected by psychological factors and the context of the writing prompt (Cho 2019). Prompts usually point to the topic of the writing task (Yang et al. 2015). They may hint at an expected framework regarding the content (Way et al. 2008) and, thus, can be used to score the final written product. Individual differences in the written product, despite being informed by the same prompt, point toward the effect of a writer’s familiarity with the topic and the amount of support provided by the prompt (Cho 2019). Writing prompts that draw on background knowledge, reading material, and experiences are believed to foster the quality of the written product (Kroll and Reid 1994). Viewed through a cognitive lens, background knowledge or familiarity with the topic can assist writers in generating ideas because it lessens cognitive effort (Cho 2019); the freed resources can then be applied to other components of writing such as structural development and linguistic elaboration. Similarly, since accessing and retrieving linguistic resources are intertwined, the allocation of attention to goals and the writing strategies used could also affect the written output (Kroll and Reid 1994).

A strand of research has examined how differences in prompt type affect L2 learners’ writing performance (Cho 2019, Shaw and Weir 2007, Yang et al. 2015). Cho (2019) conducted a study to assess the effects of different types of prompts on L2 learners’ writing measured in terms of complexity, accuracy, fluency, and lexical complexity (CAFL). Fifty-one learners were selected and divided into two groups: recalling and imagining. The recalling group was instructed to write about their past success, whereas the imagining group was instructed to write about their future success. The writing performance of both groups was analysed with respect to CAFL. The results showed that the recalling group exhibited greater complexity but less accuracy, in contrast to the imagining group, which used more accurate but less complex sentences. Moreover, the learners who responded to the recalling prompt were more engaged in their writing, which was measured by taking into account the time participants spent revising their essays. Cho’s (2019) initial findings offered important pedagogical implications for teaching writing; therefore, more studies of this kind are needed to provide more empirical evidence of the benefits of prompts in L2 writing teaching and learning.

Aspects of writing quality and complexity have also been shown to be impacted by the different topics present in the prompts given to L2 writers (Yang et al. 2015, Kormos and Trebits 2012). Yang et al. (2015) assessed how different prompts influence overall writing output and syntactic complexity. Two writing topics, which differed in terms of cognitive demand, were used in their study. The first topic required the use of causal reasoning to support the author’s claim, whereas the second topic did not need such reasoning. The results revealed that the differential nature of writing topics affected different aspects of syntactic complexity. In particular, the writing task which did not ask learners to use causal reasoning was coupled with more elaboration at the finite clausal level. On the other hand, more subordination was observed when the learners wrote about the topic requiring causal reasoning.

Further insights into how prompts affect writing processes would help deepen the understanding of the relationship between writing output and prompts. The process-based writing model (Kellogg 1999) breaks writing into four components: planning, formulation, execution, and monitoring. Ideas are organized, and knowledge is retrieved from long-term memory, which is then converted into linguistic code, after which the motor aspect of writing is performed by hand, and finally, the writing is reviewed and edited (Kellogg 1999). Although Kellogg’s model portrays stages of cognitive processes and explains how a writer’s attention is distributed during production, it does not explain how these cognitive processes affect the production of language (Ruiz-Funes 2015).

Research on task complexity in L2 writing has led to inconsistent results (Ong and Zhang 2010, Kormos 2011, Ruiz-Funes 2015, Kuiken and Vedder 2007, Farahani and Meraji 2011), but a systematic review of the literature shows a positive relationship between task complexity and monitoring of L2 writing (Johnson 2017). Robinson’s (2001) cognition hypothesis model explains the relationship between prompts, the L2 writing process, and performance. Tasks vary in conceptualization demands. When tasks demand high levels of conceptualization, learners pay more attention to language form and use more complex grammatical structures for complex concepts. Thus, complex planning leads to complex formulation/encoding, which in turn increases the complexity and accuracy of the written material.

Kormos (2011) and Kormos and Trebits (2012) were skeptical about an automatic association between planning and formulation. They claimed that conceptual and formulation demands are not interrelated and could represent independent cognitive demands. Further, tasks can generate independent complexity demands in different phases of the writing process, depending on whether they require the expression of pre-selected content. For instance, Kormos and Trebits (2012) used two narrative activities (i.e. a story narration and a cartoon description) that differed with respect to structure and picture prompts to investigate this issue. Learners given the story narration activity had to construct a story based on a number of different pictures. Since the learners had to formulate a story according to the pictures given to them, the conceptualization demands were high, whereas the formulation demands were comparatively low; this was because the learners could regulate their stories according to the given pictures. On the other hand, the cartoon description task provided a clear description of the story, resulting in high formulation demands, since the learners had to narrate a pre-determined story; the conceptualization demands, however, were low. The study found that stories without pre-determined content possessed higher syntactic complexity in contrast to the stories with pre-determined content. Kormos and Trebits (2012) argued that task demands differed depending on the need to create ideas and the subsequent linguistic encoding.

Although Kormos and Trebits’ (2012) experiment showed that prompts can affect the written performance of L2 writers, we must consider the possibility that pictorial prompts may also constrain L2 writers’ creativity. As writing is a space for articulating one’s thoughts, an observation of how prompts influence the writing process in a comparatively unconstrained context is needed. As argued by Ortega (2012), this approach reflects the need to incorporate the notion of “writing as a literacy skill for L2 development in L2 pedagogy” (Cho 2019, 4).

2.2 Learner engagement and prompts

Engagement has been used to categorize the behaviour of students according to their involvement, commitment, motivation, and responsiveness (Zhang and Hyland 2018). Zhang and Hyland (2018) viewed engagement as an umbrella term for learners’ interest and willingness to use the linguistic skills they have acquired to improve their L2 performance. Engagement has also been referred to as the willingness of learners to voluntarily complete linguistic exercises to improve their language skills (Krause and Coates 2008). Other definitions of engagement have included learners’ attention to their learning material (Philp and Duchesne 2016), participation due to the expectation of achievement of learning goals and personal accomplishments (Chen et al. 2008), and the time learners concentrate solely on learning material and the task at hand (Beer et al. 2010).

Individual variations occur in engagement due to differences in interest towards certain topics or content (Poupore 2014), the concreteness of the topic (Tapola et al. 2014), and the difficulty level of the topic (Révész and Brunfaut 2013). Topics with personal implications tend to evoke stronger motivation than impersonal ones (Poupore 2014). Similarly, emotion-evoking topics lead to greater motivation to write (D’Argembeau and Van der Linden 2004), and positively stimulating topics and personal anecdotes may develop stronger learner engagement (Arnold et al. 2007).

Engagement of L2 learners has been measured in interactional classroom settings by observing peer-to-peer interaction (Baralt et al. 2016, Dörnyei and Kormos 2000); however, observations of L2 learner engagement during the completion of L2 writing tasks are more limited (Cho 2019). Some studies have concluded that the manner in which writing assignments or prompts are presented determines learners’ focus during revision (Peck 1990, Takagaki 2003). Witte (1983) claimed that the frequency of revision affected the topical structure of writing. Thus, building on the argument that prompts have the potential to affect L2 writing performance, eliciting some degree of affective and emotional responses towards the writing task or prompt could potentially lead to greater learner engagement (Dörnyei and Otto 1998). Therefore, the current research aims to investigate the following research questions:

  1. Do recalling and imagining prompts differentially affect the syntactic CAFL of learners’ L2 writing?

  2. Do recalling and imagining prompts differentially affect learners’ engagement in revision?

3 Methods

3.1 Participants

The current study involved 50 ESL learners enrolled in an undergraduate Academic and Professional writing course. The participants were at the upper-intermediate level as assessed by their university entrance test scores which were based on their reading, grammar, and writing performance. An upper-intermediate level of performance on this entrance test was equivalent to a B2 level on the CEFR.

The participants were randomly assigned to two groups: recalling (N = 22) and imagining (N = 28). During group allocation, we also took students’ willingness into account, anticipating that their preferences would contribute to the depth of information and data in their essays. This approach not only upheld ethical considerations but also respected participants’ autonomy by allowing them to write on topics aligned with their preferences and to avoid subjects they were unwilling to write about. The selection of the participants’ proficiency level was based on convenience sampling.

3.2 Tasks

There were two tasks, recalling and imagining, both of which were personally meaningful to the learners. The prompts were considered to evoke positive emotions, as both required personal narratives. Personal narratives require less planning than impersonal narratives (Polio and Glew 1996), making them suitable tasks for the current investigation. Similar to the experiment reported by Kormos and Trebits (2012), it was assumed that the two prompts imposed different cognitive demands: recalling tasks produce lower conceptual demands, as the content is predetermined, but greater linguistic demands to convert existing ideas into L2 writing. On the contrary, conceptual demands are higher when imagining future success because learners need to build up images and expand ideas. Linguistic demands, on the other hand, would be lower as the learners would be at liberty to mould their stories according to their language abilities.

In recalling tasks, the content is provided beforehand, reducing the need for generating new ideas. However, the challenge lies in expressing these ideas effectively in the target language, leading to greater linguistic demands. On the other hand, imagining tasks necessitate the creation of new mental images and the elaboration of ideas, placing higher conceptual demands on learners. In this scenario, learners have the creative freedom to craft their narratives, which can lead to reduced linguistic demands as they adapt their language use according to their proficiency.

3.2.1 Writing prompts

Both writing prompts were devised in consultation with the class teacher. Some modifications were applied to the initial prompts drafted by the researchers to align them more effectively with the language proficiency of the learners. These adjustments, involving rephrasing and adaptation, aimed to provide a clearer understanding of the prompts, reducing potential confusion and allowing students to swiftly initiate and draft their essays. The writing prompts were also validated by two ESL experts who were not involved in the study. The two prompts were expected to place dissimilar demands on the learners.

3.2.2 Recalling prompt

For the prompt targeting past success – the recalling condition – the learners were instructed to narrate significant past achievements or successful past events.

Think about the time when you were successful in achieving your goal. What was the goal? How hard did you work to achieve it? Why was that goal important to you? While writing this narrative essay, think about the following questions: who were the people involved in your achievement of the goal? Were there any hurdles in achieving it? What do you see yourself doing at that time?

3.2.3 Imagining prompt

For the imagining condition, the learners were asked to imagine themselves successful in the future and to describe their emotions and feelings in regard to this success.

You are in 2050, and you have achieved all you wanted in your life. Keeping this scenario in mind, write one narrative essay describing your future life. While writing, think about the following questions: What are you doing in the future? What will you have done with your life? What do you do on weekends/weekdays?

3.3 Study procedure

The first author explained the procedure of the experiment to the participants at the onset of the study. Both tasks were divided into two stages. In the first stage – freewriting – the participants were instructed to type for 15 min without monitoring their writing for errors; once the time was up, the learners were asked to save their typed writing. This was considered the first draft. During the second stage – revision – participants could consult online dictionaries and revise their writing. During this stage, however, the participants would be signalled every 3 min, via an on-screen message, inquiring if they wanted to continue with their revision or if they wanted to submit their document. Each message bypassed would count as a successive revision attempt. Participants were not time-bound in the second part of the task; thus, the amount of time required by the participants for revision was not the same, varying between 3 and 30 min in length. The writing process, inclusive of composing the first draft for 15 min and revising it, ranged from 18 to 45 min for each participant.
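
To make the revision-stage procedure concrete, the following is a toy sketch (not the software actually used in the study) of the signalling logic described above: after the 15-minute freewrite, the writer is prompted every 3 minutes to continue or submit, and each interval used counts as a further revision attempt. The console prompt and the 30-minute cap are assumptions for illustration.

```python
import time

PROMPT_INTERVAL_MIN = 3   # on-screen message appears every 3 minutes
REVISION_CAP_MIN = 30     # revision time in the study ranged from 3 to 30 minutes


def run_revision_stage() -> int:
    """Return the number of 3-minute revision intervals the writer used."""
    attempts = 0
    while attempts * PROMPT_INTERVAL_MIN < REVISION_CAP_MIN:
        time.sleep(PROMPT_INTERVAL_MIN * 60)   # one revision interval
        attempts += 1                          # a bypassed message counts as one more attempt
        answer = input("Continue revising? [y/n] ").strip().lower()
        if answer != "y":
            break
    return attempts


# revision_attempts = run_revision_stage()  # would run for up to 30 minutes
```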

3.4 Writing performance measurement

The learners’ writing performance was measured through analysis of writing produced during the first stage – freewriting – to avoid the potentially intervening variable of revision. We adopted T-unit analysis to evaluate syntactic complexity using the online software L2 Syntactic Complexity Analyzer developed by Lu (2010) following similar procedures used in previous studies (Cho 2019, Barrot 2018). Although syntactic complexity can be computed in a number of different ways, Yoon and Polio (2017) stressed that length-based complexity indexes are more suitable for narrative writing. Therefore, syntactic complexity was computed using clauses per T-unit (CT) and the number of words per T-unit (WT).
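
As an illustration of these length-based indexes, here is a minimal Python sketch (our own illustration, not the code of Lu's analyser) that computes CT and WT once an essay has already been segmented into T-units and annotated for clause counts; the toy input and helper functions below are assumptions for illustration only.

```python
from typing import List


def clauses_per_t_unit(clause_counts: List[int]) -> float:
    """CT: total number of clauses divided by the number of T-units."""
    return sum(clause_counts) / len(clause_counts)


def words_per_t_unit(t_units: List[str]) -> float:
    """WT: total number of words divided by the number of T-units."""
    total_words = sum(len(t_unit.split()) for t_unit in t_units)
    return total_words / len(t_units)


# Toy example: three T-units containing 2, 1, and 2 clauses respectively.
t_units = [
    "Although I was nervous, I finished the race",
    "My family was proud",
    "I realised that hard work pays off",
]
clause_counts = [2, 1, 2]

print(round(clauses_per_t_unit(clause_counts), 2))  # 1.67
print(round(words_per_t_unit(t_units), 2))          # 6.33
```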

Accuracy was computed in two ways: the ratio of error-free T-units (EFT) and the total number of errors per 100 words (EW). These measures of accuracy have been used widely in previous studies (e.g. Cho 2019, Barrot 2018, Polio and Shea 2014). We considered both grammatical and lexical errors in our analyses. As controversies exist pertaining to the accuracy measures used for both written and spoken data (Polio and Shea 2014), decisions regarding accuracy must be clearly elucidated. We excluded from the error analyses punctuation, orthographic, and word choice errors that did not create a barrier to understanding the intended meaning of the sentence (Ferris 2011, Wigglesworth and Storch 2009). While punctuation and orthographic errors are self-explanatory, word choice errors considered barriers to understanding require further explanation. For example, we considered the sentence ‘It was a difficulty situation to manage.’ as containing a lexical error that did not impede understanding, as only the incorrect word form was used (i.e. difficulty instead of difficult). In contrast, the use of mission in place of mansion in the sentence ‘I will have owned a big mission.’ was considered as impeding understanding and was coded as a lexical error, because a different word than the one intended was used.
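
The two accuracy indexes reduce to simple ratios once the error annotation has been done by the raters; the following sketch (hypothetical helper functions, not part of the study's toolchain) shows the arithmetic.

```python
def error_free_t_unit_ratio(n_error_free: int, n_t_units: int) -> float:
    """EFT: percentage of T-units containing no grammatical or lexical error."""
    return 100 * n_error_free / n_t_units


def errors_per_100_words(n_errors: int, n_words: int) -> float:
    """EW: total number of errors normalised per 100 words."""
    return 100 * n_errors / n_words


# Toy example: 22 of 30 T-units are error-free; 25 errors in a 270-word draft.
print(round(error_free_t_unit_ratio(22, 30), 2))   # 73.33
print(round(errors_per_100_words(25, 270), 2))     # 9.26
```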

We upheld the rigor of our rating process throughout the study. Our first step was to establish a comprehensive grading rubric, which served as the framework for all subsequent procedures and ensured a consistent evaluation process. To ensure uniformity, we provided explicit instructions to the human raters, asking them to follow the practices of the principal, experienced rater. Furthermore, we held thorough discussions with the principal rater, addressing any concerns and deliberating on the various facets of the rating process.

After adhering to these measures, we assessed the reliability of our procedures. The essays were dual coded by the first and fourth authors. The outcomes of this coding exhibited a high level of reliability, as indicated by an intraclass correlation coefficient of 0.969 (95% confidence interval: 0.940–0.990), underscoring the consistency of our coding process.

Since fluency is a complex construct, it was defined in the current study as written production speed (Lennon 1990) and computed as the total number of tokens (total words) produced within the fixed time frame (Cho 2019). Computing fluency in writing is complicated because writing allows online revising and planning (Kellogg 1996). However, since the participants wrote freely to complete their first draft, the number of tokens produced while freewriting can be used to measure writing fluency (Cho 2019). Furthermore, because the participants were given equal time to free write, the total number of tokens produced could be taken to represent fluency (Johnson et al. 2012).

Lexical complexity was measured using the type-token ratio (TTR), computed by dividing the total number of word types by the total number of tokens. This process was automated by online software (Lu 2010), which has been widely used in previous studies (Cho 2019, Dewi 2017, Kim 2014; see Lu 2010 for a full description of the lexical complexity analyser). TTR is the most widely used method of computing the lexical diversity of texts (Chotlos 1944, Richards 1987). TTR is considered to be an accurate measure of lexical diversity; however, it can vary as a function of sample length (Fergadiotis et al. 2013). In this study, we employed TTR as a measure of lexical diversity to maintain consistency with previous research, including the study by Cho (2019).
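
For illustration, the fluency and lexical diversity measures described in the last two paragraphs can be sketched as follows; the simple regex tokenizer is a rough stand-in for the tokenization performed by the online analyser, so the numbers are illustrative only.

```python
import re


def tokenize(text: str) -> list:
    """Lowercase the text and extract word-like tokens (rough stand-in tokenizer)."""
    return re.findall(r"[a-z']+", text.lower())


def fluency(text: str) -> int:
    """Total words produced (TWP) in the timed freewriting draft."""
    return len(tokenize(text))


def type_token_ratio(text: str) -> float:
    """TTR: number of unique types divided by the number of tokens (length-sensitive)."""
    tokens = tokenize(text)
    return len(set(tokens)) / len(tokens)


draft = "I worked hard and I achieved the goal I had set for myself"
print(fluency(draft))                     # 13 tokens
print(round(type_token_ratio(draft), 2))  # 0.85 (11 types / 13 tokens)
```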

There are multiple ways of computing engagement. The current study aimed at understanding the behavioural engagement of the participants as reflected in the time spent on the given task (Maehr and Braskamp 1986). Therefore, the current study considered participants taking advantage of a greater number of revisions to have greater persistence, allowing for a measurement of their behavioural engagement in writing.

4 Results

4.1 Do recalling and imagining prompts differentially affect the syntactic CAFL of learners’ L2 writing?

Addressing the research questions sequentially, the first question sought to uncover whether there was a relationship between prompt type and the syntactic CAFL of the writing produced by the participants. Descriptive statistics for both participant groups’ writing are presented in Table 1. The results showed that the recalling group exhibited higher complexity, but their writing was less accurate than that of the imagining group. The imagining group produced 81% EFT, while the recalling group produced 73% EFT. When compared by errors per 100 words, the recalling group made an average of 12.14 errors, while the imagining group made an average of 7.28 errors. Moreover, the recalling group was found to be more fluent than the imagining group. The recalling group produced an average of 273.68 words during the 15-min time period, while the imagining group produced an average of 238.36 words. The two groups also differed somewhat in the lexical complexity of their writing (Table 1). However, further analysis was required to determine whether any of these differences were statistically significant.

Table 1

Descriptive statistics of the recalling and imagining groups’ L2 writing

| Measure | Index | Recalling (n = 22) M (SD) | Imagining (n = 28) M (SD) | Total (n = 50) M (SD) |
| Syntactic complexity | CT | 2.14 (0.204) | 1.59 (0.246) | 1.83 (0.359) |
| | WT | 14.86 (3.43) | 12.25 (4.49) | 13.40 (4.22) |
| Accuracy | EFT | 73.27 (7.75) | 80.96 (7.36) | 77.58 (8.39) |
| | EW | 12.14 (7.28) | 7.28 (4.05) | 9.03 (3.96) |
| Fluency | TWP | 273.68 (55.98) | 238.36 (52.76) | 253.90 (52.76) |
| Lexical complexity | TTR | 0.48 (0.083) | 0.59 (0.064) | 0.55 (0.091) |

Note. CT = number of clauses per T-unit; EFT = ratio of error-free T-units; EW = total number of errors per 100 words; TWP = total words produced; TTR = type-token ratio; WT = number of words per T-unit.

Although both groups of participants had similar language proficiency, we further controlled for any potential confounding effects of proficiency by entering language proficiency as a covariate. Before investigating the differential effects of prompt type on the six variables (i.e. CT, WT, EFT, EW, TWP, and TTR), the assumptions for multivariate analysis of covariance (MANCOVA) were tested. MANCOVA requires normality and equality of variance of residuals, so these assumptions were subjected to statistical testing. The normality of residuals was assessed using the Kolmogorov–Smirnov approach, confirming that the residuals of all variables were normally distributed (p = 0.200 CT; p = 0.200 WT; p = 0.200 EFT; p = 0.081 EW; p = 0.088 TWP; p = 0.200 TTR; p = 0.051 proficiency). In addition, Box’s M test (Box 1949), which is integral to the assumption of equality of variance–covariance matrices, verified the equality of covariance matrices of the dependent variables (DVs) across groups (p = 0.797). Levene’s test, assessing the equality of error variances across groups, revealed consistent variances (p = 0.136 CT; p = 0.231 WT; p = 0.977 EFT; p = 0.122 EW; p = 0.377 TWP; p = 0.075 TTR). A further Box’s M test (p = 0.915) likewise indicated equality of covariance matrices across groups. The collected data thus met the required assumptions for conducting MANCOVA.
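
For readers who wish to reproduce such assumption checks, the sketch below shows roughly how the normality and homogeneity-of-variance tests could be run in Python with scipy; the DataFrame, column names, and placeholder data are assumptions, and the original analysis was presumably run in SPSS, whose Kolmogorov–Smirnov test applies the Lilliefors correction, so the p values would not match exactly.

```python
import numpy as np
import pandas as pd
from scipy import stats

# Placeholder data: 22 recalling and 28 imagining participants, one DV (CT).
df = pd.DataFrame({
    "group": ["recalling"] * 22 + ["imagining"] * 28,
    "CT": np.random.default_rng(1).normal(1.83, 0.36, 50),
})

# Normality of residuals (here: deviations from each group's mean).
residuals = df["CT"] - df.groupby("group")["CT"].transform("mean")
ks_stat, ks_p = stats.kstest(residuals, "norm",
                             args=(residuals.mean(), residuals.std(ddof=1)))

# Homogeneity of variances across the two prompt groups (Levene's test).
lev_stat, lev_p = stats.levene(
    df.loc[df["group"] == "recalling", "CT"],
    df.loc[df["group"] == "imagining", "CT"],
)

print(f"KS p = {ks_p:.3f}, Levene p = {lev_p:.3f}")
```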

MANCOVA was used to determine whether the two groups differed significantly with respect to syntactic CAFL, with language proficiency entered as a covariate; the alpha level for this study was set at 0.05. The interaction between condition and proficiency was not statistically significant (Wilks’ lambda = 0.965, p = 0.954). The multivariate tests indicated a significant effect of condition, F(6, 42) = 25.44, p < 0.001, Wilks’ lambda = 0.216, partial η² = 0.784, and a non-significant effect of proficiency, F(6, 42) = 1.047, p = 0.409, Wilks’ lambda = 0.870, partial η² = 0.13. After controlling for participants’ proficiency, writing prompt (recalling vs imagining) was shown to have a significant effect on all factors investigated (Table 2).
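
A hedged sketch of how such an omnibus test could be fitted in Python is given below: statsmodels’ MANOVA class fits a multivariate linear model, and entering proficiency in the formula alongside condition yields MANCOVA-style multivariate tests (Wilks’ lambda and related statistics) for each term. The file and column names are assumptions.

```python
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Assumed data file with columns: condition, proficiency, CT, WT, EFT, EW, TWP, TTR.
df = pd.read_csv("writing_measures.csv")

model = MANOVA.from_formula(
    "CT + WT + EFT + EW + TWP + TTR ~ condition + proficiency", data=df
)
print(model.mv_test())  # Wilks' lambda, Pillai's trace, etc. per term
```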

Table 2

MANCOVA for the condition effect on performance after controlling for proficiency

| Source | DV | Type III SS | df | MS | F | p | η² |
| Proficiency | CT | 0.005 | 1 | 0.005 | 0.088 | 0.768 | 0.002 |
| | WT | 8.626 | 1 | 8.626 | 0.518 | 0.475 | 0.011 |
| | EFT | 46.890 | 1 | 46.890 | 0.823 | 0.369 | 0.017 |
| | EW | 15.509 | 1 | 15.509 | 1.528 | 0.223 | 0.031 |
| | TWP | 4.052 | 1 | 4.052 | 0.002 | 0.969 | 0.000 |
| | TTR | 0.014 | 1 | 0.014 | 2.815 | 0.100 | 0.057 |
| Condition (recalling and imagining) | CT | 3.639 | 1 | 3.639 | 68.501 | 0.000 | 0.593 |
| | WT | 72.012 | 1 | 72.012 | 4.321 | 0.043 | 0.084 |
| | EFT | 772.433 | 1 | 772.433 | 13.564 | 0.001 | 0.224 |
| | EW | 293.161 | 1 | 293.161 | 28.879 | 0.000 | 0.381 |
| | TWP | 14953.444 | 1 | 14953.444 | 5.807 | 0.020 | 0.110 |
| | TTR | 0.132 | 1 | 0.132 | 25.994 | 0.000 | 0.356 |

Note. DV = dependent variable; CT = number of clauses per T-unit; EFT = ratio of error-free T-units; EW = total number of errors per 100 words; TWP = total words produced; TTR = type-token ratio; WT = number of words per T-unit.

To assess the practical significance of the observed differences in students’ writing performance attributed to the different writing prompts, we calculated effect sizes using η². Effect size indicates the proportion of variance in writing performance that can be attributed to the influence of writing prompts. Specifically, Table 2 displays the effects of proficiency and condition on the DVs, with η² values quantifying the proportion of explained variance. The influence of proficiency on the DVs (CT, WT, EFT, EW, TWP, and TTR) was minor, with η² values ranging from 0.002 to 0.031. Conversely, the condition (recalling vs imagining) had a substantial impact, most notably on CT, with an η² of 0.593, suggesting that 59.3% of the variance in clauses per T-unit stems from condition differences. The condition effect on the other DVs was also notable, with η² values ranging from 0.084 to 0.381, highlighting the effect of prompt type on variations in writing performance.
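
For clarity, the effect sizes referred to here follow the standard formulas below; assuming the tabled values are partial η² (as labelled for the multivariate tests above), they are computed from the sums of squares as

```latex
\eta^2 = \frac{SS_{\text{effect}}}{SS_{\text{total}}},
\qquad
\eta_p^2 = \frac{SS_{\text{effect}}}{SS_{\text{effect}} + SS_{\text{error}}}
```

Table 2 does not report the error sums of squares, so the values cannot be re-derived from the table alone; the formulas are shown only to clarify what the η² column quantifies.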

4.2 Do recalling and imagining prompts differentially affect learners’ engagement in revision?

The second objective of this research was to examine how engagement as manifested by revision was affected by the prompts. Both groups were allowed to make as many revisions as they needed, with learners in the recalling group performing an average of 5.23 revisions (SD = 2.6) and participants in the imagining group performing an average of 3.28 revisions (SD = 1.21).

An analysis of covariance (ANCOVA) was run to examine the difference in revision behaviour of the two groups with proficiency entered as the covariate (Table 3). Proficiency did not affect learner engagement (p = 0.081); however, learner engagement was affected by task condition (p = 0.002).
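
A sketch of how such an ANCOVA could be reproduced in Python is shown below; the data file, column names, and use of sum-coded contrasts for the condition factor are assumptions made so that the Type III sums of squares mirror the layout of Table 3.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Assumed data file with columns: condition, proficiency, revisions.
df = pd.read_csv("revision_counts.csv")

# Regress revision attempts on prompt condition with proficiency as a covariate.
fit = smf.ols("revisions ~ C(condition, Sum) + proficiency", data=df).fit()
print(sm.stats.anova_lm(fit, typ=3))  # Type III SS, F, and p per source
```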

Table 3

Revision engagement of the recalling and imagining groups after controlling for proficiency

| Source | Type III SS | df | Mean square | F | p |
| Proficiency | 11.488 | 1 | 11.488 | 3.175 | 0.081 |
| Condition (recalling and imagining) | 37.442 | 1 | 37.442 | 10.346 | 0.002 |
| Error | 170.089 | 47 | 3.619 | | |
| Total | 1085.000 | 50 | | | |

4.3 Summary of results

In sum, our analyses ascertained the distinct impacts of recalling and imagining prompts on the syntactic CAFL of participants’ L2 writing. Descriptive statistics in Table 1 revealed that the recalling group exhibited higher complexity yet less accuracy compared to the imagining group, while the imagining group showed slightly higher lexical diversity. η² values (e.g. 0.593 for CT) illustrated significant effects of prompt condition on the writing measures, with only minor contributions from proficiency (η² values of 0.002–0.031), as depicted in Table 2.

Furthermore, the research investigated revision engagement under different prompts. Participants in the recalling group performed more revisions (M = 5.23, SD = 2.6) than those in the imagining group (M = 3.28, SD = 1.21), as confirmed by a significant mean difference. ANCOVA, with proficiency as the covariate, demonstrated that proficiency did not influence engagement (p = 0.081), while task condition did (p = 0.002), as detailed in Table 3.

5 Discussion

5.1 Prompts and L2 writing performance

The first question addressed how prompts potentially influence the syntactic complexity, fluency, accuracy, and lexical complexity of L2 learners’ writing. The results of the study showed that prompt type affected all of the targeted variables. The recalling group participants produced more syntactically complex sentences (i.e. CT and WT) than the participants in the imagining group. However, the imagining group produced more accurate sentences (i.e. EFT and EW) than the participants in the recalling group. This result is not in line with the study by Kormos and Trebits (2012), which revealed that learners exhibited higher complexity and less accuracy in a task that was conceptually more demanding. The difference in results could be attributed to differences in study procedures. While Kormos and Trebits (2012) assessed the writing performance of participants who had presumably revised and edited their writing, the writing draft evaluated in the current study was the result of freewriting. A freewriting draft is more spontaneous and requires less planning and monitoring compared to a final draft. Although during freewriting there exists a possibility that learners will revise and monitor the writing they are producing, the cyclic nature of writing may influence L2 learners’ writing performance. Moreover, since the current study aimed to ascertain the influence of prompts on both groups’ writing performance and writing engagement, it was necessary to have two separate phases in the writing process: freewriting (first draft) and revision. We investigated the learners’ performance through an examination of their freewriting to avoid potentially unwanted influences of revision on performance. For example, it is highly likely that more engaged learners (i.e. learners who exhibit more revision behaviour) would end up performing better than less engaged learners. As the current study was designed with an interest in examining the direct influence of prompts on learners’ writing performance, it was necessary for us to control for revision by analysing the first draft written by the learners.

The results of the current study indicate that syntactic complexity depends on the type of cognitive demand. Resource-directing tasks increase formulation demands by directing learners to attend to language form, thereby increasing complexity. On the other hand, resource-dispersing tasks disperse attention from language forms, thereby increasing conceptualization demands. The current study featured a recalling prompt as part of a resource-directing task, leading learners to search within their linguistic repertoires to describe events without having to tax themselves by imagining something new that has never happened. Imagining prompts serve as resource-dispersing tasks as they stretch learners’ attention for idea generation and conceptualization while reducing the burden of formulation as the narrative can be less detail specific (Cho 2019).

The current study observed a reciprocal (trade-off) relationship between the complexity (i.e. CT and WT) and accuracy (i.e. EFT) of the recalling condition writings, which points to the possibility that increasing the complexity of a text increases the chances of errors (Cho 2019). The current and previous studies support the trade-off hypothesis (Skehan 1998): gains in complexity come at the expense of accuracy, and vice versa. Although trade-offs have generally been observed in spoken production, witnessing similar trade-offs in writing tasks is intriguing. The process of writing is less time bound than speaking, and it gives writers more opportunities to self-monitor, thus minimizing the likelihood of committing errors. However, the results of our study indicated that it was difficult for the learners to focus on both the complexity and accuracy of their L2 writing (Kormos and Trebits 2012).

Writing is a cyclical process. Thus, the possibility exists that the participants focused on creating sentences and postponed error corrections until they started revising. The participants in this study were informed that they would receive time to edit their drafts, so they may not have aimed for error correction in the first draft. A follow-up analysis was conducted to see whether a larger number of errors was rectified in the subsequent revision by the recalling group (who had made more errors than the imagining group in the first draft). The analysis showed that there was no significant difference (p > 0.05) in the number of errors corrected during revision between the recalling and imagining groups, further supporting the trade-off hypothesis.

Fluency was found to be dependent on prompt type. The participants in the recalling group exhibited higher fluency than the imagining group participants. This was also the case for the participants in the study by Cho (2019). Prompts that require participants to write narratives about personal experiences do not require as much planning (Polio and Glew 1996). Consequently, in a recalling task, the conceptual demands are relatively low, as the story is pre-determined. This might account for the higher level of writing fluency exhibited by the learners in the recalling group.

Lexical complexity was affected by the cognitive demands of the tasks. The writing of the participants in the imagining group exhibited higher lexical complexity than the recalling group participants’ writing. This finding contradicted the results of the studies by Cho (2019) and Kormos and Trebits (2012). There could be a number of reasons for the increased lexical complexity of the imagining group’s writing. Skehan’s (1998) trade-off hypothesis, which states that improvement in one area comes at the expense of another, is one explanation for this result. The imagining group’s lower fluency can be explained by the additional time invested in improving the lexical complexity of their writing.

The current study points towards the possibility that cognitive demand varies with the nature of the prompt, and the type of demand affects learning potential. Prompts that impose a formulation burden lead to increased syntactic complexity but with a reduction in accuracy. Writing tasks built around recalling past events generate cognitive demands in linguistic formulation, thus leading learners to use syntactically complex sentences at the expense of a decrease in linguistic accuracy. The findings of the present study echo the results of the studies by Kormos (2011), Kormos and Trebits (2012), and Cho (2019). These three studies indicated that, through manipulation of prompts, independent cognitive demands can be created in the writing production process, either in linguistic encoding or in conceptualization.

5.2 L2 writing and learner engagement

The second question aimed to understand how prompts have the potential to affect the engagement of writers during revision. Participants in the recalling group were more actively engaged in revision than those in the imagining group. There are two likely causes for this outcome. The first may be that the participants completing the recalling task had a more specific prompt than those completing the imagining task. The recalling group wrote about a true event from memory and thus may have felt the need to match the details of what they had written with what had actually occurred. In other words, because the participants in the recalling group crafted narratives centered on actual memories, they may have been inclined to ensure that the content they produced corresponded closely with the factual elements of the event they were describing, and this inclination might have influenced their revision behaviour.

On the other hand, the imagining group had more leeway regarding the content of their story, as it was created for the occasion and did not need to match any actual event. Moreover, it is arguably easier to write in rich detail about a success one has actually experienced than about an imagined one, so the imagining group may have had less content to work with, minimizing the need to edit and revise.

The second cause relates to the relevance of the topics and the perceived content of the prompts, both of which were personally relevant to the participants (Chlarson 2011). Intrinsic motivation usually arises from positive emotional judgement (Horwitz 2001), and as both prompts were interesting and positive in nature, this may have motivated the writers to be more engaged (Harackiewicz et al. 2016). The difference in engagement between the tasks may have emerged from greater interest in and motivation towards recalling a success that has already occurred than imagining one that has yet to happen. Past studies have shown that events which are temporally closer to learners are likely to be envisaged more intensely, resulting in robust sensory details and clearer depictions (D’Argembeau and Van der Linden 2004). This holds true for the current study, as participants in the imagining group invented and described scenarios that were distant from their current situations (such as successes in their distant futures, when they would be in their 50s or 60s), while those in the recalling group wrote of successes that had occurred much more recently (for example, gaining the opportunity to go abroad to study). The directing of attention to an actual event, as well as a strong positive personal attachment to that event, may have contributed to the inclination towards repeated revision among the participants in the recalling group.

5.3 Implications

Cho (2019) investigated the influence of task complexity on L2 writing, focusing on prompts’ effects at different production stages and learners’ revision engagement. Two groups of English as a foreign language/ESL participants composed narrative essays – one recalling a past success (high formulation demand) and the other imagining a future success (high conceptualization demand). Findings revealed the recalling prompt led to more complex but less accurate sentences and increased revision engagement compared to the imagining prompt. The study highlighted linguistic and behavioural prompt effects, providing pedagogical implications.

Our study aimed to validate Cho’s findings. Fifty undergraduates wrote essays about past and future events in response to recalling and imagining prompts. Consistent with Cho’s results, the recalling group exhibited higher syntactic complexity, fluency, and engagement, yet lower accuracy, compared to the imagining group. In addition, the imagining group demonstrated slightly greater lexical complexity. Both studies underscore the significance of writing prompts in shaping writing performance and engagement, suggesting their potential in educational contexts.

With the aforementioned differences between our study and the study by Cho (2019) in mind, the findings of the current study carry a number of educational implications. Firstly, the type of prompt instructors use will affect the writing produced by their learners. Prompts that place a greater linguistic than conceptual demand relieve writers of the burden of idea generation and allow them to allocate the freed resources to linguistic performance (Cho 2019). Examples of such prompts are those that require detailed descriptions of pre-existing circumstances, and those that contain several questions guiding the structure of the piece of writing rather than a single, more general idea. These prompts address several areas related to the topic, thereby giving learners more opportunities to consider an array of relevant responses.

This study has underscored the need for instructors to highlight the trade-off relationship between syntactic complexity and accuracy. To improve the accuracy of learners’ writing, teachers are encouraged to use model texts which are well organized and carefully planned. For instance, teachers can consider using prewriting exercises that give learners the opportunity to analyse model texts related to a specific writing genre so that learners come to understand the characteristics and features of that genre. Moreover, learners can be given added chances to edit and review their drafts for errors before turning in their final writing (Grabowski 2007, Granfeldt 2007). Our study found that learners’ primary focus during revision was not on error correction but on content expansion. Instructors may therefore need to do more than simply provide learners with time to revise or edit drafts for mistakes; they may need to provide guidance to enhance learners’ language awareness and attention to language form (Reynolds 2015, 2016).

The findings of the current study further indicate that personal, tangible, and positive emotion-evoking prompts are more likely to encourage learner engagement. Increased engagement provides additional opportunities for learners to focus on writing quality and linguistic performance (Cho 2019). Previous studies have shown that engagement in learning materials can be influenced by the nature of writing prompts, such as the type of emotion they provoke, or whether they are personal or impersonal (Cho 2019, Mueller and Kraus 2018). In addition, the current study showed that prompts also cater to the linguistic abilities of the learners, which provide more reason for instructors to utilize prompts as a motivational and educational tool (Cho 2019).

5.4 Limitations

While the current study was novel in several ways, there were also several limitations which should be borne in mind when considering the findings. Firstly, regarding the selection of complexity measures, we acknowledge that our decision to replicate the procedures used in the study by Cho (2019) might have limited our study. While our focus on structural subordination as a measure of grammatical complexity was intended to maintain consistency with the original study, we recognize that this approach has been criticized in the recent literature (Biber et al. 2020, 2021, Norris and Ortega 2009). The limitations of relying on a small number of measures are duly noted; future research can address this limitation by adopting a more diverse range of complexity measures.

Secondly, instead of examining overall writing quality, our analyses focused on key linguistic aspects (Yang et al. 2015, Way et al. 2008). This means that rhetoric and more general discourse features were not considered in our evaluations (Kormos 2011). Furthermore, our results were limited to a single genre of writing. This reduces the external validity of the findings; according to Yoon and Polio (2017), each genre of writing would yield different results in terms of syntactic CAFL. Thus, future researchers should consider approximate replications of our study using a similar prompt and task design but different writing genres, which would allow for investigations into writing performance across genres.

Thirdly, avoiding the potential influence of revision on writing performance while also investigating the effect of prompt types required analysing the learners’ writing performance in their first drafts. This limitation could be addressed in future studies that do not focus on writing prompts as a major variable of interest.

The last potential limitation relates to the measurement of learners’ engagement. The current study operationalized learner engagement based on their attempts at revision. Other methods of measuring engagement should be considered, such as classroom observation, self-report, and keystroke logging (Philp and Duchesne 2016). We also urge caution in interpreting TTR differences as solely indicative of lexical diversity, considering the potential confounding introduced by varying text lengths. Future researchers investigating lexical diversity may wish to explore more advanced measures that effectively address the influence of text length while investigating similar research questions. Future studies could also take a triangulated approach by collecting data on more than a single measurement of engagement.

6 Conclusion

The current study has considered how the prompts used in writing tasks can affect L2 writers’ engagement and the complexity, accuracy, and fluency of the writing they produce. The study has shown that prompts directing learners to recall true events resulted in greater formulation demands, which helped writers utilize their linguistic repertoires to construct more complex texts. By contrast, prompts directing learners to imagine events resulted in less complex but more accurate writing.

Acknowledgments

The authors are grateful to the anonymous reviewers who provided constructive comments on earlier drafts of this paper.

  1. Funding information: This research was partially funded by the University of Macau (project number MYRG2022-00091-FED). This research was also partially funded by the National Social Science Foundation grant ‘Chinese College Foreign Language Teachers’ Beliefs and Practices of Value-Based Instruction’ (grant# 23XYY005).

  2. Author contributions: Syed Muhammad Mujtaba: conceptualization, methodology, investigation, resources, and writing original draft; Barry Lee Reynolds: validation, writing – original draft, writing – review and editing, supervision, project administration, and funding acquisition; Yang Gao: validation and writing – review and editing; Rakesh Parkash: software, formal analysis, and data curation; Xuan Van Ha: validation and writing – review and editing.

  3. Conflict of interest: The authors state no conflict of interest. B.L.R. is a member of Open Linguistics’ Editorial Board. He was not, however, involved in the review process of this article. It was handled entirely by other Editors of the journal.

  4. Data availability statement: The datasets generated during and/or analysed during the current study are available from the fourth author (RP) on reasonable request.

References

Arnold, Jane, Herbert Puchta, and Mario Rinvolucri. 2007. Imagine that! Mental imagery in the EFL classroom. Cambridge, UK: Cambridge University Press & Helbing.

Baralt, Melissa, Laura Gurzynski-Weiss, and YouJin Kim. 2016. “Engagement with the language: How examining learners’ affective and social engagement explains successful learner-generated attention to form.” In Peer interaction and second language learning, p. 209–39. Amsterdam: John Benjamins Publishing Company. 10.1075/lllt.45.09bar.

Barrot, Jessie S. 2018. “Using the sociocognitive-transformative approach in writing classrooms: Effects on L2 learners’ writing performance.” Reading & Writing Quarterly 34(2), 187–201. 10.1080/10573569.2017.1387631.

Beer, Colin, Ken Clark, and David Jones. 2010. “Indicators of engagement. Curriculum, technology & transformation for an unknown future.” Proceedings Ascilite Sydney, 75–86. http://ascilite.org/conferences/sydney10/procs/Bee.

Biber, Douglas, Bethany Gray, Shelley Staples, and Jesse Egbert. 2020. “Investigating grammatical complexity in L2 English writing research: Linguistic description versus predictive measurement.” Journal of English for Academic Purposes 46, 1–15. 10.1016/j.jeap.2020.100869.

Biber, Douglas, Bethany Gray, Shelley Staples, and Jesse Egbert. 2021. The register-functional approach to grammatical complexity: Theoretical foundation, descriptive research findings, application. Philadelphia: Routledge. 10.4324/9781003087991.

Box, George E. 1949. “A general distribution theory for a class of likelihood criteria.” Biometrika 36(3/4), 317–46. 10.1093/biomet/36.3-4.317.

Chen, Pu-Shih D., Robert Gonyea, and George Kuh. 2008. “Learning at a distance: Engaged or not?” Innovate: Journal of Online Education 4(3). https://nsuworks.nova.edu/innovate/vol4/iss3/3.

Chlarson, Kelsey J. 2011. Effects of high-interest writing prompts on performance of students with learning disabilities. Master’s thesis, Utah State University. http://digitalcommons.usu.edu/etd/1089.

Cho, Minyoung. 2019. “The effects of prompts on L2 writing performance and engagement.” Foreign Language Annals 52(3), 576–94. 10.1111/flan.12411.

Chotlos, John W. 1944. “A statistical and comparative analysis of individual written language samples.” Psychological Monographs 56(2), 75–111. 10.1037/h0093511.

Cumming, Alister. 1990. “Metalinguistic and ideational thinking in second language composing.” Written Communication 7(4), 482–511. 10.1177/0741088390007004003.

D’Argembeau, Arnaud and Martial Van der Linden. 2004. “Phenomenal characteristics associated with projecting oneself back into the past and forward into the future: Influence of valence and temporal distance.” Consciousness and Cognition 13(4), 844–58. 10.1016/j.concog.2004.07.007.

Dewi, Ratna. 2017. “Lexical complexity in the introductions of undergraduate students’ research articles.” Jurnal Pendidikan Bahasa Inggris 6(2), 161–72. 10.26618/exposure.v6i2.1179.

Dörnyei, Zoltán and Judit Kormos. 2000. “The role of individual and social variables in oral task performance.” Language Teaching Research 4(3), 275–300. 10.1177/136216880000400305.

Dörnyei, Zoltán and István Ottó. 1998. “Motivation in action: A process model of L2 motivation.” Working Papers in Applied Linguistics 4, 43–69.

Farahani, Ali Akbar K. and Seyed R. Meraji. 2011. “Cognitive task complexity and L2 narrative writing performance.” Journal of Language Teaching and Research 2(2), 445–56. 10.4304/jltr.2.2.445-456.

Fergadiotis, Gerasimos, Heather H. Wright, and Thomas M. West. 2013. “Measuring lexical diversity in narrative discourse of people with aphasia.” American Journal of Speech-Language Pathology 22(3), 397–408. 10.1044/1058-0360(2013/12-0083).

Ferris, Dana. 2011. Treatment of error in second language student writing. Ann Arbor: University of Michigan Press. 10.3998/mpub.2173290.

Frear, Mark W. and John Bitchener. 2015. “The effects of cognitive task complexity on writing complexity.” Journal of Second Language Writing 30, 45–57. 10.1016/j.jslw.2015.08.009.

Grabowski, Joachim. 2007. “The writing superiority effect in the verbal recall of knowledge: Sources and determinants.” In Writing and cognition, edited by D. Galbraith and M. Torrance, p. 165–79. Bingley, UK: Emerald Group. 10.1108/S1572-6304(2007)0000020012.

Granfeldt, Jonas. 2007. “Speaking and writing in French L2: Exploring effects on fluency, complexity and accuracy.” In Complexity, accuracy and fluency in second language use, learning & teaching, edited by S. Van Daele, A. Housen, F. Kuiken, M. Pierrard, and I. Vedder, p. 87–98. Wetteren, Belgium: University of Brussels.

Harackiewicz, Judith M., Jessi L. Smith, and Stacy J. Priniski. 2016. “Interest matters: The importance of promoting interest in education.” Policy Insights from the Behavioral and Brain Sciences 3(2), 220–7. 10.1177/2372732216655542.

Hirvela, Alan, Ken Hyland, and Rosa M. Manchón. 2016. “Dimensions of L2 writing theory and research: Learning to write and writing to learn.” In Handbook of second and foreign language writing, edited by R. M. Manchón and P. K. Matsuda, p. 45–63. Berlin, Germany: De Gruyter. 10.1515/9781614511335-005.

Horwitz, Elaine. 2001. “Language anxiety and achievement.” Annual Review of Applied Linguistics 21, 112–26. 10.1017/S0267190501000071.

Johnson, Mark D. 2017. “Cognitive task complexity and L2 written syntactic complexity, accuracy, lexical complexity, and fluency: A research synthesis and meta-analysis.” Journal of Second Language Writing 37, 13–38. 10.1016/j.jslw.2017.06.001.

Johnson, Mark D., Leonardo Mercado, and Anthony Acevedo. 2012. “The effect of planning sub-processes on L2 writing fluency, grammatical complexity, and lexical complexity.” Journal of Second Language Writing 21(3), 264–82. 10.1016/j.jslw.2012.05.011.

Kellogg, Ronald T. 1996. “A model of working memory in writing.” In The science of writing: Theories, methods, individual differences and applications, edited by M. C. Levy and S. E. Ransdell, p. 57–71. Hillsdale, NJ: Lawrence Erlbaum Associates.

Kellogg, Ronald T. 1999. “Components of working memory in text production.” In The cognitive demands of writing: Processing capacity and working memory in text production, edited by M. Torrance and G. C. Jeffery, p. 43–61. Amsterdam: Amsterdam University Press.

Kim, Ji-young. 2014. “Predicting L2 writing proficiency using linguistic complexity measures: A corpus-based study.” English Teaching 69(4), 27–51. 10.15858/engtea.69.4.201412.27.

Kormos, Judit. 2011. “Task complexity and linguistic and discourse features of narrative writing performance.” Journal of Second Language Writing 20(2), 148–61. 10.1016/j.jslw.2011.02.001.

Kormos, Judit and Anna Trebits. 2012. “The role of task complexity, modality, and aptitude in narrative task performance.” Language Learning 62(2), 439–72. 10.1111/j.1467-9922.2012.00695.x.

Krause, Kerri L. and Hamish Coates. 2008. “Students’ engagement in first-year university.” Assessment & Evaluation in Higher Education 33(5), 493–505. 10.1080/02602930701698892.

Kroll, Barbara and Joy Reid. 1994. “Guidelines for designing writing prompts: Clarifications, caveats, and cautions.” Journal of Second Language Writing 3(3), 231–55. 10.1016/1060-3743(94)90018-3.

Kuiken, Folkert and Ineke Vedder. 2007. “Task complexity and measures of linguistic performance in L2 writing.” IRAL-International Review of Applied Linguistics in Language Teaching 45(3), 261–84. 10.1515/iral.2007.012.

Lennon, Paul. 1990. “Investigating fluency in EFL: A quantitative approach.” Language Learning 40(3), 387–417. 10.1111/j.1467-1770.1990.tb00669.x.

Lu, Xiaofei. 2010. “Automatic analysis of syntactic complexity in second language writing.” International Journal of Corpus Linguistics 15(4), 474–96. 10.1075/ijcl.15.4.02lu.

Maehr, Martin L. and Larry A. Braskamp. 1986. The motivation factor: A theory of personal investment. Lexington, MA: D.C. Heath.

Manchón, Rosa M. 2011. “Situating the learning-to-write and writing-to-learn dimensions of L2 writing.” In Learning-to-write and writing-to-learn in an additional language, edited by R. Manchón, p. 3–14. Amsterdam: John Benjamins. 10.1075/lllt.31.03man.

Mueller, Charles M. and William A. Kraus. 2018. “The effects of personalized prompts on Japanese EFL students’ written essays.” On CUE Journal 11(1), 25–50.

Norris, John M. and Lourdes Ortega. 2009. “Towards an organic approach to investigating CAF in instructed SLA: The case of complexity.” Applied Linguistics 30(4), 555–78. 10.1093/applin/amp044.

Ong, Justina and Lawrence J. Zhang. 2010. “Effects of task complexity on the fluency and lexical complexity in EFL students’ argumentative writing.” Journal of Second Language Writing 19(4), 218–33. 10.1016/j.jslw.2010.10.003.

Ortega, Lourdes. 2012. “Epilogue: Exploring L2 writing–SLA interfaces.” Journal of Second Language Writing 21(4), 404–15. 10.1016/j.jslw.2012.09.002.

Peck, Wayne C. 1990. “The effects of prompts on revision: A glimpse of the gap between planning and performance.” In Reading-to-write: Exploring a cognitive and social process, edited by L. Flower, V. Stein, J. Ackerman, M. J. Kantz, K. McCormick, and W. C. Peck, p. 156–69. New York: Oxford University Press. 10.1093/oso/9780195061901.003.0007.

Philp, Jenefer and Susan Duchesne. 2016. “Exploring engagement in tasks in the language classroom.” Annual Review of Applied Linguistics 36, 50–72. 10.1017/S0267190515000094.

Polio, Charlene and Margo Glew. 1996. “ESL writing assessment prompts: How students choose.” Journal of Second Language Writing 5(1), 35–49. 10.1016/S1060-3743(96)90014-4.

Polio, Charlene and Mark C. Shea. 2014. “An investigation into current measures of linguistic accuracy in second language writing research.” Journal of Second Language Writing 26, 10–27. 10.1016/j.jslw.2014.09.003.

Poupore, Glen. 2014. “The influence of content on adult L2 learners’ task motivation: An interest theory perspective.” The Canadian Journal of Applied Linguistics 17(2), 69–90.

Révész, Andrea and Tineke Brunfaut. 2013. “Text characteristics of task input and difficulty in second language listening comprehension.” Studies in Second Language Acquisition 35(1), 31–65. 10.1017/S0272263112000678.

Reynolds, Barry Lee. 2015. “Helping Taiwanese graduate students help themselves: Applying corpora to industrial management English as a foreign language academic reading and writing.” Computers in the Schools 32(3–4), 300–17. 10.1080/07380569.2015.1096643.

Reynolds, Barry Lee. 2016. “Action research: Applying a bilingual parallel corpus collocational concordancer to Taiwanese medical school EFL academic writing.” RELC Journal: A Journal of Language Teaching and Research 47(2), 213–27. 10.1177/0033688215619518.

Richards, Brian. 1987. “Type/token ratios: What do they really tell us?” Journal of Child Language 14(2), 201–9. 10.1017/S0305000900012885.

Robinson, Peter. 2001. “Task complexity, cognitive resources and syllabus design: A triadic framework for examining task influence on SLA.” In Cognition and second language instruction, edited by P. Robinson, p. 287–318. Cambridge: Cambridge University Press. 10.1017/CBO9781139524780.012.

Ruiz-Funes, Marcela. 2015. “Exploring the potential of second/foreign language writing for language learning: The effects of task factors and learner variables.” Journal of Second Language Writing 28, 1–19. 10.1016/j.jslw.2015.02.001.

Schoonen, Rob, Patrick Snellings, Marie Stevenson, and Amos van Gelderen. 2009. “Towards a blueprint of the foreign language writer: The linguistic and cognitive demands of foreign language writing.” In Writing in foreign language contexts: Learning, teaching, and research, edited by R. M. Manchón, p. 77–101. Clevedon, UK: Multilingual Matters. 10.21832/9781847691859-007.

Shaw, Stuart D. and Cyril J. Weir. 2007. Examining writing: Research and practice in assessing second language writing (Vol. 26). Cambridge: Cambridge University Press.

Skehan, Peter. 1998. “Task-based instruction.” Annual Review of Applied Linguistics 18, 268–86. 10.1017/S0267190500003585.

Smagorinsky, Peter. 1991. “The writer’s knowledge and the writing process: A protocol analysis.” Research in the Teaching of English 25(3), 339–64.

Takagaki, Toshiyuki. 2003. “The revision patterns and intentions in L1 and L2 by Japanese writers: A case study.” TESL Canada Journal 21(1), 22–38. 10.18806/tesl.v21i1.272.

Tapola, Anna, Tomi Jaakkola, and Markku Niemivirta. 2014. “The influence of achievement goal orientations and task concreteness on situational interest.” The Journal of Experimental Education 82(4), 455–79. 10.1080/00220973.2013.813370.

Way, Denise P., Elizabeth G. Joiner, and Michael A. Seaman. 2008. “Writing in the secondary foreign language classroom: The effects of prompts and tasks on novice learners of French.” The Modern Language Journal 84(2), 171–84. 10.1111/0026-7902.00060.

Wigglesworth, Gillian and Neomy Storch. 2009. “Pair versus individual writing: Effects on fluency, complexity and accuracy.” Language Testing 26(3), 445–66. 10.1177/0265532209104670.

Witte, Stephen P. 1983. “Topical structure and revision: An exploratory study.” College Composition and Communication 34(3), 313–41. 10.2307/358262.

Yang, Weiwei, Xiaofei Lu, and Sara Cushing Weigle. 2015. “Different topics, different discourse: Relationships among writing topic, measures of syntactic complexity, and judgments of writing quality.” Journal of Second Language Writing 28, 53–67. 10.1016/j.jslw.2015.02.002.

Yoon, Hyung-Jo and Charlene Polio. 2017. “The linguistic development of students of English as a second language in two written genres.” TESOL Quarterly 51(2), 275–301. 10.1002/tesq.296.

Zhang, Zhe V. and Ken Hyland. 2018. “Student engagement with teacher and automated feedback on L2 writing.” Assessing Writing 36, 90–102. 10.1016/j.asw.2018.02.004.

Received: 2022-03-16
Revised: 2023-08-13
Accepted: 2023-10-31
Published Online: 2023-12-22

© 2023 the author(s), published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
