Article Open Access

The effectiveness of online learning activities in a blended EAP course: relationships between language proficiency, online engagement, and language performance

  • Ying Zhou, Simon Sheridan and Qiwei Zhang

Published/Copyright: February 24, 2025

Abstract

This study examines the effectiveness of the pre-seminar online learning activities in a blended English for Academic Purposes (EAP) course, with a particular focus on learner engagement with these online activities. It investigates the influence of learners’ English proficiency levels on their online engagement and examines the impact of this engagement on their language performance. A mixed-methods study design was employed, collecting quantitative data from 131 students and qualitative interview data from a subset of 18 participants drawn from this cohort. Simple linear regression results revealed that learners’ English proficiency levels did not predict their online behavioural engagement but were a small, significant predictor of their online cognitive engagement. Multiple regression results indicated a medium-sized impact of online engagement on EAP learners’ overall final scores. Specifically, simple regression results showed that online cognitive engagement was a significant predictor of learners’ EAP reading, listening, and writing scores, while behavioural engagement significantly predicted writing and speaking scores. Qualitative insights highlighted the importance of interactions between learner and content, learner and socialiser, and learner and interface when designing the online learning activities of a blended EAP course. The study concludes with an effective blended EAP learning model that optimises the use of technological modalities, emphasises the importance of online pedagogical, social, and technological interactions, and highlights the integration between online and onsite lessons.

1 Introduction

Blended learning pedagogy has emerged as a new norm and established itself as a sustainable educational approach in the post-pandemic era (Li et al. 2022; Romaniuk and Łukasiewicz-Wieleba 2022). Serving as a bridge between remote learning and in-class learning, the blended approach integrates the merits of both methods and allows for complementary practices (Romaniuk and Łukasiewicz-Wieleba 2022). Many existing studies (e.g., Kintu et al. 2017; Li et al. 2022; Nazzal and Alradi 2020; Sadiq 2022; Tao et al. 2024; Zhou 2018) have confirmed the benefits of the blended educational mode over traditional classroom learning and purely online learning in different subjects, such as performing arts, educational psychology, and English. Traditionally, blended learning has been defined as a combination of online learning and face-to-face instructional models (Graham et al. 2013). This study follows a broader definition by placing the focal point of this novel pedagogy on its seamless integration of these two teaching models. This perspective defines blended teaching as a flexible integration of technology and curriculum, optimally utilising online and offline teaching theories, methods, and resources to enhance learning effectiveness, improve efficiency, and achieve desired learning outcomes (Tao et al. 2024).

The uniqueness of the blended course design lies primarily in its online component, which plays a key role in enhancing the overall learning experience. This component not only integrates with and complements the onsite component but also offers significant benefits, including increased flexibility, personalised learning opportunities, and improved learning outcomes (Hoić-Božić et al. 2016). However, the online component also presents challenges for both students and teachers, including issues related to technological competencies, sustained engagement, the quality of course design, and the adequacy of institutional support for online learning systems (Rasheed et al. 2020; Romaniuk and Łukasiewicz-Wieleba 2022; Wang et al. 2024). Among these challenges, the design of the blended course, particularly its online learning activities, is crucial, as it serves as a key determinant of the success of blended teaching and learning (Joosten et al. 2019; Kintu et al. 2017; Pima et al. 2018; Sadiq 2022).

Similar benefits and challenges have also been observed in China. In response to the national call for shifting from traditional EFL (English as a Foreign Language) teaching to blended teaching (National Advisory Committee on TEFL in Higher Education under the Ministry of Education 2020), this approach has been adopted across English learning contexts in China, from secondary schools to higher education. Nevertheless, the lack of coherent blended learning models has been noted by Chinese scholars as a significant problem hindering the development of blended pedagogy in China (Shi et al. 2021; Wang 2021; Wang et al. 2024; Zhang et al. 2020). To achieve high-quality blended pedagogy, Pima et al. (2018) emphasised the need for frameworks that prioritise enhancing both learner and teacher engagement while effectively integrating technological multimodalities. Additionally, Drysdale et al. (2013) emphasised the importance of using frameworks to guide blended teaching practitioners in making decisions about what to blend and how to blend. In the context of English language blended learning, there is a clear lack of a theoretical framework guiding the design of online activities across all four language skills, i.e., reading, listening, writing, and speaking. The present study aims to address this research gap by evaluating the effectiveness of the online learning activities in a blended EAP course and proposing an effective blended learning model for higher education institutions offering EAP courses. The findings provide valuable insights for understanding and implementing this innovative approach to English language teaching at the tertiary level.

2 Literature review

2.1 Online activities in blended English learning

In the context of blended English learning, past studies have suggested both generic and specific models. Wang et al.’s (2009) study summarised that a blended English course should comprise three key stages: 1) online preparation of basic knowledge, 2) face-to-face lecturing, and 3) online revision. Wang et al. (2009) recommended that the extent of participation, homework, and course exams be used to analyse and evaluate learner achievement. Nevertheless, this model remained at a procedural level and failed to provide specific insights into blended English course design, particularly regarding the details of the online learning activities. Aligned with these general procedures, Tao et al. (2024) provided more detailed approaches to their blended teaching mode, specifically regarding the online learning content. Prior to onsite classes, students were required to complete a series of self-study online tasks, including MOOCs, textbook-based tasks covering vocabulary, grammar, reading, listening, translation, and writing, and submit questions encountered during the online learning (Tao et al. 2024). One limitation of this blended model lies in its non-integration of speaking into the blended course. Another limitation is its lack of disclosure regarding how the online multimodal activities (e.g., videos, audio, texts) were organised and integrated with onsite learning activities. This study aims to address these theoretical gaps by proposing a blended EAP model that incorporates all four skills into the design of online activities and details how various online multimodal activities can be effectively integrated with each other and with onsite learning activities.

Although blended learning has been shown to improve English language performance across various contexts, its impact on specific language skills remains inconclusive. Hubackova and Ruzickova’s (2011) study revealed that the online activities were useful for improving grammar and vocabulary, which might allow opportunities for enhancing listening and reading competencies. Liu et al. (2020) demonstrated student improvements in listening and speaking through web-based autonomous learning. These discrepancies may arise from inconsistent designs of the online activities in the blended models. Regarding experimental studies comparing blended and traditional learning, Zhou (2018) found that blended learning significantly improved learners’ writing skills in the aspects of content relevance, logical structure, content sufficiency, and language expression. Tao et al.’s (2024) study showed that learners in the blended course outperformed those in traditional classrooms on CET6 (College English Test Band 6) in reading, listening, and writing. Similarly, Moradimokhles and Hwang (2022) found that students in blended learning scored higher on TOEFL reading, listening, and writing than those in traditional classrooms. However, neither Tao et al. (2024) nor Moradimokhles and Hwang (2022) included the speaking component in their blended course designs. Banditvilai (2016) and Nuri and Bostanci (2021) highlighted the positive effects of blended pedagogy on all four key language skills. Because these studies have examined the blended mode as a whole, little is known about whether its effectiveness is linked to specific online modalities or course designs. To address these gaps, this study aims to investigate the impact of the online learning activities on student performance across all four language skills and thereby evaluate the effectiveness of its design.

2.2 Online engagement and learner performance

Engagement is often explored in educational research as a key construct due to its potential contribution to academic success, higher attendance rate, and positive emotions (Fredricks 2015). Despite the variations in the definitions of the term, it is generally agreed that engagement comprises three core dimensions: cognitive, behavioural, and emotional engagement (Fredricks et al. 2016). First, cognitive engagement involves self-directed learning, employing deep learning strategies, and devoting efforts to comprehend complicated ideas; second, behavioural engagement is characterised by participation, attention, effort, and persistence; third, emotional engagement pertains to the degree of positive reactions toward school, teachers, or classmates, as well as the feeling of belonging and identification with the school or specific subjects (Fredricks et al. 2016).

The most commonly used method for assessing engagement is self-reported surveys (Fredricks et al. 2016; Lin 2018), especially when measuring emotional engagement. Despite their advantages in time- and cost-efficiency and ability to capture abstract concepts, self-report measures often exhibit substantial variability and rely heavily on researchers’ interpretations (Fredricks and McColskey 2012; Gobert et al. 2015; Greene 2015). An alternative approach to measuring online cognitive engagement involves regular formative assessments, such as weekly online quizzes. These quizzes were designed with varying cognitive levels of questions, incorporated automated feedback, and provided grades to facilitate learning (e.g., Hettiarachchi et al. 2015; Hughes et al. 2020). Woeste and Barham (2008) observed that students expressed high appreciation for the metacognitive benefits of weekly quizzes at later stages of their learning, noting their usefulness for understanding the content and serving as effective revision tools for exams. Additionally, behavioural engagement can be measured through various indicators, such as frequency and duration of online participation, as well as assignment completion. Specifically, Green et al. (2018) used metrics such as the number of videos and files viewed and hits on discussion forums as measures of behavioural engagement. Rubio et al. (2018) collected data on indicators including page views, the number of discussion posts, on-time assignment submissions, and active days. Similarly, Morris et al. (2005) examined students’ access log data, focusing on the frequency and duration of content page visits, the number of discussion posts read and replied to, and the time spent reading and responding to posts. Emotional engagement is excluded from this investigation due to its potential to introduce subjectivity into the measurement.
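As an illustration of how such log-based indicators might be operationalised, the sketch below computes per-student frequency, duration, and active-day metrics from a hypothetical access-log table. The schema and column names are assumptions for illustration, not the actual instruments used in the studies cited above.

```python
import pandas as pd

# Hypothetical access log: one row per page/video/forum event.
# Columns (assumed for illustration): student_id, event_type,
# timestamp, duration_minutes.
log = pd.DataFrame({
    "student_id": ["s01", "s01", "s02", "s02", "s02"],
    "event_type": ["video_view", "forum_post", "video_view",
                   "page_view", "forum_post"],
    "timestamp": pd.to_datetime([
        "2023-03-01 09:00", "2023-03-01 09:30",
        "2023-03-02 14:00", "2023-03-05 10:00", "2023-03-05 10:20",
    ]),
    "duration_minutes": [12.0, 5.0, 15.0, 8.0, 4.0],
})

# Indicators in the spirit of Green et al. (2018), Rubio et al. (2018),
# and Morris et al. (2005): event frequency, content views, forum posts,
# total time on task, and number of active days.
engagement = log.groupby("student_id").agg(
    n_events=("event_type", "count"),
    n_video_views=("event_type", lambda s: (s == "video_view").sum()),
    n_forum_posts=("event_type", lambda s: (s == "forum_post").sum()),
    total_minutes=("duration_minutes", "sum"),
    active_days=("timestamp", lambda t: t.dt.normalize().nunique()),
)
print(engagement)
```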

Online engagement in blended learning environments has been shown to directly contribute to improved student performance and course effectiveness. Green et al. (2018) found that higher levels of engagement with online materials predicted better grades in a blended medical course. Similarly, Rubio et al. (2018) reported a positive correlation between online engagement and final grades in a blended Spanish course, highlighting continuity as the strongest predictor of success. Furthermore, Panigrahi et al. (2022) demonstrated that online engagement positively influenced the perceived effectiveness of a blended Management Information Systems course, enhancing learners’ self-efficacy and the quality of interaction with the course. Similar findings were observed in Hu and Hui’s (2012) study of a blended Adobe Photoshop course. While these findings underscore the benefits of online engagement across various fields, there remains a lack of research on its impact within EAP blended courses, particularly in relation to learner performance.

Moreover, previous studies (e.g., Means et al. 2013; Panigrahi et al. 2022) have shown that student online engagement in blended learning can be effectively enhanced through interactive approaches. Wang (2008) proposed a generic model (see Figure 1) for integrating online activities into teaching and learning, which consists of three core elements: learner-content interaction, learner-socialiser interaction, and learner-interface interaction. Each of these elements has been identified as a significant predictor of academic performance or student satisfaction with online activities (e.g., Amoush and Mizher 2023; Joosten et al. 2019; Kintu et al. 2017). Additionally, Panigrahi et al. (2022) established a positive relationship between these interaction dimensions and the online behavioural, cognitive, and emotional engagement dimensions. This generic model serves as a valuable framework for understanding student willingness to engage with the online activities of a blended course.

Figure 1: A generic model for integrating information and computer technology (Wang 2008).

2.3 Proficiency levels and online engagement

Although the effectiveness of blended English learning has been documented in existing studies, there is a paucity of studies examining its suitability for students with different English proficiency levels. Tao et al.’s (2024) study is one of the few that investigated this issue in an EFL context, concluding that blended learning primarily benefited students at the intermediate and lower proficiency levels, while those with medium-high and high proficiency levels benefited less from this approach. This was attributed to the blended course design, which mainly facilitated knowledge consolidation at the cognitive levels of memorisation and understanding, rather than promoting deep transfer and integration of language knowledge (Tao et al. 2024).

While Tao et al. (2024) highlight the limitations of blended learning for students with higher proficiency levels, some studies suggest that carefully designed online activities can benefit learners across ability levels. For instance, Green et al. (2018) demonstrated that online engagement in a blended medical course served as a mediator between students’ prior ability and final course grades, emphasising the importance of incorporating online engagement into blended course design. Unlike Tao et al.’s (2024) findings, Green et al. (2018) observed that online content benefited students across different ability levels, although students with stronger prior ability exhibited higher online engagement, leading to better final grades. Given that Green et al.’s (2018) study was conducted in a medical education context, further investigation is needed to determine how students with diverse English proficiency levels engage with online activities in a blended EAP learning environment.

3 Methodology

This study employed a mixed-methods design, with the primary intent of combining the quantitative and qualitative results to obtain a complete understanding of the research problem (Creswell and Plano Clark 2018). It aims to examine the relationships between EAP learners’ language proficiency levels, online engagement, and language performance by answering the following research questions (RQs):

RQ1: To what extent do EAP learners’ language proficiency levels predict their online engagement?

RQ2: To what extent does online engagement predict EAP learners’ language performance?

RQ3: How do EAP learners perceive the effectiveness of the pre-seminar online learning activities in a blended EAP course?

3.1 Context

This study was conducted at a private transnational English-medium instruction (EMI) university in China. Stratified, credit-bearing English for Academic Purposes (EAP) courses were offered to all Year 1 students as a foundational course, preparing them for the Year 2 full EMI programmes. Students at different English proficiency levels were streamed into three EAP courses – Foundation, Intermediate, and Advanced – based on their Oxford Online Placement Test results. The Intermediate course was the largest, catering to approximately 3,500 Year 1 students at the pre-intermediate, intermediate, and upper-intermediate levels of the Common European Framework of Reference for Languages (CEFR). A systematic blended EAP learning method was adopted in this 26-week year-long course, with two 2-hour online lessons provided before three 2-hour onsite seminars every week. All five EAP lessons were compulsory and scheduled in the students’ timetables.

Each pre-seminar online lesson mainly consisted of two key elements: a 10 to 15-min video lesson recorded by EAP teachers using PowerPoint (PPT) slides, and a follow-up quiz checking student comprehension of the video and application of the taught skills. This means the online activities of the blended course provided learners with approximately 52 videos and 52 quizzes over the academic year. All quizzes were designed with closed-type questions (e.g., multiple-choice questions, matching, True/False) in order to provide timely online feedback to the students. To complete each online lesson, students were required to watch the video and score at least 40 % on the online quiz. Students who met both requirements were automatically recorded as having completed the lesson. Upon completion of each online lesson, students were usually required to prepare answers for a piece of homework, which would be checked during the onsite seminars. The content of the online lessons was closely linked to the learning objectives of the onsite seminars, forming an integral part of the curriculum and serving as preparation for them. EAP teachers in this course constantly emphasised the importance of the online learning activities and occasionally displayed the completion report in class to encourage student engagement in the online lessons.
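A minimal sketch of this completion rule (watch the video and score at least 40 % on the quiz) is given below; the data structures are hypothetical rather than the platform’s actual records.

```python
from dataclasses import dataclass

PASS_MARK = 40  # quiz pass threshold in percent, as described above

@dataclass
class OnlineLesson:
    video_watched: bool
    best_quiz_score: float  # best score across allowed re-attempts, in percent

def is_complete(lesson: OnlineLesson) -> bool:
    """A lesson is auto-recorded as complete only if both requirements are met."""
    return lesson.video_watched and lesson.best_quiz_score >= PASS_MARK

def completion_rate(lessons: list[OnlineLesson]) -> float:
    """Percentage of the ~52 weekly online lessons marked complete."""
    return 100 * sum(is_complete(l) for l in lessons) / len(lessons)

# Example: two lessons done properly, one quiz failed, one video skipped.
lessons = [
    OnlineLesson(True, 85.0),
    OnlineLesson(True, 62.0),
    OnlineLesson(True, 30.0),   # quiz below 40 %: not complete
    OnlineLesson(False, 90.0),  # video unwatched: not complete
]
print(f"Completion rate: {completion_rate(lessons):.0f}%")  # prints 50%
```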

3.2 Participants

All participants were recruited from those who were enrolled in the Year 1 EAP-Intermediate course in the Academic Year 2022–2023. After obtaining ethical approval and informed consent, 131 participants were recruited using a convenience sampling strategy in the quantitative stage, and their permission to access their statistical data was gained via an online questionnaire. Using maximum variation sampling (Dörnyei 2007), a sub-cohort of 18 participants was recruited for the second stage of semi-structured interviews.

3.3 Data collection

This study employed the following instruments and methods to measure participants’ language proficiency, online engagement, and language performance, and to gather their perceptions:

  1. Language proficiency instrument

    Oxford Online Placement Test: This test, conducted prior to the academic year, measured participants’ English proficiency levels and mapped their scores to the CEFR levels: A2 (pre-intermediate), B1 (intermediate), and B2 (upper-intermediate). According to the CEFR levels (Council of Europe 2024), A2 learners can handle basic needs and discuss familiar topics, B1 learners can engage in standard interactions and give simple explanations, and B2 learners can demonstrate fluency, comprehend complex texts, and provide detailed explanations.

  2. Online engagement instruments

    Online Quiz Results: To complete each quiz, learners were allowed to attempt the quiz multiple times until they achieved a passing score of 40 %. Although the quizzes employed a closed-question format, they were designed to target both lower and higher-order cognitive levels, drawing on findings from previous studies (Hettiarachchi et al. 2015; Hughes et al. 2020). Specifically, the quizzes assessed learners’ understanding of the video lesson content, their ability to transfer and apply acquired knowledge into practice, and their capacity to evaluate and critique authentic good and bad samples. The inclusion of these cognitively demanding tasks encouraged learners to engage deeply with the content rather than relying on surface-level recall. Additionally, automated feedback for each question provided opportunities for learners to reflect on their responses, identify areas for improvement, and refine their understanding. This iterative process promoted the use of deep learning strategies, making the quiz results a more valid indicator of learners’ cognitive engagement in the online lessons.

    Online Lesson Completion Rates: This rate, which represents the percentage of lessons completed throughout the academic year, served as a key indicator of learners’ online behavioural engagement. To be marked as complete for each lesson, students were required to watch the video and pass the quiz, making the completion rate a measure that reflects both behaviours. Consistent with prior research on behavioural engagement (e.g., Fredricks et al. 2016; Green et al. 2018), a higher completion rate suggests that learners consistently engaged with the online content and demonstrated sustained effort, persistence and participation over time. As such, the completion rate provides valuable insight into students’ behavioural engagement with the online content.

  3. Language performance instruments

    EAP Assessment Scores: Summative assessments in reading, listening, writing, and speaking were carefully designed to align with the CEFR standards, ensuring that Year 1 EAP students achieved an exit level equivalent to CEFR B2. The reading and listening tasks were calibrated to match the difficulty level of B2 standards, while the writing and speaking assessments employed CEFR-aligned marking rubrics. This alignment provided consistent and standardised evaluation criteria across all language skills.

  4. Qualitative method

    Semi-Structured Interviews: Interviews were conducted in the participants’ first language (Chinese) with a subgroup of students drawn from the quantitative stage. See Appendix A for the interview protocol. These interviews provided qualitative insights into learners’ perceptions of the online learning activities of the blended course.

Using the above instruments and methods, the following data were collected:

  1. Oxford Online Placement Test results prior to the commencement of the Academic Year 2022–2023 (n = 131);

  2. End-of-year overall online quiz results, as well as the reading, listening, writing, and speaking component scores (n = 131);

  3. End-of-year online lesson completion rates (n = 131);

  4. End-of-year overall EAP scores, as well as the reading, listening, writing, and speaking component scores (n = 131);

  5. Semi-structured interviews with a subgroup of participants (n = 18).

3.4 Data analysis

The quantitative data were analysed using IBM SPSS Statistics 27. Descriptive statistics were generated for all variables under investigation. Both simple and multiple linear regression analyses were performed on the dataset to answer RQs 1 and 2. Simple linear regression can provide a fundamental understanding of how each predictor variable individually correlates with the outcome variable. Multiple linear regression analysis, which has the capacity to explain the joint effect of a set of independent variables (de Vaus 2014; Field 2018; Tabachnick and Fidell 2014), was used to examine the relationship between online engagement and EAP language performance, specifically for RQ2.
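The same regression workflow can be reproduced outside SPSS. The sketch below uses statsmodels and assumes a data frame whose file name and column names (placement, quiz_overall, completion, eap_overall) are illustrative, not the study’s actual variable labels.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical flat file with one row per student (n = 131): placement test
# score, overall online quiz result, lesson completion rate, and EAP scores.
df = pd.read_csv("eap_blended.csv")

# RQ1: simple linear regressions of proficiency on each engagement measure.
for outcome in ["quiz_overall", "completion"]:
    fit = sm.OLS(df[outcome], sm.add_constant(df["placement"])).fit()
    print(f"{outcome}: R2 = {fit.rsquared:.3f}, p = {fit.pvalues['placement']:.3f}")

# RQ2: multiple linear regression of both engagement measures on overall
# EAP final scores (the joint effect of cognitive and behavioural engagement).
fit = sm.OLS(df["eap_overall"],
             sm.add_constant(df[["quiz_overall", "completion"]])).fit()
print(fit.summary())
```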

The interview data were analysed and coded via NVivo 14. Reflexive thematic analysis, following Braun and Clarke’s (2022) recommended steps, was conducted to categorise the interview data and identify the recurring patterns. To provide qualitative insights into the online lesson design, the main categories of the codes were labelled deductively based on these two key elements: online quizzes and asynchronous video lessons. The sub-themes were developed inductively from recurring codes, revised and refined iteratively throughout the coding process, and then summarised and systematised for further analysis. The reliability of the qualitative data analysis was enhanced by the researchers’ continuous reflexivity and collaborative engagement, following recommendations by Braun and Clarke (2022).

4 Results

The collected data were cleaned and checked, and no missing values or outliers were detected. Before running the linear models, the assumption of normality was tested by calculating the skewness and kurtosis values for all variables (see Table 1). All skewness values fell within the acceptable range of −1 to +1, and kurtosis values within −2 to +2, indicating no severe deviations from normal distribution (Hair et al. 2019). The assumptions of linearity and homogeneity were both satisfied, as confirmed by examining the zpred vs. zresid scatterplots. All requirements for performing linear regressions were met.

Table 1:

Descriptive data of all variables (n = 131).

Variable Min Max Mean SD Skewness Kurtosis
Placement test results 134 164 149.94 7.32 -0.27 -0.96
Overall online quiz results 19 89 64.87 14.05 -0.83 0.59
Online reading quiz results 17 98 68.06 15.72 -0.91 0.53
Online listening quiz results 20 96 64.90 14.47 -0.62 0.47
Online writing quiz results 19 93 65.41 12.96 -0.60 1.02
Online speaking quiz results 20 91 66.34 14.26 -0.71 0.10
Online lesson completion rates 18 % 100 % 77.30 % 21.42 % -0.87 -0.25
Overall EAP final scores 18 73 56.57 8.70 -0.70 1.99
EAP reading scores 25 83 57.49 11.65 -0.30 0.18
EAP listening scores 28 94 62.39 13.42 -0.09 -0.31
EAP writing scores 7 78 52.11 12.54 -0.72 1.42
EAP speaking scores 8 81 55.92 11.65 -0.57 1.74
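A sketch of these assumption checks is shown below, assuming the same hypothetical data frame as in the earlier sketch. Note that scipy reports excess kurtosis by default, which matches the −2 to +2 threshold used here.

```python
import pandas as pd
import statsmodels.api as sm
from scipy.stats import skew, kurtosis
import matplotlib.pyplot as plt

df = pd.read_csv("eap_blended.csv")  # hypothetical file, as in the earlier sketch

# Normality screen: skewness within -1 to +1 and (excess) kurtosis within
# -2 to +2, following the thresholds cited above (Hair et al. 2019).
for col in ["placement", "quiz_overall", "completion", "eap_overall"]:
    print(col, round(skew(df[col]), 2), round(kurtosis(df[col]), 2))

# Linearity/homoscedasticity: standardised predicted values vs. standardised
# residuals, analogous to SPSS's zpred vs. zresid scatterplot.
model = sm.OLS(df["eap_overall"],
               sm.add_constant(df[["quiz_overall", "completion"]])).fit()
zpred = (model.fittedvalues - model.fittedvalues.mean()) / model.fittedvalues.std()
zresid = (model.resid - model.resid.mean()) / model.resid.std()
plt.scatter(zpred, zresid)
plt.axhline(0, linestyle="--")
plt.xlabel("Standardised predicted values")
plt.ylabel("Standardised residuals")
plt.show()
```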

4.1 Proficiency levels and online engagement

To answer RQ1, simple linear regressions were performed on the dataset. The findings showed that the placement test scores statistically significantly predicted the overall online quiz scores (p = 0.019), explaining 4.2 % of the variance in these scores (see Table 2). The overall quiz results increased by 0.11 points for every one-point increase in the placement test scores. The effect size indicated by R² is small according to Ellis’s (2010) and Cohen’s (1988) suggested thresholds: small (R² = 0.02), medium (R² = 0.13), and large (R² = 0.26). Additionally, the placement test scores did not predict EAP learners’ online lesson completion rates (p = 0.615), implying that students’ English proficiency levels did not affect their online behavioural engagement.

Table 2:

Simple linear regressions of placement test scores on online quiz results and completion rates.

Predictor variable Outcome variables R² B Standardised β F value t value Sig
Constant Overall online quiz results 0.042 143.00 -1.24 <0.001
Placement test scores 0.11 0.21 5.69 3.90 0.019
Constant Online lesson completion rates 0.002 151.11 62.68 <0.001
Placement test scores -0.02 -0.04 0.25 -0.50 0.615
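To make the small effect size concrete, the slope in Table 2 can be read against the descriptive statistics in Table 1 (a worked illustration, not an additional analysis):

```latex
\[
\Delta \widehat{\text{Quiz}} = 0.11 \times \Delta \text{Placement}
\;\Rightarrow\;
0.11 \times (164 - 134) \approx 3.3 \text{ points} \approx 0.23 \times SD_{\text{Quiz}},
\quad SD_{\text{Quiz}} = 14.05.
\]
```

That is, even the full observed spread of placement scores corresponds to roughly a quarter of a standard deviation in quiz performance, consistent with the small R².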

4.2 Online engagement and language performance

To answer RQ2, both multiple and simple linear regression analyses were conducted. Prior to performing the multiple linear regression, the assumption of multicollinearity was checked by inspecting the correlation between predictors, as well as the variance inflation factor (VIF) and tolerance values. Pearson’s correlation between the two predictor variables (i.e., overall online quiz results and completion rates) was 0.17, indicating a small level of correlation (Cohen 1988; Ellis 2010). The VIF was 1.03, and the tolerance value was 0.97, meeting the recommended thresholds of VIF <10 and tolerance >0.2 (Field 2018; Tabachnick and Fidell 2014). Therefore, the assumption of no multicollinearity was satisfied.
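A sketch of the multicollinearity check follows, using statsmodels’ variance_inflation_factor; the file and column names are again hypothetical, as in the earlier sketches.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("eap_blended.csv")  # hypothetical file, as in earlier sketches
X = sm.add_constant(df[["quiz_overall", "completion"]])

# VIF for each predictor (the constant is skipped); tolerance = 1 / VIF.
# Thresholds cited above: VIF < 10 and tolerance > 0.2.
for i, name in enumerate(X.columns):
    if name == "const":
        continue
    vif = variance_inflation_factor(X.values, i)
    print(f"{name}: VIF = {vif:.2f}, tolerance = {1 / vif:.2f}")
```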

The multiple regression analysis (see Table 3) revealed that the combination of online lesson quiz results and completion rates statistically significantly predicted the overall EAP final scores, explaining 14 % of the variance in the scores and indicating a medium effect size. These findings evidenced the positive impact of online cognitive and behavioural engagement on learners’ EAP performance.

Table 3:

Multiple linear regression of online engagement on overall EAP final scores.

Predictor variables R² B Standardised β F value t value Sig
Constant 0.140 38.89 10.42 9.86 <0.001
Overall online quiz results 0.18 0.30 3.58 <0.001
Online lesson completion rates 0.07 0.18 2.19 0.030

To obtain a more comprehensive understanding of whether these two predictor variables correlated with the EAP component scores, further statistical analyses using simple linear regressions were conducted on the dataset. The analyses (see Table 4) demonstrated that the online reading, listening, and writing quiz results statistically significantly predicted EAP learners’ corresponding assessment scores. Higher scores on online reading, listening, and writing quizzes were positively associated with better performance on corresponding EAP assessments. However, the online speaking quiz results did not predict EAP speaking scores (p = 0.16), indicating the limited effectiveness of the online quizzes in improving learners’ speaking performance.

Table 4:

Simple linear regressions of online quiz component results on EAP component scores.

Outcome variables Predictor variables R² B Standardised β F t Sig
EAP reading scores Constant 0.125 39.67 9.31 <0.001
Online reading quiz results 0.26 0.35 18.41 4.29 <0.001
EAP listening scores Constant 0.038 50.61 9.51 <0.001
Online listening quiz results 0.18 0.20 5.14 2.27 0.025
EAP writing scores Constant 0.093 32.81 5.56 <0.001
Online writing quiz results 0.30 0.31 13.23 4.97 <0.001
EAP speaking scores Constant 0.015 49.23 9.98 <0.001
Online speaking quiz results 0.10 0.12 2.00 2.23 0.16

Furthermore, simple regression analyses (see Table 5) showed that online lesson completion rates were significant predictors of EAP learners’ writing and speaking assessment scores. In contrast, the completion rates did not predict their reading and listening scores. These findings suggest that online behavioural engagement is a significant indicator of EAP performance in productive skills – writing and speaking – but not in receptive skills – reading and listening. Given that the online speaking quiz results were not significant predictors of speaking performance, the effectiveness of completing the online lessons is more likely attributable to the speaking video lessons.

Table 5:

Simple linear regressions of online lesson completion rates on EAP component scores.

Outcome variables Predictor variable R² B Standardised β F value t value Sig
EAP writing scores Constant 0.069 40.26 10.09 <0.001
Online lesson completion rates 0.15 0.26 9.50 3.08 0.003
EAP speaking scores Constant 0.067 45.08 12.15 <0.001
Online lesson completion rates 0.14 0.26 9.20 3.03 0.003
EAP reading scores Constant 0.020 51.50 13.55 <0.001
Online lesson completion rates 0.08 0.14 2.68 1.64 0.104
EAP listening scores Constant 0.000 61.90 14.00 <0.001
Online lesson completion rates 0.01 0.01 0.01 0.12 0.909

4.3 Learners’ perceptions of the effectiveness of online learning activities

To address RQ3, the interview data offered constructive feedback on the two key elements of the pre-seminar online activities: online quizzes and asynchronous video lessons. Representative excerpts, labelled with participants’ pseudonyms and final EAP scores, are presented in Tables 6 and 7.

Table 6:

Interview data categories and sub-themes with student quotes.

Categories Sub-themes Student quotes
Online quizzes Video comprehension “Some quizzes were about checking the understanding of the key points that the teacher talked about in the video… It was a process of correcting my notes and understanding.” (Cindy, 63)

“It was very necessary to have the quizzes because after watching the videos, there might be only a faint impression remaining. The quizzes could deepen our impression and improve our understanding of the concepts or explanations in the video.” (Emma, 72)
Language attainment “The good thing was that sometimes the quizzes had listening and reading tests, which helped me practise my language.” (Tina, 67)

“In busy weeks, I might have completed those quizzes in a rush, but before the exam weeks, I re-took those quizzes because they were helpful resources to improve my listening and reading.” (Mary, 60)
Workload “The amount of work was a bit too much sometimes, especially during the mid-term period or at the end of the semester.” (Daisy, 61)

“I think ten questions would be an appropriate number for a quiz if it only includes multiple choice questions… I remember one reading was particularly long, along with the questions, which were painful for me to do.” (Jake, 66)
Feedback mechanisms “If I answered incorrectly, the feedback would only tell me to choose ‘a’ or ‘b’. There was not enough explanation, so I was very confused. I would prefer feedback that tells me which sentence in the original text the answer is based on and explains how the answer could be obtained through that sentence.” (Emma, 72)

“Looking at those explanations, it seemed that they could not completely clear my confusion. I was wondering whether there could be more detailed explanations after finishing the quizzes.” (Cindy, 63)
Technological issues “Some quizzes contained the dragging of vocabulary. When I used my phone or iPad to do the quiz, sometimes, the dragging was not very smooth, or some bugs would appear on the page.” (Miller, 45)

“The reading quizzes were not easy to operate. It could be designed like a computer-based IELTS test, split across two pages, as scrolling up and down in its current form was a pain.” (Jake, 66)
Table 7:

Interview data categories and sub-themes with student quotes.

Categories Sub-themes Student quotes
Asynchronous video lessons Writing video lessons “I found the writing videos very useful, which supplemented things we didn’t learn in high school. For example, when we wrote essays in high school, the emphasis was on the third person singular or the past tense. The online lessons taught us real academic English and how we should do it well. I felt it was very different, and the content was really practical.” (Anna, 57)

“Some key points were listed in the PPTs of the videos, so I could take screenshots by myself. I had a notebook with these screenshots, which provided me with useful references when writing essays.” (Mary, 60)
Speaking video lessons “The speaking videos were excellent, and I watched many videos repeatedly. I also tried to imitate them and their speed. For example, I turned the sample video on and did my presentation to keep my speech and pace consistent with the model.” (Lily, 74)

“I still remember that the speaking lessons were exactly what I wanted. It had examples and detailed explanations, like what might be needed for each part of the presentation… It combines theoretical explanations with vivid examples.” (Chloe, 51)

“The speaking videos helped me a lot. I learned how to analyse and present my data, the whole process of making the PPT, and body language, etc.” (Bella, 70)
High-quality interactive videos “Most of the time, the teachers created very detailed PPTs for the videos… I could understand most of them, and the understanding was quite comprehensive.” (Cindy, 63)

“Each PPT had an overall outline, and if I didn’t understand something in the quiz, I could refer to the overview and jump directly to the relevant section.” (Anna, 57)

“Some videos stopped at a couple of points and asked me questions, which I thought were pretty good. This was more effective than doing another follow-up quiz, in which I sometimes couldn’t remember those small points being tested.” (Rose, 73)

“The videos were free of noise and all other basic technological issues. I think the online videos were professionally recorded.” (Daisy, 61)
Length of videos “If the video was too long, I tended to get tired. When interrupted while watching a video, I might not want to continue, which made the follow-up quiz lose its significance.” (Anna, 57)

“Although a video might only last 10 to 15 min, I still felt it was slightly long. Breaking it down into shorter, 5-min segments would make each segment more focused. I found that by the time I reached the later part of a video, I had already forgotten the previous content, which made completing the follow-up quiz challenging.” (Frank, 56)
Video lesson delivery “Some teachers were more likely to read the PPTs mechanically. Some interesting content could be added to the lesson.” (Miller, 45)

“There was a male teacher who was quite interesting. When he introduced the PPT, he wouldn’t just read it; he would talk about something himself and bring his own understanding. I was willing to listen. Some other teachers just read the PPTs, so I only watched those videos once and didn’t feel like watching them again.” (Rose, 73)

4.3.1 Online quizzes

The interview findings revealed that the interviewees generally recognised the usefulness of the quizzes. Several students, including Cindy (63) and Emma (72), noted that the quizzes were well-integrated into the online curriculum and supported video comprehension, reflecting their understanding of the quizzes’ purpose. In addition, improved language attainment, particularly in reading and listening, was mentioned as a primary benefit, with Tina (67) stating their usefulness for “practising language” and Mary (60) emphasising their value “before exam weeks”. These qualitative insights support the statistical findings that part of the online quizzes effectively contributed to EAP learners’ language improvement.

Despite these positives, several areas for improvement were identified. Firstly, some students (e.g., Jake, 66) criticised the workload of the quizzes and the length of texts in reading quizzes, suggesting that a more moderate workload, especially during the assessment periods, could enhance learners’ online engagement. Secondly, a few interviewees noted the need for establishing sound feedback mechanisms, emphasising the importance of detailed explanations for correct and incorrect answers. For instance, Emma (72) and Cindy (63) expressed a desire for explanations to reduce confusion over mistakes. Thirdly, technological issues were reported, particularly with hardware and software compatibility, which complicated quiz completion. Specific issues included limited functionality for drag-and-drop matching activities on iPads (e.g., Miller, 45) and difficulties with reading texts, such as multi-page scrolling problems (e.g., Jake, 66). These comments highlight the importance of improving quiz functionality on portable devices like tablets and phones.

4.3.2 Asynchronous video lessons

The interviewees were broadly positive about the online recorded video lessons, with near-unanimous agreement, particularly for the writing and speaking videos. Several students, such as Anna (57), stated that the writing videos were helpful as they had not been exposed to academic writing before, and its style was quite different from what they had learned in high school. They also appreciated being able to take notes or “screenshots” of the videos for later reference (e.g., Mary, 60). It appears that students viewed the writing videos favourably due to the new content and clear learning outcomes, which represented significant progress from their previous studies. Similarly, a few interviewees praised the speaking video lessons, particularly for serving as good models for “imitation” (Lily, 74) and for providing helpful presentation skills (e.g., Bella, 70). However, few comments were made about the reading and listening videos. These findings align with the quantitative results, suggesting that the online video lessons were especially effective in enhancing EAP learners’ writing and speaking skills. Furthermore, students were generally positive about the quality of the PPT-based interactive videos, citing features such as the “detailed PPT” (Cindy, 63), the “overall outline” (Anna, 57), the option to “jump straight to the corresponding place” (Anna, 57), embedded instant quizzes (Rose, 73), and the “professionally recorded” quality (Daisy, 61). This feedback indicates successful quality control in video production.

The interviewees offered two main insights into the production of the videos. Firstly, there was a clear consensus that videos should be either shortened or broken into more manageable sections. This preference was mainly due to the length of some videos causing “tired[ness]”, as noted by Anna (57), and reducing the effectiveness of completing the follow-up “quizzes”, as cited by Frank (56). These findings suggest that student behavioural engagement may decrease when videos are approximately 10–15 min long. Additionally, two interviewees commented on the delivery methods of video lessons: Miller (45) remarked that “read[ing] the PPT mechanically” was a drawback, while Rose (73) appreciated that the instructor “wouldn’t just read the PPT… I was willing to listen”. These comments reveal students’ preferences for more dynamic and engaging delivery methods by online instructors.

5 Discussion

5.1 Proficiency levels and online engagement

Simple linear regression findings showed that EAP learners’ placement test results did not predict their online lesson completion rates, though they positively predicted online quiz results to a small extent. This indicates the lack of correlation between EAP learners’ language proficiency levels and their online behavioural engagement. Although students with higher language proficiency levels demonstrated slightly higher cognitive engagement with the online content, it can be argued that students with lower proficiency levels should not be deprived of such learning opportunities, given the potential benefits of the online input. This finding aligns with Green et al.’s (2018) conclusion that online learning content can benefit students across different ability levels.

Nevertheless, as the participants of this study were all recruited from CEFR intermediate levels – pre-intermediate (A2), intermediate (B1), and upper-intermediate (B2), the generalisability of the findings is limited to this group of EAP learners. It remains uncertain whether the benefits of the online learning activities are similar for learners at CEFR low and advanced levels. According to Tao et al. (2024), with the implementation of the blended EFL learning mode, students at advanced language proficiency levels (with CET4 scores above 550) showed less improvement in language performance than those at intermediate or lower levels (with CET4 scores below 512). One reason for this finding could be that their investigated blended mode focused on addressing basic learning issues and had limited effects on deep knowledge processing and transfer. In contrast, the blended mode examined in the current study incorporated both basic (e.g., video content comprehension) and deep (e.g., analysing and evaluating writing and speaking samples) knowledge learning. Moreover, as the two language proficiency criteria – CEFR and CET – lack alignment, the findings of Tao et al.’s (2024) study are not directly applicable to the current research context. Therefore, the generalisability of the findings to CEFR learners at low and advanced levels remains to be verified in future research.

5.2 Online engagement and language performance

Multiple linear regression results confirmed a positive, medium-sized impact of online cognitive and behavioural engagement on EAP learners’ language performance, indicating the overall effectiveness of the pre-seminar online course design. This finding further supports Sadiq’s (2022) conclusion that online quizzes and asynchronous video lessons statistically significantly improved students’ English proficiency. In addition, simple linear regression results indicated that the corresponding online quizzes significantly contributed to EAP learners’ performance in reading, listening, and writing assessments, but not in speaking. The statistical insignificance of the speaking quizzes may be attributed to their closed-ended format, which did not require learners to produce spoken output, partly due to the challenges of providing timely teacher feedback. This design limited opportunities for speaking practice and constructive feedback on speaking performance, leading to ineffectiveness in enhancing final EAP speaking outcomes. This implies the need to employ other forms of online modalities, such as automated speaking performance evaluation tools, to improve the effectiveness of online speaking activities.

Interestingly, simple linear regression results revealed that online lesson completion rates positively contributed to EAP learners’ performance in writing and speaking assessments but not in reading and listening. This highlights the effectiveness of the speaking video lessons despite the limitations of the speaking quizzes. The findings reveal the varied effectiveness of the online multimodalities on learners’ language performance, warranting a more thoughtful design when mixing the modalities in the online learning activities. As Joosten et al. (2019) suggested, the leanness (e.g., texts) or richness (e.g., videos) of the media should be carefully considered and appropriately selected for the content being delivered, allowing adequate breadth and depth for learning. These findings might also explain why past studies (e.g., Liu et al. 2020; Tao et al. 2024; Zhou 2018) reached incongruous conclusions on the effectiveness of the blended mode, possibly due to the divergent online course designs.

5.3 Learners’ perceptions of the effectiveness of online learning activities

The interview findings shed light on the EAP learners’ perceptions of the effectiveness of the online activities. The findings echoed the three components of learner-content, learner-socialiser, and learner-interface interactions in Wang’s (2008) generic model of integrating online content. In terms of learner-content interaction, the interviewees positively commented on the usefulness and relevance of the writing and speaking video lessons, as well as the opportunities to use the online quizzes to practise their reading and listening skills. For both videos and quizzes, they suggested shorter lengths to potentially improve their willingness to engage with the online activities. As Joosten et al. (2019) noted, online content design was positively associated with students’ academic performance and their perceptions of knowledge acquired.

As for learner-socialiser interaction, students expressed a preference for teachers who provided detailed and engaging explanations in the asynchronous videos, rather than simply reading from PPTs. This interaction was perceived as a way for teachers to establish a stronger virtual presence, which helped simulate a sense of connection and guidance despite the lack of real-time communication. Moreover, learners favoured interactive videos, where the videos paused at key points and required them to answer instant questions via pop-up windows. These features were seen as opportunities for students to engage in active reflection and self-assessment, which helped replicate elements of learner-teacher interaction found in face-to-face communication. Additionally, learners expressed a desire for a more effective feedback mechanism in the online quizzes. For instance, providing automated feedback with detailed explanations for both correct and incorrect answers could support learners in understanding their performance, mirroring the clarifications typically offered in real-time teacher interactions. Although indirectly related to academic performance, Amoush and Mizher’s (2023) study revealed that student-instructor interaction was one of the strongest significant predictors of student satisfaction with online English courses.

Thirdly, the interviewees’ suggestions for making technological improvements to ease their challenges with completing online quizzes highlight the importance of enhancing learner-interface interaction. As Amoush and Mizher (2023) pointed out, interaction with technology was found to statistically significantly predict learners’ satisfaction with online English courses. In addition, Kintu et al. (2017) found that technology quality significantly influenced learners’ perceived ability to acquire knowledge and their intrinsic motivation to learn independently through online activities in blended learning. These findings emphasise the critical role of learner-interface interaction in designing effective online activities for blended courses.

Yet, beyond the considerations of learner-content, learner-socialiser, and learner-interface interactions, the importance of connecting these activities with onsite lessons and maintaining the teacher presence in traditional classrooms cannot be overlooked. Building on the fundamental principles of Wang’s (2008) model and based on the findings of this study, an effective blended EAP learning model (see Figure 2) is proposed below. This model has the potential to optimise the use of various online multimodalities, enhance EAP learners’ language performance, and ultimately achieve the desired learning outcomes.

Figure 2: An effective blended EAP learning model.

6 Conclusion and pedagogical implications

This study employed a mixed-methods design to evaluate the effectiveness of the online learning activities of a blended EAP learning model. The findings provide significant educational insights, offering guidance for future EAP pedagogy and research. One of the primary quantitative findings indicates the effectiveness of the online content for EAP learners at the intermediate stages, spanning from pre-intermediate to upper-intermediate levels. However, it remains uncertain whether students at all language proficiency levels, particularly those at low and advanced levels, would gain the same benefits. Other key statistical findings demonstrate that EAP learners’ cognitive and behavioural engagement with the online lessons significantly contributed to their overall EAP final assessment scores, as well as to their performance on specific components. The results demonstrate the positive impact of online cognitive engagement (through quizzes) on enhancing language skills such as reading, listening, and writing, and the effectiveness of online behavioural engagement (primarily through watching video lessons) in developing productive skills, including writing and speaking.

Meanwhile, the qualitative findings highlight the importance of considering interactions among learner-content, learner-socialiser, and learner-technology when designing the online activities of a blended EAP course. Thus, an effective blended EAP course model (see Figure 2) is recommended for implementation and validation in other institutional contexts. This model emphasises optimising various online multimodalities and enhancing integration between the interconnected online and onsite lessons, with the goal of improving effectiveness and reducing inconsistencies in the delivery of the blended English course.

While this study offers valuable insights, it is not without limitations. Firstly, although statistical data were collected from participants with similar academic backgrounds, other variables, such as emotional engagement, could have co-influenced the relationship between online engagement and learners’ EAP performance. This highlights the importance of adopting a more comprehensive theoretical framework in future investigations of online learning activities in blended learning environments. Secondly, the instruments used for data collection and analysis could be further developed to strengthen the validity of the research findings. Due to the constraints of the online platforms used, this study relied on online quiz results to measure cognitive engagement and completion rates to assess behavioural engagement. The adoption of more sophisticated learning management systems could allow the tracking of other forms of engagement, such as frequency and duration of utilising the online activities. Moreover, future research could focus on exploring the suitability of the blended model for learners at low and advanced levels of English proficiency, as well as the potential of automated speaking performance evaluation tools to enhance the effectiveness of online modalities. To conclude, this study advances our understanding of optimal online learning activity design in a blended EAP course by combining objective statistical analysis with in-depth qualitative findings. It also opens new avenues for future research, underscoring the need for more comprehensive approaches in both theoretical and methodological aspects to further explore this innovative pedagogy.


Corresponding author: Ying Zhou, English Language Centre, Xi’an Jiaotong Liverpool University, Ren’ai Road No. 111, 215000, Suzhou, China, E-mail:

Funding source: Xi’an Jiaotong Liverpool University

Award Identifier / Grant number: Teaching Development Fund / TDF2324-R27-223


Research funding: Xi’an Jiaotong Liverpool University Teaching Development Fund (Project code: TDF2324-R27-223).

Appendix A: Interview protocol

  1. Based on the online system records, your completion rate of the online lessons seems relatively high/low. What motivated/de-motivated you to complete the online lessons?

  2. What do you think about the effectiveness of the online reading lessons, especially the videos and quizzes?

  3. What do you think about the effectiveness of the online listening lessons, especially the videos and quizzes?

  4. What do you think about the effectiveness of the online writing lessons, especially the videos and quizzes?

  5. What do you think about the effectiveness of the online speaking lessons, especially the videos and quizzes?

References

Amoush, Kholoud H. & Rabab A. Mizher. 2023. Interaction as a predicator for EFL undergraduate university students’ satisfaction with online English language courses. Theory and Practice in Language Studies 13(4). 927–937. https://doi.org/10.17507/tpls.1304.14.

Banditvilai, Choosri. 2016. Enhancing students’ language skills through blended learning. Electronic Journal of E-Learning 14. 220–229.

Braun, Virginia & Victoria Clarke. 2022. Thematic analysis: A practical guide. London: SAGE.

Cohen, Jacob. 1988. Statistical power analysis for the behavioral sciences, 2nd ed. Hillsdale, NJ: Lawrence Erlbaum.

Council of Europe. 2024. Global scale – table 1 (CEFR 3.3): Common reference levels. Common European Framework of Reference for Languages (CEFR). https://www.coe.int/en/web/common-european-framework-reference-languages/table-1-cefr-3.3-common-reference-levels-global-scale.

Creswell, John W. & Vicki L. Plano Clark. 2018. Designing and conducting mixed methods research, 3rd ed. London: SAGE.

de Vaus, David. 2014. Surveys in social research, 6th ed. London: Routledge. https://doi.org/10.4324/9780203519196.

Dörnyei, Zoltán. 2007. Research methods in applied linguistics: Quantitative, qualitative, and mixed methodologies. New York, NY: Oxford University Press.

Drysdale, Jeffery S., Charles R. Graham, Kristian J. Spring & Lisa R. Halverson. 2013. An analysis of research trends in dissertations and theses studying blended learning. The Internet and Higher Education 17. 90–100. https://doi.org/10.1016/j.iheduc.2012.11.003.

Ellis, Paul D. 2010. The essential guide to effect sizes: Statistical power, meta-analysis, and the interpretation of research results. Cambridge, UK: Cambridge University Press. https://doi.org/10.1017/CBO9780511761676.

Field, Andy. 2018. Discovering statistics using SPSS, 5th ed. London: SAGE.

Fredricks, Jennifer A. 2015. Academic engagement. In James D. Wright (ed.), International encyclopedia of the social and behavioral sciences, 2nd ed., 31–36. Oxford: Elsevier. https://doi.org/10.1016/B978-0-08-097086-8.26085-6.

Fredricks, Jennifer A., Michael Filsecker & Michael A. Lawson. 2016. Student engagement, context, and adjustment: Addressing definitional, measurement, and methodological issues. Learning and Instruction 43. 1–4. https://doi.org/10.1016/j.learninstruc.2016.02.002.

Fredricks, Jennifer A. & Wendy McColskey. 2012. The measurement of student engagement: A comparative analysis of various methods and student self-report instruments. In Sandra L. Christenson, Amy L. Reschly & Cathy Wylie (eds.), Handbook of research on student engagement, 319–339. New York: Springer. https://doi.org/10.1007/978-1-4614-2018-7_37.

Gobert, Janice D., Ryan S. Baker & Michael B. Wixon. 2015. Operationalising and detecting disengagement within online science microworlds. Educational Psychologist 50(1). 43–57. https://doi.org/10.1080/00461520.2014.999919.

Graham, Charles R., Wendy Woodfield & Buckley Harrison. 2013. A framework for institutional adoption and implementation of blended learning in higher education. The Internet and Higher Education 18. 4–14. https://doi.org/10.1016/j.iheduc.2012.09.003.

Green, Rodney A., Laura Y. Whitburn, Anita Zacharias, Graeme Byrne & Diane L. Hughes. 2018. The relationship between student engagement with online content and achievement in a blended learning anatomy course. Anatomical Sciences Education 11. https://doi.org/10.1002/ase.1761.

Greene, Barbara A. 2015. Measuring cognitive engagement with self-report scales: Reflections from over 20 years of research. Educational Psychologist 50(1). 14–30. https://doi.org/10.1080/00461520.2014.989230.

Hair, Joseph F., William C. Black, Barry J. Babin & Rolph E. Anderson. 2019. Multivariate data analysis, 8th ed. England: Pearson Prentice Hall.

Hettiarachchi, Enosha, Antonia M. Huertas, Enric Mor & Ana-Elena Guerrero-Roldán. 2015. Improving student performance in high cognitive level courses by using formative e-assessment. International Journal of Technology Enhanced Learning 7. 116–133. https://doi.org/10.1504/IJTEL.2015.072027.

Hoić-Božić, Natasa, Martina H. Dlab & Vedran Mornar. 2016. Recommender system and web 2.0 tools to enhance a blended learning model. IEEE Transactions on Education 59. 39–44. https://doi.org/10.1109/TE.2015.2427116.

Hu, Paul Jen-Hwa & Wendy Hui. 2012. Examining the role of learning engagement in technology-mediated learning and its effects on learning effectiveness and satisfaction. Decision Support Systems 53(4). 782–792. https://doi.org/10.1016/j.dss.2012.05.014.

Hubackova, Sarka & Marketa Ruzickova. 2011. Experience in foreign language teaching with ICT support. Procedia Computer Science 3. 243–247. https://doi.org/10.1016/j.procs.2010.12.041.

Hughes, Mitchell, Yenna Salamonson & Lauren Metcalfe. 2020. Student engagement using multiple-attempt ‘Weekly Participation Task’ quizzes with undergraduate nursing students. Nurse Education in Practice 46. 102803. https://doi.org/10.1016/j.nepr.2020.102803.

Joosten, Tanya, Rachel Cusatis & Lindsey Harness. 2019. A cross-institutional study of instructional characteristics and student outcomes: Are quality indicators of online courses able to predict student success? Online Learning 23(4). 354–378. https://doi.org/10.24059/olj.v23i4.1432.

Kintu, Mugenyi J., Chang Zhu & Edmond Kagambe. 2017. Blended learning effectiveness: The relationship between student characteristics, design features and outcomes. International Journal of Educational Technology in Higher Education 14(7). 1–20. https://doi.org/10.1186/s41239-017-0043-4.

Li, Zihao, Qingyun Li, Jie Han & Zhongyang Zhang. 2022. Perspectives of hybrid performing arts education in the post-pandemic era: An empirical study in Hong Kong. Sustainability 14(15). 9194. https://doi.org/10.3390/su14159194.

Lin, Lijia. 2018. Student learning and engagement in a blended environment: A mixed methods study. In Imed Bouchrika, Nouzha Harrati & Phu Vu (eds.), Learner experience and usability in online education, 256–269. New York: IGI Global. https://doi.org/10.4018/978-1-5225-4206-3.ch010.

Liu, Xiaomei, Lindong Zhang, Shufang Zhang & Yingtao Tian. 2020. The further study of the blended learning model of the video-aural-oral course – The combination of Web-based learning, flipped classroom and face-to-face instruction. Education Journal 9(3). 64–72. https://doi.org/10.11648/j.edu.20200903.12.

Means, Barbara, Yukie Toyama, Robert Murphy & Marianne Baki. 2013. The effectiveness of online and blended learning: A meta-analysis of the empirical literature. Teachers College Record: The Voice of Scholarship in Education 115(3). 1–47. https://doi.org/10.1177/016146811311500307.

Moradimokhles, Hossein & Gwo-Jen Hwang. 2022. The effect of online vs. blended learning in developing English language skills by nursing student: An experimental study. Interactive Learning Environments 30(9). 1653–1662. https://doi.org/10.1080/10494820.2020.1739079.

Morris, Libby V., Catherine Finnegan & Sz-Shyan Wu. 2005. Tracking student behavior, persistence, and achievement in online courses. The Internet and Higher Education 8(3). 221–231. https://doi.org/10.1016/j.iheduc.2005.06.009.

National Advisory Committee on TEFL in Higher Education under the Ministry of Education. 2020. Guidelines on college English teaching. Beijing: Higher Education Press.

Nazzal, Kamal S. & Mohammad N. Alradi. 2020. The effect of using blended learning method on learning motivation among students at the Department of Psychological Counselling in Jadara University. Journal of Education and Practice 11(15). 39–46. https://doi.org/10.7176/JEP/11-15-05.

Nuri, Hero S. M. & Hanife B. Bostanci. 2021. Blended learning to improve university students’ language skills in the Iraqi context. Turkish Journal of Computer and Mathematics Education 12(2). 246–255. https://doi.org/10.17762/turcomat.v12i2.708.

Panigrahi, Ritanjali, Praveen R. Srivastava, Prabin K. Panigrahi & Yogesh K. Dwivedi. 2022. Role of Internet self-efficacy and interactions on blended learning effectiveness. Journal of Computer Information Systems 62(6). 1239–1252. https://doi.org/10.1080/08874417.2021.2004565.

Pima, John M., Michael Odetayo, Rahat Iqbal & Eliamani Sedoyeka. 2018. A thematic review of blended learning in higher education. International Journal of Mobile and Blended Learning 10(1). 1–11. https://doi.org/10.4018/IJMBL.2018010101.

Rasheed, Rasheed A., Amirrudin Kamsin & Nor A. Abdullah. 2020. Challenges in the online component of blended learning: A systematic review. Computers & Education 144. 103701. https://doi.org/10.1016/j.compedu.2019.103701.

Romaniuk, Miłosz W. & Joanna Łukasiewicz-Wieleba. 2022. Hybrid education in higher education on the example of academic teachers’ experiences in post-pandemic reality. International Journal of Electronics and Telecommunications 68(3). 489–496. https://doi.org/10.24425/ijet.2022.141265.

Rubio, Fernando, Jonathan M. Thomas & Qin Li. 2018. The role of teaching presence and student participation in Spanish blended courses. Computer Assisted Language Learning 31(3). 226–250. https://doi.org/10.1080/09588221.2017.1372481.

Sadiq, Dilveen A. 2022. The effects of blended learning on students’ achievement in a foundation English course: A study on foundation English students at TISHK university in Erbil, Iraq. Amazonia Investiga 11(59). 21–34. https://doi.org/10.34069/AI/2022.59.11.2.

Shi, Yafei, Mingwen Tong & Taotao Long. 2021. Investigating relationships among blended synchronous learning environments, students’ motivation, and cognitive engagement: A mixed methods study. Computers & Education 168. 104193. https://doi.org/10.1016/j.compedu.2021.104193.

Tabachnick, Barbara G. & Linda S. Fidell. 2014. Using multivariate statistics, 6th ed. US: Pearson Education.

Tao, Yanan, Ludan Yu, Licheng Luo & Hai Zhang. 2024. Effect of blended teaching on college students’ EFL acquisition. Frontiers in Education 9. https://doi.org/10.3389/feduc.2024.1264573.

Wang, Chunying. 2021. Employing blended learning to enhance learners’ English conversation: A preliminary study of teaching with Hitutor. Education and Information Technologies 26(2). 2407–2425. https://doi.org/10.1007/s10639-020-10363-5.

Wang, Lin, Muhd K. Omar, Noor S. Zakaria & Nurul N. Zulkifli. 2024. Differential reactions of urban and rural teachers to blended learning: Evidence from Chinese secondary schools. International Journal of Mobile and Blended Learning 16(1). 1–19. https://doi.org/10.4018/IJMBL.337492.

Wang, Qiyun. 2008. A generic model for guiding the integration of ICT into teaching and learning. Innovations in Education and Teaching International 45(4). 411–419. https://doi.org/10.1080/14703290802377307.

Wang, Xiaomei, Yajun Yang & Xin Wen. 2009. Study on blended learning approach for English teaching. In 2009 IEEE International Conference on Systems, Man and Cybernetics, 4641–4644. San Antonio, TX: IEEE. https://doi.org/10.1109/ICSMC.2009.5346756.

Woeste, Lori A. & Beverly J. Barham. 2008. Wake up! Your PDQ is due. American Society for Clinical Laboratory Science 21(1). 12–14.

Zhang, Zhaoli, Taihe Cao, Jiangbo Shu & Hai Liu. 2020. Identifying key factors affecting college students’ adoption of the e-learning system in mandatory blended learning environments. Interactive Learning Environments 30(8). 1388–1401. https://doi.org/10.1080/10494820.2020.1723113.

Zhou, Chunyi. 2018. Empirical study on the effectiveness of teaching model of College English writing within blended learning mode. Educational Sciences: Theory & Practice 18(5). 1060–1076. https://doi.org/10.12738/estp.2018.5.009.

Received: 2024-11-22
Accepted: 2025-01-15
Published Online: 2025-02-24

© 2025 the author(s), published by De Gruyter and FLTRP on behalf of BFSU

This work is licensed under the Creative Commons Attribution 4.0 International License.
