Article, Open Access

Supporting first-year students in learning molecular orbital theory through a digital learning unit

  • David Johannes Hauck, Andreas Steffen and Insa Melle
Published/Copyright: August 17, 2023

Abstract

A large number of chemistry students drop out of their studies, often because of high requirements for content knowledge. Quantum chemical models of chemical bonding such as molecular orbital (MO) theory are particularly challenging. We aimed to develop an intervention on MO theory based on the Computer-Supported Collaborative Learning framework. First, students work independently with interactive learning videos. Then, they create concept maps about core concepts of MO theory. In this paper, we present the evaluation of this intervention in terms of content knowledge, considering person-specific characteristics. Additionally, we compare three different treatment groups with varying materials and group arrangements, and prospective chemistry teachers with other first-year students. Our results show that students can answer single-choice questions well with the prior knowledge from their first-year chemistry course. Answering open-ended questions is more difficult. Nevertheless, they can improve significantly in both categories by working with the learning videos; creating concept maps does not lead to significant content knowledge changes. There are also no significant differences between the three treatment groups, or between teacher students and other chemistry freshmen. Regarding prior knowledge, differences depending on gender and school-leaving grades can be measured, whereas the choice of courses in school has no effect.

1 Introduction

Recent studies show that more than half of chemistry students in Germany drop out of their studies. Performance problems play at least a “rather large role” for over 80 % of these students and are even the most decisive reason for one third of them (Heublein, 2014; Heublein et al., 2022). International studies report similar trends for Europe (Larsen, 2013), North America (Chen, 2015), and the OECD countries in general (OECD, 2020). A look at university chemistry curricula reveals three areas in which students struggle: First, basic chemical concepts are not sufficiently internalized at school. Second, mathematics and physics are very demanding subsidiary subjects (Averbeck et al., 2018). Third, learning about new subjects introduced in the students’ first semester, namely quantum physical theories of chemical bonding such as Valence Bond and Molecular Orbital (MO) theory, poses a major challenge for students. Understanding and working with these theories requires a high degree of abstract thinking. Additionally, complex mathematical concepts such as integrals or probability density are a prerequisite for a profound understanding (Bouayad et al., 2014; Taber, 2005). During the COVID-19 pandemic, academic success was further hindered by the breakdown of social structures and compensatory mechanisms such as student learning groups. As a result, students in online semesters often felt isolated (Werner et al., 2021).

2 Research design and methods

It is evident that students have an acute need for measures that support them in mastering demanding subject-specific content and give them the opportunity to work together with fellow students, as well as to exchange ideas about complex and difficult topics, such as MO theory. Following a socio-constructivist approach, collaborative frameworks such as Computer-Supported Collaborative Learning (CSCL) are well suited for dealing with such difficult science topics (Kyndt et al., 2013; Stahl & Hakkarainen, 2021; Sung et al., 2017). Building on this framework, we developed, implemented, and evaluated an intervention in the form of a digital-collaborative learning unit on MO theory. The benefits of our approach are twofold: First, we intended to support students at our university in Germany. Therefore, this paper focuses on the intervention’s effect on their subject knowledge (main research question M1). Second, our approach also allowed us to explore research gaps in the field of CSCL, namely the question of how digital-collaborative processes can be designed and controlled effectively (Olsen et al., 2019; Sung et al., 2017). Specifically, we aimed to investigate how the students’ learning progress in small groups could be influenced by the way a preceding individual work phase was conducted, or whether they learn more by working alone (M2). In summary, this paper addresses the following main research questions:

  • M1. Does the learning unit affect our students’ subject knowledge and, if so, to what extent?

  • M2. Is the development of our students’ subject knowledge affected by the way the phases within the digital-collaborative group process are structured and, if so, to what extent?

For a differentiated analysis of the students’ subject knowledge development, we investigated whether the participants’ knowledge development was related to their prior knowledge (subsidiary question S1a). Furthermore, we analyzed the possible influence of person-related characteristics on the students’ prior knowledge (S1b). Last, we investigated whether there were differences depending on the course in which students enrolled (S2). The answers to these subsidiary research questions will facilitate the subsequent transfer to regular teaching at the university:

  • S1. To what extent …

  1. is the development of our students’ subject knowledge affected by their individual prior knowledge?

  2. do person-specific factors, such as gender or the average grade of our students’ final exams at school, influence their prior knowledge?

  • S2. Does the students’ prior knowledge and development across the intervention differ between future chemistry teachers and other undergraduate chemistry students and, if so, to what extent?

To answer these questions, we developed a subject knowledge test which contains 29 single-choice (SC) and 8 open-ended (OE) items (see Supplementary Material for a detailed list of all items). This test covers quantum physical basics of chemical bonding, theoretical basics of MO theory, and the construction and interpretation of simple MO diagrams. The test was validated through an ongoing exchange with the professor of inorganic chemistry who taught the introductory chemistry lecture in the first semester. Both categories were internally consistent (αSC = 0.888; αOE = 0.621). The open-ended questions were evaluated with a self-developed coding scheme and associated guidelines, following the method of structuring content analysis (Mayring, 2015). The reliability of the instrument was ensured through double coding with satisfactory results (ICC = 0.828). A further focus of the study, not discussed in this paper, is the students’ assessment of the different phases of the learning unit (Hauck et al., 2021, in press). The investigation of collaborative activities is also outside the scope of this paper and will be analyzed by means of a manual in a subsequent study.
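The reported internal consistencies can be reproduced with the standard formula for Cronbach’s alpha. A minimal sketch on toy dichotomous item scores (not the study’s data; the function name is ours):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_persons, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the sum scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Toy dichotomous scores (1 = correct, 0 = wrong); the two items covary perfectly
scores = np.array([[1, 1], [1, 1], [0, 0], [0, 0]])
print(cronbach_alpha(scores))  # 1.0 for perfectly covarying items
```

Values of about 0.9 (SC) and 0.6 (OE), as reported above, are conventionally read as good and acceptable internal consistency, respectively.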

Figure 1 shows how the intervention was structured (for an in-depth overview, see Hauck et al., in press). It consists of five seminar sessions, each spanning two hours, taking place after MO theory was introduced in the basic lecture that all chemistry students, about one fifth of them being future chemistry teachers, must attend in the first semester at our university. This allows for a flexible integration of the intervention into any semester in which the topic is introduced.

Figure 1: 
Structure of the five seminar sessions.

In the first seminar session, the students’ prior knowledge is assessed (‘pre’), in addition to the person-specific characteristics mentioned in research question S1b (N = 115; nTS = 22 of them chemistry teacher students (TS) for the upper secondary level, nCS = 88 chemistry or chemical biology students (CS) in a Bachelor of Science program). In session 2, the students work with a digital learning environment (DLE) in the form of four learning videos with interactive elements such as mandatory questions or optional text fields with additional explanations. To compare this phase with later ones and to optimize the videos for a later transfer into teaching practice, the students evaluated the DLE before their subject knowledge was tested again (‘mid’). In the last three sessions, the students created concept maps (concept mapping process, CMP) online, using the web-based CmapTools in the Cloud software. This phase was evaluated in the same way as the DLE, before the students’ knowledge was assessed for a final time in this intervention (‘post’). To answer research question M2, we divided the students into three intervention groups which were parallelized based on the pre-test results in session 1:

  1. All students in G1 (n1 = 39) worked with the same interactive learning videos in the DLE. During the CMP, they created concept maps in small teams of 3–5.

  2. All students in G2 (n2 = 38) also worked with the same interactive learning videos. During the CMP however, they created concept maps individually.

  3. Students in G3 (n3 = 38) also created their concept maps in small teams during the CMP. However, they worked differently in the preceding DLE: Half of the students within each team worked with videos covering exclusively the quantum chemical basics of MO theory; the other half worked with videos covering only the creation and interpretation of MO diagrams.

3 Results

We asked the students whether they had heard of MO theory before their studies at the university began. Nineteen of them answered yes, with only five of them indicating that they had had more than a superficial exposure to it, e. g. in their chemistry courses at school. Thus, we can assume that it was in the context of the lecture and intervention that the vast majority of students first learned about molecular orbital theory.

Table 1 summarizes the results relating to research question M1 and S1a. The single-choice (SC) and open-ended (OE) questions are analyzed separately from each other and for each measurement point (‘pre’/’mid’/’post’). Answering the open-ended questions correctly was more difficult for the students than selecting the correct answer to the single-choice questions.

Table 1:

Results of the subject knowledge test at the three measurement points, separated into single-choice (SC) and open-ended (OE) questions. Students were separated into percentiles low/medium/high based on their pre-test results.

| Percentile | Category | N | M pre | M mid | M post | Effect size pre-mid (η²) | 95 % CI |
|---|---|---|---|---|---|---|---|
| Low | SC | 38 | 0.26 | 0.53 | 0.53 | 0.410 | [0.266, 0.515] |
| Medium | SC | 39 | 0.46 | 0.70 | 0.70 | 0.409 | [0.267, 0.513] |
| High | SC | 38 | 0.71 | 0.82 | 0.84 | 0.196 | [0.073, 0.312] |
| All | SC | 115 | 0.48 | 0.69 | 0.69 | 0.210 | [0.126, 0.267] |
| Low | OE | 38 | 0.05 | 0.22 | 0.26 | 0.281 | [0.141, 0.396] |
| Medium | OE | 39 | 0.20 | 0.29 | 0.32 | 0.114 | [0.021, 0.219] |
| High | OE | 38 | 0.43 | 0.45 | 0.43 | 0.002 | [0.000, 0.027] |
| All | OE | 115 | 0.23 | 0.32 | 0.34 | 0.059 | [0.018, 0.111] |

To analyze the development of subject knowledge, we calculated one-factor ANOVAs for the total sample and for each percentile within each category (OE/SC). In both categories, the students’ test scores increase significantly from ‘pre’ to ‘mid’, with large effect sizes for the total sample. In neither the SC nor the OE category can a significant difference be measured from ‘mid’ to ‘post’.
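For illustration, the mechanics of such an F test and the accompanying η² can be sketched with scipy on synthetic scores that merely mimic the sample means from Table 1. Note that the study’s design is repeated measures on the same students, whereas `scipy.stats.f_oneway` below treats the three measurement points as independent groups, so this is only a sketch of the computation, not a re-analysis:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic normalized scores mimicking the sample means from Table 1
pre  = rng.normal(0.48, 0.15, 115).clip(0, 1)
mid  = rng.normal(0.69, 0.15, 115).clip(0, 1)
post = rng.normal(0.69, 0.15, 115).clip(0, 1)

f, p = stats.f_oneway(pre, mid, post)

# Eta squared = SS_between / SS_total
all_scores = np.concatenate([pre, mid, post])
grand = all_scores.mean()
ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in (pre, mid, post))
ss_total = ((all_scores - grand) ** 2).sum()
eta_sq = ss_between / ss_total
print(f"F = {f:.1f}, p = {p:.2g}, eta^2 = {eta_sq:.3f}")
```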

The test results for the three treatment groups (M2) are summarized in Table 2. Interaction effects between the treatment group (G1/G2/G3) and time (pre/mid/post) were determined via mixed ANOVAs for the SC and OE categories respectively. A correction of the degrees of freedom according to Greenhouse-Geisser was made for the single-choice questions due to a violation of sphericity (Verma, 2016). As Table 2 suggests, we could not uncover a significant interaction for either the single-choice (F(3.394, 190.078) = 0.942, p = 0.430, partial η² = 0.017) or the open-ended (F(4, 224) = 0.421, p = 0.793, partial η² = 0.007) category. Consequently, main effects were investigated for the within-subject factor ‘time’ and the between-subject factor ‘treatment group’ in both categories (see Table 3).

Table 2:

Results of the subject knowledge test for the three treatment groups.

| Treatment group | N | M pre (SC) | M mid (SC) | M post (SC) | M pre (OE) | M mid (OE) | M post (OE) |
|---|---|---|---|---|---|---|---|
| G1 | 39 | 0.48 | 0.71 | 0.70 | 0.25 | 0.36 | 0.37 |
| G2 | 38 | 0.45 | 0.67 | 0.68 | 0.21 | 0.30 | 0.30 |
| G3 | 38 | 0.50 | 0.67 | 0.68 | 0.22 | 0.30 | 0.34 |
| All | 115 | 0.48 | 0.69 | 0.69 | 0.23 | 0.32 | 0.34 |
Table 3:

Main effects in the treatment group*time mixed ANOVA, single-choice (SC) and open-ended (OE) categories. The p values are labeled as follows: ∗ for p < 0.05, ∗∗ for p < 0.01, ∗∗∗ for p < 0.001.

| Factor | F value | p | Effect size (partial η²) |
|---|---|---|---|
| Time (SC) | F(1.697, 190.078) = 0.388 | <0.001*** | 0.545 |
| Group (SC) | F(2, 112) = 0.338 | 0.714 | 0.006 |
| Time (OE) | F(2, 224) = 0.388 | <0.001*** | 0.151 |
| Group (OE) | F(2, 112) = 1.533 | 0.220 | 0.027 |

In accordance with the findings of our preceding analyses regarding research question M1, the factor ‘time’ shows significant effects in our sample. We could not measure any significant differences between the treatment groups for either the SC or OE category.

To answer question S1a, we separated the students into three percentiles (low/medium/high prior knowledge, see Table 1) for the SC and OE questions respectively based on their pre-test results.

As in the total sample, test scores improve significantly in the SC category for students in each percentile through the interactive videos. For the open-ended questions, we found a significant increase from ‘pre’ to ‘mid’ for students with low (large effect) or medium (medium effect) prior knowledge. We could not find significant differences for students with high prior knowledge.

Once again, the students’ scores did not change significantly through the creation of concept maps for any of the three percentiles.

Figure 2 illustrates the students’ pre-test scores depending on the grades with which they finished secondary school. In Germany, these grades range from 1.0 (best) to 4.0 (worst). With regard to research question S1b, we divided our sample into 6 subgroups, each covering half a grade level. For reasons of comparability, only students who graduated in Germany were included in this analysis (NS1b = 106).

Figure 2: 
Students’ single-choice (SC) and open-ended (OE) pre-test scores depending on their final secondary school grades.

Regarding the SC questions, there was a weak positive correlation between the students’ grades and their performance in the pre-test (p = 0.014, r_S = 0.239). This can be attributed to the students with the highest final grades (1.0–1.5 range) having the highest average score and the students with the second lowest final grades (3.1–3.5 range) having the lowest. The students in between (1.6–3.0 range) all achieved similar average scores. In the OE category, on the other hand, no correlation between grades and test scores can be found (p = 0.072, r_S = 0.175). These results must be interpreted with caution due to the small subsample sizes. Because of the vanishingly small subsample, the two students from the lowest grade range (3.6–4.0), who achieved the second-best SC scores (and even the best average OE score), were excluded from this analysis.
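Rank correlations like r_S above are Spearman coefficients; a brief sketch with `scipy.stats.spearmanr` on invented grade-score pairs (with German grades, 1.0 is best, so a negative ρ in this coding means better grades go with higher scores):

```python
from scipy import stats

# Invented pairs: school-leaving grade (1.0 best … 4.0 worst) vs. pre-test score
grades = [1.0, 1.2, 1.4, 1.9, 2.3, 2.5, 2.8, 3.0, 3.3, 3.5]
scores = [0.80, 0.72, 0.65, 0.50, 0.48, 0.52, 0.47, 0.49, 0.30, 0.28]

rho, p = stats.spearmanr(grades, scores)
print(f"r_s = {rho:.3f}, p = {p:.4f}")
```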

Concerning the students’ course choices in upper secondary chemistry, mathematics, biology, and physics, our data does not indicate any correlation regarding pretest scores in either the SC or OE category.

As for gender (see Table 4), male students in our sample achieved better results than female students in the SC category of the pre-test, with a small effect size (unpaired t-test, t(112) = 2.474, p = 0.015, Cohen’s d = 0.465). In the OE category, no significant difference could be found (t(112) = 0.105, p = 0.917).

Table 4:

Students’ pre-test scores depending on their gender.

| Gender | N | M pre (SC) | M pre (OE) |
|---|---|---|---|
| Female | 52 | 0.43 | 0.23 |
| Male | 62 | 0.53 | 0.23 |
| Nonbinary | 1 | 0.55 | 0.19 |
| All | 115 | 0.48 | 0.23 |
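The gender comparison above combines an unpaired t-test with Cohen’s d based on the pooled standard deviation. A sketch on synthetic scores that only mimic the group means and sizes from Table 4 (not the study’s raw data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic SC pre-test scores mimicking the group means (not the study's data)
female = rng.normal(0.43, 0.20, 52).clip(0, 1)
male   = rng.normal(0.53, 0.20, 62).clip(0, 1)

t, p = stats.ttest_ind(male, female)  # unpaired t-test, equal variances assumed

# Cohen's d with the pooled standard deviation
n1, n2 = len(male), len(female)
pooled_sd = np.sqrt(((n1 - 1) * male.var(ddof=1) + (n2 - 1) * female.var(ddof=1))
                    / (n1 + n2 - 2))
d = (male.mean() - female.mean()) / pooled_sd
print(f"t({n1 + n2 - 2}) = {t:.2f}, p = {p:.3f}, d = {d:.2f}")
```

With these group sizes the degrees of freedom come out to 62 + 52 − 2 = 112, matching the t(112) reported above.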

In relation to our last research question S2, we examined our sample for possible differences between students majoring in either chemistry or chemical biology (nCS = 88) and students studying chemistry to become upper secondary school teachers (nTS = 22, see Table 5 for the mean test scores). The five students who did not fall into either of these two groups were excluded from the analysis. In terms of prior knowledge, unpaired t-tests showed no significant difference between students from different programmes in either category (SC: t(108) = 0.488, p = 0.627; OE: t(108) = 0.786, p = 0.433). In order to compare the students’ development over the course of the intervention, we calculated mixed ANOVAs with the within-subject factor ‘time’ (pre/mid/post) and the program the students were enrolled in as the between-subjects factor (CS/TS). In the SC category, the degrees of freedom were once again corrected according to Greenhouse-Geisser due to violation of sphericity.

Table 5:

Results of the subject knowledge test for students majoring in chemistry or chemical biology (CS) and chemistry teacher students (TS).

| Program | N | M pre (SC) | M mid (SC) | M post (SC) | M pre (OE) | M mid (OE) | M post (OE) |
|---|---|---|---|---|---|---|---|
| CS | 88 | 0.48 | 0.69 | 0.70 | 0.24 | 0.34 | 0.36 |
| TS | 22 | 0.46 | 0.66 | 0.64 | 0.20 | 0.26 | 0.33 |

We could not detect a significant interaction between time and the program the students were enrolled in for either the SC (F(1.703, 183.910) = 0.657, p = 0.496) or the OE (F(2,216) = 1.089, p = 0.338) category.

Once again, we found a significant main effect for the within-subjects factor time in both categories but not between the different study programs (see Table 6).

Table 6:

Main effects in the study program*time mixed ANOVA, SC and OE categories. The p values are labeled as follows: * for p < 0.05, ** for p < 0.01, *** for p < 0.001.

| Factor | F value | p | Effect size (partial η²) |
|---|---|---|---|
| Time (SC) | F(1.703, 183.910) = 74.628 | <0.001*** | 0.409 |
| Program (SC) | F(1, 108) = 0.935 | 0.336 | 0.009 |
| Time (OE) | F(2, 216) = 12.718 | <0.001*** | 0.105 |
| Program (OE) | F(1, 108) = 1.228 | 0.270 | 0.011 |
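The partial η² values in Table 6 follow from the F statistics and their degrees of freedom via the standard conversion η²p = F·df1 / (F·df1 + df2), which can be verified directly:

```python
def partial_eta_squared(f, df1, df2):
    """Partial eta squared from an F statistic and its degrees of freedom."""
    return f * df1 / (f * df1 + df2)

# Main effects of 'time' from Table 6
print(round(partial_eta_squared(74.628, 1.703, 183.910), 3))  # 0.409 (SC)
print(round(partial_eta_squared(12.718, 2, 216), 3))          # 0.105 (OE)
```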

4 Discussion

With the data described in the previous section, the research questions raised at the beginning of this article can now be answered and discussed.

Regarding the single-choice category of the subject knowledge test, the students started the intervention with high prior knowledge. As most students had never learned about MO theory before their studies, this prior knowledge probably stems from the preceding lecture (M1). Still, students with low, medium, and high prior knowledge could improve significantly by working with the digital learning environment (S1a). With regard to the open-ended questions, the students’ prior knowledge was much lower (M1). To answer these questions correctly, it is not enough to only recognize correct answers as in the single-choice questions. Actively linking terms through text production proves to be much more difficult and demands more motivation from the students (Krosnick & Presser, 2010). In this category, students with low and medium prior knowledge were able to improve significantly, too. Only students who already started with high prior knowledge did not show any significant changes (S1a). In support of the ICAP hypothesis (Chi & Wylie, 2014), it becomes clear that students should not only learn to passively memorize subject content. Rather, they need to (inter)actively reorganize and transfer it so that they can not only recognize and check off correct answers, but also formulate them themselves.

There were also no significant changes in students’ subject knowledge after the CMP phase, regardless of the form in which they created the concept maps (M1). This can be interpreted negatively in the sense that the students do not seem to learn anything new here. On the other hand, it could also be that without the concept mapping phase, the students would have lost the newly acquired gain in knowledge over the time that had elapsed between the mid- and post-tests. To investigate this possible effect, another study with an additional comparison group would be reasonable, in which the creation of concept maps would not be included at all.

The fact that the choice of course in school has no significant influence on performance in our pretest is not surprising. Course choice cannot play a role when it comes to prior knowledge of a topic that students (with few exceptions) hear about for the first time in the corresponding lecture.

Other preceding studies have identified school leaving grades as a strong predictor for later success at university level (Averbeck et al., 2018; Trapmann et al., 2007). Although the closed-ended pre-test results in the new content area of quantum chemistry/MO theory correlate with the students’ average school leaving grade in our study, we have to be careful not to overinterpret this. The fact that the students in the middle grade ranges (1.6–3.0) all achieved similar mean scores suggests that the correlation only occurred because of the high scores of the students with the best grades (1.0–1.5 range) and the low scores of the students from the 3.1–3.5 range. This argues against the school leaving grade being a good predictor in this particular study. Nonetheless, there are several possible explanations for why the students with the best grades between 1.0 and 1.5 performed best in the SC section of the subject knowledge test: Following the hypothesis that a high school leaving grade is associated with high prior knowledge in general chemistry (Averbeck et al., 2018) – after all, the students chose to study this subject – the students with top grades might also have better starting conditions and have to spend fewer resources on catching up on fundamentals taught at school than those with worse prior knowledge. These additional resources are now available as they engage with the new content of MO theory. Furthermore, a good final school grade can also be an indication for students who are generally good at memorizing, processing and reproducing new complex content. The poor performance of students with final grades between 3.1 and 3.5 however suggests that lower-performing students need more support to avoid being left behind at universities (Trapmann et al., 2007).
At the same time, it should be noted that grades are not the only predictor of academic success or even intelligence – non-cognitive skills such as, for example, self-regulation, self-esteem, or openness to new ideas and diverse opinions might play an even larger role (Kautz et al., 2014). In this study, this can at least be surmised from the good test scores of the two students from the lowest grade level, although this subsample can by no means be considered representative. In general, the vanishingly small subsamples for some grade ranges do not allow a conclusive answer to this part of research question S1b from our data alone. To better correlate prior knowledge with high school grades, a larger sample would be needed, in which all ranges are adequately represented.

The small gender differences in the tests about this mathematically challenging subject follow the PISA trend that male students do slightly better than females in mathematics, and that a higher proportion of the top students in this subject are male (OECD, 2016; OECD, 2019). On the other hand, it should also be noted that both grade level and gender differences appear only in the SC part of the subject test and disappear in the more challenging OE part. To continue arguing along PISA results: The female students’ better writing skills might have a positive impact on text production and compensate for differences in subject knowledge (OECD, 2019). Other studies show that girls are more motivated in verbal subjects (e. g. English) than boys, who in turn are more engaged in math and science (Wirthwein et al., 2020). In general, gender differences are also mediated by variables such as the students’ (ability) self-concept or self-efficacy (Hofer & Stern, 2016; Villafañe et al., 2016) – factors that need to be considered for future analyses.

In the knowledge development of our three different treatment groups, no significant differences could be detected (M2), so that none of the three variants show an obvious advantage or disadvantage compared to the other two. This result may seem sobering, but it shows further potential for the practical implementation of such an intervention: Allowing students to decide which variant (G1, G2 or G3) they want to use to approach the topic may lead to a positive effect on the acceptance of the unit and the students’ motivation.

The finding for research question S2 that test scores of students majoring in chemistry or chemical biology and the scores of prospective chemistry teachers do not differ may seem trivial at first glance, since they are first-year students who attended the same introductory chemistry lecture. However, prejudices exist that students who want to become teachers are less competent, motivated or willing to perform than students majoring in the subject, which in turn can lead to prospective teachers feeling less valued than other students (Carstensen et al., 2021). On the basis of the results presented here, there is no evidence for such prejudices, either in terms of prior knowledge or in terms of development across the intervention. This supports the thesis that there is no “negative selection” into the teacher training programme, i.e. that teacher students do not necessarily show poorer or “less favorable” cognitive or personality characteristics than students enrolled in other programs (Roloff Henoch et al., 2015).

Two limitations must be pointed out: On the one hand, the sub-sample of student teachers is rather small (nTS = 22 vs. nCS = 88). On the other hand, it is also more heterogeneous than that of the students majoring in chemistry, since future teachers at our university do not only study one subject, but at least two. This study did not control for the influence of a second subject studied.

The development of the subject knowledge of individual students is a good first indicator for the design of (learning-)effective collaboration scenarios. Design criteria for such scenarios are of particular relevance in current collaboration research (Chen et al., 2018; Sung et al., 2017). Accordingly, a coarser view on the level of entire small groups is conceivable to make learning success measurable beyond the individual level. In this study, the concept maps created by the students provide a suitable basis for these analyses. We have developed a corresponding manual which is currently in use.

It should also be noted that the quantitative approach presented here is itself subject to limitations: The fact that the creation of concept maps leads to neither better nor worse test scores on average does not imply that this must be true for all participants individually. The next logical step is a more detailed examination of students whose scores improve or deteriorate in the CMP phase, particularly through an analysis of audio and screen recordings of these phases. These data can also help shed more light on the impact of group members on the learning success of individuals (Olsen et al., 2019). Furthermore, it is also conceivable that students learn in this joint phase on levels that cannot be covered by a subject knowledge test alone, for example methodologically with regard to the creation of concept maps, or socially in the collaboration in groups. Thus, this collaborative approach should be explored in further work.

5 Conclusions

In this article, we described a learning-effective seminar unit that enables first-year chemistry students to approach the important, yet challenging, topic of chemical bonding at the quantum level using molecular orbital theory. In this way, our research contributes to a better understanding of how to support students in learning about quantum chemistry at the beginning of their university studies, even though there were phases over which students’ subject knowledge did not seem to change. However, we are also aware that further research, especially with students not only from our university, must follow.

This intervention underlines the value of interdisciplinary collaboration, without which it would not have been possible to develop and conduct the study. The expertise of the professor of inorganic chemistry involved ensured the curricular validity of the intervention and of our test instrument. At the same time, the ongoing exchange also ensured that the learning videos were well adapted to the lecture and to the students’ prior knowledge in particular. Ultimately, the collaboration also had a positive effect at the organisational level: Through the integration into the regular lecture, almost all chemistry students from the introductory year participated in our study.

Furthermore, we were able to analyze the influence of person-specific characteristics such as gender or school leaving grades, which can play an important role in the transition phase between secondary and tertiary education. By comparing three different intervention groups, we were able to explore three possible ways to structure a CSCL-based learning environment for this topic. Nonetheless, a more profound analysis of the students’ concept maps and the audio and video recordings from the collaborative phases may lead to further insights into how such interventions can be designed to be effective for learning – both subject-wise and in terms of transdisciplinary competences such as collaborating in small groups.


Corresponding author: Insa Melle, Chair of Chemistry Education, Department of Chemistry and Chemical Biology, TU Dortmund University, Otto-Hahn-Str. 6, D-44227 Dortmund, Germany, E-mail:

Funding source: Federal Ministry of Education and Research (Germany)

Award Identifier / Grant number: 01JA2001

  1. Author contributions: The authors are responsible for the content of this publication.

  2. Research funding: This project is part of the “Qualitätsoffensive Lehrerbildung”, a joint initiative of the Federal Government and the Länder which aims to improve the quality of teacher training. The programme is funded by the Federal Ministry of Education and Research.

  3. Conflict of interest statement: The authors declare no conflicts of interest regarding this article.

References

Averbeck, D., Hasselbrink, E., & Sumfleth, E. (2018). Academic achievement of chemistry freshmen – interrelations between prerequisites and content knowledge acquisition. In O. Finlayson, E. McLoughlin, S. Erduran, & P. Childs (Eds.), Research, practice and collaboration in science education. Proceedings of ESERA 2017 (pp. 2214–2224). Dublin City University.

Bouayad, A., Kaddari, F., Lachkar, M., & Elachqar, A. (2014). Quantum model of chemical bonding: Barriers and learning difficulties. Procedia – Social and Behavioral Sciences, 116, 4612–4616. https://doi.org/10.1016/j.sbspro.2014.01.994

Carstensen, B., Lindner, C., & Klusmann, U. (2021). Wahrgenommene Wertschätzung im Lehramtsstudium: Fachunterschiede und Effekte auf Wohlbefinden und Abbruchsintention [Perceived appreciation in university teacher education: Subject differences and effects on well-being and intention to quit]. Zeitschrift für Pädagogische Psychologie. https://doi.org/10.1024/1010-0652/a000337

Chen, X. (2015). STEM attrition among high-performing college students: Scope and potential causes. Journal of Technology and Science Education, 5(1), 41–59. https://doi.org/10.3926/jotse.136

Chen, J., Wang, M., Kirschner, P. A., & Tsai, C.-C. (2018). The role of collaboration, computer use, learning environments, and supporting strategies in CSCL: A meta-analysis. Review of Educational Research, 88(6), 799–843. https://doi.org/10.3102/0034654318791584

Chi, M. T. H., & Wylie, R. (2014). The ICAP framework: Linking cognitive engagement to active learning outcomes. Educational Psychologist, 49(4), 219–243. https://doi.org/10.1080/00461520.2014.965823

Hauck, D. J., Melle, I., & Steffen, A. (2021). Molecular orbital theory—teaching a difficult chemistry topic using a CSCL approach in a first-year university course. Education Sciences, 11(9), 485. https://doi.org/10.3390/educsci11090485

Hauck, D. J., Steffen, A., & Melle, I. (in press). A digital collaborative learning environment to support first-year students in learning molecular orbital theory. In M. Rusek, M. Tóthová, & D. Koperová (Eds.), Project-based education and other student-activation strategies and issues in science education XX. Charles University, Faculty of Education.

Heublein, U. (2014). Student drop-out from German higher education institutions. European Journal of Education, 49(4), 497–513. https://doi.org/10.1111/ejed.12097

Heublein, U., Hutzsch, C., & Schmelzer, R. (2022). Die Entwicklung der Studienabbruchquoten in Deutschland [The development of student dropout rates in Germany]. DZHW.

Hofer, S. I., & Stern, E. (2016). Underachievement in physics: When intelligent girls fail. Learning and Individual Differences, 51, 119–131. https://doi.org/10.1016/j.lindif.2016.08.006

Kautz, T., Heckman, J. J., Diris, R., ter Weel, B., & Borghans, L. (2014). Fostering and measuring skills: Improving cognitive and non-cognitive skills to promote lifetime success. OECD Education Working Papers. https://doi.org/10.3386/w20749

Krosnick, J. A., & Presser, S. (2010). Question and questionnaire design. In P. V. Marsden & J. D. Wright (Eds.), Handbook of survey research (pp. 263–313). Emerald.

Kyndt, E., Raes, E., Lismont, B., Timmers, F., Cascallar, E., & Dochy, F. (2013). A meta-analysis of the effects of face-to-face cooperative learning. Do recent studies falsify or verify earlier findings? Educational Research Review, 10, 133–149. https://doi.org/10.1016/j.edurev.2013.02.002

Larsen, M. S. (2013). Dropout phenomena at universities: What is dropout? Why does dropout occur? What can be done by the universities to prevent or reduce it?: A systematic review. Clearinghouse – research series: 2013:15. Danish Clearinghouse for Educational Research.

Mayring, P. (2015). Qualitative content analysis: Theoretical background and procedures. In A. Bikner-Ahsbahs, C. Knipping, & N. Presmeg (Eds.), Approaches to qualitative research 2015 (pp. 365–380). Springer. https://doi.org/10.1007/978-94-017-9181-6_13

OECD. (2016). PISA 2015 results (Volume I): Excellence and equity in education. OECD Publishing.

OECD. (2019). PISA 2018 results (Volume I): What students know and can do. OECD Publishing.

OECD. (2020). Tertiary graduation rate (indicator). https://www.oecd-ilibrary.org/education/tertiary-graduation-rate/indicator/english_15c523d3-en

Olsen, J. K., Rummel, N., & Aleven, V. (2019). It is not either or: An initial investigation into combining collaborative and individual learning using an ITS. International Journal of Computer-Supported Collaborative Learning, 14(3), 353–381. https://doi.org/10.1007/s11412-019-09307-0

Roloff Henoch, J., Klusmann, U., Lüdtke, O., & Trautwein, U. (2015). Who becomes a teacher? Challenging the “negative selection” hypothesis. Learning and Instruction, 36, 46–56. https://doi.org/10.1016/j.learninstruc.2014.11.005

Stahl, G., & Hakkarainen, K. (2021). Theories of CSCL. In U. Cress, J. Oshima, A. F. Wise, & C. Rosé (Eds.), International handbook of computer-supported collaborative learning. Springer International Publishing. https://doi.org/10.1007/978-3-030-65291-3_2

Sung, Y.-T., Yang, J.-M., & Lee, H.-Y. (2017). The effects of mobile-computer-supported collaborative learning: Meta-analysis and critical synthesis. Review of Educational Research, 87(4), 768–805. https://doi.org/10.3102/0034654317704307

Taber, K. S. (2005). Learning quanta: Barriers to stimulating transitions in student understanding of orbital ideas. Science Education, 89(1), 94–116. https://doi.org/10.1002/sce.20038

Trapmann, S., Hell, B., Weigand, S., & Schuler, H. (2007). Die Validität von Schulnoten zur Vorhersage des Studienerfolgs – eine Metaanalyse [The validity of school grades for predicting academic achievement: A meta-analysis]. Zeitschrift für Pädagogische Psychologie, 21(1), 11–27. https://doi.org/10.1024/1010-0652.21.1.11

Verma, J. P. (2016). Repeated measures design for empirical researchers (1st ed.). Wiley.

Villafañe, S. M., Xu, X., & Raker, J. R. (2016). Self-efficacy and academic performance in first-semester organic chemistry: Testing a model of reciprocal causation. Chemistry Education: Research and Practice, 17(4), 973–984. https://doi.org/10.1039/c6rp00119j

Werner, A. M., Tibubos, A. N., Mülder, L. M., Reichel, J. L., Schäfer, M., Heller, S., Pfirrmann, D., Edelmann, D., Dietz, P., Rigotti, T., & Beutel, M. E. (2021). The impact of lockdown stress and loneliness during the COVID-19 pandemic on mental health among university students in Germany. Scientific Reports, 11(1), 22637. https://doi.org/10.1038/s41598-021-02024-5

Wirthwein, L., Sparfeldt, J. R., Heyder, A., Buch, S. R., Rost, D. H., & Steinmayr, R. (2020). Sex differences in achievement goals: Do school subjects matter? European Journal of Psychology of Education, 35(2), 403–427. https://doi.org/10.1007/s10212-019-00427-7


Supplementary Material

This article contains supplementary material (https://doi.org/10.1515/cti-2022-0040).


Received: 2022-10-31
Accepted: 2023-07-02
Published Online: 2023-08-17

© 2023 the author(s), published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
