Article · Open Access

A molecular motion-based approach to entropy and application to phase transitions and colligative properties

  • Vincent Natalis and Bernard Leyh
Published/Copyright: 27 March 2025

Abstract

Freezing point depression and boiling point elevation are colligative properties that are taught in many undergraduate science curricula, often by a discussion of the change in chemical potential of the solution, or by referring to interactions between solute and solvent molecules, which evades the major entropy-driven effect. In this teaching proposal, we suggest introducing thermodynamics by a simplified and visual statistical method based on Maxwell-Boltzmann distribution plots that can be related to entropy. This approach allows for an entropy-based explanation of phase transition temperatures, freezing point depression and boiling point elevation. It focuses on showing that these colligative properties, in the limit of ideal systems, are caused exclusively by the increased number of microstates of the solution compared to the pure solvent. The disorder metaphor, which is often used to make the entropy concept more concrete, may be useful to discuss some aspects of this phenomenon. The statistical approach, however, is a more rigorous way to explain the links between molecular motion, entropy, and colligative properties.

1 Introduction

Colligative properties mainly consist of four phenomena caused by the addition, to a liquid solvent, of a solute that is not volatile and does not dissolve in the solid solvent: vapor pressure lowering, freezing point depression, boiling point elevation and osmotic pressure. In textbooks and introductory thermodynamics courses, colligative properties are often explained through the lowering of the solution’s chemical potential compared to the pure solvent. This approach, elegant from a formal vantage point, circumvents the physical origin of these phenomena, which lies in the increased entropy of the solution compared to the pure solvent. We feel that this didactic choice is a lost opportunity to provide physical insight into entropy, a state function which remains a significant conceptual obstacle for students learning thermodynamics. 1 , 2 , 3 , 4

Pinarbasi, Sözbilir and Canpolat 5 showed that 52 % of pre-service chemistry teachers in their master’s degree program believed that “Boiling point elevation/freezing point depression occurs due to interactions between the water and salt particles” (p. 275, in Table 1). Freezing point depression and boiling point elevation are often incorrectly believed to be caused primarily by attractive intermolecular solute-solvent interactions, which hinder the solvent from either freezing or boiling off. Even though such interactions obviously take place in a real system, they are not the primary cause of colligative properties. This explanation based on interactions, which somehow ‘misses the point’, has been perpetuated by errors in texts and figures of textbooks (e.g. fig. 1.26 from Hill, Petrucci, McCreary and Perry; 6 the mistake does not appear in the English but in the French version), as well as in popular online simulations of osmosis (e.g. Javalab Osmosis Simulation; 7 Simbucket Osmosis Simulation 8 ) or popular science education videos that total almost a million views. 9 , 10 This approach is also marred by a logical inconsistency: if interactions are the core of the explanation, how can mathematical formulas provided in textbooks to evaluate the magnitude of colligative properties be based on the ideal system approximation, which by definition ignores these interactions?

Atkins, De Paula and Keeler’s textbook 11 is probably one of the most influential physical chemistry textbooks worldwide. Let us briefly recall how these authors introduce colligative properties. Colligative properties get their name because colligative “[…] denotes ‘depending on the collection’” (p. 158); these properties depend on the number of particles and are independent of their nature. Under two premises (the solute is not volatile and does not dissolve in the solid solvent), the chemical potential equation is recalled:

(1) $\mu = \mu^{*}_{\mathrm{solvent}} + RT \ln x_{\mathrm{solvent}}$,

with $\mu$ the chemical potential of the solvent in the solution, $\mu^{*}_{\mathrm{solvent}}$ the chemical potential of the pure solvent, R the gas constant, T the temperature and $x_{\mathrm{solvent}}$ the mole fraction of the solvent. With this equation, it is shown that diluting the solvent with a solute reduces the chemical potential of the solvent and thus increases the boiling temperature and reduces the freezing temperature. Then, the authors propose a molecular explanation:

The molecular origin of the lowering of the chemical potential is not the energy of interaction of the solute and solvent particles, because the lowering occurs even in an ideal solution (for which the enthalpy of mixing is zero). If it is not an enthalpy effect, it must be an entropy effect. When a solute is present, there is an additional contribution to the entropy of the solvent which results in a weaker tendency to form the vapour (Fig. 5B.7)[1]. This weakening of the tendency to form a vapour lowers the vapour pressure and hence raises the boiling point. Similarly, the enhanced molecular randomness of the solution opposes the tendency to freeze. Consequently, a lower temperature must be reached before equilibrium between solid and solution is achieved. Hence, the freezing point is lowered. (p. 158–159).

The rest of the proof consists of applying equation (1) in different contexts to explain and quantify how the boiling and freezing temperatures, or the osmotic pressure, are affected. The issue with this explanation is that it relies heavily on a vague metaphor: disorder, also referred to here as “randomness”.

The disorder metaphor is widely used by instructors as a didactic tool to give physical meaning to the concept of entropy, 4 which otherwise remains fairly abstract. Indeed, spatial disorder correlates with entropy in a wide range of cases covered in introductory thermodynamics courses, such as the entropies of the states of matter. This is what Atkins’s legend of figure 5.22 (Figure 1 in this article) refers to: adding a solute to the solvent brings the disorder of the solution closer to the disorder of a gas, and further away from the disorder of a solid, and thus, explains the entropy increase that follows this addition. As Haglund 12 has argued, since metaphors are consubstantial to science communication and teaching, their shortcomings should be clearly laid out, and managed by the use of multiple teaching methods (among which, metaphors) addressing the same concepts.

Figure 1: 
Illustration for the Atkins, De Paula and Keeler’s quote mentioned in the text. It illustrates the “disorder” of the solution and how this disorder creates a “decreased tendency to acquire the disorder characteristic of the vapor”. N.B. this figure comes from the 8th edition because we had access to its pdf version. It is almost identical to figure 5B.7 from the 12th edition. Reproduced with permission from Atkins, P. & De Paula, J. (2006), Oxford University Press through PLSClear. This content is excluded from all forms of open access license, including creative commons, and the content may not be reused without the permission of Oxford University Press – details of how to obtain permission can be found at https://global.oup.com/academic/rights/permissions/?cc=gb&lang=en&.

Still, the disorder metaphor has been specifically criticized for many years in the chemistry education research community, for many reasons: (i) it disregards thermal disorder and emphasizes spatial disorder, always representing disorder as static, as in Atkins, De Paula and Keeler’s figure 5.22 reproduced here (Figure 1), 13 (ii) it does not account for the unimaginably high number of microstates any finite-temperature system possesses, 14 (iii) it works well as a metaphor for specific cases such as the relative entropies of the states of matter, but less so for common phenomena such as a glass of water crystallizing in a freezer, 15 or re-entrant phases, 16 (iv) it imposes a microscopic-oriented explanation onto the macroscopic-oriented teaching of thermodynamics, with few links between the two approaches, 17 (v) it undermines the probabilistic nature of entropy by suggesting that a single microstate where molecules are visually ordered is less likely than another single microstate where the particles are less regularly positioned, 18 (vi) it does not share many common properties with entropy when it comes to gas mixing, 19 and (vii) it is known to be the source of many alternative conceptions among undergraduate students. 20 In the context of colligative properties, two studies 21 , 22 have shown that, when thinking about a freezing liquid, many students hold the alternative conception that the system requires energy to “order the molecules” into a solid, since, analogously, ordering a room or a deck of cards requires some work, that is, some energy, in the sense of a series of actions by a human; this reasoning discards the possibility that spontaneity can arise from the entropy increase of the environment.

Furthermore, it is not so easy to interpret the chemical potential from a submicroscopic, statistical thermodynamics perspective, and the connection between Atkins, De Paula and Keeler’s disorder explanation and the chemical potential can be quite arduous. We believe that focusing on an entropy analysis allows a more straightforward connection to the number of microstates and therefore to a molecular, physically meaningful point of view.

Thus, in agreement with Haglund, 12 we believe that the disorder metaphor retains some merits and should not be discarded, but rather used in complement with other approaches that do not rely on disorder to explain entropy and entropy-related phenomena such as colligative properties. Furthermore, we believe that an entropy-centered, microscopic connection to colligative properties, without reference to the abstract concept of chemical potential, has yet to be proposed.

The method we propose in the present contribution is part of a larger research effort to investigate how basic statistical thermodynamics could be used as the primary tool in university introductory thermodynamics. We question the widespread premise that macroscopic thermodynamics should be introduced first in the science student’s curriculum, followed by a re-interpretation in later years for advanced chemistry, physics or engineering students. Although innovative teaching methods have been proposed in a macroscopic frame, either following a theoretical line, 23 illustrated by concrete examples, 24 or a hands-on approach, 25 we submit here that simplified, didactically transposed statistical thermodynamics with a reduced mathematical load can help alleviate the mysteries surrounding entropy. In doing so, we agree with many propositions from the literature on the microscopic-oriented teaching of entropy. Multiple articles have already highlighted the relevance of Maxwell-Boltzmann distributions to visualize and explain e.g. entropy, 26 temperature, 27 heat capacity, 28 heat and work exchanges, 29 and salt solubility. 30 For an exhaustive list of microscopic approaches, some of which use other microscopic visualizations, see this recent systematic review. 4 We wish to specifically highlight three contributions here. Ellis & Ellis 31 built a demonstration apparatus that connects enthalpy and entropy magnitudes with the dimensions of a transparent box in which particles jiggle up and down thanks to a vibrating floor; in this apparatus, enthalpy is represented by the depth of the box and entropy by the width. Leff 32 published a five-part article explaining both macroscopic- and microscopic-oriented phenomena and, in doing so, addressed many shortcomings linked with the disorder metaphor. Jungermann, 33 whose work inspired this one, proposed a comprehensive teaching sequence to introduce entropy, including its dependence on particle mass, and using Maxwell-Boltzmann distributions to explain simple thermodynamic transformations.

2 Description of the teaching method

Introductory concepts of quantum mechanics are a prerequisite for this method. Students must know that energy, at the submicroscopic level, is quantized, and that energy exchanges between particles take place in discrete amounts. Depending on the mathematical background of the students and the specific curriculum of each university, the proposed method may be most appropriate for upper-level undergraduate students, though it has been used with success in first-year general chemistry courses at the authors’ university. The approach is divided into three stages.

2.1 First stage: macrostates, microstates and the Boltzmann distribution

A macrostate (also frequently called a distribution) is introduced as a macroscopic state defined by a limited amount of information: the number of particles (N), the volume (V), the internal energy (U), and the number of particles N_j on each energy level E_j. A microstate is defined as one of the possible ways to organize the particles on the individual energy levels while respecting the parameters which define the macrostate. To illustrate this point, the easily tractable example of an isolated system consisting of N = 3 distinguishable, interaction-free particles, labeled A, B and C, is discussed. The single-particle energy spectrum consists of equally spaced (spacing ε, where ε is an arbitrary amount of energy), nondegenerate levels. The total energy (internal energy) of the system is chosen as

(2) $U = \sum_j N_j E_j = 3\varepsilon$.

Thus, the average energy per particle 〈E〉 is equal to ε, since

(3) $\langle E \rangle = \frac{U}{N} = \frac{3\varepsilon}{3} = \varepsilon$.

In addition to these conditions (N = 3, U = 3ε), we still need to define the partitioning of the particles on the available single-particle states in order to completely characterize a macrostate. We assume no restriction on the state occupancies. At this point, the student task consists of (i) systematically identifying the possible macrostates respecting the (N = 3, U = 3ε) conditions, and (ii) identifying and counting the microstates, that is, the possible arrangements of the particles on the single-particle states, for each macrostate. The three possible macrostates and their microstates are grouped in Figure 2. If we assume an equal probability for each microstate, then the probability of each macrostate can be computed: P = 0.3 for macrostate 1 (in green in Figure 2), P = 0.6 for macrostate 2 (in red) and P = 0.1 for macrostate 3 (in blue). Here, an important observation can be made to challenge the disorder metaphor of entropy: which macrostate appears the most disordered?
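This counting exercise is small enough to check by hand, but it can also be automated. The following sketch (our own illustration; levels are indexed 0 to 3 in units of ε, one fewer than the five levels drawn in Figure 2, since no particle can exceed 3ε) brute-forces all assignments of the three distinguishable particles:

```python
from itertools import product
from collections import Counter

N_PARTICLES = 3
TOTAL_ENERGY = 3  # in units of epsilon; accessible levels are 0, 1, 2, 3

def enumerate_macrostates(n=N_PARTICLES, u=TOTAL_ENERGY):
    """Enumerate every microstate (assignment of distinguishable particles to
    energy levels) with total energy u, grouped by macrostate, i.e. by the
    occupation numbers (N_0, N_1, ..., N_u)."""
    macrostates = Counter()
    for microstate in product(range(u + 1), repeat=n):
        if sum(microstate) == u:
            occupations = tuple(microstate.count(j) for j in range(u + 1))
            macrostates[occupations] += 1
    return macrostates

macrostates = enumerate_macrostates()
total = sum(macrostates.values())  # 10 microstates in total
for occupations, w in sorted(macrostates.items(), key=lambda kv: -kv[1]):
    print(f"N_j = {occupations}: W = {w}, P = {w / total:.1f}")
```

Running this reproduces the three macrostates of Figure 2 with W = 6, 3 and 1 microstates, i.e. probabilities 0.6, 0.3 and 0.1.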

Figure 2: 
Possible macrostates and microstates for a system containing N = 3 particles sharing an internal energy U = 3ε, ε being an arbitrary quantity of energy. N_j is the number of particles on each level. 〈E〉 is the average energy per particle. The arrow on the left represents increasing energy (E). W is the number of microstates per macrostate. The numbers in brackets represent the number of particles that possess a certain amount of energy, in order of increasing energy levels. For example, a macrostate containing two particles with energy E1 and one particle with energy E3 is noted {2,0,0,1,0} (macrostate 1 of the example).

Macrostate 2, with a broader energy distribution extending over three energy levels? Or macrostate 3, with a fair energy partition among the particles, since disorder is often associated with homogeneity or “spread-outness”? 34 As with many other examples, this simple case shows the vagueness and the lack of precision of the disorder metaphor.

Next, to enhance an intuitive feeling for the most probable distribution of energy among particles, a second example with the same energy spectrum is given: an isolated system of N = 30 distinguishable particles possessing a total energy U = 30ε, that is, the same average energy per particle as in the previous three-particle situation. Given the tremendous number of macrostates, the number of microstates (W) per macrostate is only provided for a reduced range of distributions (Figure 3), which have been selected for their pedagogical interest. In Figure 3, the second column displays the macrostate where the total energy of the system is concentrated on a single particle while the other ones have zero energy: the number of choices, and thus of microstates, is W = 30. The last column (9th) corresponds to an equal sharing of energy among all particles: this macrostate has only one microstate. The intermediate columns (3–8) show macrostates with very high numbers of microstates, and the two most probable are highlighted in red. The energy distributions of these two macrostates display a monotonic decrease of the probability of state occupancy as energy increases. At this point, we inform the students that this already mimics the exponentially decreasing energy distribution for macroscopic systems, which is called the Maxwell-Boltzmann distribution.
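The W values of Figure 3 follow from the multinomial coefficient W = N!/(N_0! N_1! ⋯) for distinguishable particles. A short script (again our own sketch, not part of the original teaching material) can scan every macrostate, i.e. every integer partition of the 30 energy quanta, and locate the most probable one:

```python
from math import factorial

N, U = 30, 30  # number of particles and total energy (in units of epsilon)

def partitions(n, max_part=None):
    """Yield all integer partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def microstate_count(occupations):
    """W = N! / (N_0! N_1! ...) for N distinguishable particles."""
    w = factorial(N)
    for n_j in occupations:
        w //= factorial(n_j)
    return w

# Each partition lists the energies of the particles that carry at least one
# quantum; the remaining particles sit on the ground level (energy 0).
best_w, best_occ = 0, None
for part in partitions(U):
    if len(part) > N:
        continue
    occ = [N - len(part)] + [part.count(j) for j in range(1, U + 1)]
    w = microstate_count(occ)
    if w > best_w:
        best_w, best_occ = w, occ

print("most probable macrostate, lowest levels N_j:", best_occ[:8])
print(f"W = {best_w:.2e}")
```

The extreme cases come out as expected (W = 30 for all the energy on one particle, W = 1 for equal sharing), and the most probable macrostate shows the monotonically decreasing, near-exponential occupation profile discussed above.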

Figure 3: 
A selection of possible macrostates for a system with N = 30 particles and internal energy U = 30ε, where ε is an arbitrary quantity of energy. W is the number of microstates associated with a macrostate. N_j is the number of particles on energy level j. The columns highlighted in red indicate macrostates associated with a very high number of microstates, i.e. the most probable ones, provided all microstates are equiprobable.

A first conclusion is reached. When N increases, a smaller and smaller proportion of macrostates concentrates a larger and larger number of microstates. In other words, these macrostates become vastly more probable than the remaining ones, assuming that all microstates are equiprobable. Thus, when considering a macroscopic system, whose number of particles is ∼10^22 times larger than in this simple N = 30 example, only the most probable macrostate corresponds to the equilibrium state. We then introduce the Boltzmann definition of entropy:

(4) S = k · ln W ,

with k the Boltzmann constant (1.380649 × 10−23 J K−1), and conclude that this macrostate, possessing the highest W, also possesses the highest entropy. Students are told that this principle of maximum entropy at equilibrium is called the second law of thermodynamics. Again, the disorder metaphor needs to be put in perspective here. It is not easy to establish a clear, intuitive connection between a woolly concept like disorder and the Maxwell-Boltzmann distribution, which is the equilibrium, maximum-entropy distribution under the constraint imposed by the selected average internal energy. How can a given amount of disorder be assigned to such a distribution? Is the more or less extended population of excited levels a symptom of chaos, of randomness? Is disorder predictable? Following the advice of, for example, the following authors, 35 , 36 , 37 another descriptor of entropy may be proposed: entropy can be conceptualized as the spreading of the energy, with “S” for “Spreading” as a mnemonic tool. In the case just discussed, it is the spreading of the energy of the particles over the available single-particle states. If the width of this energy distribution increases, so does the entropy. As with the disorder metaphor, the spreading metaphor has some shortcomings and should not be used as the sole descriptor of entropy in a thermodynamics course: this metaphor may also refer to the spatial spreading of particles rather than their energy spreading, and research has shown that students can interpret spreading as a process variable, akin to the transfer of heat. 12

2.2 Second stage: what influences the entropy S = k·ln(W)?

The analysis of Figure 3 also shows that a system initially prepared in the first macrostate of the table (W = 30), with the whole energy concentrated on one particle, or a system prepared in the last macrostate of the table (W = 1), with the energy shared equally across all particles, will spontaneously evolve toward an equilibrium macrostate where energy is partitioned over a larger number of particles (W ≈ 10^15). In addition, increasing the size of the system also increases the density of states and the number of accessible states, leading to a broader energy dispersion and a larger entropy. Globally, it can be said that energy tends to be dispersed over a larger number of particles, a wider space, or more single-particle energy states. Students are first provided with three examples which illustrate different applications of the “spreading principle”.

  1. Two solid objects at different temperatures are brought in contact and end up at the same temperature: heat has spread through vibrational modes throughout the materials; the final equilibrium state corresponds to a maximum spreading of energy over matter and space.

  2. A sample of gas is allowed to expand isothermally in a bigger volume: matter spreads out over a larger space.

  3. A sample of gas is allowed to expand adiabatically in a bigger volume: matter spreads out over a larger space, but this effect is counteracted by the decrease in temperature. Cartier 38 provides visualizations of the Boltzmann distribution for this process which are analogous to the ones used in the present article, as well as submicroscopic visualizations for other gas processes (isobaric, isothermal, and irreversible adiabatic gas expansions).
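Example 2 can be quantified with the standard ideal-gas result ΔS = nR ln(V_final/V_initial); the minimal sketch below (our own, not part of the teaching sequence) computes it for a doubling of the volume:

```python
from math import log

R = 8.314  # molar gas constant, J mol^-1 K^-1

def delta_s_isothermal(n_moles, v_initial, v_final):
    """Entropy change for the isothermal expansion of an ideal gas:
    a larger volume means more accessible translational states, so S rises."""
    return n_moles * R * log(v_final / v_initial)

# Doubling the volume available to one mole of gas:
print(delta_s_isothermal(1.0, 1.0, 2.0))  # ~5.76 J K^-1
```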

Second, we approach the subtler and often neglected topic of energy spreading over individual atomic and molecular degrees of freedom. Students are shown the most important degrees of freedom for a chemist: molecular translation, rotation and vibration, and electronic degrees of freedom. Translation is presented in a rather straightforward manner: atoms and molecules are allowed to move as a whole in the three spatial directions (x, y and z) (motion of the center of mass). Rotations are illustrated with animated movements of the bent water and linear carbon dioxide molecules. Vibrational normal modes are illustrated with animated movements of the same molecules. Electronic degrees of freedom are kept for a subsequent lecture, where the link between entropy and chemical equilibrium can be discussed.

The Maxwell-Boltzmann distribution, which was introduced in the first stage, is then compared for the translational, rotational and vibrational degrees of freedom. The central point here is the huge difference in the size of the energy quanta associated with these motions, as illustrated for O2 (g) at 300 K (Figure 4). Horizontal black lines show the discrete energy levels allowed by quantum mechanics, whereas the lengths of the thicker red lines scale with the state occupancies. The instructor mentions that the representation of the translational energy levels shows very closely spaced black lines because of the huge density of states: many more states are in fact present but cannot be displayed realistically; these levels indeed form a quasi-continuum.

Figure 4: 
Energy levels (in black) and relative populations of molecules on these levels (in red) for three kinds of degrees of freedom (from left to right: translation, rotation, vibration), at 300 K, for a sample of O2 (g).

The students must then discover the strong impact of the different energy spacings: for a gas like O2 at room temperature, while only the fundamental vibrational level is populated, multiple rotational levels are populated, and a tremendous number of translational levels are populated. This indicates directly that the number of microstates associated with the translational motion is much larger than that for the rotational motion, which in turn is larger than that of the vibrational motion. In other words, the contributions of these motions to the total entropy follow the sequence: S_translation ≫ S_rotation > S_vibration.
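The rotation/vibration contrast can be made quantitative with simple Boltzmann factors. The sketch below is our own illustration; it assumes approximate literature constants for O2 (B ≈ 1.44 cm⁻¹, ω ≈ 1580 cm⁻¹, kT ≈ 208.5 cm⁻¹ at 300 K) and, for simplicity, ignores the nuclear-spin statistics that exclude even-J rotational levels in O2:

```python
import math

# Approximate spectroscopic constants for O2, in cm^-1 (literature values);
# nuclear-spin statistics (no even-J levels in O2) are ignored for simplicity.
B_ROT = 1.44        # rotational constant
OMEGA_VIB = 1580.0  # vibrational wavenumber
KT_300K = 208.5     # kT at 300 K, expressed in cm^-1

def rot_population(j):
    """Relative (unnormalized) population of rotational level J:
    degeneracy (2J + 1) times the Boltzmann factor for E_J = B J(J + 1)."""
    return (2 * j + 1) * math.exp(-B_ROT * j * (j + 1) / KT_300K)

def vib_population(v):
    """Relative population of vibrational level v (harmonic approximation),
    with respect to the ground level v = 0."""
    return math.exp(-OMEGA_VIB * v / KT_300K)

# Vibration: the first excited level is essentially empty at 300 K.
print(f"v = 1 relative to v = 0: {vib_population(1):.1e}")  # ~5e-4

# Rotation: dozens of levels carry a significant population.
peak = max(rot_population(j) for j in range(60))
highest = max(j for j in range(60) if rot_population(j) > 0.05 * peak)
print(f"rotational levels holding > 5 % of the peak population: J = 0..{highest}")
```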

Third, based on this useful representation of the energy distributions, we consider the influence of temperature on the number of microstates W and on the entropy S. To do so, we look at the influence of temperature on the Maxwell-Boltzmann distribution of translation, rotation and vibration, while keeping the volume and the number of particles constant. As an example, we show the influence of temperature on the population of rotational energy levels at 300 K, 600 K and 900 K (Figure 5), and students are asked whether they think the distribution has changed more between 300 K and 600 K, or between 600 K and 900 K. A visual inspection reveals that the distribution has spread out more between 300 K and 600 K. To confirm this visual inference, an entropy (S) versus internal energy (U) plot at constant volume is presented (Figure 6).

Figure 5: 
Energy levels (in black) and molecule relative populations on these levels (in red) for the rotational degrees of freedom for a sample of O2 (g) at 300 K, 600 K, and 900 K. The corresponding entropy (S) and internal energy (U) values are also mentioned.

Figure 6: 
Plot of molar entropy (S) versus molar internal energy (U) for the rotational degrees of freedom of O2 (g) at constant volume. In relation to Figure 5, the slope of the logarithmic curve is displayed to be equal to 1/T, to show that, for the same energy input, the increase of entropy is smaller at high temperature than at low temperature.

Figure 6 shows that entropy increases when internal energy increases, but it increases less when the temperature is higher. The same analysis is presented and discussed for the translational and vibrational degrees of freedom. The value of the derivative of S with respect to U (at constant volume and number of particles) is 1/T, thus smaller and smaller as T increases, which is expressed in mathematical terms by equation (5):

(5) $\frac{\mathrm{d}S}{\mathrm{d}U} = \frac{1}{T}$
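This relationship can be checked numerically. The sketch below (our own) uses the high-temperature rigid-rotor expressions for one mole of O2, namely q_rot = T/(σθ_rot), S = R(ln q + 1) and U = RT, with assumed literature values θ_rot ≈ 2.07 K and σ = 2, and verifies that the finite-difference slope of S versus U equals 1/T:

```python
from math import log

R = 8.314         # molar gas constant, J mol^-1 K^-1
THETA_ROT = 2.07  # rotational temperature of O2 in K (hcB/k, with B ~ 1.44 cm^-1)
SIGMA = 2         # symmetry number of the homonuclear O2 molecule

def s_rot(t):
    """Molar rotational entropy from the high-temperature partition function
    q = T / (SIGMA * THETA_ROT): S = R (ln q + 1)."""
    return R * (log(t / (SIGMA * THETA_ROT)) + 1)

def u_rot(t):
    """Molar rotational internal energy of a linear molecule: U = R T."""
    return R * t

# The finite-difference slope dS/dU reproduces 1/T at every temperature:
for t in (300.0, 600.0, 900.0):
    dt = 0.01
    slope = (s_rot(t + dt) - s_rot(t - dt)) / (u_rot(t + dt) - u_rot(t - dt))
    print(f"T = {t:5.0f} K: dS/dU = {slope:.6f} K^-1, 1/T = {1 / t:.6f} K^-1")
```

The same functions also reproduce the trend of Figure 6: S(600 K) − S(300 K) = R ln 2 ≈ 5.8 J mol⁻¹ K⁻¹, whereas S(900 K) − S(600 K) = R ln 1.5 ≈ 3.4 J mol⁻¹ K⁻¹, even though the energy input ΔU = 300R is the same in both steps.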

Based on this relationship, a link can be established between the molecular approach and Clausius’ classical definition of entropy. At constant volume, for an infinitesimal reversible energy uptake, $\mathrm{d}U = \mathrm{d}q_{\mathrm{rev}}$, so that

(6) $\frac{1}{T} = \frac{\mathrm{d}S}{\mathrm{d}q_{\mathrm{rev}}}$,

which corresponds to Clausius’ definition: 39

(7) $\mathrm{d}S = \frac{\mathrm{d}q_{\mathrm{rev}}}{T}$

Depending on time constraints and pedagogical objectives, equation (5) can be given without proof or justified to students using an approach inspired by Kjellander, 40 which is now summarized. Two bodies, thermally isolated from the rest of the universe, a cold one (A) and a hot one (B), are put in contact for a very short amount of time, so that only an infinitesimal amount of heat, dq > 0, is transferred from B to A; this spontaneous process is familiar to everyone and was already taken as a postulate in Clausius’ first works. 41 As no work is done at constant volume, dU_A = dq and dU_B = −dq. Adding heat to a system increases its entropy, removing heat decreases it. In other words, dS_A > 0 and dS_B < 0. As a spontaneous process is accompanied by a total entropy increase, dS_A + dS_B > 0, which requires that |dS_A| > |dS_B|. A numerical illustration of this inequality is available in Figure 7 for one mole of liquid water initially at 10 °C, which is put in contact with one mole of liquid water initially at 90 °C, until equilibrium is reached and both temperatures equalize. This example shows that the entropy increase of the initially cold body is equal to 9.97 J mol−1 K−1, whereas the entropy change of the initially hot body is equal to −8.81 J mol−1 K−1, resulting in a positive total entropy change of 1.16 J mol−1 K−1.
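These numbers are easy to reproduce. The sketch below (our own; the function name and the constant molar heat capacity of liquid water, ≈75.3 J mol⁻¹ K⁻¹, are our assumptions) reproduces the values of the example to within rounding:

```python
from math import log

CP_WATER = 75.3  # assumed molar heat capacity of liquid water, J mol^-1 K^-1

def contact_entropy(t_cold, t_hot, cp=CP_WATER):
    """Entropy changes when one mole of liquid at t_cold is put in contact
    with one mole of the same liquid at t_hot (constant heat capacity)."""
    t_final = (t_cold + t_hot) / 2        # equal heat capacities: arithmetic mean
    ds_cold = cp * log(t_final / t_cold)  # > 0: the cold body gains entropy
    ds_hot = cp * log(t_final / t_hot)    # < 0: the hot body loses entropy
    return ds_cold, ds_hot, ds_cold + ds_hot

ds_cold, ds_hot, ds_total = contact_entropy(283.15, 363.15)  # 10 and 90 deg C
print(f"dS_cold = {ds_cold:+.2f}, dS_hot = {ds_hot:+.2f}, "
      f"dS_total = {ds_total:+.2f} J mol^-1 K^-1")
```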

Figure 7: 
Numerical calculation of the entropy variation for the heat transfer from one mole of cold water (top left, in blue) to one mole of hot water (top right, in red) until their equilibrium temperature is reached (middle left, in green).

As a cold body has a lower temperature (T_A) than a hot body (T_B), the previous argument can be translated into the following inequalities:

(8) $T_A < T_B \;\Rightarrow\; \frac{\mathrm{d}S_A}{\mathrm{d}q} > \frac{|\mathrm{d}S_B|}{\mathrm{d}q}$,

where $\mathrm{d}S/\mathrm{d}q$ is the rate of change of the entropy with respect to the amount of heat transferred.

Because $\mathrm{d}U_A = \mathrm{d}q$ and $\mathrm{d}U_B = -\mathrm{d}q$, equation (8) can be rewritten as:

(9) $T_A < T_B \;\Rightarrow\; \frac{\mathrm{d}S_A}{\mathrm{d}U_A} > \frac{\mathrm{d}S_B}{\mathrm{d}U_B}$.

Equivalently:

(10) $T_A < T_B \;\Rightarrow\; \left(\frac{\mathrm{d}S_A}{\mathrm{d}U_A}\right)^{-1} < \left(\frac{\mathrm{d}S_B}{\mathrm{d}U_B}\right)^{-1}$.

Inequality (10) shows that $(\mathrm{d}S/\mathrm{d}U)^{-1}$ can be used to define a temperature scale. As a matter of fact, equation (11) is the thermodynamic definition of the Kelvin scale of absolute temperature:

(11) $T = \left(\frac{\mathrm{d}S}{\mathrm{d}U}\right)^{-1} \;\Leftrightarrow\; \frac{\mathrm{d}S}{\mathrm{d}U} = \frac{1}{T}$.

If students are familiar with partial derivatives, a more rigorous way of writing equation (11) can be introduced, emphasizing that the amount of matter (number of particles, N) and the volume (V) are kept constant:

(11′) $T = \left(\frac{\partial S}{\partial U}\right)_{N,V}^{-1} \;\Leftrightarrow\; \left(\frac{\partial S}{\partial U}\right)_{N,V} = \frac{1}{T}$.

This is however not essential in a first approach.
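For instructors who want a concrete check of equation (11′), the finite-difference sketch below uses a hypothetical illustration: one mole of an ideal monatomic gas, for which U = (3/2)RT and the energy-dependent part of S at fixed N and V is (3/2)R ln U. The numerical derivative dS/dU indeed returns 1/T.

```python
import math

# Numerical check of (dS/dU)_{N,V} = 1/T for one mole of an ideal monatomic gas.
# For such a gas U = (3/2) R T and, at fixed N and V, S(U) = (3/2) R ln(U) + const,
# so the analytic derivative is dS/dU = (3/2) R / U = 1/T.
R = 8.314  # gas constant, J mol^-1 K^-1

def s_of_u(u):
    # Energy-dependent part of the molar entropy at fixed N, V
    # (the additive constant drops out of the derivative).
    return 1.5 * R * math.log(u)

T = 300.0             # temperature, K
U = 1.5 * R * T       # internal energy at that temperature, J mol^-1
dU = 1e-3             # small energy step, J

# Central finite difference approximating (dS/dU) at U.
dS_dU = (s_of_u(U + dU) - s_of_u(U - dU)) / (2 * dU)
print(dS_dU, 1 / T)   # both approximately 3.33e-3 K^-1
```

The same check can be repeated at several temperatures to show that dS/dU decreases as T increases, which is exactly claim 2 above.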

To conclude this second stage, students are reminded of the two main claims resulting from our analysis.

  1. Stranslation > Srotation > Svibration in general, because Wtranslation > Wrotation > Wvibration, because ΔEtranslation ≪ ΔErotation < ΔEvibration (ΔE being the energy difference between two adjacent energy levels).

  2. Entropy increases when energy increases, but less and less as the total internal energy increases. For the same amount of energy ΔU added to the system, the increase in entropy is larger at low temperature than at high temperature. This point is essential to understand the colligative properties which will now be discussed.

2.3 Third and final stage: an entropy-centered explanation of colligative properties associated with phase transitions

The same kinds of graphs as in Figures 4 and 5 are used to illustrate the increase of entropy of the solution following the addition of a solute to the solvent. Figure 8A depicts the qualitative energy distributions over the states of a pure solvent in the solid, liquid and gas phases. All degrees of freedom (translation, rotation, vibration) are combined in these graphs, which are consistent with the entropy sequence expected and well known to the students: Sgas > Sliquid > Ssolid. The differences in the densities of states for the three phases result from the change in the nature of the degrees of freedom. Molecules in the gas phase have translational, rotational and vibrational degrees of freedom, whereas only vibrational motions take place in solids. The liquid state displays an intermediate behavior. The latent heats of phase transition (solid → liquid and liquid → gas) appear as energy gaps between the ground states of the corresponding phases. These gaps are highlighted with thick blue arrows in Figure 8A. As constant pressure is assumed, these heats of transition correspond to ΔHfus and ΔHvap, respectively, for the fusion and vaporization transitions. We believe that these diagrams represent an interesting molecular quantum-state approach to phase transitions, focusing on the increasing density of states in the solid < liquid < gas sequence.

Figure 8: 
Visualizations of the entropy and enthalpy changes for phase transition. (A) Qualitative relative populations of molecules on energy levels (representing the combination of all available degrees of freedom) of the same pure solvent in its solid, liquid and gaseous forms. (B) Same representation as (A) but a non-volatile solute has been added to the liquid solvent, creating a solution with a higher number of microstates, and thus a higher entropy, than for case (A). (C) Qualitative plots of enthalpy and entropy in cases (A) and (B).

Figure 8B depicts the same information when a solute is dissolved into the solvent. We assume that the solute is non-volatile and that it does not co-crystallize with the solvent, so that the energy distributions for the solid and gas phases remain unaffected. Only the solution distribution is modified, with a larger density of states due to the presence of the additional dissolved particles (molecules or ions). In other words, this figure provides a complementary, statistically oriented point of view to Atkins, De Paula and Keeler's disorder (or randomness) explanation presented in the introduction.

The qualitative diagrams of Figure 8A and B illustrate why ΔHfus and ΔSfus are both positive: the solid to liquid transition correlates with an increase in internal energy to populate higher-energy states and with an increase of the number of accessible single-particle states, leading to a larger number of microstates and a larger entropy. The same remark can be made about the transition of liquid to gas.

The key demonstration is based on three main premises:

  1. We consider the equilibrium phase transition processes solid ⇌ liquid and liquid ⇌ gas. Such processes are reversible. The temperature therefore remains constant and equal to Ttr (tr for “transition”, either fus, “fusion”, or vap, “vaporization”). In addition, we assume a constant pressure. The fundamental equation dS = dqrev/T, which has been discussed in the second stage of the paper, can then be rewritten, after integration, as

(12) ΔStr = ΔHtr/Ttr.

  2. As mentioned above, a solution that is comprised of a solvent and a solute has a higher density of states than the pure liquid solvent. There are thus more accessible energy levels, leading to a larger number of microstates and a higher entropy (Figure 8B).

  3. The solution is considered ideal: solute-solvent intermolecular interactions are considered equivalent to solute-solute and solvent-solvent interactions. Therefore, the heats of fusion and of vaporization remain nearly unaffected: ΔHfus,solution ≅ ΔHfus,pure and ΔHvap,solution ≅ ΔHvap,pure.
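Equation (12) can be made concrete with standard literature values for water, assumed here (ΔHfus ≈ 6.01 kJ mol−1 at 273.15 K; ΔHvap ≈ 40.66 kJ mol−1 at 373.15 K). A minimal sketch:

```python
# Entropy of transition from equation (12), dS_tr = dH_tr / T_tr,
# illustrated with standard literature values for water.
dh_fus, t_fus = 6010.0, 273.15      # J mol^-1, K (fusion)
dh_vap, t_vap = 40660.0, 373.15     # J mol^-1, K (vaporization)

ds_fus = dh_fus / t_fus             # approx. 22 J mol^-1 K^-1
ds_vap = dh_vap / t_vap             # approx. 109 J mol^-1 K^-1

# Both transition entropies are positive, and dS_vap >> dS_fus:
# the gain in accessible states is far larger on vaporization than on fusion.
print(f"dS_fus = {ds_fus:.1f} J mol^-1 K^-1")
print(f"dS_vap = {ds_vap:.1f} J mol^-1 K^-1")
```

That ΔSvap is roughly five times ΔSfus is consistent with the much larger increase in the density of states on going from liquid to gas in Figure 8A.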

From these premises, a short rationale can be drawn out, either in a qualitative, non-mathematical way, or using equation (12).

The qualitative explanation relies on the second law: we consider reversible processes at equilibrium, where the total entropy is maximum, i.e., ΔStotal = ΔSsystem + ΔSenvironment = 0. Heat is exchanged between the system and the environment (heat bath). Consider the vaporization process. It is associated with ΔSsystem > 0 and thus ΔSenvironment < 0, as a result of the heat transfer from the environment to the system. Now, we have seen that the entropy uptake upon vaporization of the solution is smaller than for the pure liquid. The entropy decrease must then be smaller in magnitude for the environment. Because the heat transfer is roughly the same (remember the third premise: ΔHvap,solution ≅ ΔHvap,pure), the transition temperature must be higher: at higher temperature, entropy changes less for the same heat transfer. This explains the boiling point elevation. A similar argument leads to the freezing point depression.

Now, students can also be provided with a mathematical argument, and it is instructive to ask them to identify the common points of the physical and the mathematical vantage points.

Since ΔStr = ΔHtr/Ttr, we have Tfus = ΔHfus/ΔSfus. If we compare Tfus,pure and Tfus,solution, we see that (using the same color code as in Figure 8C):

(13) Tfus,pure = ΔHfus,pure/ΔSfus,pure.

Then,

(14) Tfus,solution = ΔHfus,solution/ΔSfus,solution ≅ ΔHfus,pure/ΔSfus,solution < ΔHfus,pure/ΔSfus,pure = Tfus,pure
(15) Tfus,solution < Tfus,pure.

The variation of the fusion temperature is thus, in the approximation of an ideal system (ΔHfus,solution ≅ ΔHfus,pure), an entropic effect. The same kind of inequality can be derived for the boiling point elevation:

(16) Tvap,pure = ΔHvap,pure/ΔSvap,pure.

Then,

(17) Tvap,solution = ΔHvap,solution/ΔSvap,solution ≅ ΔHvap,pure/ΔSvap,solution > ΔHvap,pure/ΔSvap,pure = Tvap,pure
(18) Tvap,solution > Tvap,pure.
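The inequalities (13)–(18) can also be turned into numbers. The sketch below assumes an ideal solution and borrows the standard ideal-mixing result, not derived in this paper, that the solvent's molar entropy in solution is raised by −R·ln(xsolvent); the water data and the solvent mole fraction xsolvent = 0.98 are illustrative assumptions.

```python
import math

# Numerical sketch of equations (13)-(18) for an ideal solution.
# The liquid's molar entropy is raised by -R*ln(x_solvent) (standard ideal
# mixing term), while dH_fus and dH_vap are unchanged (third premise).
R = 8.314                              # gas constant, J mol^-1 K^-1
dh_fus, t_fus_pure = 6010.0, 273.15    # water fusion data, J mol^-1 and K
dh_vap, t_vap_pure = 40660.0, 373.15   # water vaporization data
x_solvent = 0.98                       # assumed solvent mole fraction

ds_fus_pure = dh_fus / t_fus_pure      # equation (12) applied to each transition
ds_vap_pure = dh_vap / t_vap_pure

# Solid -> solution: the final state (solution) carries the extra mixing
# entropy, so dS_fus increases; solution -> gas: the initial state carries it,
# so dS_vap decreases.
ds_fus_solution = ds_fus_pure - R * math.log(x_solvent)
ds_vap_solution = ds_vap_pure + R * math.log(x_solvent)

t_fus_solution = dh_fus / ds_fus_solution   # eq. (14): below 273.15 K
t_vap_solution = dh_vap / ds_vap_solution   # eq. (17): above 373.15 K

print(f"freezing point: {t_fus_solution:.2f} K (depression)")
print(f"boiling point:  {t_vap_solution:.2f} K (elevation)")
```

With these inputs the sketch gives a freezing point near 271.1 K (a depression of about 2 K) and a boiling point near 373.7 K (an elevation of about 0.6 K), both obtained while holding the enthalpies fixed: purely entropic shifts, as the derivation claims.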

3 Conclusion

We have proposed a teaching method to develop a submicroscopic, statistical view of entropy and to connect it to two colligative properties (freezing point depression and boiling point elevation). This approach can be introduced at early stages of higher education, that is, before the introduction of the chemical potential. It relies directly on the second law of thermodynamics, interpreted using elementary statistical thermodynamics, avoiding an excessive mathematical cognitive load. Most of the arguments are based on a qualitative interpretation of graphical representations of Boltzmann distributions. The approach encompasses three stages: (i) introducing the basics of statistical thermodynamics (macrostates, microstates, Maxwell-Boltzmann distribution, Boltzmann's entropy S = k·ln(W)); (ii) discussing, using a visual diagrammatic approach, the impact of the molecular degrees of freedom on the entropy, the relationship between entropy and internal energy, and the link with Clausius' entropy definition; (iii) finally, using this introductory statistical thermodynamics to show how the freezing point depression and the boiling point elevation derive from different values of ΔSphase transition for the solution and the pure liquid, which are themselves consequences of the higher number of microstates in the solution compared to the pure liquid.


Corresponding author: Vincent Natalis, Research Unit DIDACTIfen, Laboratory of Chemistry Education, University of Liège, Allée du Six Aout 11, 4000 Liège, Belgium, E-mail:

Acknowledgements

We would like to express our profound gratitude to all the students who participated in the testing of this method, as well as to Prof. Loïc Quinton for his help and advice. We wish to thank Dr. Brigitte Nihant and Mr. Martin Blavier for their helpful comments and proofreading.

  1. Research ethics: Not applicable.

  2. Informed consent: Not applicable.

  3. Author contributions: All authors have accepted responsibility for the entire content of this manuscript and approved its submission.

  4. Use of Large Language Models, AI and Machine Learning Tools: None declared.

  5. Conflict of interest: The authors state no conflict of interest.

  6. Research funding: None declared.

  7. Data availability: Not applicable.

References

1. Atarés, L.; Canet, M. J.; Trujillo, M.; Benlloch-Dualde, J.; Paricio Royo, J.; Fernandez-March, A. Helping Pregraduate Students Reach Deep Understanding of the Second Law of Thermodynamics. Educ. Sci. 2021, 11 (9), 539–553. https://doi.org/10.3390/educsci11090539.

2. Bennett, J. M.; Sözbilir, M. A. Study of Turkish Chemistry Undergraduates’ Understanding of Entropy. J. Chem. Educ. 2007, 84 (7), 1204–1208. https://doi.org/10.1021/ed084p1204.

3. Crossette, N.; Vignal, M.; Wilcox, B. R. Investigating Graduate Student Reasoning on a Conceptual Entropy Questionnaire. Phys. Rev. Phys. Educ. Res. 2021, 17 (2), 020119. https://doi.org/10.1103/PhysRevPhysEducRes.17.020119.

4. Natalis, V.; Leyh, B. Improving the Teaching of Entropy and the Second Law of Thermodynamics: A Systematic Review with Meta-Analysis. Chem. Educ. Res. Pract. 2025, 26, 9–33. https://doi.org/10.1039/D4RP00158C.

5. Pinarbasi, T.; Sozbilir, M.; Canpolat, N. Prospective Chemistry Teachers’ Misconceptions About Colligative Properties: Boiling Point Elevation and Freezing Point Depression. Chem. Educ. Res. Pract. 2009, 10 (4), 273–280. https://doi.org/10.1039/B920832C.

6. Hill, J.; Petrucci, R.; McCreary, S.; Perry, S. Chimie Générale, 2nd ed.; Pearson: 2008.

7. Javalab Osmosis Simulation. https://javalab.org/en/osmosis_en/ (accessed 2025-01-13).

8. Simbucket Osmosis Simulation. https://www.simbucket.com/simulation/osmosis/ (accessed 2025-01-13).

9. How Does Salt Melt Ice? 2015. https://www.youtube.com/watch?v=JkhWV2uaHaA&t=73s.

10. How to Clear Icy Roads, With Science. 2024. https://www.youtube.com/watch?v=7Zau3jgUJWU&t=47s.

11. Atkins, P.; De Paula, J.; Keeler, J. Atkins’ Physical Chemistry, 12th ed.; Oxford University Press: Oxford, 2023.

12. Haglund, J. Good Use of a ‘Bad’ Metaphor: Entropy as Disorder. Sci. Educ. 2017, 26 (3–4), 205–214. https://doi.org/10.1007/s11191-017-9892-4.

13. Lambert, F. L. Shuffled Cards, Messy Desks, and Disorderly Dorm Rooms – Examples of Entropy Increase? Nonsense. J. Chem. Educ. 1999, 76 (10), 1385–1387. https://doi.org/10.1021/ed076p1385.

14. Kozliak, E. I.; Lambert, F. L. “Order-to-Disorder” for Entropy Change? Consider the Numbers! Chem. Educ. 2005, 10, 24–25.

15. Atarés, L.; Canet, M. J.; Pérez-Pascual, A.; Trujillo, M. Undergraduate Student Thinking on the Threshold Concept of Entropy. J. Chem. Educ. 2024, 101 (5), 1798–1809. https://doi.org/10.1021/acs.jchemed.3c00381.

16. Styer, D. Insight into Entropy. Am. J. Phys. 2000, 68 (12), 1090–1096. https://doi.org/10.1119/1.1287353.

17. Jeppsson, F.; Haglund, J.; Strömdahl, H. Exploiting Language in Teaching Entropy. J. Balt. Sci. Educ. 2011, 10 (1), 27–35.

18. Atarés, L.; Canet, M. J.; Trujillo, M.; Paricio, J. The First Step to Address the Teaching of Entropy. Phys. Teach. 2024, 62 (4), 287–289. https://doi.org/10.1119/5.0135846.

19. Ben-Naim, A. Entropy: Order or Information. J. Chem. Educ. 2011, 88 (5), 594–596. https://doi.org/10.1021/ed100922x.

20. Christensen, W. M.; Meltzer, D. E.; Ogilvie, C. A. Student Ideas Regarding Entropy and the Second Law of Thermodynamics in an Introductory Physics Course. Am. J. Phys. 2009, 77 (10), 907–917. https://doi.org/10.1119/1.3167357.

21. Natalis, V.; Quinton, L.; Leyh, B. Identification and Evolution of Alternative Conceptions of Entropy: STEM Undergraduates Before and After a Macroscopic Thermodynamics Course. [Submitted Manuscript], 2024.

22. Natalis, V.; Leyh, B. Promoting Conceptual Change When Teaching Entropy to First-Year Undergraduates: What Impact has an Introductory Statistical Approach on Alternative Conceptions? Int. J. Sci. Educ. 2024 (in press). https://doi.org/10.1080/09500693.2025.2460049.

23. Fischer, P. J.; Hanson, R. M.; Riley, P.; Schwinefus, J. Using Graphs of Gibbs Energy Versus Temperature in General Chemistry Discussions of Phase Changes and Colligative Properties. J. Chem. Educ. 2008, 85 (8), 1142–1145. https://doi.org/10.1021/ed085p1142.

24. Novo, M.; Reija, B.; Al-Soufi, W. Freezing Point of Milk: A Natural Way to Understand Colligative Properties. J. Chem. Educ. 2007, 84 (10), 1673–1675. https://doi.org/10.1021/ed084p1673.

25. Wickware, C. L.; Day, C. T. C.; Adams, M.; Orta-Ramirez, A.; Snyder, A. B. The Science of a Sundae: Using the Principle of Colligative Properties in Food Science Outreach Activities for Middle and High School Students. J. Food Sci. Educ. 2017, 16 (3), 92–98. https://doi.org/10.1111/1541-4329.12112.

26. Kozliak, E. I. Introduction of Entropy Via the Boltzmann Distribution in Undergraduate Physical Chemistry: a Molecular Approach. J. Chem. Educ. 2004, 81 (11), 1595–1598. https://doi.org/10.1021/ed081p1595.

27. Masthay, M. B.; Fannin, H. B. Positive and Negative Temperatures in a Two-Level System: Thermodynamic and Statistical-Mechanical Perspectives. J. Chem. Educ. 2005, 82 (6), 867. https://doi.org/10.1021/ed082p867.

28. Lambert, F. L. Disorder – a Cracked Crutch for Supporting Entropy Discussions. J. Chem. Educ. 2002, 79 (2), 187–192. https://doi.org/10.1021/ed079p187.

29. Novak, I. The Microscopic Statement of the Second Law of Thermodynamics. J. Chem. Educ. 2003, 80 (12), 1428–1431. https://doi.org/10.1021/ed080p1428.

30. Gil, V. M. S.; Paiva, J. C. M. Using Computer Simulations to Teach Salt Solubility. The Role of Entropy in Solubility Equilibrium. J. Chem. Educ. 2006, 83 (1), 170. https://doi.org/10.1021/ed083p170.

31. Ellis, F. B.; Ellis, D. C.; Lambert, F. L. An Experimental Approach to Teaching and Learning Elementary Statistical Mechanics. J. Chem. Educ. 2008, 85 (1), 78–82. https://doi.org/10.1021/ed085p1191.

32. Leff, H. S. Removing the Mystery of Entropy and Thermodynamics – Part I. Phys. Teach. 2012, 50 (1), 28–31. https://doi.org/10.1119/1.3670080.

33. Jungermann, A. H. Entropy and the Shelf Model: A Quantum Physical Approach to a Physical Property. J. Chem. Educ. 2006, 83 (11), 1686–1694. https://doi.org/10.1021/ed083p1686.

34. Sklar, L. Physics and Chance: Philosophical Issues in the Foundations of Statistical Mechanics; Cambridge University Press: Cambridge, 1993. https://doi.org/10.1017/CBO9780511624933.

35. Lambert, F. L. The Conceptual Meaning of Thermodynamic Entropy in the 21st Century. Int. Res. J. Pure Appl. Chem. 2011, 1 (3), 65–68. https://doi.org/10.9734/IRJPAC/2011/679.

36. Leff, H. S. Thermodynamic Entropy: The Spreading and Sharing of Energy. Am. J. Phys. 1996, 64 (10), 1261–1271. https://doi.org/10.1119/1.18389.

37. Phillips, J. A. The Macro and Micro of it is That Entropy is the Spread of Energy. Phys. Teach. 2016, 54 (6), 344–347. https://doi.org/10.1119/1.4961175.

38. Cartier, S. F. The Statistical Interpretation of Classical Thermodynamic Heating and Expansion Processes. J. Chem. Educ. 2011, 88 (11), 1531–1537. https://doi.org/10.1021/ed100713d.

39. Clausius, R. The Mechanical Theory of Heat; MacMillan and Co.: London, 1879.

40. Kjellander, R. Thermodynamics Kept Simple: A Molecular Approach; CRC Press: Boca Raton, 2016. https://doi.org/10.1201/b18907.

41. Clausius, R. Ueber Die Bewegende Kraft der Wärme Und Die Gesetze, Welche Sich Daraus Für Die Wärmelehre Selbst Ableiten Lassen [On the Moving Force of Heat and the Laws that can be Derived from it for the Theory of Heat Itself]. Ann. Phys. 1850, 79 (4), 368–397; 500–524. https://doi.org/10.1002/andp.18501550403.

Received: 2024-10-30
Accepted: 2025-02-23
Published Online: 2025-03-27

© 2025 the author(s), published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
