Once we know what energy a given transition would have, we can ask, “Which transitions between energy levels or states are possible?” In answering this question, we also will learn why the longer cyanine dye molecules have stronger absorptions, or larger absorption coefficients.
Clearly the transitions cannot violate the Pauli Exclusion Principle; that is, they cannot produce an electron configuration with three electrons in the same orbital. Besides the Pauli Exclusion Principle, there are additional restrictions that result from the nature of the interaction between electromagnetic radiation and matter. These restrictions are summarized by spectroscopic selection rules. These rules tell whether or not a transition from one state to another state is possible.
To obtain these selection rules, we consider light as consisting of perpendicular oscillating electric and magnetic fields. The magnetic field interacts with magnetic moments and causes the transitions seen in electron spin resonance and nuclear magnetic resonance spectroscopies. The oscillating electric field interacts with electrical charges, i.e. the positive nuclei and negative electrons that comprise an atom or molecule, and causes the transitions seen in UV-visible, atomic absorption, and fluorescence spectroscopies.
The energy of interaction, $E$, between a system of charged particles and an electric field $\epsilon$ is given by the scalar product of the electric field and the dipole moment, $μ$, for the system. Both of these quantities are vectors.
$E = - \mu \cdot \epsilon \label {4-20}$
The dipole moment is defined as the summation of the product of the charge $q_j$ times the position vector $r_j$ for all charged particles $j$.
$\mu = \sum _j q_j r_j \label {4-21}$
Example $1$
Calculate the dipole moment of HCl from the following information. The position vectors below use Cartesian coordinates (x, y, z), and the units are pm. What fraction of an electronic charge has been transferred from the chlorine atom to the hydrogen atom in this molecule? $r_H = (124.0, 0, 0)$, $r_{Cl} = (-3.5, 0, 0)$, $q_H = 2.70 \times 10^{-20}\ C$, $q_{Cl} = -2.70 \times 10^{-20}\ C$.
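The arithmetic in this example can be checked with a short script. The following Python sketch (an illustration added here, not part of the original exercise) evaluates the sum in Equation $\ref{4-21}$ with the charges and positions given above; the debye conversion factor and the value of the elementary charge are standard constants, not values from the source.

```python
import numpy as np

# Charges (C) and position vectors (pm -> m) as given in the example
q_H, q_Cl = 2.70e-20, -2.70e-20
r_H = np.array([124.0, 0.0, 0.0]) * 1e-12
r_Cl = np.array([-3.5, 0.0, 0.0]) * 1e-12

# Dipole moment: mu = sum_j q_j r_j  (Equation 4-21)
mu = q_H * r_H + q_Cl * r_Cl                 # C·m
mu_debye = np.linalg.norm(mu) / 3.33564e-30  # 1 D = 3.33564e-30 C·m

# Fraction of an electronic charge transferred from Cl to H
fraction = q_H / 1.602176634e-19

print(mu[0], mu_debye, fraction)  # ~3.44e-30 C·m, ~1.03 D, ~0.17 e
```

The computed moment is about 1 D, a typical magnitude for a polar diatomic molecule, and corresponds to roughly 17% of an electronic charge separated over the bond length.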
Example $2$
Sketch a diagram for Example $1$ showing the coordinate system, the $\ce{HCl}$ molecule and the dipole moment.
To calculate an expectation value for this interaction energy, we need to evaluate the expectation value integral.
$\left \langle E \right \rangle = \int \psi ^*_n ( - \hat {\mu} \cdot \hat {\epsilon} ) \psi _n d \tau \label {4-22}$
The $\int d \tau$ symbol simply means integrate over all coordinates. The operators $\hat {\mu}$ and $\hat {\epsilon}$ are vectors and are the same as the classical quantities, μ and $\epsilon$.
Usually the wavelength of light used in electronic spectroscopy is very long compared to the length of a molecule. For example, the wavelength of green light is 550 nm, which is much larger than molecules, which are closer to 1 nm in size. The magnitude of electric field then is essentially constant over the length of the molecule, and $\epsilon$ can be removed from the integration since it is constant wherever ψ is not zero. In other words, ψ is finite only over the volume of the molecule, and the electric field is constant over the volume of the molecule. What remains for the integral is the expectation value for the permanent dipole moment of the molecule in state $n$, namely,
$\left \langle \mu \right \rangle= \int \psi ^*_n \hat {\mu} \psi _n d \tau \label {4-23}$
so
$E = - \left \langle \mu \right \rangle \cdot \epsilon \label {4-24}$
Example $3$
Verify that the vectors in the scalar product in Equation $\ref{4-24}$ commute by expanding $\mu \cdot \epsilon$ and $\epsilon \cdot \mu$. Use particle-in-a-box wave functions with HCl charges and coordinates.
Equation $\ref{4-24}$ shows that the strength or energy of the interaction between a charge distribution and an electric field depends on the dipole moment of the charge distribution.
To obtain the strength of the interaction that causes transitions between states, the transition dipole moment is used rather than the dipole moment. The transition dipole moment integral is very similar to the dipole moment integral in Equation $\ref{4-23}$ except that the two wavefunctions are different, one for each of the states involved in the transition. Two different states appear in the integral because the transition dipole moment measures the magnitude of the interaction with the electric field that causes a transition between those two states. For a transition where the state changes from $ψ_i$ to $ψ_f$, the transition dipole moment integral is
$\left \langle \mu \right \rangle _T = \int \psi ^*_f \hat {\mu} \psi _i d \tau = \mu _T \label {4-25}$
Just like the probability density is given by the absolute square of the wavefunction, the probability for a transition as measured by the absorption coefficient is proportional to the absolute square $\mu ^*_T \mu _T$ of the transition dipole moment, which is calculated using Equation $\ref{4-25}$. Since taking the absolute square always produces a positive quantity, it does not matter whether the transition moment itself is positive, negative, or imaginary. The transition dipole moment integral and its relationship to the absorption coefficient and transition probability can be derived from the time-dependent Schrödinger equation. Here we only want to introduce the concept of the transition dipole moment and use it to obtain selection rules and relative transition probabilities for the particle-in-a-box. Later it will be applied to other systems that we will be considering.
If $μ_T = 0$ then the interaction energy is zero and no transition occurs or is possible between states characterized by $ψ_i$ and $ψ_f$. Such a transition is said to be forbidden, or more precisely, electric-dipole forbidden. In fact, the electric-dipole electric-field interaction is only the leading term in a multipole expansion of the interaction energy, but the higher order terms in this expansion usually are not significant. If $μ_T$ is large, then the probability for a transition and the absorption coefficient are large.
It is very useful to be able to tell whether a transition is possible, $μ_T ≠ 0$, or not possible, $μ_T = 0$, without having to evaluate integrals. Properties of the wavefunctions such as symmetry or angular momentum can be used to determine the conditions that must exist for the transition dipole moment to be finite, i.e. not zero. Statements called spectroscopic selection rules summarize these conditions. Selection rules do not tell us how probable or intense a transition is. They only tell us whether a transition is possible or not possible.
For the particle-in-a-box model, as applied to dye molecules and other appropriate molecular systems, we need to consider the transition moment integral for one electron. According to Equation $\ref{4-21}$, the dipole moment operator for an electron in one dimension is –ex since the charge is –e and the electron is located at x.
$\mu _T = - e \int \limits _0^L \psi ^*_f (x) x \psi _i (x) dx \label {4-26}$
This is the integral that must be evaluated for various particle-in-a-box wavefunctions to see which transitions are allowed (i.e. $μ_T ≠ 0$) and forbidden ($μ_T = 0$) and to determine the relative strengths of the allowed transitions.
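Equation $\ref{4-26}$ can also be evaluated numerically. The following Python sketch (an added illustration, with the charge $-e$ factored out so the integral is reported in units of $-e$, and the box length set to 1) uses SciPy quadrature to show which transitions have a nonzero transition moment.

```python
import numpy as np
from scipy.integrate import quad

L = 1.0  # box length (arbitrary units)

def psi(n, x):
    """Normalized particle-in-a-box wavefunction."""
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

def mu_T(i, f):
    """Transition dipole integral of Equation 4-26, in units of -e."""
    val, _err = quad(lambda x: psi(f, x) * x * psi(i, x), 0, L)
    return val

for i, f in [(1, 2), (1, 3), (2, 3), (2, 4), (3, 4)]:
    print(f"{i} -> {f}: {mu_T(i, f): .6f}")
# Transitions with odd Δn (1→2, 2→3, 3→4) give nonzero integrals;
# those with even Δn (1→3, 2→4) integrate to zero.
```

Running the loop reproduces numerically the selection rule derived analytically in the next section: only odd-$Δn$ transitions have a nonzero transition moment.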
This section explores the use of symmetry to determine selection rules. Here we derive an analytical expression for the transition dipole moment integral for the particle-in-a-box model. The result that the magnitude of this integral increases as the length of the box increases explains why the absorption coefficients of the longer cyanine dye molecules are larger. We use the transition moment integral and the trigonometric forms of the particle-in-a-box wavefunctions to get Equation $\ref{4-27}$ for an electron making a transition from orbital $i$ to orbital $f$.
\begin{align} \mu _T &= \dfrac {-2e}{L} \int \limits _0^L \sin \left (\dfrac {f \pi x}{L} \right ) x \sin \left ( \dfrac {i \pi x }{L} \right ) dx \\[4pt] &= \dfrac {-2e}{L} \int \limits _0^L x \sin \left (\dfrac {f \pi x}{L} \right ) \sin \left ( \dfrac {i \pi x }{L} \right ) dx \label {4-27} \end{align}
Exercise $1$
Why is there a factor $2/L$ in Equation $\ref{4-27}$? What are the units associated with the dipole moment and the transition dipole moment?
Simplify the integral in Equation $\ref{4-27}$ by substituting the product-to-sum trigonometric identity
$\sin \psi \sin \theta = \dfrac {1}{2} \left[ \cos (\psi - \theta ) - \cos (\psi + \theta) \right] \label {4-28}$
and also redefine the sum and difference terms:
$\Delta n = f - i \nonumber$
and
$n_T = f + i \nonumber$
Equation $\ref{4-27}$ then becomes
\begin{align} \mu _T &= \dfrac {-e}{L} \int \limits _0^L x \left[ \cos \left (\dfrac {\Delta n \pi x}{L} \right ) - \cos \left (\dfrac {n_T \pi x}{L} \right ) \right ] dx \\[4pt] &= \dfrac {-e}{L} \left[ \int \limits _0^L x \cos \left (\dfrac {\Delta n \pi x}{L} \right ) dx - \int \limits _0^L x \cos \left (\dfrac { n_T \pi x}{L} \right ) dx \right ] \label{step 3} \end{align}
These two definite integrals can be directly evaluated using this relationship
$\int \limits _0^L x \cos (ax) dx = \left[ \dfrac {1}{a^2} \cos (ax) + \dfrac {x}{a} \sin (ax) \right]^L_0 \label {4-29}$
where $a$ is any nonzero constant. Using Equation \ref{4-29} in Equation \ref{step 3} produces
$\mu _T = \dfrac {-e}{L} {\left(\dfrac {L}{\pi}\right)}^2 \left[ \dfrac {1}{\Delta n^2} (\cos (\Delta n \pi) - 1) - \dfrac {1}{n^2_T} (\cos (n_T \pi) - 1) + \dfrac {1}{\Delta n} \sin (\Delta n \pi ) - \dfrac {1}{n_T} \sin (n_T \pi) \right] \label {4-31}$
Exercise $2$
Show that if $Δn$ is an even integer, then $n_T$ must be an even integer and $μ_T = 0$.
Exercise $3$
Show that if $i$ and $f$ are both even or both odd integers then $Δn$ is an even integer and $μ_T = 0$.
Exercise $4$
Show that if $Δn$ is an odd integer, then $n_T$ must be an odd integer and $μ_T$ is given by Equation $\ref{4-32}$.
$\mu _T = \dfrac {-2eL}{\pi ^2} \left(\dfrac {1}{n^2_T} - \dfrac {1}{\Delta n^2} \right) = \dfrac {8eL}{\pi^2} \left( \dfrac {f \cdot i}{{(f^2 - i^2)}^2} \right) \label {4-32}$
Exercise $5$
Show that the two expressions for the transition moment in Equation $\ref{4-32}$ are in fact equivalent.
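A numerical spot-check of this equivalence (an added illustration; it does not replace the algebraic demonstration the exercise asks for) can be made by evaluating both forms of Equation $\ref{4-32}$ for several odd-$Δn$ pairs, with $e$ and $L$ set to 1.

```python
import numpy as np

def form_a(f, i, L=1.0):
    """-2eL/pi^2 * (1/n_T^2 - 1/Δn^2), with e = 1."""
    n_T, dn = f + i, f - i
    return -2.0 * L / np.pi**2 * (1.0 / n_T**2 - 1.0 / dn**2)

def form_b(f, i, L=1.0):
    """8eL/pi^2 * f·i / (f^2 - i^2)^2, with e = 1."""
    return 8.0 * L / np.pi**2 * f * i / (f**2 - i**2) ** 2

for f, i in [(2, 1), (3, 2), (7, 2), (8, 3)]:
    print(f, i, form_a(f, i), form_b(f, i))
# The two columns agree for every odd-Δn pair tested.
```

The agreement follows from the algebraic identity $1/n_T^2 - 1/\Delta n^2 = -4fi/(f^2 - i^2)^2$ when $n_T = f + i$ and $\Delta n = f - i$.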
Example $1$
What is the value of the transition moment integral for transitions 1→3 and 2→4?
Solution
For these two transitions, either $i$ and $f$ are both odd integers or both even integers. In either case, $Δn$ and $n_T$ are even integers. The cosine of an even integer multiple of $π$ is +1, so the cosine terms in Equation $\ref{4-31}$ become (1 - 1) = 0. The sine terms are zero because the sine of an integer multiple of $π$ is zero. Therefore, $μ_T = 0$ for these transitions and they are forbidden. The same reasoning applies to any transition for which $i$ and $f$ are both even or both odd integers.
Exercise $1$
What is the value of the transition moment for the $i = 8$ to $f = 10$ transition?
Example $2$
What is the value of the transition moment integral for transitions 1→2 and 2→3?
Solution
For these two transitions, $Δn = 1$ and $n_T = 3$ and $5$, respectively, all odd integers. The cosine of an odd-integer multiple of $π$ is $-1$, so the cosine terms in Equation $\ref{4-31}$ become $(-1 - 1) = -2$. The sine terms in Equation \ref{4-31} are zero because the sine of an integer multiple of $π$ is zero. Therefore, $μ_T$ has some finite value given by Equation \ref{4-32}. The same reasoning is used to evaluate the transition moment integral for any transitions that have $Δn$ and $n_T$ as odd integers, e.g. 2→7 and 3→8. In these cases $Δn = 5$ and $n_T = 9$ and $11$, respectively. Again the transition moment integral for each of these transitions is finite.
Exercise $2$
Explain why one of the following transitions occurs with excitation by light and the other does not: $i = 1$ to $f = 7$ and $i = 3$ to $f = 6$.
From Examples $1$ and $2$, we can formulate the selection rules for the particle-in-a-box model: Transitions are forbidden if $Δn = f - i$ is an even integer. Transitions are allowed if $Δn = f - i$ is an odd integer. In the next section we will see that these selection rules can be understood in terms of the symmetry of the wavefunctions.
Through the evaluation of the transition moment integral, we can understand why the spectra of cyanine dyes are very simple. The spectrum for each dye consists only of a single peak because other transitions have very much smaller transition dipole moments. We also see that the longer molecules have the larger absorption coefficients because the transition dipole moment increases with the length of the molecule.
Exercise $6$
The lowest energy transition is from the HOMO to the LUMO, which were defined previously. Compute the value of the transition moment integral for the HOMO to LUMO transition $E_3→E_4$ for a cyanine dye with 3 carbon atoms in the conjugated chain. What is the next lowest energy transition for a particle-in-a-box? Compute the value of the transition moment integral for the next lowest energy transition that is allowed for this dye. What are the quantum numbers for the energy levels associated with this transition? How does the probability of this transition compare in magnitude with that for 3→4?
It generally requires much work and time to evaluate integrals analytically or even numerically on a computer. Our time, computer time, and work can be saved if we can identify by inspection when integrals are zero. The determination of when integrals are zero leads to spectroscopic selection rules and provides a better understanding of them. Here we use graphs to examine properties of the wavefunctions for a particle-in-a-box to determine when the transition dipole moment integral is zero and thereby obtain the spectroscopic selection rules for this system. These considerations are completely general and can be applied to any integrals. We essentially determine whether integrals are zero or not by drawing pictures and thinking. What could be easier?
Consider the case for a transition from orbital n = 1 to orbital n = 2 of a molecule described by the particle-in-a-box model. These two wavefunctions are shown in Figure $1$ as f1 and f2, respectively. For the curves shown on the left in the figure, we defined the box to have unit length, L = 1, and infinite potential barriers at x = 0 and x = L as we did previously, so the particle is trapped between 0 and L. For the curves shown on the right, g1 and g2, we put the origin of the coordinate system halfway between the potential barriers, i.e. at the center of the box. The barriers have not moved and the particle has not changed, but our description of the position of the barriers and the particle has changed. We now say the barriers are located at x = -L/2 and x = +L/2, and the particle is trapped between -L/2 and +L/2.
Clearly the wavefunctions in Figure $1$ look the same for these two choices of coordinate systems. The appearance of the wavefunctions doesn’t depend on the coordinate system we have chosen or on our labels since the wavefunctions tell us about the probability of finding the particle. This probability does not change when we change the coordinate system or relabel the axis. The names of these functions do change, however. In Figure $\PageIndex{1a}$, they both are sine functions. In Figure $\PageIndex{1b}$, one is a cosine function and the other is a sine function multiplied by -1.
Exercise $1$
Sketch $(f_1(x))^2$ and $(g_1(x))^2$. What do you observe? Sketch $(f_2(x))^2$ and $(g_2(x))^2$. What do you observe? What is the significance with respect to the probability given by both f(x) and g(x)?
We moved the origin of the coordinate system to the center of the box to take advantage of the symmetry properties of these functions. By symmetry, we mean the correspondence in form on either side of a dividing point, line, or plane. As we shall see, the analysis of the symmetry is straightforward if the origin of the coordinate systems coincides with the dividing point, line, or plane. Since the right and left halves of the box or molecule represented by the box are the same, the square of the wavefunction for x > 0 must be the same as the square of the wavefunction for x < 0. Since the box is symmetrical, the probability density, Ψ2, for the particle distribution also must be symmetrical because there is no reason for the particle to be located preferentially on one side or the other.
The transition moment integral for the particle-in-a-box involves three functions ($ψ_f, ψ_i$, and x) that are multiplied together at each point x to form the integrand. These three functions for i = 1 and f = 2 are plotted on the left in Figure $2$. The integrand is the product of these three functions and is shown on the right in the figure. The integral is the area between the integrand and the zero on the y-axis. Clearly this area and thus also the value of the integral is not zero. The integral is negative because $ψ_2$ is negative for x > 0 and x is negative for x < 0. Since $μ_T ≠ 0$, the transition from $ψ_1$ to $ψ_2$ is allowed. As we previously mentioned in this chapter, and will see again later, the absorption coefficient is proportional to the absolute square of $μ_T$ so it is acceptable for the transition moment integral to be negative. It even could involve $\sqrt {-1}$. Taking the absolute square makes both negative and imaginary quantities positive.
Exercise $2$
Write the expression or function for the integrand that is plotted on the right side of Figure $2$ in terms of x, sine, and cosine functions. Use your function to explain why the integrand is 0 at x = 0 and has minima at x = +0.25L and -0.25L. Sketch the corresponding probability function. Where are the peaks in the probability function?
Now consider the transition moment integral for quantum state n = 1 to quantum state n = 3. In Figure $3$, the wavefunctions and the x operator are shown on the left side, and the integrand is shown on the right side. For this case we see that the integrand for x < 0 is the negative of the integrand for x > 0. This difference in sign means the net positive area for x > 0 is canceled by the net negative area for x < 0, so the total area and the transition moment integral are zero. We therefore conclude that the transition from n = 1 to n = 3 is forbidden.
Exercise $3$
Write the expression or function for the integrand that is plotted on the right side of Figure $3$ in terms of x and cosine functions. Use your function to explain why the integrand is zero at x = 0, why it is negative just above x = 0, and why, as x goes from 0 to –0.5, the integrand first is positive and then negative.
In spectroscopy some special terms are used to describe the symmetry properties of wavefunctions. The terms symmetric, gerade, and even describe functions like $f(x) = x^2$ and $ψ_1(x)$ for the particle-in-a-box that have the property f(x) = f(-x), i.e. the function has the same values for x > 0 and for x < 0. The terms antisymmetric, ungerade, and odd describe functions like $f(x) = x$ and $ψ_2(x)$ for the particle-in-a-box that have the property f(x) = -f(-x), i.e. the function for x > 0 is has values that are opposite in sign compared to the function for x < 0. Gerade and ungerade are German words meaning even and odd and are abbreviated as g and u. Note that antisymmetric doesn’t mean non-symmetric.
If an integrand is u, then the integral is zero! It is zero because the contribution from x > 0 is cancelled by the contribution from x < 0, as shown by the example in Figure $3$. An integrand will be u if the product of the functions comprising it is u. The following rules make it possible to quickly identify whether a product of two functions is u.
$g \cdot g = g, u \cdot u = g, g \cdot u = u \label {4-33}$
These rules are the same as those for multiplying +1 for g and -1 for u. The validity of these rules can be seen by examining Figures $2$ and $3$. If an integrand consists of more than two functions, the rules are applied to pairs of functions to obtain the symmetry of their product, and then applied to pairs of the product functions, and so forth, until one obtains the symmetry of the integrand.
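Because the rules reduce to multiplying $+1$ for g and $-1$ for u, they are easy to automate. The sketch below (an added illustration) encodes the parity algebra and applies it to the transition moment integrand $ψ_f \cdot x \cdot ψ_i$, using the facts that, about the box center, $ψ_1$ and $ψ_3$ are g, $ψ_2$ is u, and $x$ itself is u.

```python
# Represent g (even) as +1 and u (odd) as -1; the rules
# g·g = g, u·u = g, g·u = u then become ordinary multiplication.
G, U = +1, -1

def product_parity(*parities):
    """Parity of a product of functions, each labeled G or U."""
    out = 1
    for p in parities:
        out *= p
    return out

# Integrand psi_f · x · psi_i about the box center:
print(product_parity(U, U, G))  # 1 -> 2: u·u·g = g, integral can be nonzero
print(product_parity(G, U, G))  # 1 -> 3: g·u·g = u, integral vanishes
```

The first product is g, consistent with the allowed 1→2 transition; the second is u, consistent with the forbidden 1→3 transition.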
Exercise $4$
Use Mathcad or some other software to draw graphs of $x^2$, $-x^2$, $x^3$, and $-x^3$ as a function of x. Which of these functions are g and which are u? Is the product function $x^2 \cdot x^3$ g or u? How about $x^2 \cdot (-x^2) \cdot x^3$ and $x^2 \cdot x^3 \cdot (-x^3)$?
Exercise $5$
Label each function in Figures $2$ and $3$ as g or u. Also label the integrands.
Exercise $6$
Use symmetry arguments to determine which of the following transitions between quantum states are allowed for the particle-in-a-box: n = 2 to 3 or n = 2 to 4.
Symmetry properties of functions allow us to identify when the transition moment integral and other integrals are zero. This symmetry-based approach to integration can be generalized and becomes even more powerful when concepts taken from mathematical Group Theory are used. With the tools of Group Theory, one can examine symmetry properties in three-dimensional space for complicated molecular structures. A group-theoretical analysis helps understand features in molecular spectra, predict products of chemical reactions, and simplify theoretical calculations of molecular structures and properties.
Now that we have mathematical expressions for the wavefunctions and energies for the particle-in-a-box, we can answer a number of interesting questions. The answers to these questions use quantum mechanics to predict some important and general properties for electrons, atoms, molecules, gases, liquids, and solids.
What is the lowest energy for an electron? The lowest energy level is $E_1$, and it is important to recognize that this lowest energy is not zero. This finite (meaning neither zero nor infinite) energy is called the zero-point energy, and the motion associated with this energy is called the zero-point motion. Any system that is restricted to some region of space is said to be bound. The zero-point energy and motion are manifestations of the wave properties and the Heisenberg Uncertainty Principle, and are general properties of bound quantum mechanical systems. The position and momentum of the particle cannot be determined exactly. According to the Heisenberg Uncertainty Principle, the product of the uncertainties, i.e. standard deviations in these quantities, must be greater than or equal to ħ/2. If the energy were zero, then the momentum would be exactly zero, which would violate the Heisenberg Uncertainty Principle unless the uncertainty in position were infinite. The system then would not be localized in space to any extent at all, which we will find to be true for the case of a free particle, which is not bound. The uncertainty in the position of a bound system is not infinite, so the uncertainty in the momentum cannot be zero, as it would be if the energy were zero.
Where is the electron?
Exercise $1$
Use your solution to Exercise $15$ to write a few sentences answering this question about the location of the electron. What insight do you gain from the graphs you made for the probability distribution at very large n compared to n = 1?
Exercise $2$
Use the general form of the particle-in-a-box wavefunction sin(kx) for any n to find the mathematical expression for the position expectation value $\left \langle x \right \rangle$ for a box of length L. How does $\left \langle x \right \rangle$ depend on n? Evaluate the integral.
Exercise $3$
Calculate the probability of finding an electron at L/2 in an interval ranging from $\dfrac {L}{2} - \dfrac {L}{200}$ to $\dfrac {L}{2} + \dfrac {L}{200}$ for n = 1 and n = 2. Since the length of the interval, L/100, is small compared to L, you can get an approximate answer without integrating.
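Since $|ψ_n(x)|^2$ is nearly constant over an interval as small as $L/100$, the probability is approximately $|ψ_n(L/2)|^2 \cdot L/100$. A quick Python check of this approximation (an added illustration, not part of the original exercise):

```python
import numpy as np

L = 1.0

def psi_sq(n, x):
    """Probability density for the particle-in-a-box."""
    return (2.0 / L) * np.sin(n * np.pi * x / L) ** 2

dx = L / 100.0  # width of the interval centered at L/2
for n in (1, 2):
    print(n, psi_sq(n, L / 2) * dx)
# n = 1: the density peaks at L/2, so p ≈ 2/100 = 0.02
# n = 2: psi_2 has a node at L/2, so p ≈ 0
```

The contrast between the two states illustrates why probability densities, not just energies, distinguish quantum states.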
What is the momentum of an electron in the box?
The particle-in-a-box wavefunctions are not eigenfunctions of the momentum operator.
Exercise $4$
Show that the particle-in-a-box wavefunctions are not eigenfunctions of the momentum operator.
Example $1$
Even though the wavefunctions are not momentum eigenfunctions, we can calculate the expectation value for the momentum. Show that the expectation or average value for the momentum of an electron in the box is zero in every state.
Solution
First write the expectation value integral for the momentum. Then insert the expression for the wavefunction and evaluate the integral as shown here.
$\left \langle P \right \rangle = \int \limits ^L_0 \psi ^*_n (x) \left ( -i\hbar \dfrac {d}{dx} \right ) \psi _n (x) dx$
$= \int \limits ^L_0 \left (\dfrac {2}{L} \right )^{1/2} \sin (\dfrac {n \pi x}{L}) \left ( -i\hbar \dfrac {d}{dx} \right ) \left (\dfrac {2}{L} \right )^{1/2} \sin (\dfrac {n \pi x }{L} ) dx$
$= -i \hbar \left (\dfrac {2}{L} \right ) \int \limits ^L_0 \sin (\dfrac {n \pi x}{L}) \left ( \dfrac {d}{dx} \right ) \sin (\dfrac {n \pi x}{L}) dx$
$= -i \hbar \left (\dfrac {2}{L} \right ) \left ( \dfrac {n \pi}{L} \right ) \int \limits ^L_0 \sin (\dfrac {n \pi x}{L}) \cos (\dfrac {n \pi x}{L}) dx$
$= 0$
It may seem that the electron does not have any momentum, which is not correct because we know the energy is never zero. In fact, the energy that we obtained for the particle-in-a-box is entirely kinetic energy because we set the potential energy at 0.
Since the kinetic energy is the momentum squared divided by twice the mass, it is easy to understand how the average momentum can be zero and the kinetic energy finite. It must be equally likely for the particle-in-a-box to have a momentum -p as +p. The average of +p and –p is zero, yet $p^2$ and the average of $p^2$ are not zero.
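This point can be checked numerically. The sketch below (an added illustration in units where $ħ = 1$ and $L = 1$) evaluates $\langle p \rangle$ and $\langle p^2 \rangle$ by quadrature: the first integral involves $\sin(kx)\cos(kx)$ and vanishes, while the second is finite and equals $(nπħ/L)^2$, i.e. $2mE_n$.

```python
import numpy as np
from scipy.integrate import quad

L, hbar = 1.0, 1.0  # dimensionless units

def check(n):
    """Return (<p> up to the factor -i, <p^2>) for state n."""
    k = n * np.pi / L
    norm = 2.0 / L
    # <p> = -i hbar ∫ psi (dpsi/dx) dx  ∝  ∫ sin(kx) cos(kx) dx = 0
    I1, _ = quad(lambda x: np.sin(k * x) * np.cos(k * x), 0, L)
    p_avg = norm * hbar * k * I1
    # <p^2> = -hbar^2 ∫ psi (d^2 psi/dx^2) dx = hbar^2 k^2 ∫ psi^2 dx
    I2, _ = quad(lambda x: np.sin(k * x) ** 2, 0, L)
    p2_avg = norm * hbar**2 * k**2 * I2
    return p_avg, p2_avg

for n in (1, 2, 3):
    print(n, check(n))  # <p> ~ 0 while <p^2> = (n*pi)^2 > 0
```

The average momentum is zero for every state even though the mean-square momentum, and hence the kinetic energy, is not.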
The information that the particle is equally likely to have a momentum of +p or –p is contained in the wavefunction. The sine function is a representation of the two momentum eigenfunctions $e^{ikx}$ and $e^{-ikx}$ as shown by Exercise $5$.
Exercise $5$
Write the particle-in-a-box wavefunction as a normalized linear combination of the momentum eigenfunctions $e^{ikx}$ and $e^{-ikx}$ by using Euler’s formula. Show that the momentum eigenvalues for these two functions are p = +ħk and -ħk.
The interpretation of these results is physically interesting. The exponential wavefunctions in the linear combination for the sine function represent the two opposite directions in which the electron can move. One exponential term represents movement to the left and the other term represents movement to the right. The electrons are moving, they have kinetic energy and momentum, yet the average momentum is zero.
Does the fact that the average momentum of an electron is zero and the average position is L/2 violate the Heisenberg Uncertainty Principle? No, of course not, because the Heisenberg Uncertainty Principle pertains to the uncertainty in the momentum and in the position, not to the average values.
Quantitative values for these uncertainties can be obtained to compare with the limit set by the Heisenberg Uncertainty Principle for the product of the uncertainties in the momentum and position. First, we need a quantitative definition of uncertainty. Here, just like in experimental measurements, a good definition of uncertainty is the standard deviation or the root mean square deviation from the average. It can be shown by working Problem 6 at the end of this chapter that the standard deviation in the position of the particle-in-a-box is given by
$\sigma _x = \dfrac {L}{2 \pi n } \sqrt{ \dfrac {\pi ^2}{3} n^2 - 2 } \label {4-34}$
and the standard deviation in the momentum by
$\sigma _p = \dfrac {n \pi \hbar}{L} \label {4-35}$
Even for n = 1, the lowest value for n, σx is finite and proportional to L. As L increases the uncertainty in position of the electron increases. On the other hand, as L increases, σp decreases, but the product is never zero; and the uncertainty principle holds.
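Equations \ref{4-34} and \ref{4-35} can be combined directly: the product $σ_x σ_p = (ħ/2)\sqrt{π^2 n^2/3 - 2}$ is independent of $L$ and always exceeds $ħ/2$. A brief numerical check (an added illustration, with $ħ = 1$):

```python
import numpy as np

hbar = 1.0

def sigma_x(n, L=1.0):
    """Standard deviation in position, Equation 4-34."""
    return (L / (2.0 * np.pi * n)) * np.sqrt(np.pi**2 * n**2 / 3.0 - 2.0)

def sigma_p(n, L=1.0):
    """Standard deviation in momentum, Equation 4-35."""
    return n * np.pi * hbar / L

for n in (1, 2, 5, 100):
    prod = sigma_x(n) * sigma_p(n)
    print(n, prod / (hbar / 2.0))  # ratio to the Heisenberg limit, always > 1
```

For $n = 1$ the ratio is about 1.14, the closest the particle-in-a-box comes to the Heisenberg limit; it grows without bound as $n$ increases.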
Exercise $6$
Evaluate the product σx σp for n = 1 and for general n. Is the product greater than ħ/2 for all values of n and L as required by the Heisenberg Uncertainty Principle?
Are the eigenfunctions of the particle-in-a-box Hamiltonian orthogonal?
Two functions $Ψ_A$ and $Ψ_B$ are orthogonal if
$\int \limits _{all space} \psi _A^* \psi _B d\tau = 0$
In general, eigenfunctions of a quantum mechanical operator with different eigenvalues are orthogonal.
Exercise $7$
Evaluate the integral $\int \psi ^*_1 \psi _3 dx$ and as many other pairs of particle-in-a-box eigenfunctions as you wish (use symmetry arguments whenever possible) and explain what the results say about orthogonality of the functions.
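A numerical version of this exercise (an added illustration; it does not replace the symmetry argument) tabulates the overlap integrals $\int_0^L ψ_m ψ_n \, dx$ for the first few states.

```python
import numpy as np
from scipy.integrate import quad

L = 1.0

def psi(n, x):
    """Normalized particle-in-a-box eigenfunction."""
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

def overlap(m, n):
    """Overlap integral of two box eigenfunctions."""
    val, _err = quad(lambda x: psi(m, x) * psi(n, x), 0, L)
    return val

for m in range(1, 5):
    print([round(overlap(m, n), 10) for n in range(1, 5)])
# Diagonal entries are 1 (normalization); all off-diagonal entries are 0
# (orthogonality), whether or not m and n have the same symmetry.
```

The table is the identity matrix: the eigenfunctions form an orthonormal set.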
Exercise $8$
What happens to the energy level spacing for a particle-in-a-box when $mL^2$ becomes much larger than $h^2$? What does this result imply about the relevance of quantization of energy to baseballs in a box between the pitching mound and home plate? What implications does quantum mechanics have for the game of baseball in a world where h is so large that baseballs exhibit quantum effects?
• Is quantization important for macroscopic objects?
• How can one determine the relative energies of wavefunctions by examination?
The first derivative of a function is the rate of change of the function, and the second derivative is the rate of change in the rate of change, also known as the curvature. A function with a large second derivative is changing very rapidly. Since the second derivative of the wavefunction occurs in the Hamiltonian operator that is used to calculate the energy by using the Schrödinger equation, a wavefunction that has sharper curvatures than another, i.e. larger second derivatives, should correspond to a state having a higher energy. A wavefunction with more nodes than another over the same region of space must have sharper curvatures and larger second derivatives, and therefore should correspond to a higher energy state.
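The node-energy correlation described above can be made concrete with a sketch that counts sign changes of $ψ_n$ between the walls (an added illustration; the node count $n - 1$ and the $n^2$ energy scaling follow from the known wavefunctions):

```python
import numpy as np

L = 1.0

def interior_nodes(n, samples=20001):
    """Count sign changes of psi_n strictly inside the box."""
    x = np.linspace(0.0, L, samples)[1:-1]  # exclude the walls
    psi = np.sin(n * np.pi * x / L)
    return int(np.sum(np.sign(psi[:-1]) != np.sign(psi[1:])))

for n in (1, 2, 3, 4, 5):
    print(n, interior_nodes(n), n**2)
# Nodes grow as n - 1 while the energy grows as n^2: more nodes,
# sharper curvature, higher energy.
```

Each additional node forces the wavefunction to oscillate more sharply over the same length, which is exactly the larger second derivative that raises the energy.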
Exercise $9$
Identify a relationship between the number of nodes in a wavefunction and its energy by examining the graphs you made in Exercise $15$. A node is the point where the amplitude passes through zero. What does the presence of many nodes mean about the shape of the wavefunction?
Consideration of the quantum mechanical description of the particle-in-a-box exposed two important properties of quantum mechanical systems. We saw that the eigenfunctions of the Hamiltonian operator are orthogonal, and we also saw that the position and momentum of the particle could not be determined exactly. We now examine the generality of these insights by stating and proving some fundamental theorems. These theorems use the Hermitian property of quantum mechanical operators, which is described first.
Hermitian Theorem
Since the eigenvalues of a quantum mechanical operator correspond to measurable quantities, the eigenvalues must be real, and consequently a quantum mechanical operator must be Hermitian.
Proof
We start with the premises that $\psi$ is a function, $\int d\tau$ represents integration over all coordinates, and the operator $\hat {A}$ is Hermitian by definition if
$\int \psi ^* \hat {A} \psi d\tau = \int (\hat {A} ^* \psi ^* ) \psi d\tau \label {4-37}$
This equation means that the complex conjugate of Â can operate on $\psi^*$ to produce the same result after integration as Â operating on $\psi$, followed by integration. To prove that a quantum mechanical operator Â is Hermitian, consider the eigenvalue equation and its complex conjugate.
$\hat {A} \psi = a \psi \label {4-38}$
$\hat {A}^* \psi ^* = a^* \psi ^* = a \psi ^* \label {4-39}$
Note that a* = a because the eigenvalue is real. Multiply Equations \ref{4-38} and \ref{4-39} from the left by ψ* and ψ, respectively, and integrate over all the coordinates. Note that ψ is normalized. The results are
$\int \psi ^* \hat {A} \psi d\tau = a \int \psi ^* \psi d\tau = a \label {4-40}$
$\int \psi \hat {A}^* \psi ^* d \tau = a \int \psi \psi ^* d\tau = a \label {4-41}$
Since both integrals equal a, they must be equivalent.
$\int \psi ^* \hat {A} \psi d\tau = \int \psi \hat {A}^* \psi ^* d\tau \label {4-42}$
The operator $\hat {A}^*$ acting on the function $\psi^*$ produces a new function, $\hat {A}^* \psi^*$. Since functions commute, Equation \ref{4-42} can be rewritten as

$\int \psi ^* \hat {A} \psi d\tau = \int (\hat {A}^*\psi ^*) \psi d\tau \label{4-43}$
This equality means that  is Hermitian.
Orthogonality Theorem
Eigenfunctions of a Hermitian operator are orthogonal if they have different eigenvalues. Because of this theorem, we can identify orthogonal functions easily without having to integrate or conduct an analysis based on symmetry or other considerations.
Proof
ψ and φ are two eigenfunctions of the operator Â with real eigenvalues $a_1$ and $a_2$, respectively. Since the eigenvalues are real, $a_1^* = a_1$ and $a_2^* = a_2$.

$\hat {A} \psi = a_1 \psi$

$\hat {A}^* \varphi ^* = a_2 \varphi ^* \label {4-44}$

Multiply the first equation by φ* and the second by ψ and integrate.

$\int \varphi ^* \hat {A} \psi d\tau = a_1 \int \varphi ^* \psi d\tau$

$\int \psi \hat {A}^* \varphi ^* d\tau = a_2 \int \psi \varphi ^* d\tau \label {4-45}$

Subtract the two equations in (4-45) to obtain

$\int \varphi ^*\hat {A} \psi d\tau - \int \psi \hat {A} ^* \varphi ^* d\tau = (a_1 - a_2) \int \varphi ^* \psi d\tau \label {4-46}$

The left-hand side of Equation (4-46) is zero because Â is Hermitian, yielding

$0 = (a_1 - a_2 ) \int \varphi ^* \psi d\tau \label {4-47}$

If $a_1$ and $a_2$ in Equation (4-47) are not equal, then the integral must be zero. This result proves that nondegenerate eigenfunctions of the same operator are orthogonal.
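The theorem can be verified numerically for a concrete pair of nondegenerate eigenfunctions. This sketch (assuming the numpy library; particle-in-a-box functions with $L = 1$) integrates the overlap on a grid:

```python
import numpy as np

# Numeric check of the Orthogonality Theorem: particle-in-a-box
# eigenfunctions are nondegenerate eigenfunctions of the (Hermitian)
# Hamiltonian, so their overlap integral should vanish.

def trapezoid(f, dx):
    """Trapezoid-rule integral on a uniform grid."""
    return dx * (np.sum(f) - 0.5 * (f[0] + f[-1]))

L = 1.0
x = np.linspace(0.0, L, 4001)
dx = x[1] - x[0]

def psi(n):
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

overlap_22 = trapezoid(psi(2) * psi(2), dx)  # normalization integral, ~1
overlap_23 = trapezoid(psi(2) * psi(3), dx)  # orthogonality integral, ~0
print(round(overlap_22, 6), round(abs(overlap_23), 6))  # 1.0 0.0
```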
Exercise $44$
Draw graphs and use them to show that the particle-in-a-box wavefunctions for n = 2 and n = 3 are orthogonal to each other.
Schmidt Orthogonalization Theorem
If the eigenvalues of two eigenfunctions are the same, then the functions are said to be degenerate, and linear combinations of the degenerate functions can be formed that will be orthogonal to each other. Since the two eigenfunctions have the same eigenvalues, the linear combination also will be an eigenfunction with the same eigenvalue. Degenerate eigenfunctions are not automatically orthogonal but can be made so mathematically. The proof of this theorem shows us one way to produce orthogonal degenerate functions.
Proof
If ψ and φ are degenerate but not orthogonal, define $Φ = φ - Sψ$, where $S$ is the overlap integral $\int \psi ^* \varphi d\tau$; then ψ and Φ will be orthogonal.

$\int \psi ^* \Phi d\tau = \int \psi ^* (\varphi - S\psi ) d\tau = \int \psi ^* \varphi d\tau - S \int \psi ^*\psi d\tau \label {4-48}$

$= S - S = 0$
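A numerical sketch of this construction follows (the two normalized trial functions below are illustrative choices, not from the text; numpy assumed):

```python
import numpy as np

# Schmidt orthogonalization: subtract from phi its projection S*psi,
# and the remainder Phi is orthogonal to psi.

def trapezoid(f, dx):
    """Trapezoid-rule integral on a uniform grid."""
    return dx * (np.sum(f) - 0.5 * (f[0] + f[-1]))

x = np.linspace(0.0, 1.0, 4001)
dx = x[1] - x[0]

def normalize(f):
    return f / np.sqrt(trapezoid(f * f, dx))

psi = normalize(np.sin(np.pi * x))   # plays the role of psi
phi = normalize(x * (1.0 - x))       # plays the role of varphi
S = trapezoid(psi * phi, dx)         # overlap integral S = <psi|phi>
Phi = phi - S * psi                  # Phi = varphi - S psi
overlap = trapezoid(psi * Phi, dx)
print(round(S, 4), abs(overlap) < 1e-12)  # S is close to 1, yet <psi|Phi> ~ 0
```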
Exercise $45$
Find $N$ that normalizes Φ if $Φ = N(φ − Sψ)$ where ψ and φ are normalized and S is their overlap integral.
Commuting Operator Theorem
If two operators commute, then they can have the same set of eigenfunctions. By definition, two operators $\hat {A}$ and $\hat {B}$ commute if the effect of applying $\hat {A}$ then $\hat {B}$ is the same as applying $\hat {B}$ then $\hat {A}$, i.e. $\hat {A}\hat {B} = \hat {B} \hat {A}$. For example, the operations brushing-your-teeth and combing-your-hair commute, while the operations getting-dressed and taking-a-shower do not. This theorem is very important. If two operators commute and consequently have the same set of eigenfunctions, then the corresponding physical quantities can be evaluated or measured exactly simultaneously with no limit on the uncertainty. As mentioned previously, the eigenvalues of the operators correspond to the measured values.
Proof
If $\hat {A}$ and $\hat {B}$ commute and $ψ$ is an eigenfunction of $\hat {B}$ with eigenvalue $b$, then
$\hat {B} \hat {A} \psi = \hat {A} \hat {B} \psi = \hat {A} b \psi = b \hat {A} \psi \label {4-49}$
Equation (4-49) says that $\hat {A} \psi$ is an eigenfunction of $\hat {B}$ with eigenvalue b, which means that when $\hat {A}$ operates on ψ, it cannot change ψ. At most, $\hat {A}$ operating on $ψ$ can produce a constant times ψ.
$\hat {A} \psi = a \psi \label {4-50}$
$\hat {B} (\hat {A} \psi ) = \hat {B} (a \psi ) = a \hat {B} \psi = ab\psi = b (a \psi ) \label {4-51}$
Equation \ref{4-51} shows that Equation \ref{4-50} is consistent with Equation \ref{4-49}. Consequently ψ also is an eigenfunction of $\hat {A}$ with eigenvalue $a$.
Exercise $46$
Write definitions of the terms orthogonal and commutation.
Exercise $47$
Show that the operators for momentum in the x-direction and momentum in the y-direction commute, but operators for momentum and position along the x-axis do not commute. Since differential operators are involved, you need to show whether
$\hat {P} _x \hat {P} _y f (x,y) = \hat {P} _y \hat {P} _x f (x, y)$
$\hat {P} _x \hat {x} f(x) = \hat {x} \hat {P} _x f(x)$
where f is an arbitrary function, or you could try a specific form for f, e.g. f = 6xy.
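One way to check the algebra symbolically (assuming the sympy library is available) is to apply the operators to an arbitrary function $f(x, y)$:

```python
import sympy as sp

# Symbolic check: P_x = -i*hbar*d/dx applied to an arbitrary f(x, y).
x, y = sp.symbols('x y')
hbar = sp.symbols('hbar', positive=True)
f = sp.Function('f')(x, y)

Px = lambda g: -sp.I * hbar * sp.diff(g, x)
Py = lambda g: -sp.I * hbar * sp.diff(g, y)

# momentum components commute: (Px Py - Py Px) f = 0
comm_pp = sp.simplify(Px(Py(f)) - Py(Px(f)))

# position and momentum do not: (x Px - Px x) f = i*hbar*f
comm_xp = sp.simplify(x * Px(f) - Px(x * f))

print(comm_pp)                                 # 0
print(sp.simplify(comm_xp - sp.I * hbar * f))  # 0, i.e. [x, Px] f = i*hbar*f
```

The same conclusion follows from the specific choice $f = 6xy$ suggested in the exercise.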
General Heisenberg Uncertainty Principle
Although it will not be proven here, there is a general statement of the uncertainty principle in terms of the commutation property of operators. If two operators $\hat {A}$ and $\hat {B}$ do not commute, then the uncertainties (standard deviations σ) in the physical quantities associated with these operators must satisfy
$\sigma _A \sigma _B \ge \frac {1}{2} \left| \int \psi ^* [ \hat {A} \hat {B} - \hat {B} \hat {A} ] \psi d\tau \right| \label {4-52}$
where the quantity inside the square brackets, $\hat{A}\hat{B} - \hat{B}\hat{A}$, is called the commutator, and the vertical bars signify the modulus or absolute value. If $\hat {A}$ and $\hat {B}$ commute, then the right-hand side of Equation (4-52) is zero, so either or both $\sigma_A$ and $\sigma_B$ could be zero, and there is no restriction on the uncertainties in the measurements of the eigenvalues a and b. If $\hat {A}$ and $\hat {B}$ do not commute, then the right-hand side of Equation (4-52) will not be zero, and neither $\sigma_A$ nor $\sigma_B$ can be zero unless the other is infinite. Consequently, both a and b cannot be eigenvalues of the same wavefunction and cannot be measured simultaneously to arbitrary precision.
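As a concrete illustration (a sketch assuming numpy, in units with $\hbar = m = 1$ and $L = 1$), the uncertainty product for the particle-in-a-box ground state can be evaluated on a grid and compared with the bound $\hbar/2$:

```python
import numpy as np

# Uncertainty product for the particle-in-a-box ground state.
# Here <x> = L/2 and <p> = 0, so sigma_x^2 = <x^2> - <x>^2 and
# sigma_p^2 = <p^2> = -<psi|psi''>; the product should exceed 1/2.

def trapezoid(f, dx):
    """Trapezoid-rule integral on a uniform grid."""
    return dx * (np.sum(f) - 0.5 * (f[0] + f[-1]))

L = 1.0
x = np.linspace(0.0, L, 4001)
dx = x[1] - x[0]
psi = np.sqrt(2.0 / L) * np.sin(np.pi * x / L)

x_avg = trapezoid(x * psi**2, dx)
x2_avg = trapezoid(x**2 * psi**2, dx)
sigma_x = np.sqrt(x2_avg - x_avg**2)

d2psi = (psi[2:] - 2.0 * psi[1:-1] + psi[:-2]) / dx**2
p2_avg = -trapezoid(psi[1:-1] * d2psi, dx)
sigma_p = np.sqrt(p2_avg)

print(round(sigma_x * sigma_p, 4))  # ~0.5679, comfortably above hbar/2 = 0.5
```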
Exercise $48$
Show that the commutator for position and momentum in one dimension equals –i ħ and that the right-hand-side of Equation (4-52) therefore equals ħ/2 giving $\sigma _x \sigma _{px} \ge \frac {\hbar}{2}$
Exercise $49$
In a later chapter you will learn that the operators for the three components of angular momentum along the three directions in space (x, y, z) do not commute. What is the relevance of this mathematical property to measurements of angular momentum in atoms and molecules?
Exercise $50$
Write the definition of a Hermitian operator and statements of the Orthogonality Theorem, the Schmidt Orthogonalization Theorem, and the Commuting Operator Theorem.
Exercise $51$
Reconstruct proofs for the Orthogonality Theorem, the Schmidt Orthogonalization Theorem, and the Commuting Operator Theorem.
Exercise $52$
Write a paragraph summarizing the connection between the commutation property of operators and the uncertainty principle.
Q4.1
Write the Schrödinger equation for a particle in a two dimensional box with infinite potential barriers and adjacent sides of unequal length (a rectangle). Solve the equation by separating variables with a product function X(x)Y(y) to obtain the wavefunctions X(x) and Y(y) and energy eigenvalues. How many different sets of quantum numbers are needed for this case? Sketch an energy level diagram to illustrate the energy level structure. What happens to the energy levels when the box is a square? When two or more states have the same energy, the states and the energy level are said to be degenerate. What is the zero point energy for an electron in a square box of length 0.05 nm?
Q4.2
A materials scientist is trying to fabricate a novel electronic device by constructing a two dimensional array of small squares of silver atoms. She thinks she has managed to produce an array with each square consisting of a monolayer of 25 atoms. You are an optical spectroscopist and want to test this conclusion. Use the particle-in-a-box model to predict the wavelength of the lowest energy electronic transition for these quantum dots. Which electrons do you want to describe by the particle-in-a-box model, or do you think you can apply this model to all the electrons in silver and get a reasonable prediction? In which spectral region does this transition lie? What instrumentation would you need to observe this transition?
Q4.3
Model the pi electrons of benzene by adapting the electron in a box model. Consider benzene to be a ring of radius r and circumference 2πr. You can find r by using the bond length of benzene (0.139 nm) and some trigonometry. Show how the electron on a ring is analogous to the electron in a linear box. Derive this analogy by thinking, not by copying from some book. What is the boundary condition for the case of the particle on a ring? Find mathematical expressions for the energy and the wavefunctions. Draw an energy level diagram. What is the physical reason that the energy levels are degenerate for this situation? Predict the wavelength of the lowest energy electronic transition for benzene. Compare your prediction with the experimental value (256 nm). What insight do you gain from this comparison?
Q4.4
Explain how and why the following two sets of selection rules for the particle-in-a-box are related to each other: (1) If Δn is even, the transition is forbidden; if Δn is odd, the transition is allowed. (2) If the transition is g to g or u to u, it is forbidden; if the transition is g to u or u to g, it is allowed.
Q4.5
The factor $fi/(f^2-i^2)^2$ in Equation (4-32) determines the relative intensity of transitions in the particle-in-a-box model. Make plots of $fi/(f^2-i^2)^2$ vs. $f$ for several values of $i$, with $f$ starting at $i+1$ and increasing. What conclusions can you make about particle-in-a-box spectra from your plots?
Q4.6
Starting with the mathematical definition of uncertainty as the standard or root mean square deviation σ from the average, show by evaluating the appropriate expectation value integrals that
$\sigma _x = \frac {L}{2\pi n } \left ( \frac {\pi ^2 n^2}{3} -2 \right )^{1/2} \quad \text {and} \quad \sigma _p = \frac {n \pi \hbar }{L}$
for a particle in a one-dimensional box of length L as given in the chapter. Then show that the product $\sigma _x \sigma _p \ge \frac {\hbar}{2}$.
Q4.7
Use the symbolic processor in Mathcad to help you carry out the steps leading from Equation (4-27) to Equation (4-31). See Activity 4.3 for an introduction to the symbolic processor.
Q4.8
An electron is confined to a one-dimensional space with infinite potential barriers at x = 0 and x = L and a constant potential energy between 0 and L. The electron is described by the wavefunction $ψ(x) = N (Lx - x^2)$
In responding to the following questions (a through g), do not leave your answers in the form of integrals, i.e. do the integrals.

$\text {Note} : \int x^n dx = \frac {1}{n + 1} x ^{n + 1} + C \quad \text {for } n \ne -1$
1. Explain why this wavefunction must be normalized, and find an expression for N that normalizes the wavefunction.
2. Define what is meant by the expectation value, and find the expectation value for the position of the electron and the momentum of the electron.
3. Find the expectation value for the energy of the electron.
4. Is your energy expectation value consistent with your momentum expectation value? Explain.
5. What is the energy of the n = 1 state for the one-dimensional particle-in-a-box model? How does the energy obtained in (c) compare with this value? Explain why these two energies must have such a relationship to each other.
6. Does the wavefunction, $ψ(x) = N (Lx - x^2)$, for this electron represent a stationary state of the electron?
7. What is the probability that the electron will be located at x = L/3 in an interval of length L/100? Explain why you expect this probability to be time dependent or time independent.
Q4.9
How does choosing the potential energy inside the box to be –100 eV rather than 0 modify the description of the particle-in-a-box?
David M. Hanson, Erica Harvey, Robert Sweeney, Theresa Julia Zielinski ("Quantum States of Atoms and Molecules")
In this chapter we used a very simple model called the particle-in-a-box model or the infinite-potential-well model to obtain very crude approximate wavefunctions for pi electrons in cyanine dye molecules. With the particle in the box model, we can estimate the wavelengths at which the peaks occur in the absorption spectra from estimated bond lengths, or we can use the wavelength information to determine average bond lengths in a series of dye molecules. By evaluating the transition moment integral, we can explain the relative intensities of these peaks and obtain selection rules for the spectroscopic transitions. The selection rules also can be deduced from qualitative symmetry considerations.
This model assumes the electrons are independent of each other and uses a particularly simple form for the potential energy of the electrons. The model also assumes that the atomic nuclei are fixed in space, i.e. the molecule is not vibrating or rotating. This latter assumption, which is known as the Crude Born-Oppenheimer Approximation, will be discussed in a later chapter. The physical basis for this approximation is the fact that the mass of the electron is much smaller than the mass of an atomic nucleus. The electrons therefore respond to forces or are accelerated by forces much faster than the nuclei (remember a = f/m) so the electron motion in a molecule can be examined by assuming that the nuclei are stationary.
We did not discuss the widths and shapes of the peaks. Contributions to the line widths and shapes come from motion of the nuclei, which we will consider later. Nuclei in a molecule vibrate, i.e. move relative to each other, and rotate around the center of mass of the molecule. The rotational and vibrational motion, as well as interaction with the solvent, which also is neglected, produce the spectral band widths and shapes and even affect the position of the absorption maximum. When light is absorbed the vibrational and rotational energy of the molecule can change along with the change in the electronic energy. Line widths and shapes therefore depend upon the absorption of different amounts of vibrational and rotational energy. Actually, in a condensed phase, molecular rotation is hindered. This hindered rotation is called libration.
An outcome of our examination of the cyanine dye wavefunctions was a glimpse at three fundamental properties of quantum mechanical systems: orthogonality of wavefunctions, the Heisenberg Uncertainty Principle, and the zero-point energy of bound systems. We also observed that quantum numbers result from the boundary conditions used to describe the physical system. Another observation was that the energy levels for the particle in the box get further apart as the quantum number n increases, but closer together as the size of the box increases. Lastly, the spectra we observe occur because of the interaction of molecules with electromagnetic radiation and the resulting transition of the molecule from one energy level to a higher energy level.
Questions for Thought
1. What is the difference between the spectroscopic wavelength and the wavefunction wavelength?
2. What is the total probability of finding any pi electron on one half-side of a cyanine dye molecule?
3. What is a molecular orbital and how is it related to visible-ultraviolet spectroscopy?
4. Why does a series of conjugated dye molecules, such as the cyanines, have colors ranging from red to blue?
5. Write a few paragraphs describing the origins of the absorption spectra for conjugated dye molecules using the particle-in-a-box model and the terms HOMO and LUMO.
6. Write a paragraph discussing the feasibility of determining the ionization potential for a dye molecule using the particle-in-a-box model.
In this chapter we apply the principles of Quantum Mechanics to the simplest possible physical system, a free particle in one dimension. This particle could be an electron or, if we only consider translational motion, an atom or a molecule. Free means that no forces are acting on the particle. Since a force is produced by a change in the potential energy, the potential energy must be constant if there is no force. This constant can be taken to be zero because energy is relative not absolute. By saying energy is relative, we mean we are concerned with adding and removing energy from systems not with the absolute value of the energy content. The discussion of the free particle in this chapter further illustrates the fundamental ideas of Quantum Mechanics and introduces solutions to new problems. Specifically the energy, momentum and probability density for a free particle are discussed, and a connection is made between the wave property of matter and the uncertainty principle.
• 5.1: The Free Particle
In classical physics, a free particle is one that is present in a "field-free" space. In quantum mechanics, it means a region of uniform potential, usually set to zero in the region of interest since potential can be arbitrarily set to zero at any point (or surface in three dimensions) in space. Here, we obtain the Schrödinger equation for the free particle in one dimension.
• 5.2: The Uncertainty Principle
The uncertainty principle is a consequence of the wave property of matter. A wave has some finite extent in space and generally is not localized at a point. Consequently there usually is significant uncertainty in the position of a quantum particle in space. Activity 1 at the end of this chapter illustrates that a reduction in the spatial extent of a wavefunction to reduce the uncertainty in the position of a particle increases the uncertainty in the momentum of the particle.
• 5.3: Linear Combinations of Eigenfunctions
Many problems encountered by quantum chemists and computational chemists lead to wavefunctions that are not eigenfunctions of the Hamiltonian operator. The wavefunction in this state belongs to a class of functions known as superposition functions, which are linear combinations of eigenfunctions. A linear combination of functions is a sum of functions, each multiplied by a weighting coefficient, which is a constant.
• 5.E: Translational States (Exercises)
Exercises for the "Quantum States of Atoms and Molecules" TextMap by Zielinksi et al.
• 5.S: Translational States (Summary)
05: Translational States
We obtain the Schrödinger equation for the free particle using the following steps. First write
$\hat {H} \psi = E \psi \label {5-1}$
Next define the Hamiltonian,
$\hat {H} = \hat {T} + \hat {V} \label {5-2}$
and substitute the potential energy operator
$\hat {V} = 0 \label {5-3}$
and the kinetic energy operator
$\hat {T} = -\dfrac {\hbar ^2}{2m} \dfrac {d^2}{dx^2} \label {5-4}$
to obtain the Schrödinger equation for a free particle
$-\dfrac {\hbar ^2}{2m} \dfrac {d^2 \psi (x)}{dx^2} = E \psi (x) \label {5-5}$
A major problem in Quantum Mechanics is finding solutions to differential equations, e.g. Equation $\ref{5-5}$. Differential equations arise because the operator for kinetic energy includes a second derivative. We will solve the differential equations for some of the more basic cases, but since this is not a course in Mathematics, we will not go into all the details for other more complicated cases. The solutions that we consider in the greatest detail will illustrate the general procedures and show how the physical concept of quantization arises mathematically.
We already encountered Equation $\ref{5-5}$ in Chapter 4. There, we used our knowledge of some basic functions to find the solution. Now we solve this equation by using some algebra and mathematical logic. First we rearrange Equation $\ref{5-5}$ and make the substitution
$k^2 = \dfrac {2mE}{\hbar^2}. \label{sub}$
The substitution in Equation \ref{sub} is only one way of making a simplification. You could also use a different formulation for the substitution
$\alpha = \dfrac {2mE}{\hbar ^2} \nonumber$
but then you would find later that $(\alpha)^{1/2}$ corresponds to the wavevector $k$ which equals $\frac {2 \pi}{\lambda}$ and $\frac {P}{\hbar}$. So choosing a squared variable like $k^2$ in Equation \ref{sub} is a choice made with foresight. Trial-and-error is one method scientists use to solve problems, and the results often look sophisticated and insightful after they have been found, like choosing $k^2$ rather than $α$.
Since $E$ is the kinetic energy,
$E = \dfrac {p^2}{2m} \label {5-6}$
and we saw in previous chapters that the momentum $p$ and the wavevector $k$ are related,
$p = \hbar k \label {5-7}$
we also could recognize that $\dfrac {2mE}{\hbar ^2}$ is just $k^2$ as shown here in Equation $\ref{5-8}$.
\begin{align} \dfrac {2mE}{\hbar ^2} &= \left (\dfrac {2m}{\hbar ^2}\right ) \left ( \dfrac {p^2}{2m} \right ) \\[4pt] &= \left (\dfrac {\cancel{2m}}{\cancel{\hbar ^2}}\right ) \left ( \dfrac {\cancel{\hbar ^2} k^2}{\cancel{2m}} \right ) \\[4pt] &= k^2 \label {5-8} \end{align}
The result for Equation $\ref{5-5}$ after rearranging and substitution of result from Equation $\ref{5-8}$ is
$\left ( \dfrac {d^2}{dx^2} + k^2 \right ) \psi (x) = 0 \label {5-9}$
This linear second-order differential equation can be solved in the same way that an algebraic quadratic equation is solved. It is separated into two factors, and each is set equal to 0. This factorization produces two first-order differential equations that can be integrated. The details are shown in the following equations.
$\left ( \dfrac {d^2}{dx^2} + k^2 \right ) \psi (x) = \left(\dfrac {d}{dx} + ik\right) \left(\dfrac {d}{dx} - ik \right) \psi (x) = 0 \label {5-10}$
Equation $\ref{5-10}$ will be true if either
$\left( \dfrac {d}{dx} + ik \right) \psi (x) = 0$
or
$\left( \dfrac {d}{dx} - ik \right) \psi (x) = 0 \label {5-11}$
Rearranging and designating the two equations and the two solutions simultaneously by a + sign and a – sign produces
$\dfrac {d \psi _{ \pm} (x) }{\psi _{\pm} (x)} = \pm ik\,dx \label {5-12}$
which leads to
$\ln \psi _\pm (x) = \pm ikx + C _{\pm} \label {5-13}$
and finally
$\psi _{\pm} (x) = A_{\pm} e^{\pm ikx} \label {5-14}$
The constants $A_+$ and $A_-$ result from the constant of integration. The values of these constants are determined by some physical constraint that is imposed upon the solution. Such a constraint is called a boundary condition. For the particle in a box, discussed previously, the boundary condition is that the wavefunction must be zero at the boundaries where the potential energy is infinite. The free particle does not have such a boundary condition because the particle is not constrained to any one place. Another constraint is normalization, and here the integration constants serve to satisfy the normalization requirement.
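A quick symbolic check (assuming the sympy library) confirms that both functions in Equation \ref{5-14} satisfy Equation \ref{5-9}:

```python
import sympy as sp

# Verify that psi_± = A exp(±ikx) satisfy (d^2/dx^2 + k^2) psi = 0.
x = sp.symbols('x', real=True)
k, A = sp.symbols('k A', positive=True)

residuals = []
for sign in (+1, -1):
    psi = A * sp.exp(sign * sp.I * k * x)
    residuals.append(sp.simplify(psi.diff(x, 2) + k**2 * psi))
print(residuals)  # [0, 0]
```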
Exercise $1$
Show that the operator $\left(\dfrac {d^2}{dx^2} + k^2\right)$ equals $\left(\dfrac {d}{dx} + ik\right) \left(\dfrac {d}{dx} - ik\right)$ and that the two factors commute since $k$ does not depend on $x$. The answer is Equation $\ref{5-10}$.
Example $1$: Normalizing the wavefunction of a free particle
Use the normalization constraint to evaluate $A_{\pm}$ in Equation \ref{5-14}.
Solution
Since the integral of $|\psi |^2$ over all values of x from $-∞$ to $+∞$ is infinite, it appears that the wavefunction $\psi$ cannot be normalized. We can circumvent this difficulty if we imagine the particle to be in a region of space ranging from $-L$ to $+L$ and consider $L$ to approach infinity.
The normalization then proceeds in the usual way as shown below. Notice that the normalization constants are real even though the wavefunctions are complex.
\begin{align*} \int \limits _{-L}^{+L} \psi ^* (x) \psi (x) dx &= A_{\pm} ^* A_{\pm} \int \limits _{-L}^{L} e^{\mp ikx} e^{\pm ikx} dx = 1 \\[4pt] |A_{\pm}|^2 \int \limits _{-L}^{+L} dx &= |A_{\pm}|^2 2L = 1 \\[4pt] A_{\pm} &= [2L]^{-1/2} \end{align*}
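The result can be confirmed on a grid for several values of $L$ (a numerical sketch assuming numpy; the wavevector $k = 3$ is an arbitrary choice):

```python
import numpy as np

# With A = (2L)^(-1/2), psi(x) = A exp(ikx) integrates to 1 over [-L, +L]
# for any box half-width L, since |psi|^2 = 1/(2L) is constant.

def trapezoid(f, dx):
    """Trapezoid-rule integral on a uniform grid."""
    return dx * (np.sum(f) - 0.5 * (f[0] + f[-1]))

k = 3.0  # arbitrary wavevector
norms = []
for L in (1.0, 10.0, 100.0):
    x = np.linspace(-L, L, 20001)
    dx = x[1] - x[0]
    A = (2.0 * L) ** -0.5
    psi = A * np.exp(1j * k * x)
    norms.append(trapezoid((np.conj(psi) * psi).real, dx))
print([round(n, 6) for n in norms])  # [1.0, 1.0, 1.0]
```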
Exercise $2$
Write the wavefunctions, $\psi ^+$ and $\psi ^−$, for the free particle, explicitly including the normalization factors found in Example $1$.
Exercise $3$
Find solutions to each of the following differential equations.
$\dfrac {d^2 y(x)}{dx^2} + 25y(x) = 0 \nonumber$
$\dfrac {d^2 y(x)}{dx^2} -3y(x) = 0 \nonumber$
A neat property of linear differential equations is that sums of solutions also are solutions, or more generally, linear combinations of solutions are solutions. A linear combination is a sum with constant coefficients where the coefficients can be positive, negative, or imaginary. For example
$\psi(x) = C_1\psi _+(x) + C_2\psi _−(x) \label {5-15}$
where $C_1$ and $C_2$ are the constant coefficients. Inserting the functions from Equation $\ref{5-14}$, one gets
$\psi (x) = \dfrac {C_1}{\sqrt {2L}} e^{+ikx} + \dfrac {C_2}{\sqrt {2L}} e^{-ikx} \label {5-16}$
By using Euler's formula,
$e^{\pm ikx} = \cos (kx) \pm i\sin (kx) \label {5-17}$
Equation $\ref{5-15}$ is transformed into
$\psi (x) = C\cos (kx) + D\sin (kx) \label {5-18}$
where we see that k is just the wavevector $\dfrac{2\pi}{\lambda}$ in the trigonometric form of the solution to the Schrödinger equation. This result is consistent with our previous discussion regarding the choice of $k^2$ to represent $\dfrac {2mE}{ħ^2}$.
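Euler's formula, which connects the exponential and trigonometric forms of the solution, can be checked numerically (a sketch assuming numpy; $k$ and the grid are arbitrary choices):

```python
import numpy as np

# exp(ikx) + exp(-ikx) = 2 cos(kx), and the difference gives 2i sin(kx).
k = 1.7
x = np.linspace(-3.0, 3.0, 101)
plus, minus = np.exp(1j * k * x), np.exp(-1j * k * x)

err_cos = np.max(np.abs(plus + minus - 2.0 * np.cos(k * x)))
err_sin = np.max(np.abs(plus - minus - 2.0j * np.sin(k * x)))
print(err_cos < 1e-12, err_sin < 1e-12)  # True True
```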
Exercise $4$
Find expressions for $C$ and $D$ in Equation $\ref{5-18}$ for two cases: when $C_1 = C_2$ = +1 and when $C_1$ = +1 and $C_2$ = -1.
Exercise $5$
Verify that Equations $\ref{5-16}$ and $\ref{5-18}$ are solutions to the Schrödinger Equation (Equation $\ref{5-5}$) with the eigenvalue $E = \dfrac {\hbar ^2 k^2 }{2m}$.
Exercise $6$
Demonstrate that the wavefunctions you wrote for Exercise $2$ are eigenfunctions of the momentum operator with eigenvalues $\hbar k$ and $-\hbar k$.
Exercise $7$
Determine whether $\psi (x)$ in Equation $\ref{5-16}$ is an eigenfunction of the momentum operator.
Exercise $8$
The probability density for finding the free particle at any point in the segment $-L$ to $+L$ can be seen by plotting $\psi ^*\psi$ from -L to +L. Sketch these plots for the two wavefunctions, $\psi _+$ and $\psi _−$, that you wrote for Exercise $2$. Demonstrate that the area between $\psi ^*\psi$ and the x-axis equals 1 for any value of L. Why must this area equal 1 even as L approaches infinity? Are all points in the space equally probable or are some positions favored by the particle?
We found wavefunctions that describe the free particle, which could be an electron, an atom, or a molecule. Each wavefunction is identified by the wavevector $k$. A wavefunction tells us three things about the free particle: the energy of the particle, the momentum of the particle, and the probability density of finding the particle at any point. You have demonstrated these properties in Exercises $5$, $6$, and $8$. These ideas are discussed further in the following paragraphs.
We first find the momentum of a particle described by $\psi _+(x)$. We also can say that the particle is in the state $\psi _+(x)$. The value of the momentum is found by operating on the function with the momentum operator. Remember this problem is one-dimensional so vector quantities such as the wavevector or the momentum appear as scalars. The result is shown in Example $2$.
Example $2$
Extract the momentum from the wavefunction for a free electron.
Solution
First we write the momentum operator and wavefunction as shown by I and II. The momentum operator tells us the mathematical operation to perform on the function to obtain the momentum. Complete the operation shown in II to get III, which simplifies to IV.
$\underset{I}{-i\hbar \dfrac {d}{dx} \psi _+ (x)} = \underset{II}{-i\hbar \dfrac {d}{dx} A _+ e^{ikx}} = \underset{III}{(-i\hbar) (ik) A_+ e^{ikx}} = \underset{IV}{\hbar k \psi _+ (x)} \nonumber$
From Example $2$ we conclude that the momentum of this particle is
$p = ħk.$
Here the Compton-de Broglie momentum-wavelength relation $p = \hbar k$ appears from the solution to the Schrödinger equation and the definition of the momentum operator! For an electron in the state $\psi _−(x)$, we similarly find $p = -\hbar k$. This particle is moving in the minus x direction, opposite from the particle with momentum $+ħk$.
Since $k = \dfrac {2 \pi}{\lambda}$, what then is the meaning of the wavelength for a particle, e.g. an electron? The wavelength is the wavelength of the wavefunction that describes the properties of the electron. We are not saying that an electron is a wave in the sense that an ocean wave is a wave; rather we are saying that a wavefunction is needed to describe the wave-like properties of the electron. Why the electron has these wave-like properties, remains a mystery.
We find the energy of the particle by operating on the wavefunction with the Hamiltonian operator as shown next in Equation $\ref{5-19}$. Examine each step and be sure you see how the eigenvalue is extracted from the wavefunction.
\begin{align} \hat {H} \psi _{\pm} (x) &= \dfrac {-\hbar ^2}{2m} \dfrac {d^2}{dx^2} A_{\pm} e^{\pm ikx} \\[4pt] &= \dfrac {-\hbar ^2}{2m} (\pm ik)^2 A_{\pm} e^{\pm ikx} \\[4pt] & = \dfrac {\hbar ^2 k^2}{2m} A_{\pm}e^{\pm ikx} \label {5-19} \end{align}
Notice again how the operator works on the wavefunction to extract a property of the system from it. We conclude that the energy of the particle is
$E = \dfrac { \hbar ^2 k^2}{2m} \label {5-20}$
This is just the classical relation between energy and momentum of a free particle, $E = \dfrac {p^2}{2m}$. Note that an electron with momentum $+\hbar k$ has the same energy as an electron with momentum $-\hbar k$. When two or more states have the same energy, the states and the energy level are said to be degenerate.
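The degeneracy of the $\pm k$ states can be seen numerically. This sketch (assuming numpy; units with $\hbar = m = 1$ and $k = 2$) extracts the momentum and energy eigenvalues from both plane waves by finite differences:

```python
import numpy as np

# psi_± = exp(±ikx) carry momentum ±k but the same kinetic energy k^2/2.
k = 2.0
x = np.linspace(-5.0, 5.0, 20001)
dx = x[1] - x[0]

momenta, energies = [], []
for sign in (+1, -1):
    psi = np.exp(sign * 1j * k * x)
    dpsi = np.gradient(psi, dx)                        # d(psi)/dx
    momenta.append(np.mean((-1j * dpsi / psi)[1:-1]).real)
    d2psi = (psi[2:] - 2 * psi[1:-1] + psi[:-2]) / dx**2
    energies.append(np.mean((-0.5 * d2psi / psi[1:-1]).real))
print([round(p, 3) for p in momenta])   # [2.0, -2.0]
print([round(E, 3) for E in energies])  # [2.0, 2.0]
```

Both states return the same energy $k^2/2$ while carrying opposite momenta.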
We have not found any restrictions on the momentum or the energy. These quantities are not quantized for the free particle because there are no boundary conditions. Any wave with any wavelength fits into an unbounded space. Quantization results from boundary conditions imposed on the wavefunction, as we saw for the particle-in-a-box.
Exercise $9$
Describe how the wavelength of a free particle varies with the energy of the particle.
Exercise $10$
Summarize how the energy and momentum information is contained in the wavefunction and how this information is extracted from the wavefunction.
The probability density of a free particle at a position in space $x_0$ is
$\psi _{\pm} ^* (x_0) \psi _{\pm} (x_0) = (2L)^{-1} e^{\mp ikx_0} e^{\pm ikx_0} = (2L)^{-1} \label {5-21}$
From this result we see that the probability density has units of 1/m; it is the probability per meter of finding the electron at the point $x_0$. This probability is independent of $x_0$: the electron can be found any place along the x-axis with equal probability. Although we have no knowledge of the position of the electron, we do know the electron momentum exactly. This relationship between our knowledge of position and momentum is a manifestation of the Heisenberg Uncertainty Principle, which says that as the uncertainty in one quantity is reduced, the uncertainty in the other quantity increases. For this case, we know the momentum exactly and have no knowledge of the position of the particle. The uncertainty in the momentum is zero; the uncertainty in the position is infinite.
In the mid 1920's the German physicist Werner Heisenberg showed that if we try to locate an electron within a region $Δx$; e.g. by scattering light from it, some momentum is transferred to the electron, and it is not possible to determine exactly how much momentum is transferred, even in principle. Heisenberg showed that consequently there is a relationship between the uncertainty in position $Δx$ and the uncertainty in momentum $Δp$.
$\Delta p \Delta x \ge \frac {\hbar}{2} \label {5-22}$
You can see from Equation $\ref{5-22}$ that as $Δp$ approaches 0, $Δx$ must approach ∞, which is the case of the free particle discussed previously.
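Equation $\ref{5-22}$ is easy to explore numerically. The sketch below (illustrative numbers only, not values from the text) computes the minimum position uncertainty allowed for a given momentum uncertainty:

```python
HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def min_position_uncertainty(delta_p):
    """Smallest Delta-x allowed by Equation 5-22 for a given Delta-p."""
    return HBAR / (2 * delta_p)

# Illustrative: an electron whose momentum is known to within 1e-27 kg*m/s.
dx = min_position_uncertainty(1e-27)

# As Delta-p shrinks toward zero, the minimum Delta-x grows without bound,
# approaching the free-particle limit discussed in the text.
assert min_position_uncertainty(1e-30) > dx
```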
This uncertainty principle, which also is discussed in Chapter 4, is a consequence of the wave property of matter. A wave has some finite extent in space and generally is not localized at a point. Consequently there usually is significant uncertainty in the position of a quantum particle in space. Activity 1 at the end of this chapter illustrates that a reduction in the spatial extent of a wavefunction to reduce the uncertainty in the position of a particle increases the uncertainty in the momentum of the particle. This illustration is based on the ideas described in the next section.
Exercise $1$
Compare the minimum uncertainty in the positions of a baseball (mass = 140 g) and an electron, each with a speed of 91.3 miles per hour, which is characteristic of a reasonable fastball, if the standard deviation in the measurement of the speed is 0.1 mile per hour. Also compare the wavelengths associated with these two particles. Identify the insights that you gain from these comparisons.
5.03: Linear Combinations of Eigenfunctions
It is not necessary that an electron be described by an eigenfunction of the Hamiltonian operator. Many problems encountered by quantum chemists and computational chemists lead to wavefunctions that are not eigenfunctions of the Hamiltonian operator. Science is like that; interesting problems are not simple to solve. They require adaptation of current techniques, creative energy, and a good set of skills developed by studying solutions to previously solved interesting problems.
Consider a free electron in one dimension that is described by the wavefunction
$\psi (x) = C_1\psi _1 (x) + C_2 \psi _2 (x) \label {5-21}$
with
\begin{align} \psi _1(x) &= \left ( \dfrac {1}{2L} \right )^{1/2} e^{ik_1x} \label {5-22} \\[4pt] \psi _2(x) &= \left ( \dfrac {1}{2L} \right )^{1/2} e^{ik_2x} \label {5-23} \end{align}
where $k_1$ and $k_2$ have different magnitudes. Although such a function is not an eigenfunction of the momentum operator or the Hamiltonian operator, we can calculate the average momentum and average energy of an electron in this state from the expectation value integral. (Note: "in-this-state" means "described-by-this-wavefunction".)
Exercise $1$
Show that the function $\psi(x)$ defined by Equation $\ref{5-21}$ is not an eigenfunction of the momentum operator or the Hamiltonian operator for a free electron in one dimension.
The function shown in Equation $\ref{5-21}$ belongs to a class of functions known as superposition functions, which are linear combinations of eigenfunctions. A linear combination of functions is a sum of functions, each multiplied by a weighting coefficient, which is a constant. The adjective linear is used because the coefficients are constants. The constants, e.g. $C_1$ and $C_2$ in Equation $\ref{5-21}$, give the weight of each component ($\psi_1$ and $\psi_2$) in the total wavefunction. Notice from the discussion previously that each component in Equation $\ref{5-21}$ is an eigenfunction of the momentum operator and the Hamiltonian operator although the linear combination function (i.e., $\psi(x)$) is not.
The expectation value, i.e. average value, of the momentum operator is found as follows. First, write the integral for the expectation value and then substitute into this integral the superposition function and its complex conjugate as shown below. Since we are considering a free particle in one dimension, the limits on the integration are $–L$ and $+L$ with $L$ going to infinity.
\begin{align} \left \langle p \right \rangle &= \int \psi ^* (x) \left ( -i\hbar \dfrac {d}{dx} \right ) \psi (x) dx \\[4pt] &= \dfrac {-i\hbar}{2L} \int \limits _{-L}^{+L} \left ( C_1^* e^{-ik_1x} + C_2^* e^{-ik_2x} \right ) \dfrac {d}{dx} \left ( C_1 e^{ik_1x} + C_2 e^{ik_2x} \right ) dx \\[4pt] &= \dfrac {-i\hbar}{2L} \int \limits _{-L}^{+L} \left ( C_1^* e^{-ik_1x} + C_2^* e^{-ik_2x} \right ) \left ( (ik_1)C_1 e^{ik_1x} + (ik_2)C_2 e^{ik_2x} \right ) dx \label {5-24} \end{align}
Cross-multiplying the two factors in parentheses yields four terms.
$\left \langle p \right \rangle = I_1 + I_2 + I_3 + I_4 \nonumber$
with
\begin{align} I_1 &= \dfrac {\hbar k_1}{2L} C^*_1 C_1 \int \limits ^{+L} _{-L} dx = C^*_1 C_1 \hbar k_1 \\[4pt] I_2 &= \dfrac {\hbar k_2}{2L} C^*_2 C_2 \int \limits ^{+L} _{-L} dx = C^*_2 C_2 \hbar k_2 \\[4pt] I_3 &= \dfrac {\hbar k_2}{2L} C^*_1 C_2 \int \limits ^{+L} _{-L} e^{i(k_2 - k_1)x} dx \\[4pt] I_4 &= \dfrac {\hbar k_1}{2L} C^*_2 C_1 \int \limits ^{+L} _{-L} e^{i(k_1 - k_2)x} dx \label {5-25} \end{align}
An integral of two different functions, e.g. $\int \psi _1^* \psi _2 dx$, is called an overlap integral or orthogonality integral. When such an integral equals zero, the functions are said to be orthogonal. The integrals in $I_3$ and $I_4$ are zero because the functions $\psi_1$ and $\psi_2$ are orthogonal. We know $\psi_1$ and $\psi_2$ are orthogonal because of the Orthogonality Theorem described previously: eigenfunctions of a Hermitian operator, such as the momentum operator or the Hamiltonian operator, that have different eigenvalues, as is the case here, are orthogonal. Also, by using Euler's formula and following Example $1$ below, you can see why these integrals are zero.
Example $1$
For the integral part of $I_3$ obtain
$\int \cos [(k_2 - k_1 ) x ] dx + i \int \sin [(k_2 - k_1)x ] dx \nonumber$
from Euler’s formula.
Solution
Here we have the integrals of a cosine and a sine function along the x-axis from minus infinity to plus infinity. Since these integrals are the area under the cosine and sine curves, they must be zero because the positive lobes are canceled by the negative lobes when the integration is carried out from $–∞$ to $+∞$.
As a result of this orthogonality, $\left \langle p \right \rangle$ is just $I_1 + I_2$, which is
\begin{align} \left \langle p \right \rangle &= C_1^* C_1 \hbar k_1 + C^*_2 C_2 \hbar k_2 \\[4pt] &= C_1^* C_1 p_1 + C^*_2 C_2 p_2 \label {5-26} \end{align}
where $\hbar k_1$ is the momentum $p_1$ of state $\psi_1$, and $\hbar k_2$ is the momentum $p_2$ of state $\psi_2$. As explained in Chapter 3, an average value can be calculated by summing, over all possibilities, the possible values times the probability of each value. Equation $\ref{5-26}$ has this form if we interpret $C_1^*C_1$ and $C_2^*C_2$ as the probability that the electron has momentum $p_1$ and $p_2$, respectively. These coefficients therefore are called probability amplitude coefficients, and their absolute value squared gives the probability that the electron is described by $\psi_1$ and $\psi_2$, respectively. This interpretation of these coefficients as probability amplitudes is very important.
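The vanishing of the cross terms can also be seen numerically. The sine part of $\int_{-L}^{+L} e^{i(k_2-k_1)x}dx$ integrates to zero by symmetry, so each normalized cross term is proportional to $\sin(\Delta k L)/(\Delta k L)$, which dies off as $L$ grows. The sketch below (illustrative values only, not from the text) checks this and then forms the weighted average of Equation $\ref{5-26}$:

```python
import math

HBAR = 1.054571817e-34  # J*s

def cross_term(delta_k, L):
    """(1/2L) * integral_{-L}^{+L} exp(i*delta_k*x) dx, done analytically.
    The sine part vanishes by symmetry, leaving sin(delta_k*L)/(delta_k*L)."""
    return math.sin(delta_k * L) / (delta_k * L)

# The normalized overlap of two different momentum eigenfunctions falls
# off as 1/L, so the cross terms I3 and I4 vanish as L -> infinity.
delta_k = 2.0  # k2 - k1, an arbitrary illustrative value in 1/m
values = [abs(cross_term(delta_k, L)) for L in (10.0, 1e3, 1e6)]
assert values[2] < values[0] and values[2] < 1e-5

# What survives is the weighted average of Equation 5-26:
c1sq, c2sq = 0.25, 0.75      # example probabilities |C1|^2 and |C2|^2
k1, k2 = 1e9, 2e9            # illustrative wave vectors in 1/m
p_avg = c1sq * HBAR * k1 + c2sq * HBAR * k2
assert c1sq + c2sq == 1.0
```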
Exercise $2$
• Find the expectation value for the energy $\left \langle E \right \rangle$ for the superposition wavefunction given by Equation $\ref{5-21}$.
• Explain why $C_1^*C_1$ is the probability that the electron has energy $\dfrac {\hbar ^2 k^2_1}{2m}$ and $C_2^*C_2$ is the probability that the electron has energy $\dfrac {\hbar ^2 k^2_2}{2m}$.
• What is the expectation value for the energy when both components have equal weights in the superposition function, i.e. when $C_1 = C_2 = 2^{-1/2}$? | textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Book%3A_Quantum_States_of_Atoms_and_Molecules_(Zielinksi_et_al)/05%3A_Translational_States/5.02%3A_The_Uncertainty_Principle.txt |
Q5.1
Write the Schrödinger equation for a free particle in three-dimensional space.
Q5.2
Solve the Schrödinger equation to find the wavefunctions for a free particle in three-dimensional space.
Q5.3
Show that these functions are eigenfunctions of the momentum operator in three-dimensional space.
Q5.4
If you have not already done so, use vector notation for the wave vector and position of the particle.
Q5.5
Write the wavefunctions using vector notation for the wave vector and the position.
Q5.6
Write the momentum operator in terms of the del-operator, which is defined as $\hat {\nabla} = \vec {x} \frac {\partial}{\partial x} + \vec {y} \frac {\partial}{\partial y} + \vec {z} \frac {\partial}{\partial z}$ where the arrow caps on x, y, and z designate unit vectors.
Q5.7
Write the Laplacian operator in terms of partial derivatives with respect to x, y, and z. The Laplacian operator is defined as the scalar product of del with itself, $\hat {\nabla} ^2 = \hat {\nabla} \cdot \hat {\nabla}$.
Q5.8
Write the kinetic energy operator in terms of the Laplacian operator.
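These questions can be checked symbolically. The sketch below (using the sympy library, an assumption not made in the text) applies the momentum and kinetic energy operators to a three-dimensional plane wave:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
kx, ky, kz = sp.symbols('k_x k_y k_z', real=True)
hbar, m = sp.symbols('hbar m', positive=True)

# A free-particle wavefunction in three dimensions, exp(i k . r):
psi = sp.exp(sp.I * (kx * x + ky * y + kz * z))

# Apply the momentum operator -i*hbar*del component by component:
p_psi = [-sp.I * hbar * sp.diff(psi, v) for v in (x, y, z)]
eigenvalues = [sp.simplify(comp / psi) for comp in p_psi]
# psi is an eigenfunction with eigenvalue vector hbar*(kx, ky, kz):
assert eigenvalues == [hbar * kx, hbar * ky, hbar * kz]

# The kinetic energy operator -(hbar^2/2m) * Laplacian returns
# hbar^2 k^2 / (2m), matching the free-particle energy:
laplacian_psi = sum(sp.diff(psi, v, 2) for v in (x, y, z))
E = sp.simplify(-hbar**2 / (2 * m) * laplacian_psi / psi)
assert sp.simplify(E - hbar**2 * (kx**2 + ky**2 + kz**2) / (2 * m)) == 0
```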
David M. Hanson, Erica Harvey, Robert Sweeney, Theresa Julia Zielinski ("Quantum States of Atoms and Molecules")
5.S: Translational States (Summary)
In this chapter we applied the principles of quantum mechanics to the simplest physical system, a free particle in one dimension, which could be an electron, an atom, or a molecule. We wrote the Schrödinger equation for the system and then solved this equation to obtain the wavefunctions, $\psi _k (x)$, describing the system. Each wavefunction is identified by the magnitude of the wave vector, $k$, as a subscript. We observed that the wavefunctions are not quantized because there are no boundary conditions for this system. By “not quantized,” we mean that the wave vector, momentum, and energy can have any values. We determined the constants of integration for our solutions by using the normalization condition. By using the wavefunction and the momentum operator to obtain the momentum of the particle, we discovered that the momentum was related to the wave vector and wavelength just as Compton and de Broglie proposed. Note that the momentum and energy of the free particle are related just as they are classically. The position of the particle is completely undetermined by the wavefunction because the momentum is given exactly. The particle could be anywhere. This relationship between momentum and position is a manifestation of the Heisenberg Uncertainty Principle. The momentum is known exactly because the wavefunction is an eigenfunction of the momentum operator.
The concepts of overlap, orthogonality, and linear combination or superposition of functions appeared in the discussion. These concepts will be useful later when we discuss bonding and the mathematical representations of bonding in semi-empirical and ab initio molecular orbital theories. Linear combinations of atomic orbitals and other functions are used to describe bonds in molecules, and the overlap and orthogonality of these functions are important there.
Exercise $14$
Complete the table below. For an example of a completed table, see the overview table at the end of Chapter 4.
Overview of key concepts and equations for the free particle

• Potential energy: $V =$
• Hamiltonian:
• Wavefunctions: $\Psi =$
• Quantum Numbers:
• Energies: $E =$
• Spectroscopic Selection Rules:
• Angular Momentum Properties:
In this chapter we use the harmonic oscillator model and a combination of classical and quantum mechanics to learn about the vibrational states of molecules. The first section of the chapter introduces the concepts of normal modes and normal coordinates in order to deal with the complexity of vibrational motion found in polyatomic molecules. The second section of the chapter reviews the classical treatment of the harmonic oscillator model, which is very general. Anything with a potential energy that depends quadratically on position, or equivalently experiences a linear restoring force, is a harmonic oscillator. In addition to vibrating molecules, the harmonic oscillator model therefore describes physical systems such as a pendulum, a weight hanging from a spring, or weights connected by springs.
The remainder of the chapter treats the vibrational states of molecules using quantum mechanics, starting with the solutions to the Schrödinger equation. Quantum mechanics provides the probability density function for positions of the atomic nuclei and the vibrational energy level structure, and is used to calculate spectroscopic selection rules, explain intensities in spectra, and calculate the vibrational force constants. Our analysis will identify the molecular properties that determine the frequency of radiation that is absorbed, determine which vibrations appear in the infrared spectrum (and which do not), and determine why some vibrations absorb radiation strongly (and others do not).
06: Vibrational States
To deal with the complexity of the vibrational motion in polyatomic molecules, we need to utilize the three important concepts listed as the title of this section. By a spatial degree of freedom, we mean an independent direction of motion. A single atom has three spatial degrees of freedom because it can move in three independent or orthogonal directions in space, i.e. along the x, y, or z-axes of a Cartesian coordinate system. Motion in any other direction results from combining velocity components along two or three of these directions. Two atoms have six spatial degrees of freedom because each atom can move in any of these three directions independently.
Equivalently, we also can say one atom has three spatial degrees of freedom because we need to specify the values of three coordinates $(x_1, y_1, z_1)$ to locate the atom. Two atoms have six spatial degrees of freedom because we need to specify the values of six coordinates, $(x_1, y_1, z_1)$ and $(x_2, y_2, z_2)$, to locate two atoms in space. In general, to locate N atoms in space, we need to specify 3N coordinates, so a molecule comprised of N atoms has 3N spatial degrees of freedom.
Exercise $1$
Identify the number of spatial degrees of freedom for the following molecules: $Cl_2$, $CO_2$, $H_2O$, $CH_4$, $C_2H_2$, $C_2H_4$, $C_6H_6$.
The motion of the atomic nuclei in a molecule is not as simple as translating each of the nuclei independently along the x, y, and z axes because the nuclei, which are positively charged, are coupled together by the electrostatic interactions with the electrons, which are negatively charged. The electrons between two nuclei effectively attract them to each other, forming a chemical bond.
Consider the case of a diatomic molecule, which has six degrees of freedom. The motion of the atoms is constrained by the bond. If one atom moves, a force will be exerted on the other atom because of the bond. The situation is like two balls coupled together by a spring. There are still six degrees of freedom, but the motion of atom 1 along x, y, and z is not independent of the motion of atom 2 along x, y, and z because the atoms are bound together.
It therefore is not very useful to use the six Cartesian coordinates, $(x_1, y_1, z_1)$ and $(x_2, y_2, z_2)$, to describe the six degrees of freedom because the two atoms are coupled together. We need new coordinates that are independent of each other and yet account for the coupled motion of the two atoms. These new coordinates are called normal coordinates, and the motion described by a normal coordinate is called a normal mode.
A normal coordinate is a linear combination of Cartesian displacement coordinates. A linear combination is a sum of terms with constant weighting coefficients multiplying each term. The coefficients can be imaginary or any positive or negative number including +1 and -1. For example, the point or vector r = (1, 2, 3) in three-dimensional space can be written as a linear combination of unit vectors.
$r = 1 \bar {x} + 2 \bar {y} + 3 \bar {z} \label {6-1}$
A Cartesian displacement coordinate gives the displacement in a particular direction of an atom from its equilibrium position. The equilibrium positions of all the atoms are those points where no forces are acting on any of the atoms. Usually the displacements from equilibrium are considered to be small. For illustration, the Cartesian displacement coordinates for HCl are defined in Table $1$, and they are illustrated in Figure $1$.
Table $1$: Cartesian displacement coordinates for HCl.*
$q_1 = x_{H} - x^e_H$
$q_2 = y_{H} - y^e_H$
$q_3 = z_H - z^e_H$
$q_4 = x_{Cl} - x^e_{Cl}$
$q_5 = y_{Cl} - y^e_{Cl}$
$q_6 = z_{Cl} - z^e_{Cl}$
*The superscript e designates the coordinate value at the equilibrium position.
Note that the position of one atom can be written as a vector $r_1$ where $r_1 = (x_1, y_1, z_1)$, and the positions of two atoms can be written as two vectors $r_1$ and $r_2$ or as a generalized vector that contains all six components $r = (x_1, y_1, z_1, x_2, y_2, z_2)$. Similarly the six Cartesian displacement coordinates can be written as such a generalized vector $q = (q_1, q_2, q_3, q_4, q_5, q_6)$.
For a diatomic molecule it is easy to find the linear combinations of the Cartesian displacement coordinates that form the normal coordinates and describe the normal modes. Just take sums and differences of the Cartesian displacement coordinates. Refer to Table $1$ and Figure $1$ for the definition of the q's. The combination $q_1 + q_4$ corresponds to translation of the entire molecule in the x direction; call this normal coordinate $T_x$. Similarly we can define $T_y = q_2 + q_5$ and $T_z = q_3 + q_6$ as translations in the y and z directions, respectively. Now we have three normal coordinates that account for three of the degrees of freedom, the three translations of the entire molecule.
What do we do about the remaining three degrees of freedom? Here let's use a simple rule for doing creative science: if one thing works, try something similar and examine the result. In this case, if adding quantities works, try subtracting them. Examine the combination $q_2 - q_5$. This combination means that H is displaced in one direction and Cl is displaced in the opposite direction. Because of the bond, the two atoms cannot move completely apart, so this small displacement of each atom from equilibrium is the beginning of a rotation about the z-axis. Call this normal coordinate $R_z$. Similarly define $R_y = q_3 - q_6$ to be rotation about the y-axis. We now have found two rotational normal coordinates corresponding to two rotational degrees of freedom.
The remaining combination, $q_1 - q_4$, corresponds to the atoms moving toward each other along the x-axis. This motion is the beginning of a vibration, i.e. the oscillation of the atoms back and forth along the x-axis about their equilibrium positions, and accounts for the remaining sixth degree of freedom. We use $Q$ for the vibrational normal coordinate.
$Q = q_1 - q_4 \label {6-2}$
To summarize: a normal coordinate is a linear combination of atomic Cartesian displacement coordinates that describes the coupled motion of all the atoms that comprise a molecule. A normal mode is the coupled motion of all the atoms described by a normal coordinate. While diatomic molecules have only one normal vibrational mode and hence one normal vibrational coordinate, polyatomic molecules have many.
Exercise $2$
Draw and label six diagrams, each similar to Figure $1$, to show the 3 translational, 2 rotational and 1 vibrational normal coordinates of a diatomic molecule.
Exercise $3$
Vibrational normal modes have several distinguishing characteristics. Examine the animations for the normal modes of benzene shown in Figure $3$ to identify and make a list of these characteristics. Use a molecular modeling program to calculate and visualize the normal modes of another molecule.
The list of distinguishing characteristics of normal modes that you compiled in Exercise $3$ should include the following four properties. If not, reexamine the animations to confirm that these characteristics are present.
1. In a particular vibrational normal mode, the atoms move about their equilibrium positions in a sinusoidal fashion with the same frequency.
2. Each atom reaches its position of maximum displacement at the same time, but the direction of the displacement may differ for different atoms.
3. Although the atoms are moving, the relationships among the relative positions of the different atoms do not change.
4. The center of mass of the molecule does not move.
For the example of HCl (see Table $1$), the first property, stated mathematically, means
$q_1 = A_1 \sin (\omega t) \quad \text {and} \quad q_4 = A_4 \sin (\omega t) \label {6-3}$
The maximum displacements or amplitudes are given by $A_1$ and $A_4$, and the frequency of oscillation (in radians per second) is $\omega$ for both displacement coordinates involved in the normal vibrational mode of HCl. Substitution of Equation $\ref{6-3}$ for the displacement coordinates into the expression determined above for the vibrational normal coordinate, Equation $\ref{6-2}$, yields
$Q = q_1 - q_4 = A_1 \sin (\omega t) - A_4 \sin (\omega t) \label {6-4}$
This time-dependent expression describes the coupled motions of the hydrogen and chlorine atoms in a vibration of the HCl molecule. In general for a polyatomic molecule, the magnitude of each atom's displacement in a vibrational normal mode may be different, and some can be zero. If the amplitude $A$ for some atom in some direction is zero, that atom does not move in that direction in that normal mode. In different normal modes, the displacements of the atoms are different, and the frequencies of the motion generally are different. If two or more vibrational modes have the same vibrational frequency, these modes are called degenerate.
You probably noticed in Exercise $3$ that the atoms reached extreme points in their motion at the same time but that they were not all moving in the same direction at the same time. These characteristics are described by the second and third properties from the list above. For the case of HCl, the two atoms always move in exactly opposite directions during the vibration. Mathematically, the negative sign in Equation $\ref{6-2}$, which we developed for the normal coordinate $Q$, accounts for this relationship.
This timing with respect to the direction of motion is called the phasing of the atoms. In a normal mode, the atoms move with a constant phase relationship to each other. The phase relationship is represented by a phase angle $\varphi$ in the argument of the sine function that describes the time oscillation, $\sin(\omega t + \varphi)$. The angle is called a phase angle because it shifts the sine function on the time axis. We can illustrate this phase relationship for HCl. Use the trigonometric identity
$- \sin {\theta} = \sin (\theta + 180^\circ) \label {6-5}$
in Equation \ref{6-4} to obtain
$Q = A_1 \sin (\omega t ) + A_4 \sin (\omega t + 180^\circ) \label {6-6}$
to see that the phase angle for this case is $180^\circ$.
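The equivalence of Equations $\ref{6-4}$ and $\ref{6-6}$ can be confirmed numerically; the amplitudes and frequency below are arbitrary illustrative values, not data from the text:

```python
import math

A1, A4, omega = 1.0, 0.5, 3.0   # illustrative amplitudes and angular frequency

def Q_eq_6_4(t):
    """Normal coordinate written as a difference (Equation 6-4)."""
    return A1 * math.sin(omega * t) - A4 * math.sin(omega * t)

def Q_eq_6_6(t):
    """Same motion written with an explicit 180-degree (pi radian) phase shift
    on the second atom (Equation 6-6)."""
    return A1 * math.sin(omega * t) + A4 * math.sin(omega * t + math.pi)

# The two forms agree at every time, confirming -sin(theta) = sin(theta + 180 deg).
assert all(abs(Q_eq_6_4(t) - Q_eq_6_6(t)) < 1e-12
           for t in (0.0, 0.1, 0.5, 1.0, 2.5))
```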
The phase angle $\varphi$ accounts for the fact that the H atom and the Cl atom reach their maximum displacements in the positive x-direction, $+A_1$ and $+A_4$, at different times. Generally in a normal mode the phase angle $\varphi$ is $0^\circ$ or $180^\circ$. If $\varphi = 0^\circ$ for both atoms, the atoms move together, and they are said to be in-phase. For the vibration of a diatomic molecule such as HCl, the phase angle for one atom is $\varphi = 0^\circ$, and the phase angle for the other atom is $\varphi = 180^\circ$. The atoms therefore move in opposite directions at any given time, and the atoms are said to be $180^\circ$ out-of-phase. When $\varphi$ is $180^\circ$, two atoms reach the extreme points in their motion at the same time, but one is in the positive direction and the other is in the negative direction.
Phase relationships can be seen by watching a marching band. All the players are executing the same marching motion at the same frequency, but a few may be ahead or behind the rest. You might say, "They are out-of-step." You also could say, "They are out-of-phase."
To illustrate the fourth property for HCl, recall that the center of mass for a diatomic molecule is defined as the point where the following equation is satisfied.
$m_H d_H = m_{Cl} d_{Cl} \label {6-7}$
The masses of the atoms are given by $m_H$ and $m_{Cl}$, and $d_H$ and $d_{Cl}$ are the distances of these atoms from the center of mass.
Exercise $4$
Find the distances, $d_H$ and $d_{Cl}$, of the H and Cl atoms from the center of mass in HCl given that the bond length is 0.13 nm. In general for a diatomic molecule, AB, what determines the ratio $d_A/d_B$, and which atom moves the greater distance in the vibration?
In general, to satisfy the center of mass condition, a light atom is located further from the center of mass than a heavy atom. To keep the center of mass fixed during a vibration, the amplitude of motion of an atom must depend inversely on its mass. In other words, a light atom is located further from the center of mass and moves a longer distance in a vibration than a heavy atom.
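The center-of-mass condition is simple to encode. The sketch below uses round masses of 1 and 35 amu (roughly H and Cl) purely for illustration:

```python
def com_distances(m_a, m_b, bond_length):
    """Distances of atoms A and B from the diatomic center of mass, obtained
    from m_a*d_a = m_b*d_b (Equation 6-7) with d_a + d_b = bond_length."""
    d_a = m_b / (m_a + m_b) * bond_length
    d_b = m_a / (m_a + m_b) * bond_length
    return d_a, d_b

# Illustrative round masses in amu for a light atom bound to a heavy one:
d_light, d_heavy = com_distances(1.0, 35.0, 0.13)  # bond length in nm

assert d_light > d_heavy                        # light atom sits farther out
assert abs(1.0 * d_light - 35.0 * d_heavy) < 1e-12  # Equation 6-7 holds
```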
Exercise $5$
Using the center of mass condition, Equation $\ref{6-7}$, find the ratio of $A_1$ to $A_4$ that keeps the HCl center of mass stationary during a vibration. Find values for $A_1$ and $A_4$ that satisfy the condition
$A_1^2 + A_4^2 = 1. \nonumber$
Exercise $6$
For a vibrating HCl molecule, use the four properties of a normal vibrational mode, listed previously, to sketch a graph showing the position of the H atom (plot 1) and the position of the Cl atom (plot 2) as a function of time. Both plots should be on the same scale. Hint: place x on the vertical axis and time on the horizontal axis.
In general, a molecule with 3N spatial degrees of freedom has 3 translational normal modes (along each of the three axes), 3 rotational normal modes (around each of the three axes), and 3N-6 (the remaining number) different vibrational normal modes of motion. A linear molecule, as we have just seen, only has two rotational modes, so there are 3N-5 vibrational normal modes. Rotational motion about the internuclear axis in a linear molecule is not one of the 3N spatial degrees of freedom derived from the translation of atoms in three-dimensional space. Rather, such motion corresponds to other degrees of freedom, rotational motion of the electrons and a spinning motion of the nuclei. Indeed, the electronic wavefunction for a linear molecule is characterized by some angular momentum (rotation) about this axis, and nuclei have a property called spin.
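This counting rule is easy to automate; the sketch below is an illustration, not part of the text:

```python
def normal_mode_counts(n_atoms, linear):
    """Partition the 3N spatial degrees of freedom of an N-atom molecule
    into translational, rotational, and vibrational normal modes."""
    translations = 3
    rotations = 2 if linear else 3          # linear molecules lose one rotation
    vibrations = 3 * n_atoms - translations - rotations
    return translations, rotations, vibrations

assert normal_mode_counts(2, linear=True) == (3, 2, 1)     # e.g. Cl2
assert normal_mode_counts(3, linear=True) == (3, 2, 4)     # e.g. CO2
assert normal_mode_counts(3, linear=False) == (3, 3, 3)    # e.g. H2O
assert normal_mode_counts(12, linear=False) == (3, 3, 30)  # e.g. benzene
```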
Exercise $7$
Identify the number of translational, rotational, and vibrational normal modes for the following molecules: $Cl_2, CO_2, H_2O, CH_4, C_2H_2, C_2H_4, C_2H_6, C_6H_6$. Using your intuition, draw diagrams similar to the ones in Exercise $2$ to show the normal modes of $H_2O$ and $C_2H_4$. It is difficult to identify the normal modes of triatomic and larger molecules by intuition. A mathematical analysis is essential. It is easier to see the normal modes if you use a molecular modeling program like Spartan or Gaussian to generate and display the normal modes.
You probably found in trying to complete Exercise $7$ that it is difficult to identify the normal modes and normal coordinates of triatomic and large molecules by intuition. A mathematical analysis is essential. A general analysis based on the Lagrangian formulation of classical mechanics is described separately. | textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Book%3A_Quantum_States_of_Atoms_and_Molecules_(Zielinksi_et_al)/06%3A_Vibrational_States/6.01%3A_Spatial_Degrees_of_Freedom%2C_Normal_Coordinates_and_Normal_Modes.txt |
A classical description of the vibration of a diatomic molecule is needed because the quantum mechanical description begins with replacing the classical energy with the Hamiltonian operator in the Schrödinger equation. It also is interesting to compare and contrast the classical description with the quantum mechanical picture.
The motion of two particles in space can be separated into translational, vibrational, and rotational motions. The internal motions of vibration and rotation for a two-particle system can be described by a single reduced particle with a reduced mass $μ$ located at $r$.
For a diatomic molecule, Figure $1$, the vector r corresponds to the internuclear axis. The magnitude or length of r is the bond length, and the orientation of r in space gives the orientation of the internuclear axis in space. Changes in the orientation correspond to rotation of the molecule, and changes in the length correspond to vibration. The change in the bond length from the equilibrium bond length is the normal vibrational coordinate Q for a diatomic molecule.
We can use Newton's equation of motion
$\vec{F}= m \vec{a} \label {6-8}$
to obtain a classical description of how a diatomic molecule vibrates. In this equation, the mass, $m$, is the reduced mass $\mu$ of the molecule, the acceleration, $a$, is $d^2Q/dt^2$, and the force, $F$, is the force that pulls the molecule back to its equilibrium bond length. If we consider the bond to behave like a spring, then this restoring force is proportional to the displacement from the equilibrium length, which is Hooke's Law
$F = - kQ \label {6-9}$
where $k$ is the force constant. Hooke's Law says that the force is proportional to, but in opposite direction to, the displacement, $Q$. The force constant, $k$, reflects the stiffness of the spring. The idea incorporated into the application of Hooke's Law to a diatomic molecule is that when the atoms move away from their equilibrium positions, a restoring force is produced that increases proportionally with the displacement from equilibrium. The potential energy for such a system increases quadratically with the displacement. (See Example $1$ below.)
$V (Q) = \dfrac {1}{2} k Q^2 \label {6-10}$
Hooke's Law or the harmonic (i.e. quadratic) potential given by Equation $\ref{6-10}$ is a common approximation for the vibrational oscillations of molecules. The magnitude of the force constant $k$ depends upon the nature of the chemical bond in molecular systems just as it depends on the nature of the spring in mechanical systems. The larger the force constant, the stiffer the spring or the stiffer the bond. Since it is the electron distribution between the two positively charged nuclei that holds them together, a double bond with more electrons has a larger force constant than a single bond, and the nuclei are held together more tightly. In fact IR and other vibrational spectra provide information about the molecular composition of substances and about the bonding structure of molecules because of this relationship between the electron density in the bond and the bond force constant. Note that a stiff bond with a large force constant is not necessarily a strong bond with a large dissociation energy.
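The relationships among the harmonic potential, the force, and the force constant can be verified symbolically; the sketch below uses the sympy library (an assumption, not part of the text):

```python
import sympy as sp

Q = sp.symbols('Q', real=True)      # displacement from equilibrium
k = sp.symbols('k', positive=True)  # force constant

V = sp.Rational(1, 2) * k * Q**2    # harmonic potential, Equation 6-10

F = -sp.diff(V, Q)                  # force = -dV/dQ
assert F == -k * Q                  # recovers Hooke's Law, Equation 6-9
assert sp.diff(V, Q, 2) == k        # curvature of V is the force constant

# The force vanishes (and V is minimized) at the equilibrium position Q = 0:
assert sp.solve(sp.Eq(F, 0), Q) == [0]
```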
Example $1$
1. Show that minus the first derivative of the harmonic potential energy function in Equation $\ref{6-10}$ with respect to $Q$ is the Hooke's Law force.
2. Show that the second derivative is the force constant, $k$.
3. At what value of $Q$ is the potential energy a minimum; at what value of $Q$ is the force zero?
4. Sketch graphs to compare the potential energy and the force for a system with a large force constant to one with a small force constant.
In view of the above discussion, Equation \ref{6-8} can be rewritten as
$\dfrac {d^2 Q(t)}{dt^2} + \dfrac {k}{\mu} Q(t) = 0 \label {6-11}$
Equation $\ref{6-11}$ is the equation of motion for a classical harmonic oscillator. It is a linear second-order differential equation that can be solved by the standard method of factoring and integrating as described in Chapter 5.
Example $2$
Substitute the following functions into Equation \ref{6-11} to show that they are both possible solutions to the classical equation of motion.
$Q(t) = Q_0 e^{i \omega t} \text { and } Q(t) = Q_0 e^{-i \omega t}$
where
$\omega = \sqrt {\dfrac {k}{\mu}}$
Note that the Greek symbol $\omega$ for frequency represents the angular frequency $2 \pi \nu$.
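The claim of Examples 2 and 3 — that $Q_0 \cos(\omega t)$ and its relatives satisfy Equation \ref{6-11} — can also be checked by integrating the equation of motion directly. The sketch below uses arbitrary illustrative values for $k$, $\mu$, and $Q_0$ (they are not data from the text) and a simple semi-implicit Euler integrator:

```python
import math

# Numerically verify that Q(t) = Q0*cos(omega*t) satisfies
# d^2Q/dt^2 + (k/mu)*Q = 0 by direct integration.
k, mu, Q0 = 1.0, 0.25, 1.0
omega = math.sqrt(k / mu)      # = 2.0 rad/s for these values

dt, nsteps = 1e-4, 20000       # integrate out to t = 2 s
Q, v = Q0, 0.0                 # start at maximum displacement, at rest
for _ in range(nsteps):
    a = -(k / mu) * Q          # acceleration from the Hooke's-law force
    v += a * dt                # semi-implicit Euler step
    Q += v * dt

t = nsteps * dt
exact = Q0 * math.cos(omega * t)
print(Q, exact)                # the two values agree closely
```

The numerical trajectory tracks the analytic cosine, confirming that the assumed functional form really is a solution.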
Example $3$
Show that sine and cosine functions also are solutions to Equation \ref{6-11}.
Example $4$
Using the sine function, sketch a graph showing the displacement of the bond from its equilibrium length as a function of time. Such motion is called harmonic. Show how your graph can be used to determine the frequency of the oscillation. Obtain an equation for the velocity of the object as a function of time, and plot the velocity on your graph also. Note that momentum is mass times velocity so you know both the momentum and position at all times.
Example $5$
Identify what happens to the frequency of the motion as the force constant increases in one case and as the mass increases in another case. If the force constant is increased by 9 times and the mass is increased by 4 times, by what factor does the frequency change?
The energy of the vibration is the sum of the kinetic energy and the potential energy. The momentum associated with the vibration is
$P_Q = \mu \dfrac {dQ}{dt} \label {6-12}$
so the energy can be written as
$E = T + V = \dfrac {P^2_Q}{2 \mu} + \dfrac {k}{2} Q^2 \label {6-13}$
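Equation \ref{6-13} implies that the total energy stays constant in time even though $Q$ and $P_Q$ both oscillate: kinetic and potential energy trade back and forth. A short sketch with arbitrary illustrative parameters makes this explicit:

```python
import math

# Check that E = P^2/(2*mu) + (k/2)*Q^2 is constant along the classical
# trajectory Q(t) = Q0*sin(omega*t), P(t) = mu*omega*Q0*cos(omega*t).
k, mu, Q0 = 16.0, 4.0, 0.5     # arbitrary illustrative values
omega = math.sqrt(k / mu)

def energy(t):
    Q = Q0 * math.sin(omega * t)
    P = mu * omega * Q0 * math.cos(omega * t)
    return P**2 / (2 * mu) + 0.5 * k * Q**2

energies = [energy(0.1 * i) for i in range(10)]
print(energies[0])                       # equals (k/2)*Q0^2 = 2.0
print(max(energies) - min(energies))     # zero up to round-off
```

At every instant the sum equals $\frac{k}{2} Q_0^2$, the potential energy at the classical turning point.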
Example $6$
What happens to the frequency of the oscillation as the vibration is excited with more and more energy? What happens to the maximum amplitude of the vibration as it is excited with more and more energy?
Example $7$
If a molecular vibration is excited by collision with another molecule and is given a total energy $E_{hit}$ as a result, what is the maximum amplitude of the oscillation? Is there any constraint on the magnitude of energy that can be introduced?
We can generalize this discussion to any normal mode in a polyatomic molecule. The normal coordinate associated with a normal mode can be thought of as a vector $Q$, with each component giving the displacement amplitude of a particular atom in a particular direction. Equation \ref{6-11} then applies to the length of this vector $Q = |Q|$. As $Q$ increases, it means the displacements of all the atoms that move in that normal mode increase, and the restoring force increases as well.
In this section we contrast the classical and quantum mechanical treatments of the harmonic oscillator, and we describe some of the properties that can be calculated using the quantum mechanical harmonic oscillator model. The problems at the end of the chapter require that you do some of these calculations, which involve the evaluation of non-trivial integrals. Methods for evaluating such integrals are provided in a detailed math supplement. These integrals are important. They also will appear in later chapters on electronic structure. Working through the problems with the support of the link will give you the opportunity to engage the mathematics on your own terms and deepen your understanding of the material in this section.
For a classical oscillator as described in Section 6.2 we know exactly the position, velocity, and momentum as a function of time. The frequency of the oscillator (or normal mode) is determined by the effective mass M and the effective force constant K of the oscillating system and does not change unless one of these quantities is changed. There are no restrictions on the energy of the oscillator, and changes in the energy of the oscillator produce changes in the amplitude of the vibrations experienced by the oscillator.
For the quantum mechanical oscillator, the oscillation frequency of a given normal mode is still controlled by the mass and the force constant (or, equivalently, by the associated potential energy function). However, the energy of the oscillator is limited to certain values. The allowed quantized energy levels are equally spaced and are related to the oscillator frequencies as given by Equation $\ref{6-30}$.
$E_v = \left ( v + \dfrac {1}{2} \right ) \hbar \omega \label {6-30}$
with
$v = 0, 1, 2, 3, \cdots$
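As a quick numerical illustration of Equation \ref{6-30}, the first few levels and their spacings can be tabulated. The sketch uses standard physical constants and the 2886 cm$^{-1}$ HCl fundamental quoted later in the chapter:

```python
import math

hbar = 1.0546e-34        # J s
c = 2.998e10             # speed of light, cm/s (converts wavenumbers)
nu_tilde = 2886.0        # HCl fundamental, cm^-1
omega = 2 * math.pi * c * nu_tilde     # angular frequency, rad/s

E = [(v + 0.5) * hbar * omega for v in range(4)]   # energies, joules
for v, Ev in enumerate(E):
    print(f"v = {v}: E = {Ev:.3e} J")

gaps = [E[v + 1] - E[v] for v in range(3)]
print(gaps)   # every gap equals hbar*omega: the ladder is evenly spaced
```

Note that the $v = 0$ level lies at $\hbar\omega/2$, not at zero; this is the zero-point energy discussed below.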
In a quantum mechanical oscillator, we cannot specify the position of the oscillator (the exact displacement from the equilibrium position) or its velocity as a function of time; we can only talk about the probability of the oscillator being displaced from equilibrium by a certain amount. This probability is given by
$Pr [ Q \text { to } Q + dQ] = \psi ^*_v (Q) \psi _v (Q) dQ \label {6-32}$
We can, however, calculate the average displacement and the mean square displacement of the atoms relative to their equilibrium positions. This average is just $\left \langle Q \right \rangle$, the expectation value for $Q$, and the mean square displacement is $\left \langle Q^2 \right \rangle$, the expectation value for $Q^2$. Similarly we can calculate the average momentum $\left \langle P_Q \right \rangle$, and the mean square momentum $\left \langle P^2_Q \right \rangle$, but we cannot specify the momentum as a function of time.
Physically what do we expect to find for the average displacement and the average momentum? Since the potential energy function is symmetric around Q = 0, we expect values of Q > 0 to be equally as likely as Q < 0. The average value of Q therefore should be zero.
These results for the average displacement and average momentum do not mean that the harmonic oscillator is sitting still. As for the particle-in-a-box case, we can imagine the quantum mechanical harmonic oscillator as moving back and forth and therefore having an average momentum of zero. Since the lowest allowed harmonic oscillator energy, $E_0$, is $\dfrac{\hbar \omega}{2}$ and not 0, the atoms in a molecule must be moving even in the lowest vibrational energy state. This phenomenon is called the zero-point energy or the zero-point motion, and it stands in direct contrast to the classical picture of a vibrating molecule. Classically, the lowest energy available to an oscillator is zero, which means the momentum also is zero, and the oscillator is not moving.
Exercise $23$b
Compare the quantum mechanical harmonic oscillator to the classical harmonic oscillator at v=1 and v=50.
Since the average values of the displacement and momentum are all zero and do not facilitate comparisons among the various normal modes and energy levels, we need to find other quantities that can be used for this purpose. We can use the root mean square deviation (see also root-mean-square displacement) (also known as the standard deviation of the displacement) and the root-mean-square momentum as measures of the uncertainty in the oscillator's position and momentum. These uncertainties are calculated in Problem 3 at the end of this chapter. For a molecular vibration, these quantities represent the standard deviation in the bond length and the standard deviation in the momentum of the atoms from the average values of zero, so they provide us with a measure of the relative displacement and the momentum associated with each normal mode in all its allowed energy levels. These are important quantities to determine because vibrational excitation changes the size and symmetry (or shape) of molecules. Such changes affect chemical reactivity, the absorption and emission of radiation, and the dissipation of energy in radiationless transitions.
In Problem 2, we show that the product of the standard deviations for the displacement and the momentum, $\sigma_Q$ and $\sigma_p$, satisfies the Heisenberg Uncertainty Principle.
$\sigma_Q \sigma_p \ge \dfrac{\hbar}{2}$
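For the $v = 0$ state this product can be checked by direct numerical integration. Working in the reduced variable $x = Q/\beta$ and in units where $\hbar = 1$, the sketch below (an illustration, not a substitute for Problem 2) recovers $\sigma_Q \sigma_P = \hbar/2$, the minimum value allowed by the uncertainty principle:

```python
import math

# Check sigma_Q * sigma_P = hbar/2 for the v = 0 state numerically.
# psi_0(x) = pi^(-1/4) exp(-x^2/2); <x> = <p> = 0 by symmetry, so
# sigma_x^2 = <x^2> and sigma_p^2 = <p^2> = integral of |dpsi/dx|^2.
def psi0(x):
    return math.pi ** -0.25 * math.exp(-x * x / 2)

def dpsi0(x):
    return -x * psi0(x)

dx = 1e-3
xs = [i * dx for i in range(-8000, 8001)]
x2 = sum(x * x * psi0(x) ** 2 * dx for x in xs)   # <x^2> -> 0.5
p2 = sum(dpsi0(x) ** 2 * dx for x in xs)          # <p^2> -> 0.5
print(math.sqrt(x2 * p2))   # -> 0.5, i.e. hbar/2 in these units
```

Higher states give $\sigma_Q \sigma_P = (v + \tfrac{1}{2})\hbar$, so the ground state is the minimum-uncertainty case.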
The harmonic oscillator wavefunctions form an orthonormal set, which means that all functions in the set are normalized individually
$\int \limits _{-\infty}^{\infty} \psi ^*_v (x) \psi _v (x) dx = 1 \label {6-33}$
and are orthogonal to each other.
$\int \limits _{-\infty}^{\infty} \psi ^*_{v'} (x) \psi _v (x) dx = 0 \label {6-34}$
for $v' \ne v$.
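These orthonormality relations can be verified numerically for the two lowest states. The sketch below uses the standard normalized harmonic oscillator wavefunctions in the reduced variable $x$:

```python
import math

# Numerical check that the v = 0 and v = 1 wavefunctions are each
# normalized and mutually orthogonal.
def psi(v, x):
    g = math.exp(-x * x / 2)
    if v == 0:
        return math.pi ** -0.25 * g
    if v == 1:
        return math.sqrt(2.0) * math.pi ** -0.25 * x * g
    raise ValueError("only v = 0 and v = 1 in this sketch")

def overlap(v1, v2, dx=1e-3, L=8.0):
    n = int(L / dx)
    return sum(psi(v1, i * dx) * psi(v2, i * dx) * dx
               for i in range(-n, n + 1))

print(overlap(0, 0))   # -> 1 (normalized)
print(overlap(1, 1))   # -> 1 (normalized)
print(overlap(0, 1))   # -> 0 (orthogonal: the integrand is odd)
```

The $v = 0$ to $v = 1$ overlap vanishes by symmetry, since the product of an even and an odd function integrates to zero.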
The fact that a family of wavefunctions forms an orthonormal set is often helpful in simplifying complicated integrals. We will use these properties in Section 6.6, for example, when we determine the harmonic oscillator selection rules for vibrational transitions in a molecule and calculate the absorption coefficients for the absorption of infrared radiation.
Finally, we can calculate the probability that a harmonic oscillator is in the classically forbidden region. What does this tantalizing statement mean? Classically, the maximum extension of an oscillator is obtained by equating the total energy of the oscillator to the potential energy, because at the maximum extension all the energy is in the form of potential energy. If all the energy weren't in the form of potential energy at this point, the oscillator would have kinetic energy and momentum and could continue to extend further away from its rest position. Interestingly, as we show below, the wavefunctions of the quantum mechanical oscillator extend beyond the classical limit, i.e. beyond where the particle can be according to classical mechanics.
The lowest allowed energy for the quantum mechanical oscillator is called the zero-point energy, $E_0 = \dfrac {\hbar \omega}{2}$. Using the classical picture described in the preceding paragraph, this total energy must equal the potential energy of the oscillator at its maximum extension. We define this classical limit of the amplitude of the oscillator displacement as $Q_0$. When we equate the zero-point energy for a particular normal mode to the potential energy of the oscillator in that normal mode, we obtain
$\dfrac {\hbar \omega}{2} = \dfrac {KQ^2_0}{2} \label {6-35}$
Recall that K is the effective force constant of the oscillator in a particular normal mode and that the frequency of the normal mode is given by Equation $\ref{6-31}$ which is
$\omega = \sqrt {\dfrac {K}{M}} \label {6-31}$
Solving for $Q_0$ in Equation $\ref{6-35}$ by substituting for $\omega$ and rearranging, we obtain the very interesting result
$Q^2_0 = \dfrac {\hbar \omega}{K} = \dfrac {\hbar}{M\omega} = \dfrac {\hbar}{\sqrt {KM}} = {\beta}^2 \label {6-36}$
Here we see that β, the parameter we introduced in Equation 6-20, is more than just a way to collect variables; β has physical significance. It is the classical limit to the amplitude (maximum extension) of an oscillator with energy $E_0 = \dfrac {\hbar \omega}{2}$. Because β has this meaning, the variable x gives the displacement of the oscillator from its equilibrium position in units of the maximum classically allowed displacement for the v = 0 state (lowest energy state). In other words, x = 1 means the oscillator is at this classical limit, and x = 0.5 means it is halfway there.
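For a sense of scale, $\beta$ can be evaluated for HCl using the 2886 cm$^{-1}$ fundamental quoted in Exercise 24 and standard constants (a hedged sketch, not a worked solution of the exercise):

```python
import math

hbar = 1.0546e-34     # J s
c = 2.998e10          # cm/s
amu = 1.6605e-27      # kg

mu = (1.008 * 34.969) / (1.008 + 34.969) * amu   # H-35Cl reduced mass, kg
omega = 2 * math.pi * c * 2886.0                 # rad/s
beta = math.sqrt(hbar / (mu * omega))            # classical limit Q0, meters

print(beta * 1e12, "pm")   # about 11 pm, under 10% of the 127 pm bond length
```

The classical turning point for the zero-point motion is thus only a small fraction of the bond length, consistent with the picture of a stiff bond vibrating about a well-defined equilibrium.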
Exercise $24$
The HCl equilibrium bond length is 0.127 nm and the v = 0 to v = 1 transition is observed in the infrared at 2886 cm$^{-1}$. Compute the vibrational energy of HCl in its lowest state. Compute the classical limit for the stretching of the HCl bond from its equilibrium length in this state. What percent of the equilibrium bond length is this extension?
The classical limit, $Q_0$, for the lowest-energy state is given by Equation $\ref{6-36}$; i.e., $Q_0 = \pm \beta$ or $x = \dfrac {Q_0}{\beta} = \pm 1$. Examination of the quantum mechanical wavefunction for the lowest-energy state reveals that the wavefunction $\psi _0 (x)$ extends beyond these points. Higher energy states have higher total energies, so the classical limits to the amplitude of the displacement will be larger for these states.
Exercise $25$
Mark x = +1 and x = - 1 on the graph for $|\psi _0 (x)|^2$ in Figure $7$ and note whether the wavefunction is zero at these points.
The observation that the wavefunctions are not zero at the classical limit means that the quantum mechanical oscillator has a finite probability of having a displacement that is larger than what is classically possible. The oscillator can be in a region of space where the potential energy is greater than the total energy. Classically, when the potential energy equals the total energy, the kinetic energy and the velocity are zero, and the oscillator cannot pass this point. A quantum mechanical oscillator, however, has a finite probability of passing this point. For a molecular vibration, this property means that the amplitude of the vibration is larger than what it would be in a classical picture. In some situations, a larger amplitude vibration could enhance the chemical reactivity of a molecule.
Exercise $26$
Plot the probability density for v = 0 and v = 1 states. Mark the classical limits on each of the plots, since the limits are different because the total energy is different for v = 0 and v = 1. Shade in the regions of the probability densities that extend beyond the classical limit.
The fact that a quantum mechanical oscillator has a finite probability to enter the classically forbidden region of space is a consequence of the wave property of matter and the Heisenberg Uncertainty Principle. A wave changes gradually, and the wavefunction approaches zero gradually as the potential energy approaches infinity.
We should be able to calculate the probability that the quantum mechanical harmonic oscillator is in the classically forbidden region for the lowest energy state, the state with v = 0. The classically forbidden region is shown by the shading of the regions beyond Q0 in the graph you constructed for Exercise $26$. The area of this shaded region gives the probability that the bond oscillation will extend into the forbidden region. To calculate this probability, we use
$Pr [ \text {forbidden}] = 1 - Pr [ \text {allowed}] \label {6-37}$
because the integral from 0 to $Q_0$ for the allowed region can be found in integral tables and the integral from $Q_0$ to ∞ cannot. The form of the integral, Pr[allowed], to evaluate is
$Pr [ \text {allowed}] = 2 \int \limits _0^{Q_0} \psi _0^* (Q) \psi _0 (Q) dQ \label {6-38}$
The factor 2 appears in Equation $\ref{6-38}$ because of the symmetry of the wavefunction; the allowed region extends from $-Q_0$ to $+Q_0$. To evaluate the integral in Equation $\ref{6-38}$, use the wavefunction and do the integration in terms of $x$, Equation (6-29). Recall that for v = 0, $Q = Q_0$ corresponds to $x = 1$. Including the normalization constant, Equation $\ref{6-28}$ produces
$Pr [ \text {allowed}] = \dfrac {2}{\sqrt {\pi}} \int \limits _0^1 e^{-x^2} dx \label {6-39}$
The integral in Equation $\ref{6-39}$ is called an error function (ERF), and can only be evaluated numerically. Values can be found in books of mathematical tables or obtained with Mathcad. When the limit of integration is 1, ERF(1) = 0.843 and Pr[forbidden] = 0.157. This result means that the quantum mechanical oscillator can be found in the forbidden region 16% of the time. This effect is substantial and leads to the phenomenon called quantum mechanical tunneling.
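Because the error function is a standard library routine in most modern environments, this evaluation is a one-liner. A sketch (illustrative; the original exercise assumes Mathcad rather than Python):

```python
import math

# Pr[allowed] of Eq. (6-39) is exactly erf(1); Pr[forbidden] then
# follows from Eq. (6-37).
pr_allowed = math.erf(1.0)
pr_forbidden = 1.0 - pr_allowed
print(pr_allowed, pr_forbidden)   # 0.8427..., 0.1573...
```

This reproduces the 0.843 and 0.157 values quoted above.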
Exercise $27$
Numerically verify that Pr[allowed] in Equation (6-39) equals 0.843. To obtain a value for the integral, do not use symbolic integration or symbolic equals.
Quantum mechanical tunneling is a consequence of the fact that a vibrating molecule has a significant probability to be in the classically forbidden region of space, i.e. beyond the classical limit. Suppose that rather than having a harmonic potential for the displacement of an atom from its equilibrium position, one has a double well potential with a finite potential-energy barrier between the two sides as shown in Figure $1$.
How might the wavefunctions for the position of the atom, or other particle such as an electron, with this type of potential look? A reasonable starting approximation would be to consider a harmonic oscillator wavefunction in each well. Because of their asymptotic approach to zero, the functions extend into the region of the barrier, i.e. into the classically forbidden region. These functions can even connect up with each other if the barrier is not too high or too wide. The connection of the two functions means that a particle starting out in the well on the left side has a finite probability of tunneling through the barrier and being found on the right side even though the energy of the particle is less than the barrier height. According to classical mechanics, the particle would be trapped on the left side if it did not have enough energy to go over the barrier. In quantum mechanics, the particle can tunnel through the barrier. An energy barrier does not necessarily restrict a quantum mechanical system to a certain region of space because the wavefunctions can penetrate through the barrier region. Tunneling has been proposed to explain electron transfer in some enzyme reactions and to account for mutations of DNA base pairs as a hydrogen atom in a hydrogen bond tunnels through the barrier from the electronegative atom of one base to the electronegative atom in the partner base.
What do we mean when we say that something like tunneling occurs quantum mechanically but not classically? We mean that classical mechanics is not an adequate description of the way the atomic world behaves. Classical mechanics works fine for macroscopic objects but not for nanoscopic objects. The importance of the mass and Planck's constant in determining whether an object can be described classically or not can be illustrated by the Uncertainty Principle. When the Uncertainty Principle,
$\Delta x \Delta p \ge \frac {\hbar}{2},$
and the relationship between momentum and wavelength,
$p = \dfrac{h}{\lambda} \nonumber$
are applied to macroscopic objects like a baseball, the uncertainties and the wavelength are so small compared to macroscopic dimensions that the wave properties of these objects are not detectable. Large masses can be described classically because Planck's constant is so small. The situations examined in the following paragraphs consider tunneling along these same lines.
We first look at the case of a proton. Is it reasonable to think that a proton can tunnel through a potential barrier? Consider a hydrogen bond between two paired bases in a nucleic acid helix and the resulting double-well potential-energy function similar to the one illustrated in Figure $1$.
Exercise $1$
Sketch potential-energy functions for each of the hydrogen bonds in a guanine-cytosine base pair. Describe how your drawings reflect the nature of the base pairing shown in standard biochemistry textbooks.
We want to compare $Q_0$, the maximum classical displacement of the proton, with the separation of the potential wells, d, which we take as the width of the potential barrier at half its height. If d is much larger than $Q_0$, tunneling would not be probable because the wavefunction, which falls off very rapidly (exponentially), would become extremely small, essentially zero within the barrier as Q increases beyond $Q_0$. On the other hand, if d is not too much larger than $Q_0$, then the wavefunction will still have a significant non-zero value at the further side of the barrier and tunneling will be probable.
The rate at which the wavefunction approaches zero in the barrier region depends upon both the height and width of the barrier. As the barrier height increases, the width must become smaller for tunneling to be significant, but we can still get a feeling for whether tunneling is reasonable or not by comparing the separation of the potential wells with $Q_0$. For a hydrogen-bonded proton between two electronegative centers, d has been calculated to be approximately 0.1 nm (about 10% of a N-H bond length) and $Q^2_0 = \frac {\hbar}{m \omega}$ giving $Q_0$ = 0.01 nm. Since d is only 10 times larger than Q0 in this estimate, tunneling cannot be excluded as a real possibility.
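The estimate in this paragraph can be reproduced directly. The sketch below uses standard constants and the 3300 cm$^{-1}$ stretching frequency suggested in Exercise 2:

```python
import math

hbar = 1.0546e-34       # J s
c = 2.998e10            # cm/s
m_p = 1.6726e-27        # proton mass, kg

omega = 2 * math.pi * c * 3300.0        # N-H / O-H stretch, rad/s
Q0 = math.sqrt(hbar / (m_p * omega))    # maximum classical displacement, m

print(Q0 * 1e9, "nm")   # ~0.01 nm, so d/Q0 ~ 10 for a 0.1 nm barrier
```

The ratio $d/Q_0 \approx 10$ is small enough that proton tunneling cannot be dismissed.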
Exercise $2$
Obtain a value for $Q_0$ by using the mass of a proton and a characteristic vibrational frequency for a NH or OH bond (3300 cm$^{-1}$).
Exercise $3$
Identify what happens to $Q_0$, the ratio $\frac {d}{Q_0}$, and your expectations for tunneling as the mass of the particle increases, e.g. if the proton were replaced by a deuteron.
Now consider a macroscopic system for comparison. Assume that you and your bicycle are stuck in a valley one hill away from your home in the next valley. You are in your lowest energy state, and you have 1000 m to go. What conclusion do we reach if we apply the above estimates for the probability of tunneling to you? We want to use $Q^2_0 \approx \frac {\hbar}{m \omega}$ to estimate your maximum displacement from the bottom of the valley and compare it to the distance to your home valley. For example, we can give you a mass of 100 kg and a frequency of oscillation between the two hills that form the valley of $\omega = 2\pi /100$ rad/s. Your mass is 29 orders of magnitude (factors of ten) greater than that of a proton, and your oscillation frequency is 16 orders of magnitude smaller than that of a proton. This frequency means that it takes 100 s for you to complete one cycle rolling back and forth between the two hills that form the valley in which you are located. The larger mass dominates the smaller frequency with the result that your $Q_0$ is about $10^{-18}$ m, while the hill you need to cross is 1000 m wide, $d \approx 10^3$ m. A bicyclist in the lowest energy quantum state therefore has a maximum displacement of about $10^{-18}$ m and a distance to tunnel of about $10^3$ m. Tunneling under these conditions is not a probable event because the distance to go is so much larger than the maximum displacement. A bicyclist wouldn't have a chance in a lifetime of getting home without generating sufficient energy to ride over the hill. What would happen if Planck's constant were much larger? If h were $10^7$ J s, then tunneling also would be important for massive objects like people, and it would make our lives easier and more interesting.
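The same formula applied to the cyclist (100 kg, one oscillation per 100 s) confirms how hopeless macroscopic tunneling is:

```python
import math

hbar = 1.0546e-34              # J s
m = 100.0                      # cyclist plus bicycle, kg
omega = 2 * math.pi / 100.0    # one oscillation per 100 s, rad/s

Q0 = math.sqrt(hbar / (m * omega))   # maximum classical displacement, m
print(Q0)                            # ~4e-18 m
print(1e3 / Q0)                      # d/Q0 ~ 1e20 for the 1000 m hill
```

Where the proton had $d/Q_0 \approx 10$, the cyclist has $d/Q_0 \approx 10^{20}$, so the wavefunction is utterly negligible on the far side of the hill.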
Exercise $4$
Verify that if $h = 10^7$ J s then tunneling would be important for people.
Photons can be absorbed or emitted, and the harmonic oscillator can go from one vibrational energy state to another. Which transitions between vibrational states are allowed? If we take an infrared spectrum of a molecule, we see numerous absorption bands, and we need to relate these bands to the allowed transitions involving different normal modes of vibration.
The selection rules are determined by the transition moment integral.
$\mu_T = \int \limits _{-\infty}^{\infty} \psi_{v'}^* (Q) \hat {\mu} (Q) \psi _v (Q) dQ \label{6.6.1}$
To evaluate this integral we need to express the dipole moment operator, $\hat {\mu}$, in terms of the magnitude of the normal coordinate Q. The dipole moment operator is defined as
$\hat {\mu} = \sum _{electrons} (-e) r + \sum _{nuclei} qR \label{6.6.2}$
where the two sums are over all the electrons and nuclei and involve the particle charge (-e or q) multiplying the position vector (r or R). We can obtain this dipole moment operator in terms of the magnitude of the normal coordinate, Q, in a simple way by using a Taylor series expansion for the dipole moment.
$\mu (Q) = \mu _{Q=0} + \left ( \dfrac {d \mu (Q)}{dQ} \right ) _{Q=0} Q + \dfrac {1}{2} \left ( \dfrac {d^2 \mu (Q)}{dQ^2} \right ) _{Q=0} Q^2 + \cdots \label{6.6.3}$
Retaining only the first two terms and substituting into Equation $\ref{6.6.1}$ produces
$\mu _T = \mu _{Q=0} \int \limits _{-\infty}^{\infty} \psi _{v'} (Q) \psi _v (Q) dQ + \left ( \dfrac {d \mu (Q)}{dQ} \right ) _{Q=0} \int \limits _{-\infty}^{\infty} Q\psi _{v'}^* (Q) \psi _v (Q) dQ \label{6.6.4}$
In the above expressions, $\mu_{Q=0}$ is the dipole moment of the molecule when the nuclei are at their equilibrium positions, and $\left (\dfrac {d\mu (Q)}{dQ} \right )_{Q=0}$ is the linear change in the dipole moment due to the displacement of the nuclei in the normal mode. The derivative is the linear change because it multiplies $Q$ and not a higher power of $Q$ in Equation \ref{6.6.3}. Both $\mu_{Q=0}$ and $\left (\dfrac {d\mu (Q)}{dQ}\right )_{Q=0}$ are moved outside of the integral because they are constants that no longer depend on $Q$; they are evaluated at $Q = 0$.
The integral in the first term in Equation $\ref{6.6.4}$ is 0 because any two different harmonic oscillator wavefunctions are orthogonal. The integral in the second term of Equation $\ref{6.6.4}$ is zero except when $v' = v \pm 1$, as demonstrated in Exercise $1$ below. Also note that the second term is zero if
$\left (\dfrac {d\mu (Q)}{dQ}\right )_{Q=0} = 0 \label{6.6.5}$
Exercise $1$
Use one of the Hermite polynomial recursion relations to verify that the second integral in Equation $\ref{6.6.4}$ is 0 unless $v' = v \pm 1$.
If we are to observe absorption of infrared radiation due to a vibrational transition in a molecule, the transition moment cannot be zero. This condition requires that the dipole moment derivative Equation $\ref{6.6.5}$ cannot be zero and that the vibrational quantum number change by one unit. The normal coordinate motion must cause the dipole moment of the molecule to change in order for a molecule to absorb infrared radiation. If the normal coordinate oscillation does not cause the dipole moment to change then $\mu _T = 0$ and no infrared absorption is observed.
$\text {For allowed transitions: } \Delta v = \pm 1 \label {6.6.6}$
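This selection rule can also be checked numerically: evaluating $\int \psi_{v'}(x)\, x\, \psi_v(x)\, dx$ in the reduced variable for the first few states shows that only the $v' = v \pm 1$ entries survive. The sketch below builds the wavefunctions from the first four Hermite polynomials:

```python
import math

# Numerical check of the harmonic oscillator selection rule: the
# transition-moment integral vanishes unless v' = v +/- 1.
H = [lambda x: 1.0,                     # Hermite polynomials H_0 .. H_3
     lambda x: 2.0 * x,
     lambda x: 4.0 * x * x - 2.0,
     lambda x: 8.0 * x**3 - 12.0 * x]

def psi(v, x):
    norm = 1.0 / math.sqrt(2.0**v * math.factorial(v) * math.sqrt(math.pi))
    return norm * H[v](x) * math.exp(-x * x / 2.0)

def moment(v1, v2, dx=0.01, L=8.0):
    n = int(L / dx)
    return sum(psi(v1, i * dx) * (i * dx) * psi(v2, i * dx) * dx
               for i in range(-n, n + 1))

for vp in range(4):
    print(vp, [round(moment(vp, v), 4) for v in range(4)])
# Only the entries with v' = v +/- 1 are nonzero, e.g. <0|x|1> = 1/sqrt(2)
```

The nonzero matrix elements grow with $v$, which is one reason transitions between higher vibrational levels have larger transition moments.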
Consider oxygen and nitrogen molecules. Because they are symmetrical, their dipole moments are zero, μ = 0. Since the vibrational motion (only bond stretching for a diatomic molecule) preserves this symmetry, the change in the dipole moment due to the vibrational motion also is zero, $\dfrac {d\mu (Q)}{dQ} = 0$. Consequently, oxygen and nitrogen cannot absorb infrared radiation as a result of vibrational motion.
This result has important practical consequences. Chemists can do infrared spectroscopy in the air. The spectrometer need not be evacuated to eliminate the absorption due to oxygen and nitrogen. This situation greatly simplifies the spectrometer design, lowers the cost of the instrument, and makes it more convenient to use.
Exercise $2$
Explain why the absorption coefficient in Beer's Law is larger for some normal modes than for others.
The case $v' = v + 1$ corresponds to going from one vibrational state to a higher energy one by absorbing a photon with energy $h\nu$. The case $v' = v - 1$ corresponds to a transition that emits a photon with energy $h\nu$. In the harmonic oscillator model infrared spectra are very simple; only the fundamental transitions, $\Delta v = \pm 1$, are allowed. The associated transition energy is $\hbar \omega$, according to Equation \ref{6-30}. The transition energy is the change in energy of the oscillator as it moves from one vibrational state to another, and it equals the photon energy.
$\Delta E = E_{final} - E_{initial} = hv_{photon} = \hbar \omega _{oscillator} \label{6.6.7}$
In a perfect harmonic oscillator, the only possibilities are $\Delta v = \pm 1$; all others are forbidden. This conclusion predicts that the vibrational absorption spectrum of a diatomic molecule consists of only one strong line, as shown in Figure $1$, because as you showed in your energy level diagram in Exercise $20$, the energy levels are equally spaced in the harmonic oscillator model. If the levels were not equally spaced, then transitions from v = 0 to 1 and from v = 1 to 2, etc. would occur at different frequencies.
The actual spectrum is more complex, especially at high resolution. There is a fine structure due to the rotational states of the molecule. These states will be discussed in the next chapter. The spectrum is enriched further by the appearance of lines due to transitions corresponding to $\Delta v = \pm n$ where $n > 1$. These transitions are called overtone transitions, and their appearance in spectra despite being forbidden in the harmonic oscillator model is due to the anharmonicity of molecular vibrations. Anharmonicity means the potential energy function is not strictly the harmonic potential. The first overtone, $\Delta v = 2$, generally appears at a frequency slightly less than twice that of the fundamental, i.e. the frequency due to the $\Delta v = 1$ transition.
Exercise $3$
Compute the approximate transition frequencies in wavenumber units for the first and second overtone transitions in HCl given that the fundamental is at 2886 cm$^{-1}$.
Also note that hot bands, those involving transitions from thermally populated states having $v > 0$, can be present in the spectra. The number density of molecules, $n_v$, in a particular energy level, $v$, at any temperature is proportional to the Boltzmann factor.
$n_v \propto e^{-\dfrac {E_v}{k_BT}} \label{6.6.8}$
For molecules at room temperature or below, the v = 0 vibrational state is the one that is most heavily populated.
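The Boltzmann factor of Equation \ref{6.6.8} makes this quantitative. A hedged sketch for HCl at room temperature, using standard constants and the 2886 cm$^{-1}$ fundamental from earlier in the chapter:

```python
import math

h = 6.626e-34        # Planck constant, J s
c = 2.998e10         # speed of light, cm/s
kB = 1.381e-23       # Boltzmann constant, J/K

nu_tilde = 2886.0    # HCl fundamental, cm^-1
T = 298.0            # room temperature, K

# Population ratio n(v=1)/n(v=0) with E_1 - E_0 = h*c*nu_tilde
ratio = math.exp(-h * c * nu_tilde / (kB * T))
print(ratio)         # ~1e-6: essentially every molecule sits in v = 0
```

With fewer than one molecule in a million thermally excited, hot bands are negligible for HCl at room temperature; they become significant only for low-frequency modes or at high temperatures.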
Exercise $4$
Using the Boltzmann distribution, determine the ratio of the number of HCl molecules in the v = 1 vibrational state compared to the v = 0 state at room temperature. Comment on the expected intensity of the hot-band transition from the v = 1 state at room temperature. At what temperature might the hot band for HCl have an intensity that is 25% of the fundamental band at that temperature?
These considerations explain the low-resolution absorption spectrum of a diatomic molecule shown in Figure $1$. Such spectra are simple because only the fundamental (v = 0 to 1) is intense. The overtones (v = 0 to v > 1) are very weak and are not shown in the spectrum. They only appear at all because the actual molecular potential is slightly anharmonic, making the harmonic oscillator model and selection rules only an approximation. This anharmonicity becomes more important for the higher energy states (v >> 0) that involve larger displacements from equilibrium, i.e. larger values of Q. We conclude that the harmonic oscillator approximation is very good because the forbidden overtone transitions are indeed weak.
Generally each intense peak that is seen in an infrared spectrum of a polyatomic molecule corresponds to the fundamental transition of a different normal mode because the overtones are forbidden in the harmonic approximation and hot bands are weak at room temperature. However in a polyatomic molecule, combination bands that involve the excitation of two normal modes also can be intense. They arise from a higher order term in the expansion of μ(Q); namely,
$\left ( \dfrac {\partial ^2 \mu}{\partial Q_A \partial Q_B} \right)_{Q=0} Q_A Q_B$
This derivative gives the change in the dipole moment due to the motion of two normal modes simultaneously.
Q6.1
Demonstrate for one or two cases that the harmonic oscillator wavefunctions form an orthonormal set.
Q6.2
Show that the behavior of the harmonic oscillator for the v = 0 state is consistent with the Heisenberg Uncertainty Principle by computing the product of the standard deviations for displacement and momentum, i.e. $\sigma _Q \sigma _P$. You can express $\sigma _Q \sigma _P$ in terms of $\langle Q \rangle$, $\langle P \rangle$, $\langle Q^2 \rangle$, and $\langle P^2 \rangle$.
Q6.3
Complete the following:
1. For v = 0 vibrations of HF, HCl, and HBr compute the standard deviations for the displacements from equilibrium. The vibrational frequencies are given by 4139, 2886, and 2559 $cm^{-1}$.
2. Use the standard deviation in the displacement as well as the classical maximum possible displacement ($Q_0$) to characterize the ground state vibrational amplitude and compare these quantities to an estimate of the bond lengths in these molecules obtained from the atomic covalent radii for H (37 pm), F (72 pm), Cl (99 pm), and Br (114 pm).
3. What percent of the original bond length are these characteristic displacements? Is this bond length changing significantly during a vibration?
4. What molecular properties determine the vibrational amplitude? How does the vibrational amplitude depend on these properties?
5. Explain the differences in the vibrational amplitudes for these three molecules.
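Part 1 amounts to evaluating $\sigma_Q = \sqrt{\langle Q^2 \rangle - \langle Q \rangle^2} = \sqrt{\hbar/(2\mu\omega)}$ for the $v = 0$ state of each molecule. A sketch of that evaluation (the isotopic masses are assumed standard values, not taken from the text):

```python
import math

hbar = 1.054571817e-34   # J s
c = 2.99792458e10        # speed of light, cm/s
u = 1.66053906660e-27    # atomic mass unit, kg

# Assumed standard isotopic masses in u
masses = {"H": 1.00783, "F": 18.99840, "Cl": 34.96885, "Br": 78.91834}
freqs = {"HF": 4139.0, "HCl": 2886.0, "HBr": 2559.0}   # cm^-1, from the problem

sigma_Q = {}
for mol, nu in freqs.items():
    mX = masses[mol[1:]]                              # mass of the halogen
    mu = masses["H"] * mX / (masses["H"] + mX) * u    # reduced mass, kg
    omega = 2.0 * math.pi * c * nu                    # angular frequency, s^-1
    # For v = 0, <Q> = 0 and <Q^2> = hbar/(2 mu omega), so sigma_Q = sqrt(<Q^2>)
    sigma_Q[mol] = math.sqrt(hbar / (2.0 * mu * omega))
    print(f"{mol}: sigma_Q = {sigma_Q[mol] * 1e12:.2f} pm")
```

The resulting amplitudes of a few picometers can then be compared to the covalent-radius bond-length estimates asked for in part 2.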
Q6.4
Complete the following:
1. Use the harmonic oscillator model to describe the vibration of nitrogen, and prepare a Mathcad graph showing the potential energy function. The vibrational frequency is given by 2360 $cm^{-1}$.
2. Mark the energy of the v = 8 state on your graph of the potential energy with a horizontal line. Prepare another graph showing the v = 8 harmonic oscillator wavefunction and probability density function.
3. On both graphs, mark the classical limit to the displacement of the oscillator with the energy in (b) from its equilibrium position.
4. Describe the probability of finding the oscillator at various distances away from the equilibrium position. How does the quantum mechanical oscillator differ from the classical oscillator in this regard?
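The numbers behind parts 2 and 3 can be sketched numerically (Python is used here in place of Mathcad; the $^{14}$N mass is an assumed standard value). The classical limit is the turning point $Q_0$ where $V(Q_0) = E_8$, i.e. $\tfrac{1}{2}kQ_0^2 = (8 + \tfrac{1}{2})\hbar\omega$:

```python
import math

hbar = 1.054571817e-34   # J s
c = 2.99792458e10        # cm/s
u = 1.66053906660e-27    # atomic mass unit, kg

nu = 2360.0                     # N2 vibrational wavenumber, cm^-1 (from the problem)
mu = (14.00307 / 2.0) * u       # reduced mass of a homonuclear diatomic is m/2
omega = 2.0 * math.pi * c * nu  # angular frequency, s^-1
k = mu * omega ** 2             # harmonic force constant, N/m

E8 = (8 + 0.5) * hbar * omega   # energy of the v = 8 level
Q0 = math.sqrt(2.0 * E8 / k)    # classical turning point: V(Q0) = E8
print(f"k = {k:.0f} N/m, E(v=8) = {E8:.3e} J, classical limit Q0 = {Q0 * 1e12:.1f} pm")
```

These values give the energy line and the vertical turning-point markers to draw on the graphs.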
Q6.5
Complete the following
1. Do unwanted bands due to oxygen, nitrogen, water, and carbon dioxide in the air appear in infrared spectra of chemical samples?
2. How many normal modes are there for $H_2O$? How many of these are infrared active?
3. How many normal modes and infrared active modes are there for $CO_2$?
4. Look up the vibrational frequencies of water and carbon dioxide in library books on spectroscopy and make sketches of the low-resolution IR spectra that you would expect. Include overtone, combination, and hot bands. Key references on molecular vibrations are Molecular Vibrations by Wilson, Decius, and Cross and Infrared and Raman Spectra by Herzberg.
Q6.6
Complete the following:
1. Compare the potentials and the wavefunctions for the harmonic oscillator and the particle in a box. Summarize the similarities and the differences.
2. Is the harmonic oscillator selection rule Δv = ±1 almost or approximately true also for the particle in a box?
3. Why would you expect this selection rule to be approximately valid or to fail completely for the particle in a box?
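Whether $\Delta n = \pm 1$ is approximately valid for the particle in a box can be probed numerically: the transition moment is proportional to $\langle n | x | m \rangle$, which vanishes when $n - m$ is even and falls off rapidly for larger odd $\Delta n$. A sketch of that check using a simple midpoint Riemann sum:

```python
import math

L = 1.0       # box length (arbitrary units)
N = 20000     # number of midpoint Riemann-sum points

def psi(n, x):
    """Normalized particle-in-a-box wavefunction."""
    return math.sqrt(2.0 / L) * math.sin(n * math.pi * x / L)

def x_integral(n, m):
    """Transition moment integral <n| x |m>, evaluated numerically."""
    dx = L / N
    total = 0.0
    for i in range(N):
        x = (i + 0.5) * dx
        total += psi(n, x) * x * psi(m, x) * dx
    return total

for m in (2, 3, 4, 5):
    print(f"<1|x|{m}> = {x_integral(1, m):+.5f}")
```

The 1-to-2 moment dominates and the even-difference moments vanish, so $\Delta n = \pm 1$ is only approximately a selection rule here: odd $\Delta n > 1$ transitions are weak but not strictly forbidden.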
Q6.7
Another model that often is applied to molecules is the Morse oscillator. The potential energy for the Morse oscillator has the following form:
$V(x) = D_e (1 - e^{-\beta x})^2,$
where x is the displacement of the oscillator from its equilibrium position, and $D_e$ and $\beta$ are constants or parameters.
1. At what value of $x$ is $V(x)$ a minimum?
2. What is $V(x)$ when $x$ becomes very large?
3. What happens to $V(x)$ as $x$ becomes a large negative number?
4. Based on your answers to $a$, $b$, and $c$, sketch a graph of $V(x)$ vs $x$.
5. From your graph, identify the parameter in the Morse potential that you would call the equilibrium dissociation energy.
6. How does the equilibrium dissociation energy in the Morse potential differ from the actual bond dissociation energy at 0 K?
7. Mark, using arrows and labels, the equilibrium dissociation energy and the bond dissociation energy on your graph.
8. Expand $V(x)$ in a power series and show that the force constant for a corresponding harmonic potential is given by $k = 2D_e \beta ^2$.
9. The parameters $D_e$ and $\beta$ generally are evaluated for specific molecules from observed vibrational frequencies. Using values obtained for HCl ($D_e = 7.31 \times 10^{-19}$ J, $\beta = 1.82 \times 10^{10}\ m^{-1}$), plot on the same graph both the Morse potential and the harmonic potential for HCl. In making your plot, use $x = R - R_e$, where $R$ is the internuclear separation, and $R_e$ is the equilibrium bond length, 127.5 pm.
10. What insights, especially regarding the idea of anharmonicity, do you gain by comparing the plots that you constructed in the previous part (above) of the Morse potential and the harmonic potential?
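A numerical sketch of part 9 (in Python, rather than on a graph), comparing the Morse and harmonic potentials for HCl with the force constant $k = 2D_e\beta^2$ from part 8:

```python
import math

De = 7.31e-19    # J, Morse parameter for HCl (from the problem)
beta = 1.82e10   # m^-1, Morse parameter for HCl (from the problem)

k = 2.0 * De * beta ** 2   # harmonic force constant from the power-series expansion
print(f"k = {k:.1f} N/m")

def V_morse(x):
    return De * (1.0 - math.exp(-beta * x)) ** 2

def V_harm(x):
    return 0.5 * k * x ** 2

# The two potentials agree near x = 0 and diverge at larger displacements
for x_pm in (5, 20, 50, 150):
    x = x_pm * 1e-12
    print(f"x = {x_pm:4d} pm: Morse {V_morse(x):.3e} J, harmonic {V_harm(x):.3e} J")
```

The tabulated values show the anharmonicity directly: the harmonic curve grows without bound while the Morse curve levels off at $D_e$.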
Q6.8
1. Sketch a double well potential for the hydrogen bond found between oxygen and nitrogen in a G-C base pair of DNA. On which side of the diagram is the nitrogen? Sketch the wavefunction with v = 0 for the N-proton bond. Draw a continuation of the wavefunction into the forbidden region and beyond into the oxygen side of the bond. How do you think the wavefunction on the oxygen side of the bond will look? Use the relative amplitudes as predicted by the probability distribution for the wavefunction to predict the likelihood for finding the proton on the oxygen side of the hydrogen bond. If the DNA double strand separated at just the moment when the proton was on the oxygen side of the molecule, and this situation were stable for a period of time that is longer than required for most biochemical reactions, what might be the consequences for replication or transcription using this mutated strand of DNA? You may need to review some biochemistry, especially the structures of hydrogen-bonded base pairs to complete this problem.
In this chapter we developed the quantum mechanical description of the harmonic oscillator for a diatomic molecule and applied it to the normal modes of molecular vibrations. We examined the functional form of the wavefunctions and the associated energy level structure. We can calculate expectation values (average values) and standard deviations for the displacement, the momentum, the square of the displacement, and the square of the momentum. The wavefunctions, which form an orthonormal set, were used to determine electric dipole selection rules for spectroscopic transitions, and in the problems at the end of the chapter, they are used to calculate several properties of the harmonic oscillator. The phenomenon of quantum mechanical tunneling through a potential-energy barrier was introduced and its relationship to real chemical phenomena was illustrated by consideration of hydrogen bonding in DNA. We finally looked at the nature of low-resolution IR spectra and introduced the anharmonicity concept to account for forbidden overtone transitions in spectra. The presence of combination bands in spectra was attributed to second derivative terms in the expansion of the dipole moment operator in terms of the normal coordinates. The simple harmonic oscillator model works well for molecules at room temperature because the molecules are in the lower vibrational levels where the effects of anharmonicity are small.
Self-Assessment Quiz
1. Write a definition of a normal vibrational mode.
2. Write a definition of a normal vibrational coordinate.
3. List the steps in a methodology for finding the normal vibrational coordinates and frequencies.
4. What is a harmonic oscillator?
5. How is the harmonic oscillator relevant to molecular properties?
6. Write the Hamiltonian operator for a one-dimensional harmonic oscillator.
7. What are the major steps in the procedure to solve the Schrödinger equation for the harmonic oscillator?
8. What are the three parts of a harmonic oscillator wavefunction?
9. How is the quantum number v produced in solving the Schrödinger equation for the harmonic oscillator?
10. What are the allowed energies for a quantum harmonic oscillator?
11. What determines the frequency of a quantum harmonic oscillator?
12. What information about a molecular vibration is provided by the harmonic oscillator wavefunction for a normal coordinate?
13. Sketch graphs of the harmonic oscillator potential energy and a few wavefunctions.
14. Draw the harmonic oscillator energy level diagram.
15. Why is the lowest possible energy of the quantum oscillator not zero?
16. Compute the approximate energy for the first overtone transition in HBr given that the fundamental is 2564 $cm^{-1}$.
17. If a transition from vibrational energy level v = 3 to v = 4 were observed in an infrared spectrum, where would that spectral line appear relative to the one for the transition from v = 0 to v = 1?
18. What is the harmonic oscillator selection rule for vibrational excitation by infrared radiation?
19. Explain why the infrared absorption coefficient is larger for some normal modes than for others.
20. Why is it possible for quantum particles to tunnel through potential barriers?
21. What are the values of integrals like $\int \limits _{-\infty}^{\infty} \psi ^*_n (Q) \psi _m (Q) dQ$ using harmonic oscillator wavefunctions?
Molecules rotate as well as vibrate. Transitions between rotational energy levels in molecules generally are found in the far infrared and microwave regions of the electromagnetic spectrum.
• 7.1: Introduction to Rotation
Molecules rotate as well as vibrate. Transitions between rotational energy levels in molecules generally are found in the far infrared and microwave regions of the electromagnetic spectrum. We will see that the magnitude of the molecule's moment of inertia causes rotational transitions to lie in these spectral regions. We also will learn why the lines are nearly equally spaced and vary in intensity. Such spectra can be used to determine bond lengths, and even bond angles in polyatomic molecules.
• 7.2: The Hamiltonian Operator for Rotational Motion
Translational motion can be separated from rotational motion if we specify the position of the center of mass by a vector R, and the positions of each atom relative to the center of mass. Since translational motion and rotational motion are separable, i.e. independent, the translational and rotational energies will add, and the total wavefunction will be a product of a translational function and a rotational function.
• 7.3: Solving the Rigid Rotor Schrödinger Equation
To solve the Schrödinger equation for the rigid rotor, we will separate the variables and form single-variable equations that can be solved independently.
• 7.4: Angular Momentum Operators and Eigenvalues
Angular momentum is a key component in the physical descriptions of rotating systems. It is important because angular momentum, just like energy and linear momentum, must be conserved in any process. Consequently angular momentum is used to derive selection rules for spectroscopic transitions, determine which states of atoms and molecules can be affected by various perturbations, and identify possible and forbidden mechanisms in chemical reactions.
• 7.5: Quantum Mechanical Properties of Rotating Diatomic Molecules
In this section we examine the rotational states for a diatomic molecule by comparing the classical interpretation of the angular momentum vector with the probabilistic interpretation of the angular momentum wavefunctions. We want to answer the following types of questions. How do we describe the orientation of a rotating diatomic molecule in space? Is the molecule actually rotating? What properties of the molecule can be physically observed?
• 7.6: Rotational Spectroscopy of Diatomic Molecules
The permanent electric dipole moments of polar molecules couple to the electric field of electromagnetic radiation to induce transitions between the rotational states of the molecules. The energies that are associated with these transitions are detected in the far infrared and microwave regions of the spectrum. The selection rules for the rotational transitions are derived from the transition moment integral by using the spherical harmonic functions and the appropriate dipole moment operator.
• 7.7: Overview of the Rigid Rotor
We found that the rotational wavefunctions are functions called the Spherical Harmonics, and that these functions are products of Associated Legendre Functions and the $e^{im\varphi}$ function. Two quantum numbers, $J$ and $m_J$, are associated with the rotational motion of a diatomic molecule. The quantum numbers identify or specify the particular functions that describe particular rotational states.
• 7.E: Rotational States (Exercises)
Exercises for the "Quantum States of Atoms and Molecules" TextMap by Zielinksi et al.
07: Rotational States
Molecules rotate as well as vibrate. Transitions between rotational energy levels in molecules generally are found in the far infrared and microwave regions of the electromagnetic spectrum. A rotational spectrum of a simple diatomic molecule is illustrated in Figure \(1\) and quantitative information about this spectrum is given in Table \(2\) near the end of this chapter. Notice that the lines are nearly equally spaced and vary in intensity. In this chapter, we will see that the magnitude of the molecule's moment of inertia causes rotational transitions to lie in these spectral regions. We also will learn why the lines are nearly equally spaced and vary in intensity. Such spectra can be used to determine bond lengths, and even bond angles in polyatomic molecules.
To develop a description of the rotational states, we will consider the molecule to be a rigid object, i.e. the bond lengths are fixed and the molecule cannot vibrate. This model for rotation is called the rigid-rotor model. It is a good approximation (even though a molecule vibrates as it rotates, and the bonds are elastic rather than rigid) because the amplitude of the vibration is small compared to the bond length.
The rotation of a rigid object in space is very simple to visualize. Pick up any object and rotate it. There are orthogonal rotations about each of the three Cartesian coordinate axes just as there are orthogonal translations in each of the directions in three-dimensional space; see Figures \(2\) and \(3\). The rotations are said to be orthogonal because one cannot describe a rotation about one axis in terms of rotations about the other axes, just as one cannot describe a translation along the x-axis in terms of translations along the y- and z-axes. For a linear molecule, the motion around the interatomic axis (x-axis) is not a rotation.
In this chapter we consider the case of a diatomic molecule. Solving the Schrödinger equation for rotational motion will give us the rotational energies and angular momenta, the wavefunctions associated with these energy levels and angular momenta, and the quantum numbers that serve to label the energy levels, angular momenta, and wavefunctions. The quantum numbers appear because of boundary conditions imposed on the wavefunctions.
We will find that the energy of rotation is quantized. This quantization and the selection rules for spectroscopic transitions between the various energy levels lead to the absorption lines seen in the rotational spectrum in Figure \(1\). The angular momentum also is quantized, which means that only certain values of the angular momentum are possible, and, when some direction is uniquely defined by an electric or magnetic field, only certain orientations of the rotating molecule in space are possible. This restriction on orientation is called space quantization.
As you can see in Figure \(4\) and Exercise \(1\), the angular momentum vector for a classical rotating system is perpendicular to the plane of rotation. The direction of the vector is given by applying the right-hand rule to the direction of rotation. The orientation of a classical rotating diatomic molecule therefore is defined by the plane in which the internuclear axis lies during rotation or by the direction of the angular momentum vector, which is perpendicular to this plane. The direction of the angular momentum vector is called the axis-of-rotation. Angular momentum vectors are useful because they provide a shorthand way to represent the classical motion of a rotating diatomic molecule. Given only an angular momentum vector, it is possible to reconstruct the direction and plane of rotation in addition to the magnitude of the angular momentum, which is in turn a function of the reduced mass, bond length, and angular velocity of the rotating molecule.
The material in this chapter is very important because the wavefunctions obtained by solving the Schrödinger equation for rotational motion also will be used to describe the hydrogen atom in the next chapter. The hydrogen atom wavefunctions, in turn, are the key to understanding atomic and molecular structure and chemical reactivity.
Exercise \(1\)
To visualize these different orientations and the axis-of-rotation, use your pencil to represent the internuclear axis of a diatomic molecule, rotate it in different planes and align another pencil along the axis-of-rotation in each case.
We start our consideration of rotational motion with a system consisting of two atoms connected by a rigid bond, shown in Figure $1$. Translational motion can be separated from rotational motion if we specify the position of the center of mass by a vector $R$, and the positions of each atom relative to the center of mass by vectors $r_1$ and $r_2$. The positions of the atoms then are given by $R + r_1$ and $R + r_2$. The motion of the two particles is described as the translational motion of the center of mass plus the rotational motion of the two particles around the center of mass.
The quantum mechanical description of translational motion, which corresponds to a free particle with total mass $m_1 + m_2$, was described in Chapter 5. Since translational motion and rotational motion are separable, i.e. independent, the translational and rotational energies will add, and the total wavefunction will be a product of a translational function and a rotational function.
Exercise $1$
What do you need to know in order to write the Hamiltonian for the rigid rotor?
We start our quantum mechanical description of rotation with the Hamiltonian:
$\hat {H} = \hat {T} + \hat {V} \label {7.1}$
To explicitly write the components of the Hamiltonian operator, first consider the classical energy of the two rotating atoms and then transform the classical momentum that appears in the energy equation into the equivalent quantum mechanical operator. In the classical analysis the rotational motion of two particles separated by a distance $r$ can be treated as a single particle with reduced mass $μ$ at a distance $r$ from the center of rotation, which is the center of mass.
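The reduced-mass picture can be made concrete with numbers. A short sketch computing $\mu = m_1 m_2/(m_1 + m_2)$ and the corresponding moment of inertia $I = \mu r^2$, which appears later in the chapter (the $^1$H$^{35}$Cl masses and 127.5 pm bond length used here are illustrative, assumed values):

```python
u = 1.66053906660e-27    # atomic mass unit in kg

def reduced_mass(m1, m2):
    """Reduced mass of a two-particle system (masses in u, result in kg)."""
    return m1 * m2 / (m1 + m2) * u

# Illustrative numbers for 1H35Cl (assumed isotopic masses)
mu = reduced_mass(1.00783, 34.96885)
r0 = 127.5e-12           # assumed bond length, m
I = mu * r0 ** 2         # moment of inertia of the rigid rotor
print(f"mu = {mu:.4e} kg, I = {I:.4e} kg m^2")
```

Because the H atom is so much lighter than Cl, $\mu$ is close to the H-atom mass: the light atom does most of the moving.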
The kinetic energy of the reduced particle is
$T = \dfrac {p^2}{2 \mu} \label {7-2}$
where
$p^2 = p^2_x + p^2_y + p^2_z \label {7-3}$
Transforming $T$ to a quantum-mechanical operator yields
$\hat {T} = - \dfrac {\hbar ^2 \nabla ^2}{2 \mu} \label {7-4}$
where $\nabla ^2$ is the Laplacian operator.
$\nabla ^2 = \dfrac {\partial ^2}{\partial x^2} + \dfrac {\partial ^2}{\partial y^2} + \dfrac {\partial ^2}{\partial z^2} \label {7-5}$
The rigid rotor model does not include the presence of electric or magnetic fields, or any other external force. Since there are no forces acting on the rotating particle, the potential energy is constant, and we can set it to zero or to any other value because only changes in energy are significant, and there is no absolute zero of energy.
$\hat {V} = 0 \label {7-6}$
Therefore, the Hamiltonian operator for the Schrödinger equation describing this system consists only of the kinetic energy term.
$\hat {H} = \hat {T} + \hat {V} = - \dfrac {\hbar ^2 \nabla ^2}{2 \mu} \label {7-7}$
In Equation \ref{7-5} we wrote the Laplacian operator in Cartesian coordinates. Cartesian coordinates (x, y, z) describe position and motion relative to three axes that intersect at 90º. They work fine when the geometry of a problem reflects the symmetry of lines intersecting at 90º, but the Cartesian coordinate system is not so convenient when the geometry involves objects going in circles as in the rotation of a molecule. In this case, spherical coordinates $(r,\,\theta,\,\phi)$ shown in Figure $2$ are better.
The limits of these coordinates are
• $0 \le r < \infty$
• $0 \le \theta \le \pi$
• $0 \le \varphi < 2\pi$
Spherical coordinates are better because they reflect the spherical symmetry of a rotating molecule. Spherical coordinates have the advantage that motion in a circle can be described by using only a single coordinate. For example, as shown in Figure $2$, changing φ describes rotation around the z‑axis. Changing $θ$ also is very simple. It describes rotation in any plane containing the z‑axis, and $r$ describes the distance from the origin for any value of $θ$ and $φ$.
This situation is analogous to choosing Cartesian or spherical coordinates to locate rooms in a building. Cartesian coordinates are excellent if the building is designed with hallways intersecting at 90º and with an elevator running perpendicular to the floors. Cartesian coordinates would be awkward to use for addresses in a spherical satellite space station with spherical hallways at various distances from the center.
Exercise $2$
Imagine and draw a sketch of a spherical space station with spherical shells for hallways. Show how three spherical coordinates can be used to specify the address of a particular room in terms of values for $r$, $θ$, and $φ$.
In order to use spherical coordinates, we need to express $\nabla ^2$ in terms of $r$, $\theta$, and $\varphi$. The result of the coordinate transformation is
$\nabla ^2 = \dfrac {1}{r^2} \left ( \dfrac {\partial}{\partial r} r^2 \dfrac {\partial}{\partial r} + \dfrac {1}{\sin \theta} \dfrac {\partial}{\partial \theta} \sin \theta \dfrac {\partial}{\partial \theta} + \dfrac {1}{\sin ^2 \theta } \dfrac {\partial ^2}{\partial \varphi ^2} \right ) \label {7-10}$
The Hamiltonian operator in spherical coordinates now becomes
$\hat {H} = \dfrac {-\hbar ^2}{2 \mu r^2} \left[ \dfrac {\partial}{\partial r} r^2 \dfrac {\partial}{\partial r} + \dfrac {1}{\sin \theta} \dfrac {\partial}{\partial \theta} \sin \theta \dfrac {\partial}{\partial \theta} + \dfrac {1}{\sin ^2 \theta } \dfrac {\partial ^2}{\partial \varphi ^2} \right] \label {7-11}$
This version of the Hamiltonian looks more complicated than Equation \ref{7-7}, but it has the advantage of using variables that are separable (see Separation of Variables). As you may recall, when the variables are separable, the Schrödinger equation can be written as a sum of terms, with each term depending only on a single variable, and the wavefunction solutions are products of functions that each depend on only one variable.
To solve the Schrödinger equation for the rigid rotor, we will separate the variables and form single-variable equations that can be solved independently. Only two variables $\theta$ and $\varphi$ are required in the rigid rotor model because the bond length, $r$, is taken to be the constant $r_0$. We first write the rigid rotor wavefunctions as the product of a $\Theta$-function depending only on $\theta$ and a $\Phi$-function depending only on $\varphi$
$\psi (\theta , \varphi ) = \Theta (\theta ) \Phi (\varphi) \label{7-12}$
We then substitute the product wavefunction and the Hamiltonian written in spherical coordinates into the Schrödinger Equation \ref{7-13}:
$\hat {H} \psi (\theta , \varphi ) = E \psi (\theta , \varphi ) \label{7-13}$
to obtain
$-\dfrac {\hbar ^2}{2\mu r^2_0} \left [ \dfrac {\partial}{\partial r_0} r^2_0 \dfrac {\partial}{\partial r_0} + \dfrac {1}{\sin \theta} \dfrac {\partial}{\partial \theta } \sin \theta \dfrac {\partial}{\partial \theta } + \dfrac {1}{\sin ^2 \theta} \dfrac {\partial ^2}{\partial \varphi ^2} \right ] \Theta (\theta ) \Phi (\varphi) = E \Theta (\theta) \Phi (\varphi) \label {7-14}$
Since $r = r_0$ is constant for the rigid rotor and does not appear as a variable in the functions, the partial derivatives with respect to $r$ are zero; i.e. the functions do not change with respect to $r$. We also can substitute the symbol $I$ for the moment of inertia, $\mu r^2_0$ in the denominator of the left hand side of Equation \ref{7-14}, to give
$-\dfrac {\hbar ^2}{2I} \left [ \dfrac {1}{\sin \theta} \dfrac {\partial}{\partial \theta } \sin \theta \dfrac {\partial}{\partial \theta } + \dfrac {1}{\sin ^2 \theta} \dfrac {\partial ^2}{\partial \varphi ^2}\right ] \Theta (\theta ) \Phi (\varphi) = E \Theta (\theta) \Phi (\varphi) \label{7-15}$
To begin the process of separating the variables in Equation \ref{7-15}, multiply each side of the equation by $\dfrac {2I}{\hbar ^2}$ and $\dfrac {-\sin ^2 \theta}{\Theta (\theta) \Phi (\varphi)}$ to give
$\dfrac {1}{\Theta (\theta) \Phi (\varphi)} \left [ \sin \theta \dfrac {\partial}{\partial \theta } \sin \theta \dfrac {\partial}{\partial \theta } + \dfrac {\partial ^2}{\partial \varphi ^2}\right ] \Theta (\theta ) \Phi (\varphi) = \dfrac {-2IE \sin ^2 \theta}{\hbar ^2} \label {7-16}$
Simplify the appearance of the right-hand side of Equation \ref{7-16} by defining a parameter $\lambda$:
$\lambda = \dfrac {2IE}{\hbar ^2} \label{7-17}$
Note that this $\lambda$ has no connection to a wavelength; it is merely being used as an algebraic symbol for the combination of constants shown in Equation \ref{7-17}.
Inserting $\lambda$, evaluating partial derivatives, and rearranging Equation \ref{7-16} produces
$\dfrac {1}{\Theta (\theta)} \left [ \sin \theta \dfrac {\partial}{\partial \theta } \left (\sin \theta \dfrac {\partial}{\partial \theta } \right ) \Theta (\theta) + \left ( \lambda \sin ^2 \theta \right ) \Theta (\theta) \right ] = - \dfrac {1}{\Phi (\varphi)} \dfrac {\partial ^2}{\partial \varphi ^2} \Phi (\varphi) \label {7-18}$
Exercise $1$
Carry out the steps leading from Equation \ref{7-16} to Equation \ref{7-18}. Keep in mind that, if $y$ is not a function of $x$, $y$ can be factored out of a derivative with respect to $x$:

$\dfrac {d}{dx} \left( y \, f(x) \right) = y \dfrac {df}{dx}$
Equation \ref{7-18} says that the function on the left, depending only on the variable $\theta$, always equals the function on the right, depending only on the variable $\varphi$, for all values of $\theta$ and $\varphi$. The only way two different functions of independent variables can be equal for all values of the variables is if both functions are equal to a constant (review separation of variables). We call this constant $m^2$ because soon we will need the square root of it. The two differential equations to solve are the $\theta$ -equation
$\sin \theta \dfrac {d}{d \theta} \left ( \sin \theta \dfrac {d}{d \theta} \right ) \Theta (\theta ) + \left ( \lambda \sin ^2 \theta - m^2 \right ) \Theta (\theta ) = 0 \label {7-19}$
and the $\varphi$ -equation
$\dfrac {d^2}{d \varphi ^2} \Phi (\varphi ) + m^2 \Phi (\varphi) = 0 \label {7-20}$
The partial derivatives have been replaced by total derivatives because only a single variable is involved in each equation.
Exercise $2$
Show how Equations \ref{7-19} and \ref{7-20} are obtained from Equation \ref{7-18}.
The $\varphi$-equation is similar to the Schrödinger equation for the free particle. Since we already solved this differential equation in Chapter 5, we immediately write the solutions:
$\Phi _m (\varphi) = N e^{\pm im \varphi} \label {7-21}$
Exercise $3$
Substitute Equation \ref{7-21} into Equation \ref{7-20} to show that it is a solution to that differential equation.
The normalization condition, Equation \ref{7-22}, is used to find the value of $N$ in Equation \ref{7-21}.
$\int \limits ^{2 \pi} _0 \Phi ^*(\varphi) \Phi (\varphi) d \varphi = 1 \label {7-22}$
The range of the integral is only from $0$ to $2π$ because the angle $\varphi$ specifies the position of the internuclear axis relative to the x-axis of the coordinate system and angles greater than $2π$ do not specify additional new positions.
Exercise $4$
Use the normalization condition, Equation \ref{7-22}, to show that

$N = (2\pi)^{-1/2}$
Values for $m$ are found by using a cyclic boundary condition. The cyclic boundary condition means that, since $\varphi$ and $\varphi + 2\pi$ refer to the same point in three-dimensional space, $\Phi (\varphi)$ must equal $\Phi (\varphi + 2\pi )$, i.e.
$e^{im\varphi} = e^{im (\varphi + 2\pi)}$
$= e^{im\varphi} e^{im2\pi} \label {7-23}$
For the equality in Equation $\ref{7-23}$ to hold, $e^{im2\pi}$ must equal 1, which is true only when
$m = \cdots , -3, -2, -1, 0, 1, 2, 3, \cdots \label {7-24}$
In other words, $m$ can equal any positive or negative integer or zero.
Exercise $5$
Use Euler’s Formula to show that $e^{im2π}$ equals 1 for $m$ equal to zero or any positive or negative integer.
Thus, the $Φ$ function is
$\Phi _m (\varphi ) = (2\pi) ^{-1/2} e^{\pm im\varphi} \label {7-25}$
$\text { with } m = 0, \pm 1, \pm 2, \cdots$
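The normalization, the orthogonality for different $m$, and the cyclic boundary condition for $\Phi_m(\varphi)$ can all be verified numerically; a sketch using simple midpoint Riemann sums:

```python
import cmath
import math

def Phi(m, phi):
    """Normalized phi-function: (2 pi)^(-1/2) exp(i m phi)."""
    return (2.0 * math.pi) ** -0.5 * cmath.exp(1j * m * phi)

# Cyclic boundary condition: phi and phi + 2 pi are the same point
bc_ok = abs(Phi(3, 0.7) - Phi(3, 0.7 + 2.0 * math.pi)) < 1e-12

# Midpoint sums for normalization (m = 2) and orthogonality (m = 1 vs m = 2)
N = 20000
dphi = 2.0 * math.pi / N
grid = [(i + 0.5) * dphi for i in range(N)]
norm = sum(abs(Phi(2, p)) ** 2 for p in grid) * dphi
overlap = sum(Phi(1, p).conjugate() * Phi(2, p) for p in grid) * dphi
print(bc_ok, round(norm, 6), abs(overlap) < 1e-9)
```

Note that the orthogonality integral requires the complex conjugate of one function, exactly as in the normalization condition of Equation \ref{7-22}.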
Finding the $\Theta (\theta)$ functions that are solutions to the $\theta$-equation, Equation \ref{7-19}, is a more complicated process. Solutions are found to be a set of power series called Associated Legendre Functions, which are power series of trigonometric functions, i.e. products and powers of sine and cosine functions. The $\Theta (\theta)$ functions, along with their normalization constants, are shown in the third column of Table $1$.
Table $1$: Spherical Harmonic Wavefunctions

| $m$ | $J$ | $\Theta ^m_J (\theta)$ | $\Phi (\varphi)$ | $Y^m_J (\theta , \varphi)$ |
|---|---|---|---|---|
| 0 | 0 | $\dfrac {1}{\sqrt {2}}$ | $\dfrac {1}{\sqrt {2 \pi}}$ | $\dfrac {1}{\sqrt {4 \pi}}$ |
| 0 | 1 | $\sqrt {\dfrac {3}{2}}\cos \theta$ | $\dfrac {1}{\sqrt {2 \pi}}$ | $\sqrt {\dfrac {3}{4 \pi}}\cos \theta$ |
| 1 | 1 | $\sqrt {\dfrac {3}{4}}\sin \theta$ | $\dfrac {1}{\sqrt {2 \pi}}e^{i \varphi}$ | $\sqrt {\dfrac {3}{8 \pi}}\sin \theta e^{i \varphi}$ |
| -1 | 1 | $\sqrt {\dfrac {3}{4}}\sin \theta$ | $\dfrac {1}{\sqrt {2 \pi}}e^{-i\varphi}$ | $\sqrt {\dfrac {3}{8 \pi}}\sin \theta e^{-i \varphi}$ |
| 0 | 2 | $\sqrt {\dfrac {5}{8}}(3\cos ^2 \theta - 1)$ | $\dfrac {1}{\sqrt {2 \pi}}$ | $\sqrt {\dfrac {5}{16\pi}}(3\cos ^2 \theta - 1)$ |
| 1 | 2 | $\sqrt {\dfrac {15}{4}} \sin \theta \cos \theta$ | $\dfrac {1}{\sqrt {2 \pi}}e^{i \varphi}$ | $\sqrt {\dfrac {15}{8\pi}} \sin \theta \cos \theta e^{i\varphi}$ |
| -1 | 2 | $\sqrt {\dfrac {15}{4}} \sin \theta \cos \theta$ | $\dfrac {1}{\sqrt {2 \pi}}e^{-i\varphi}$ | $\sqrt {\dfrac {15}{8\pi}} \sin \theta \cos \theta e^{-i\varphi}$ |
| 2 | 2 | $\sqrt {\dfrac {15}{16}} \sin ^2 \theta$ | $\dfrac {1}{\sqrt {2 \pi}}e^{2i\varphi}$ | $\sqrt {\dfrac {15}{32\pi}} \sin ^2 \theta e^{2i\varphi}$ |
| -2 | 2 | $\sqrt {\dfrac {15}{16}} \sin ^2 \theta$ | $\dfrac {1}{\sqrt {2 \pi}}e^{-2i\varphi}$ | $\sqrt {\dfrac {15}{32\pi}} \sin ^2 \theta e^{-2i\varphi}$ |
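As a check on the normalization convention of these functions, a spherical harmonic should integrate to one over the sphere with the volume element $\sin\theta \, d\theta \, d\varphi$. A numerical sketch for $Y_1^0 = \sqrt{3/(4\pi)}\cos\theta$:

```python
import math

def Y10_sq(theta):
    """|Y_1^0|^2 = (3 / 4 pi) cos^2(theta)."""
    return (3.0 / (4.0 * math.pi)) * math.cos(theta) ** 2

# Midpoint Riemann sum over theta; the phi integral contributes a factor of
# 2 pi because Y_1^0 has no phi dependence.
N = 2000
dtheta = math.pi / N
norm = 2.0 * math.pi * sum(Y10_sq((i + 0.5) * dtheta) * math.sin((i + 0.5) * dtheta)
                           for i in range(N)) * dtheta
print(norm)
```

The same sum with two different spherical harmonics (and a complex conjugate) would give zero, reflecting the orthonormality of the set.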
The solution to the $\theta$-equation requires that $λ$ in Equation \ref{7-18} be given by
$\lambda = J (J + 1) \label {7-26}$
where
$J \ge |m| \label {7-27}$
$J$ can be 0 or any positive integer greater than or equal to m. Each pair of values for the quantum numbers, $J$ and $m$, identifies a rotational state and a wavefunction. For clarity in remembering that $J$ controls the allowed values of $m$, $m$ is often referred to as $m_J$, and we will now use that notation.
The combination of Equations \ref{7-17} and \ref{7-26} reveals that the energy of this system is quantized.
$E = \dfrac {\hbar ^2 \lambda}{2I} = J(J + 1) \dfrac {\hbar ^2}{2I} \label {7-28}$
Exercise $6$
Compute the energy levels for a rotating molecule for J = 0 to J = 5 using units of $\dfrac {\hbar ^2}{2I}$.
Using Equation \ref{7-28}, you can construct a rotational energy level diagram. For simplicity, use energy units of $\dfrac {\hbar ^2}{2I}$. The lowest energy state has $J = 0$ and $m_J = 0$. This state has an energy $E_0 = 0$. There is only one state with this energy, i.e. one set of quantum numbers, one wavefunction, and one set of properties for the molecule.
The next energy level is $J = 1$ with energy $\dfrac {2\hbar ^2}{2I}$. There are three states with this energy because $m_J$ can equal +1, 0, or -1. These different states correspond to different orientations of the rotating molecule in space. These states are discussed in detail in Sections 7.3 and 7.4. States with the same energy are said to be degenerate. The degeneracy of an energy level is the number of states with that energy. The degeneracy of the $J = 1$ energy level is 3 because there are three states with the energy $\dfrac {2\hbar ^2}{2I}$.
The next energy level is for $J = 2$. The energy is $\dfrac {6\hbar ^2}{2I}$, and there are five states with this energy corresponding to $m_J = +2, +1, 0, ‑1, ‑2$. The energy level degeneracy is five. Note that the spacing between energy levels increases as J increases. Also note that the degeneracy increases. The degeneracy is always $2J+1$ because $m_J$ ranges from $+J$ to $‑J$ in integer steps, including 0.
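The pattern described above can be tabulated with a short numerical sketch (Python, not part of the original text): energies in units of $\dfrac {\hbar ^2}{2I}$ and degeneracies for the first few levels.

```python
# Rotational energy levels of a rigid rotor, in units of hbar^2 / (2 I):
# E_J = J(J+1), and each level holds 2J+1 states (m_J = -J, ..., +J).

def rotor_level(J):
    """Return (energy in units of hbar^2/2I, degeneracy, allowed m_J values)."""
    if J < 0 or int(J) != J:
        raise ValueError("J must be a non-negative integer")
    energy = J * (J + 1)
    m_values = list(range(-J, J + 1))
    return energy, len(m_values), m_values

for J in range(6):
    E, g, ms = rotor_level(J)
    print(f"J={J}: E={E:2d} (hbar^2/2I), degeneracy={g}, m_J={ms}")
```

Running the loop makes the increasing level spacing (2, 4, 6, ... in these units) and the $2J+1$ degeneracy pattern immediately visible.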
Exercise $7$
For $J = 0$ to $J = 5$, identify the degeneracy of each energy level and the values of the mJ quantum number that go with each value of the $J$ quantum number. Construct a rotational energy level diagram (see Drawing Energy Level Diagrams) including $J$ = 0 through 5. Label each level with the appropriate values for the quantum numbers J and $m_J$. Describe how the spacing between levels varies with increasing $J$.
A wavefunction that is a solution to the rigid rotor Schrödinger equation (defined in Equation \ref{7-12}) can be written as a single function $Y (\theta , \varphi)$, which is called a spherical harmonic function.
$Y^{m_J} _J (\theta , \varphi ) = \Theta ^{|m_J|}_J (\theta) \Phi _{m_J} (\varphi) \label {7-29}$
The spherical harmonic wavefunction is labeled with $m_J$ and $J$ because its functional form depends on both of these quantum numbers. These functions are tabulated above for $J = 0$ through $J = 2$ in Table $1$; plots of some of the $\theta$-functions are shown in Figure $1$.
The two-dimensional space for a rigid rotor is defined as the surface of a sphere of radius $r_0$, as shown in Figure $2$.
The probability of finding the internuclear axis at specific coordinates $\theta _0$ and $\varphi _0$ within an infinitesimal area $ds$ on this curved surface is given by
$Pr \left [ \theta _0, \varphi _0 \right ] = Y^{m_J *}_J (\theta _0, \varphi _0) Y^{m_J}_J (\theta _0, \varphi _0) ds \label {7-30}$
where the area element $ds$ is centered at $\theta _0$ and $\varphi _0$. The absolute square (or modulus squared) of the rigid rotor wavefunction, $Y^{m_J *}_J (\theta, \varphi) Y^{m_J}_J (\theta, \varphi)$, gives the probability density for finding the internuclear axis oriented at $\theta$ to the z-axis and $\varphi$ to the x-axis, and in spherical coordinates the area element used for integrating over $\theta$ and $\varphi$ is
$ds = \sin \theta d \theta d \varphi \label {7-31}$
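As a numerical sketch (not part of the original text), one can verify that a spherical harmonic from Table $1$ is normalized by integrating $|Y|^2$ over the full sphere with the area element of Equation \ref{7-31}; the midpoint-rule grid size below is an arbitrary choice.

```python
import numpy as np

# Check that Y_1^0 = sqrt(3/4pi) cos(theta) is normalized on the sphere,
# integrating |Y|^2 sin(theta) dtheta dphi with a midpoint rule.
n = 400
dth = np.pi / n
dph = 2.0 * np.pi / n
th = (np.arange(n) + 0.5) * dth          # midpoints in theta on [0, pi]
ph = (np.arange(n) + 0.5) * dph          # midpoints in phi on [0, 2 pi]
TH, PH = np.meshgrid(th, ph, indexing="ij")

Y = np.sqrt(3.0 / (4.0 * np.pi)) * np.cos(TH)   # Y_1^0 (real in this case)
norm = np.sum(np.abs(Y) ** 2 * np.sin(TH)) * dth * dph
print(round(norm, 4))
```

The same loop with any other entry of Table $1$ (taking the modulus squared so the $e^{im\varphi}$ factor drops out) should also return 1 to within the discretization error.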
Exercise $8$
Use calculus to evaluate the probability of finding the internuclear axis of a molecule described by the $J = 1$, $m_J = 0$ wavefunction somewhere in the region defined by a range in $\theta$ of 0° to 45°, and a range in $\varphi$ of 0° to 90°. Note that a double integral will be needed. Sketch this region as a shaded area on Figure $1$.
Consider the significance of the probability density function by examining the $J = 1$, $m_J = 0$ wavefunction. The spherical harmonic for this case is
$Y^0_1 = \left ( \dfrac {3}{4 \pi} \right )^{\dfrac {1}{2}} \cos \theta \label {7-32}$
The polar plot of $( Y^0_1)^2$ is shown in Figure $1$. For $J = 1$ and $m_J = 0$, the probability of finding the internuclear axis is independent of the angle $\varphi$ from the x-axis, and greatest for finding the internuclear axis along the z‑axis, but there also is a probability for finding it at other values of $\theta$ as well. So, although the internuclear axis is not always aligned with the z-axis, the probability is highest for this alignment. Also, since the probability is independent of the angle $\varphi$, the internuclear axis can be found in any plane containing the z-axis with equal probability.
The $J = 1$, $m_J = 0$ function is 0 when $\theta$ = 90°. Therefore, the entire xy-plane is a node. This fact means the probability of finding the internuclear axis in this particular horizontal plane is 0 in contradiction to our classical picture of a rotating molecule. In the classical picture, a molecule rotating in a plane perpendicular to the xy‑plane must have the internuclear axis lie in the xy‑plane twice every revolution, but the quantum mechanical description says that the probability of being in the xy-plane is zero. This conclusion means that molecules are not rotating in the classical sense, but they still have some, but not all, of the properties associated with classical rotation. The properties they retain are associated with angular momentum.
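This near-zero probability around the nodal plane can be checked numerically; the following sketch (an illustration, not part of the original text) integrates $|Y^0_1|^2 \sin \theta$ over a 10° band about $\theta = 90°$.

```python
import numpy as np

# Probability of finding the internuclear axis within 85-95 degrees of the
# z-axis for the J=1, m_J=0 state.  The xy-plane (theta = 90 deg) is a node,
# so this band probability should be tiny.
n = 2000
th_lo, th_hi = np.radians(85.0), np.radians(95.0)
dth = (th_hi - th_lo) / n
th = th_lo + (np.arange(n) + 0.5) * dth                 # midpoint grid
density = (3.0 / (4.0 * np.pi)) * np.cos(th) ** 2 * np.sin(th)  # |Y_1^0|^2 sin
prob = 2.0 * np.pi * np.sum(density) * dth              # phi integral = 2 pi
print(f"probability in the 85-95 degree band: {prob:.2e}")
```

The result is on the order of $10^{-4}$, i.e. well under 0.1%, illustrating quantitatively how strongly the quantum picture disfavors the classically required passes through the xy-plane.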
Exercise $9$
For each state with $J = 0$ and $J = 1$, use the functional form of the $Y$ spherical harmonics and Figure $1$ to determine the most probable orientation of the internuclear axis in a diatomic molecule, i.e. the most probable values for $\theta$ and $\varphi$.
Exercise $10$
Write a paragraph describing the information about a rotating molecule that is provided in the polar plot of $Pr [\theta, \varphi ]$ for the $J = 1$, $m_J = \pm 1$ state in Figure $1$. Compare this information to the classical picture of a rotating object.
Angular momentum is a key component in the physical descriptions of rotating systems. It is important because angular momentum, just like energy and linear momentum, must be conserved in any process. Consequently angular momentum is used to derive selection rules for spectroscopic transitions, determine which states of atoms and molecules can be affected by various perturbations, and identify possible and forbidden mechanisms in chemical reactions. Rotational angular momentum also explains the splitting of spectral lines in electric and magnetic fields and the angular distributions of gas-phase reaction products.
Now that we have the rotational wavefunctions that describe the rotational states, we need the angular momentum operators that enable us to extract the angular momentum properties from the wavefunctions. In this section we develop the operators for total angular momentum and the z-component of angular momentum, and use these operators to learn about the quantized nature of angular momentum for a rotating diatomic molecule.
Since the energy of a rotating object is related to its total angular momentum M and moment of inertia I,
$M^2 = 2IE \label {7-33}$
the quantization of energy arising from the quantum-mechanical treatment of rotation, given by Equation \ref{7-28}, means that the total angular momentum also is quantized.
$M^2 = J(J + 1) {\hbar}^2 \label {7-34}$
Consequently, $J$ is called the rotational angular momentum quantum number. In the equation above, $M^2$ is a scalar quantity corresponding to the square of the length of the angular momentum vector, $M$. From this equation, we can learn something about the magnitude of the angular momentum of the rotating molecule, but nothing about the orientation.
Exercise $1$
Show that the combination of Equations \ref{7-33} and \ref{7-28} lead to Equation \ref{7-34}.
Just as there is an operator for the energy, $\hat{H}$, there also is an operator for the square of the angular momentum. We can discover this operator if we make some substitutions and rewrite Equation \ref{7-15}. Start with Equation \ref{7-14}, and use the spherical harmonic function in place of the product function $\Theta (\theta) \psi (\varphi)$ to obtain
$-\dfrac {\hbar ^2}{2I} \left[ \dfrac {\partial}{\partial r_0} r^2_0 \dfrac {\partial}{\partial r_0} + \dfrac {1}{\sin \theta} \dfrac {\partial}{\partial \theta} \sin \theta \dfrac {\partial}{\partial \theta} + \dfrac {1}{\sin ^2 \theta} \dfrac {\partial ^2}{\partial \varphi ^2}\right] Y^{m_J}_J (\theta , \varphi) = EY^{m_J}_J (\theta , \varphi) \label {7-35}$
Multiplying both sides by 2I and then using Equation \ref{7-33} to replace 2IE on the right-hand side with $M^2$ yields
$- \hbar ^2 \left[\dfrac {1}{\sin \theta} \dfrac {\partial}{\partial \theta} \sin \theta \dfrac {\partial}{\partial \theta} + \dfrac {1}{\sin ^2 \theta} \dfrac {\partial ^2}{\partial \varphi ^2}\right] Y(\theta , \varphi) = M^2Y (\theta , \varphi) \label {7-36}$
Equation \ref{7-36} is an eigenvalue equation. The operator on the left operates on the spherical harmonic function to give a value for $M^2$, the square of the rotational angular momentum, times the spherical harmonic function. This operator thus must be the operator for the square of the angular momentum.
$\hat {M}^2 = -\hbar ^2 \left[\dfrac {1}{\sin \theta} \dfrac {\partial}{\partial \theta} \sin \theta \dfrac {\partial}{\partial \theta} + \dfrac {1}{\sin ^2 \theta} \dfrac {\partial ^2}{\partial \varphi ^2}\right] \label {7-37}$
The spherical harmonics therefore are eigenfunctions of $\hat {M} ^2$ with eigenvalues given by Equation \ref{7-34}, where $J$ is the angular momentum quantum number. The magnitude of the angular momentum, i.e. the length of the angular momentum vector, $\sqrt {M^2}$, varies with the quantum number $J$. The classical interpretation of this fact is that the molecule rotates with higher angular velocity in a state with higher $J$ since neither the mass nor the radius of rotation can change.
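This eigenvalue relation can be checked numerically (a sketch, not part of the original text): apply the $\hat {M}^2$ operator of Equation \ref{7-37} to $Y^0_1 \propto \cos \theta$ with finite differences and confirm the eigenvalue $J(J+1)$ with $J = 1$ (working in units where $\hbar = 1$).

```python
import numpy as np

# Numerical check (hbar = 1) that Y ~ cos(theta) satisfies M^2 Y = J(J+1) Y
# with J = 1.  For a phi-independent function the operator of Eq. (7-37)
# reduces to -(1/sin) d/dtheta (sin dY/dtheta).

def M2_apply(f, th, h=1e-4):
    """Apply -(1/sin) d/dth (sin d/dth) to f at angles th, by differences."""
    d_plus = (f(th + h) - f(th)) / h            # df/dth at th + h/2
    d_minus = (f(th) - f(th - h)) / h           # df/dth at th - h/2
    inner = (np.sin(th + h / 2) * d_plus - np.sin(th - h / 2) * d_minus) / h
    return -inner / np.sin(th)

th = np.linspace(0.5, np.pi - 0.5, 50)   # stay away from the poles
lhs = M2_apply(np.cos, th)               # M^2 applied to cos(theta)
rhs = 1 * (1 + 1) * np.cos(th)           # J(J+1) cos(theta) with J = 1
print(np.max(np.abs(lhs - rhs)))         # small discretization error
```

The normalization constant of $Y^0_1$ drops out of the comparison, which is why plain $\cos \theta$ suffices here.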
The $m_J$ quantum number is associated with the $\varphi$-equation, Equation \ref{7-20}. Figure $8$ shows that $\varphi$ describes rotation about the z-axis. Since angular momentum results from rotation about an axis, it seems plausible that the $m_J$ quantum number is related to the z-component of angular momentum. To demonstrate that this association of $m_J$ with the z-component of angular momentum is indeed correct, we need to write an operator for the z-component of angular momentum. When we operate on the $\Phi$ function with this operator, we expect to get an eigenvalue for the z-component of angular momentum.
We create an angular momentum operator by changing the classical expression for angular momentum into the corresponding quantum mechanical operator. The classical expression for the z-component of angular momentum is
$M_z = xp_y - yp_x \label {7-38}$
By substituting the equivalents for the coordinates and momenta we obtain
$\hat {M} _z = -i \hbar \left ( x \dfrac {\partial}{\partial y} - y \dfrac {\partial}{\partial x} \right ) \label {7-39}$
After changing to spherical coordinates by using the chain rule and trigonometric identities, $\hat {M} _z$ becomes
$\hat {M} _z = -i \hbar \dfrac {\partial}{ \partial \varphi} \label {7-40}$
Exercise $2$
Start with $\dfrac {\partial}{ \partial \varphi}$ and change to Cartesian coordinates by using the chain rule to prove that
$\dfrac {\partial}{ \partial \varphi} = x \dfrac {\partial}{ \partial y} - y \dfrac {\partial}{ \partial x}.$
Exercise $3$
Use the operator $\hat {M} _z$ to operate on the general form of the wavefunction $\Phi _m (\varphi)$ given in Equation \ref{7-25}. Based on your result, what are the possible values for the z-component of the angular momentum?
In Exercise $3$ we did indeed produce an eigenvalue equation that tells us that the z-component of angular momentum is
$M_z = m_J \hbar \label {7-41}$
The z-component of the angular momentum is very useful because it provides information about the orientation of the total angular momentum vector, $M$. The magnitude of the total angular momentum vector, $M$, can be determined from $M^2$, but it is only through the value of $M_z$ that we know anything about the orientation of $M$.
Exercise $4$
Determine the lengths of the angular momentum vectors, $M$, for $J = 0$, $1$, and $2$ and the lengths of their projections on the z-axis.
One might expect from classical mechanics to be able to obtain the other two components of angular momentum, $M_x$ and $M_y$, as well. These components are the projections of the angular momentum vector onto the x- and y-axes. In order for the rigid rotor wavefunctions to be eigenfunctions of $\hat {M}_x$ and $\hat {M}_y$ as well as $\hat {M}_z$, these operators must commute with $\hat {M}_z$. They do not commute. Since the rigid rotor wavefunctions are not eigenfunctions of $\hat {M}_x$ and $\hat {M}_y$, we cannot obtain their eigenvalues, which is another way of saying that we cannot know anything about the x and y components of angular momentum. Only the z component of the angular momentum can be determined in the quantum mechanical system. This limitation is a manifestation of the Heisenberg Uncertainty Principle. Since we know $M_z$ exactly (there is no uncertainty), we can have no knowledge of $M_x$ or $M_y$ (the uncertainty must be infinite). This conclusion means that the angular momentum vector can be pointing with equal probability anywhere on a circle around the z-axis, giving all possible projections on the x and y axes. See Figure $1$ for an illustration with J = 1 and $m_J = -1$, $0$, and $+1$.
As we saw in Exercise $4$, the quantum mechanical results for a rotating diatomic molecule provide us with the magnitude of the angular momentum vector for each state, along with the projection of the vector for that state along the z-axis. We can get no information about the projection of the vector on the other two axes. Using this collection of results and a little trigonometry, we can construct the quantitative physical picture of the angular momentum vector for each of the rotational states that is shown in Figure $1$. If $\alpha$ is the angle between the angular momentum vector and the z-axis, then in general
$\cos \alpha = \dfrac {m_J \hbar}{\sqrt {J(J + 1) \hbar ^2}} \label {7-42}$
where $\alpha$ can be obtained using the inverse cosine (arccos)
$\alpha = \arccos \left[ \dfrac {m_J}{\sqrt {J (J + 1)}}\right] \label {7-43}$
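A small sketch evaluating this expression for $J = 1$ and $J = 2$ (the helper name is arbitrary, and the code is an illustration, not part of the original text):

```python
import numpy as np

# Angle between the angular momentum vector and the z-axis:
# alpha = arccos( m_J / sqrt(J(J+1)) ), in degrees.

def alpha_deg(J, mJ):
    if abs(mJ) > J:
        raise ValueError("|m_J| must not exceed J")
    return float(np.degrees(np.arccos(mJ / np.sqrt(J * (J + 1)))))

for J in (1, 2):
    angles = [round(alpha_deg(J, m), 2) for m in range(J, -J - 1, -1)]
    print(f"J={J}: alpha = {angles} degrees")
```

For $J = 1$ this reproduces the 45°, 90°, and 135° orientations discussed below; note that $\alpha = 0$ never occurs, since $|m_J| \le J < \sqrt{J(J+1)}$, so the vector can never lie exactly along the z-axis.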
Exercise $5$
Calculate the possible angles a $J = 1$ angular momentum vector can have with respect to the z-axis.
Exercise $6$
What is the rotational energy and angular momentum of a molecule in the state with $J = 0$? Describe the rotation of a molecule in this state.
Classically the plane of rotation is perpendicular to the angular momentum vector. We can locate angular momentum vectors shown in Figure $1$ for any rotational state by calculating the value of $\alpha$ using the appropriate quantum numbers for that state in Equation \ref{7-43}. If we then impose the classical interpretation of the angular momentum vector, we can construct a physical picture of a rotating diatomic molecule associated with the angular momentum vector for each rotational state, as discussed in the next section of this chapter. | textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Book%3A_Quantum_States_of_Atoms_and_Molecules_(Zielinksi_et_al)/07%3A_Rotational_States/7.04%3A_Angular_Momentum_Operators_and_Eigenvalues.txt |
In this section we examine the rotational states for a diatomic molecule by comparing the classical interpretation of the angular momentum vector with the probabilistic interpretation of the angular momentum wavefunctions. We want to answer the following types of questions. How do we describe the orientation of a rotating diatomic molecule in space? Is the molecule actually rotating? What properties of the molecule can be physically observed? In what ways does the quantum mechanical description of a rotating molecule differ from the classical image of a rotating molecule?
Introduction
In isotropic space, meaning all directions are equivalent, any given molecule can have any orientation. The z-axis can have any orientation. Since we cannot distinguish different directions in isotropic space, different molecules will have different angular momentum vectors and all orientations of these vectors in space will be equally probable. Thus for example, even though our quantum mechanical description says that only three states with different angular momentum and internuclear axis orientations are allowed for the $J = 1$ energy level ($m_J = 0, 1, -1$), there are no practical consequences of this fact in isotropic space.
We can make space anisotropic by applying an external field, e.g. an electric field or a magnetic field. The field direction is a unique direction in space, and it is most convenient to assign the z-axis to that direction. Our rotational wavefunctions and operators implicitly are designed to have z be the special axis because of the relationships between the spherical coordinates and the Cartesian coordinates. The effects of external electric and magnetic fields on atomic and molecular spectra is an active area of research that has provided much insight into the properties of atoms and molecules and the use of quantum mechanics to describe them. For example, magnetic field effects on atomic spectra contributed to the discovery of electron spin and are discussed in the next chapter. These effects are known as Stark effects (electric field) and Zeeman effects (magnetic field) after the pioneers who discovered them.
For the ground state, J = 0, the rotational energy and angular momentum are zero. With no rotational energy or angular momentum, the molecule cannot be rotating! It may be vibrating and translating, but it is not rotating. What is its orientation in space? Examine the wavefunction in Figure $9$. All values of $\theta$ and $\varphi$ are equally probable; we have no information about the orientation of the molecule in this state. The fact that there is no uncertainty in the angular momentum (it is exactly zero), and no information about the orientation (the uncertainty is infinite) is consistent with the Heisenberg Uncertainty Principle.
The space for a rigid rotor is defined by all the values for the variables $\theta$ and $\varphi$. Since a rigid rotor is not constrained to any region of this space by a potential boundary, the energy and angular momentum can be zero because the uncertainty in the location of the rotor in this space is infinite. The rotor can be anywhere; $\theta$ and $\varphi$ can have any values. This situation is analogous to the free particle, where we knew the momentum exactly but had no knowledge of the particle's position. When $J$ is different from 0, the uncertainty in the location of the rotor decreases. It is more likely to be found in some regions of space than in others. This decrease in uncertainty about the location of the rotor is accompanied by an increase in the uncertainty in the angular momentum as required by the Uncertainty Principle. We still know the magnitude of the angular momentum vector exactly, but its direction in space is uncertain.
Next consider the case $J = 1$ with $m_J = +1, 0, -1$. The length of the angular momentum vectors for all of these states is $\sqrt {2} \hbar$; see Exercise $4$ and Figure $1$ of the previous section. From a classical perspective, the non-zero value for angular momentum means that the molecule must be rotating. The projections of the vectors onto the z-axis are ħ, 0, and -ħ for the $m_J = +1, 0, -1$ states, respectively. The classical interpretation of this result is that, while the plane of rotation of the molecule, which is perpendicular to the angular momentum vector describing each state, is confined to a specific orientation with respect to the z-axis, it is not confined with respect to the x- and y-axes.
The classical picture of rotation and interpretation of the angular momentum vector for the J = 1, $m_J = 0$ state places the internuclear axis in the positions shown in Figure $1$, rotating with equal probability in any plane containing the z-axis but unconstrained with respect to the angle $\varphi$.
For the $J = 1$, $m_J = 1$ and $m_J = -1$ states, the wavefunctions are given by
\begin{align} Y^1_1 &= \sqrt {\frac {3}{8 \pi}} \sin \theta e^{i \varphi} \\[4pt] Y^{-1}_1 &= \sqrt {\frac {3}{8 \pi}} \sin \theta e^{-i \varphi} \label {7-44} \end{align}
The imaginary component in these wavefunctions is somewhat disconcerting until we realize that the modulus squared of the wavefunction has the physical interpretation of a probability density, and the imaginary component disappears in the modulus.
$|Y^1_1|^2 = |Y^{-1}_1|^2 = \frac {3}{8 \pi} \sin ^2 \theta \label {7-45}$
In these expressions, which are identical, the angle $\varphi$ again does not appear. There are no constraints on the wavefunctions with respect to the angle $\varphi$; i.e., the probability density is spread evenly over all values of $\varphi$ associated with a particular $\theta$. The $\theta$ dependence is a $\sin^2$ function, which has a maximum at $\theta$ = 90º and goes to 0 as $\theta$ goes to 0º and 180º for both of these rotational states, as shown in Figure $3$.
The classical interpretation of the rotation associated with the angular momentum vector for the $m_J = 1$ state, which is tilted at an angle of 45º from the z-axis, says that the possible planes of rotation containing the internuclear axis of the molecule are aligned perpendicular to the angular momentum vector, as shown in Figure $\PageIndex{4a}$. The molecule is rotating in any of these planes with equal probability.
For the $m_J = -1$ state, the angular momentum vector is at an angle $\alpha$ = 135º from the z-axis, and the possible planes of rotation containing the internuclear axis are perpendicular to it, as shown in Figure $\PageIndex{4b}$. Using the right hand rule, the direction of rotation is clockwise for the $m_J = -1$ state and counterclockwise for the $m_J = 1$ state.
Exercise $19$
Five states have $J = 2$. Calculate the angles the angular momentum vectors for these states make with respect to the z-axis. Sketch a diagram similar to Figure $4$ in which all five angular momentum vectors for the $J = 2$ energy level are placed on the same diagram. Describe how the molecule rotates relative to one of these vectors. | textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Book%3A_Quantum_States_of_Atoms_and_Molecules_(Zielinksi_et_al)/07%3A_Rotational_States/7.05%3A_Quantum_Mechanical_Properties_of_Rotating_Diatomic_Molecules.txt |
The permanent electric dipole moments of polar molecules can couple to the electric field of electromagnetic radiation. This coupling induces transitions between the rotational states of the molecules. The energies associated with these transitions are detected in the far infrared and microwave regions of the spectrum. For example, the microwave spectrum for carbon monoxide, shown at the beginning of the chapter and in Figure $1$ below, spans a frequency range of 100 to 1200 GHz, which corresponds to 3 - 40 $cm^{-1}$.
The selection rules for the rotational transitions are derived from the transition moment integral by using the spherical harmonic functions and the appropriate dipole moment operator, $\hat {\mu}$.
$\mu _T = \int Y_{J_f}^{m_f*} \hat {\mu} Y_{J_i}^{m_i} \sin \theta\, d \theta\, d \varphi \label {7-46}$
Evaluating the transition moment integral involves a bit of mathematical effort. This evaluation reveals that the transition moment depends on the square of the dipole moment of the molecule, $\mu ^2$ and the rotational quantum number, $J$, of the initial state in the transition,
$\mu _T = \mu ^2 \dfrac {J + 1}{2J + 1} \label {7-47}$
and that the selection rules for rotational transitions are
$\Delta J = \pm 1 \label {7-48}$
$\Delta m_J = 0, \pm 1 \label {7-49}$
For $\Delta J = +1$, a photon is absorbed; for $\Delta J = -1$ a photon is emitted.
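These rules can be encoded in a few lines; the helper function below is hypothetical (not from the text) and simply enumerates the states reachable from a given $(J, m_J)$ state by absorption or emission of one photon.

```python
# Enumerate states reachable from (J, m_J) under the rotational selection
# rules Delta J = +/-1 and Delta m_J = 0, +/-1.

def allowed_transitions(J, mJ):
    """Return a sorted list of (J', m_J') states reachable by one photon."""
    out = []
    for Jp in (J - 1, J + 1):            # Delta J = -1 (emission), +1 (absorption)
        if Jp < 0:
            continue
        for mp in (mJ - 1, mJ, mJ + 1):  # Delta m_J = 0, +/-1
            if abs(mp) <= Jp:            # final state must satisfy |m_J'| <= J'
                out.append((Jp, mp))
    return sorted(out)

print(allowed_transitions(0, 0))   # -> [(1, -1), (1, 0), (1, 1)]
print(allowed_transitions(1, 1))   # -> [(0, 0), (2, 0), (2, 1), (2, 2)]
```

Note how the $|m_J| \le J$ constraint automatically removes candidate final states, e.g. $(0, \pm 1)$ when starting from $(1, 1)$.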
Exercise $1$
Explain why your microwave oven heats water but not air. Hint: draw and compare Lewis structures for components of air and for water.
The energies of the rotational levels are given by Equation $\ref{7-28}$,
$E = J(J + 1) \dfrac {\hbar ^2}{2I} \label {7-28}$
and each energy level has a degeneracy of $2J+1$ due to the different $m_J$ values.
Exercise $2$
Use the rotational energy level diagram for $J = 0$, $J=1$, and $J=2$ that you produced in Exercise $9$, and add arrows to show all the allowed transitions between states that cause electromagnetic radiation to be absorbed or emitted.
Transition Energies
The transition energies for absorption of radiation are given by
\begin{align} E_{photon} &= \Delta E_{states} \\[4pt] &= E_f - E_i \\[4pt] &= h \nu \\[4pt] &= hc \bar {\nu} \label {7-50} \end{align}
Substituting Equation \ref{7-28} into Equation \ref{7-50} gives
\begin{align} h \nu &= hc \bar {\nu} \\[4pt] &= J_f (J_f +1) \dfrac {\hbar ^2}{2I} - J_i (J_i +1) \dfrac {\hbar ^2}{2I} \label {7-51} \end{align}
Since microwave spectroscopists use frequency, and infrared spectroscopists use wavenumber units when describing rotational spectra and energy levels, both $\nu$ and $\bar {\nu}$ are included in Equation $\ref{7-51}$, and $J_i$ and $J_f$ are the rotational quantum numbers of the initial (lower) and final (upper) levels involved in the absorption transition. When we add in the constraints imposed by the selection rules, $J_f$ is replaced by $J_i + 1$, because the selection rule requires $J_f - J_i = 1$ for absorption. The equation for absorption transitions then can be written in terms of the quantum number $J_i$ of the initial level alone.
$h \nu = hc \bar {\nu} = 2 (J_i + 1) \dfrac {\hbar ^2}{2I} \label {7-52}$
Divide Equation $\ref{7-52}$ by $h$ to obtain the frequency of the allowed transitions,
$\nu = 2B (J_i + 1) \label {7-53}$
where $B$, the rotational constant for the molecule, is defined as
$B = \dfrac {\hbar ^2}{2hI} \label {7-54}$
Exercise $3$
Complete the steps going from Equation $\ref{7-51}$ to Equation $\ref{7-54}$ and identify the units of $B$ at the end.
Exercise $4$
Infrared spectroscopists use units of wave numbers. Rewrite the steps going from Equation $\ref{7-51}$ to Equation $\ref{7-54}$ to obtain expressions for $\bar {\nu}$ and $B$ in units of wave numbers. Note that to convert $B$ in Hz to $B$ in $cm^{-1}$, you simply divide the former by $c$, the speed of light in cm/s.
Figure $1$ shows the rotational spectrum of $\ce{^{12}C^{16}O}$ as a series of nearly equally spaced lines. The line positions $\nu _J$, line spacings, and maximum absorption coefficients $\gamma _{max}$ (the absorption coefficient associated with each line position) for each line in this spectrum are given in Table $1$.
Table $1$: Rotational Transitions in $\ce{^{12}C^{16}O}$ at 40 K
J $\nu _J$ (MHz) Spacing from previous line (MHz) $\gamma _{max}$
0 115,271.21 0 0.0082
1 230,538.01 115,266.80 0.0533
2 345,795.99 115,257.99 0.1278
3 461,040.76 115,244.77 0.1878
4 576,267.91 115,227.15 0.1983
5 691,473.03 115,205.12 0.1618
6 806,651.78 115,178.68 0.1064
7 921,799.55 115,147.84 0.0576
8 1,036,912.14 115,112.59 0.0262
9 1,151,985.08 115,072.94 0.0103
Let’s try to reproduce Figure $1$ from the data in Table $1$ by using the quantum theory that we have developed so far. Equation $\ref{7-53}$ predicts a pattern of exactly equally spaced lines. The lowest energy transition is between $J_i = 0$ and $J_f = 1$ so the first line in the spectrum appears at a frequency of $2B$. The next transition is from $J_i = 1$ to $J_f = 2$ so the second line appears at $4B$. The spacing of these two lines is 2B. In fact the spacing of all the lines is $2B$ according to this equation, which is consistent with the data in Table $1$ showing that the lines are very nearly equally spaced. The difference between the first spacing and the last spacing is less than 0.2%.
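The near-constancy of the spacings can be checked directly from the Table $1$ data; the following is a minimal sketch (not part of the original text).

```python
# Successive line spacings from Table 1 for 12C16O.  Each spacing should be
# close to 2B, with a small systematic decrease at higher J caused by
# centrifugal stretching (discussed below).

spacings_MHz = [115266.80, 115257.99, 115244.77, 115227.15, 115205.12,
                115178.68, 115147.84, 115112.59, 115072.94]

first, last = spacings_MHz[0], spacings_MHz[-1]
drift_percent = 100.0 * (first - last) / first
print(f"first spacing = {first} MHz, last = {last} MHz")
print(f"total decrease = {drift_percent:.3f}%")   # less than 0.2%, as stated
```

The computed decrease of roughly 0.17% confirms the "less than 0.2%" figure quoted above.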
Exercise $5$
Use Equation $\ref{7-53}$ to prove that the spacing of any two lines in a rotational spectrum is $2B$. That is, derive $\nu _{J_i + 1} - \nu _{J_i} = 2B$.
Non-Rigid Rotor Effects
Centrifugal stretching of the bond as $J$ increases causes the decrease in the spacing between the lines in an observed spectrum. This decrease shows that the molecule is not really a rigid rotor. As the rotational angular momentum increases with increasing $J$, the bond stretches. This stretching increases the moment of inertia and decreases the rotational constant. Centrifugal stretching is exactly what you see if you swing a ball on a rubber band in a circle (Figure $1$).
The effect of centrifugal stretching is smallest at low $J$ values, so a good estimate for $B$ can be obtained from the $J = 0$ to $J = 1$ transition. From $B$, a value for the bond length of the molecule can be obtained since the moment of inertia that appears in the definition of B, Equation $\ref{7-54}$, is the reduced mass times the bond length squared.
Exercise $6$
Use the frequency of the $J = 0$ to $J = 1$ transition observed for carbon monoxide to determine a bond length for carbon monoxide.
When the centrifugal stretching is taken into account quantitatively, the development of which is beyond the scope of the discussion here, a very accurate and precise value for $B$ can be obtained from the observed transition frequencies because of their high precision. Rotational transition frequencies are routinely reported to 8 and 9 significant figures.
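As a sketch of the procedure just described (an illustration, not part of the original text; the physical constants and the $\ce{^{12}C^{16}O}$ atomic masses are standard values, not taken from this chapter), the $J = 0 \rightarrow 1$ line at 115,271.21 MHz from Table $1$ yields $B$ and the bond length.

```python
import math

# Estimate the CO bond length from the J = 0 -> 1 line at 115,271.21 MHz:
# nu(0 -> 1) = 2B, with B = h / (8 pi^2 I) in Hz and I = mu * r0^2.

h = 6.62607015e-34          # J s, Planck constant
u = 1.66053907e-27          # kg, atomic mass unit
nu_01 = 115271.21e6         # Hz, from Table 1

B = nu_01 / 2.0                                   # Hz
I = h / (8.0 * math.pi ** 2 * B)                  # kg m^2, moment of inertia
mu = (12.0 * 15.9949) / (12.0 + 15.9949) * u      # reduced mass of 12C16O
r0 = math.sqrt(I / mu)                            # m, bond length

print(f"B  = {B / 1e9:.3f} GHz")
print(f"r0 = {r0 * 1e12:.1f} pm")   # close to the accepted ~113 pm for CO
```

Note that $B = \hbar^2 / (2hI)$ from Equation \ref{7-54} is algebraically the same as $h / (8 \pi^2 I)$ used in the code.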
As we have just seen, quantum theory successfully predicts the line spacing in a rotational spectrum. An additional feature of the spectrum is the line intensities. The lines in a rotational spectrum do not all have the same intensity, as can be seen in Figure $1$ and Table $1$. The maximum absorption coefficient for each line, $\gamma _{max}$, is proportional to the magnitude of the transition moment, $\mu _T$, which is given by Equation $\ref{7-47}$, and to the population difference between the initial and final states, $\Delta n$. Since $\Delta n$ is the difference in the number of molecules present in the two states per unit volume, it is actually a difference in number density.
$\gamma _{max} = C_{\mu T} \cdot \Delta n \label {7-55}$
where $C_{\mu T}$ includes constants obtained from a more complete derivation of the interaction of radiation with matter.
The dependence on the number of molecules in the initial state is easy to understand. For example, if no molecules were in the $J = 7$, $m_J = 0$ state, no radiation could be absorbed to produce a $J = 7$, $m_J = 0$ to $J = 8$, $m_J = 0$ transition. The dependence of the line intensity on the population of the final state is explained in the following paragraphs.
When molecules interact with an electromagnetic field (i.e., a photon), they can be driven from one state to another with the absorption or emission of energy. Usually there are more molecules in the lower energy state and the absorption of radiation is observed as molecules go from the lower state to the upper state. This situation is the one we have encountered up to now. In some situations, there are more molecules in the upper state and the emission of radiation is observed as molecules are driven from the upper state to the lower state by the electromagnetic field. This situation is called population inversion, and the process is called stimulated emission. Stimulated emission is the reason lasers are possible. Laser is an acronym for light amplification by stimulated emission of radiation. Even in the absence of an electromagnetic field, atoms and molecules can lose energy spontaneously and decay from an upper state to a lower energy state by emitting a photon. This process is called spontaneous emission. Stimulated emission therefore can be thought of as the inverse of absorption because both processes are driven by electromagnetic radiation, i.e. the presence of photons.
Whether absorption or stimulated emission is observed when electromagnetic radiation interacts with a sample depends upon the population difference, $\Delta n$, of the two states involved in the transition. For a rotational transition,
$\Delta n = n_J - n_{J+1} \label {7-56}$
where $n_J$ represents the number of molecules in the lower state and $n_{J+1}$ represents the number in the upper state per unit volume. If this difference is 0, there will be no net absorption or stimulated emission because they exactly balance. If this difference is positive, absorption will be observed; if it is negative, stimulated emission will be observed.
We can develop an expression for $\Delta n$ that uses only the population of the initial state, $n_J$, and the Boltzmann factor. The Boltzmann factor allows us to calculate the population of a higher state given the population of a lower state, the energy gap between the states and the temperature. Multiply the right-hand side of Equation $\ref{7-56}$ by $n_J/n_J$ to obtain
$\Delta n = \left ( 1 - \dfrac {n_{J+1}}{n_J} \right ) n_J \label {7-57}$
Next recognize that the ratio of populations of the states is given by the Boltzmann factor, which when substituted into Equation $\ref{7-57}$ yields
$\Delta n = \left ( 1 - e^{\dfrac {-h \nu _J}{kT}} \right ) n_J \label {7-58}$
where $h \nu _J$ is the energy difference between the two states. For the rigid rotor model
$\nu _J = 2B (J + 1)$
so Equation $\ref{7-58}$ can be rewritten as
$\Delta n = \left ( 1 - e^{\dfrac {-2hB(J+1)}{kT}} \right ) n_J \label {7-59}$
Equation $\ref{7-59}$ expresses the population difference between the two states involved in a rotational transition in terms of the population of the initial state, the rotational constant for the molecule, $B$, the temperature of the sample, and the quantum number of the initial state.
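The size of this population-difference factor is easy to estimate. The sketch below evaluates $(1 - e^{-2hB(J+1)/kT})$ for the first few lines, assuming a rotational constant of about 10.6 cm⁻¹ (roughly that of HCl) and room temperature; the specific constant values and the helper-function name are our own illustrative choices.

```python
import math

h = 6.62607e-34    # Planck constant, J s
k = 1.380649e-23   # Boltzmann constant, J/K
c = 2.99792458e10  # speed of light, cm/s

B = 10.6 * c   # assumed rotational constant (~HCl), converted cm^-1 -> Hz
T = 298.0      # temperature, K

def delta_n_factor(J):
    """Factor (1 - exp(-2hB(J+1)/kT)) multiplying n_J in Eq. (7-59)."""
    return 1.0 - math.exp(-2.0 * h * B * (J + 1) / (k * T))

factors = [delta_n_factor(J) for J in range(5)]
# Each factor is small (adjacent levels are nearly equally populated) but
# positive, so net absorption rather than stimulated emission is observed.
```

Because the rotational energy spacing is much smaller than $kT$ at room temperature, each factor is only a few percent, yet it grows with $J$ since the level spacing grows.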
To get the number density of molecules present in the initial state involved in the transition, $n_J$, we multiply the fraction of molecules in the initial state, $F_J$, by the total number density of molecules in the sample, $n_{total}$.
$n_J = F_J \cdot n_{total} \label {7-60}$
The fraction $F_J$ is obtained from the rotational partition function.
$F_J = (2J + 1) \left (\dfrac {hB}{kT} \right ) \left ( e^{\dfrac {-hBJ(J+1)}{kT}} \right ) \label {7-61}$
The exponential is the Boltzmann factor that accounts for the thermal population of the energy states. The factor $2J+1$ in this equation results from the degeneracy of the energy level. The more states there are at a particular energy, the more molecules will be found with that energy. The ($hB/kT$) factor results from normalization to make the sum of $F_J$ over all values of $J$ equal to 1. At room temperature and below only the ground vibrational state is occupied; so all the molecules ($n_{total}$) are in the ground vibrational state. Thus the fraction of molecules in each rotational state in the ground vibrational state must add up to 1.
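As a numerical check of this normalization, the fractions $F_J$ can be summed over $J$; the exponent is written in terms of the level energy, $hBJ(J+1)$. The rotational constant below (~10.6 cm⁻¹, roughly HCl) is an assumed illustrative value. The sum comes out close to, though slightly above, 1, because the $hB/kT$ prefactor comes from an integral approximation to the discrete partition-function sum.

```python
import math

h = 6.62607e-34    # Planck constant, J s
k = 1.380649e-23   # Boltzmann constant, J/K
c = 2.99792458e10  # speed of light, cm/s

B = 10.6 * c   # assumed rotational constant (~HCl), cm^-1 -> Hz
T = 298.0

def F(J):
    """Fraction of molecules in rotational level J; the exponent uses the
    level energy hBJ(J+1), and (2J+1) is the degeneracy of the level."""
    return (2 * J + 1) * (h * B / (k * T)) * math.exp(-h * B * J * (J + 1) / (k * T))

total = sum(F(J) for J in range(100))   # ~1, slightly above
```

Note also that $F_1 > F_0$: the three-fold degeneracy of $J = 1$ outweighs its small Boltzmann penalty at room temperature.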
Exercise $7$
Show that the numerator, $J(J+1)hB$ in the exponential of Equation \ref{7-61} is the energy of level $J$.
Exercise $8$: Hydrogen Chloride
Calculate the relative populations of the lowest ($J = 0$) and second ($J = 1$) rotational energy level in the $\ce{HCl}$ molecule at room temperature. Do the same for the lowest and second vibrational levels of $\ce{HCl}$. Compare the results of these calculations. Are Boltzmann populations important to vibrational spectroscopy? Are Boltzmann populations important for rotational spectroscopy?
Now we put all these pieces together and develop a master equation for the maximum absorption coefficient for each line in the rotational spectrum, which is identified by the quantum number, $J$, of the initial state. Start with Equation $\ref{7-55}$ and replace $\mu _T$ using Equation $\ref{7-47}$.
$\gamma _{max} = C \left ( \mu ^2 \dfrac {J + 1}{2J + 1} \right ) \cdot \Delta n \label {7-62}$
Then replace $\Delta n$ using Equation $\ref{7-59}$.
$\gamma _{max} = C \left ( \mu ^2 \dfrac {J + 1}{2J + 1} \right ) \left ( 1 - e^{\dfrac {-2hB(J+1)}{kT}} \right ) n_J \label {7-63}$
Finally, replace $n_J$ using Equations $\ref{7-60}$ and $\ref{7-61}$ to produce
$\gamma _{max} = C \left[ \mu ^2 \dfrac {J + 1}{2J + 1}\right] \left[ 1 - e^{\dfrac {-2hB(J+1)}{kT}}\right] \left[ (2J + 1) \left (\dfrac {hB}{kT} \right ) \left ( e^{\dfrac {-hBJ(J+1)}{kT}} \right )\right] n_{total} \label {7-64}$
Equation $\ref{7-64}$ enables us to calculate the relative maximum intensities of the peaks in the rotational spectrum shown in Figure $2$, assuming all molecules are in the lowest energy vibrational state, and predict how this spectrum would change with temperature. The constant $C$ includes the fundamental constants $\epsilon_o$, $c$ and $h$, that follow from a more complete derivation of the interaction of radiation with matter. The complete theory also can account for the line shape and width and includes an additional radiation frequency factor.
$C = \dfrac {2 \pi}{3 \epsilon _0 ch } \label {7-65}$
In the spectrum shown in Figure $1$, the absorption coefficients for each peak first increase with increasing $J$ because the difference in the populations of the states increases and the factor ($J+1$) increases. Notice that the denominator in the factor resulting from the transition moment cancels the degeneracy factor $2J+1$. After the maximum, the second Boltzmann factor, which is a decreasing exponential as $J$ increases, dominates, and the intensity of the peaks drops toward zero. Exploration of how well Equation $\ref{7-64}$ corresponds to the data in Table $1$ and discovering how a rotational spectrum changes with temperature are left to an end-of-the-chapter activity.
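This rise-and-fall pattern can be computed directly by multiplying the three factors discussed above: the transition-moment factor, the population-difference factor of Equation $\ref{7-59}$, and the thermal population fraction of the initial level. The rotational constant (~10.6 cm⁻¹, roughly HCl) and temperature below are assumed illustrative values; only relative intensities are meaningful because constants common to all lines are dropped.

```python
import math

h = 6.62607e-34    # Planck constant, J s
k = 1.380649e-23   # Boltzmann constant, J/K
c = 2.99792458e10  # speed of light, cm/s

B = 10.6 * c   # assumed rotational constant (~HCl), cm^-1 -> Hz
T = 298.0

def rel_intensity(J):
    """Relative gamma_max for the J -> J+1 line; constants common to all
    lines (C, mu^2, n_total) are omitted."""
    moment = (J + 1) / (2 * J + 1)                             # transition-moment factor
    pop_diff = 1.0 - math.exp(-2 * h * B * (J + 1) / (k * T))  # from Eq. (7-59)
    frac = (2 * J + 1) * (h * B / (k * T)) * math.exp(-h * B * J * (J + 1) / (k * T))
    return moment * pop_diff * frac

intensities = [rel_intensity(J) for J in range(25)]
J_max = max(range(25), key=lambda J: intensities[J])
# The intensities rise with J, pass through a maximum at small J, then
# decay toward zero as the thermal population of the initial level falls.
```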
Exercise $9$
Why doesn't the first Boltzmann factor in Equation $\ref{7-64}$ cause the intensity to drop to zero as $J$ increases?
We found that the rotational wavefunctions are functions called the Spherical Harmonics, and that these functions are products of Associated Legendre Functions and the $e^{im_J \varphi}$ function. Two quantum numbers, $J$ and $m_J$, are associated with the rotational motion of a diatomic molecule. The quantum numbers identify or specify the particular functions that describe particular rotational states. The functions are written as
$\psi _{J,m_J} (\theta , \varphi) = Y_{J, m_J} (\theta , \varphi) = \Theta ^{|m_J|}_J (\theta) \Phi _{m_J} (\varphi) \label {7-66}$
The absolute square of the wavefunction evaluated at a particular $(\theta , \varphi)$ gives the probability density for finding the internuclear axis aligned at these angles.
Constraints on the wavefunctions arose from boundary conditions, the requirement that the functions be single valued, and the interpretation of the functions as probability amplitudes. The Spherical Harmonic functions for the rigid rotor have these necessary properties only when $|m_J| \le J$ and $m_J$ is an integer. $J$ is the upper limit to the value of $m_J$, but there is no upper limit to the value of $J$. The subscript $J$ is added to $m_J$ as a reminder that $J$ controls the allowed range of $m_J$.
The angular momentum of a rotating diatomic molecule is quantized by the same constraints that quantize the energy of a rotating system. As summarized in the table below, the rotational angular momentum quantum number, $J$, specifies both the energy and the square of the angular momentum. The z-component of the angular momentum is specified by $m_J$.
Rotational spectra consist of multiple lines spaced nearly equally apart because many rotational levels are populated at room temperature and the rotational energy level spacing increases by approximately $2B$ with each increase in $J$. The rotational constant, $B$, can be used to calculate the bond length of a diatomic molecule. The spectroscopic selection rules for rotation, shown in the Overview table, allow transitions between neighboring $J$ states with the constraint that $m_J$ change by 0 or 1 unit. Additionally, the molecule must have a non-zero dipole moment in order to move from one state to another by interacting with electromagnetic radiation. The factors that interact to control the line intensities in rotational spectra $(\gamma _{max})$ include the square of the transition moment, $\mu _T$, and the population difference between the initial and final states involved in the transition, $\Delta n$.
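The statement that $B$ yields the bond length can be made concrete using the standard rigid-rotor relation $B = h/(8\pi^2 I)$ (with $B$ in Hz) and $I = \mu R^2$. The sketch below assumes $B \approx 10.59$ cm⁻¹ for $^1$H$^{35}$Cl and standard isotopic masses; the numbers are illustrative, not taken from this chapter's data table.

```python
import math

h = 6.62607e-34       # Planck constant, J s
c = 2.99792458e10     # speed of light, cm/s
amu = 1.66053907e-27  # atomic mass unit, kg

B = 10.59 * c                     # assumed rotational constant of 1H35Cl, Hz
m_H, m_Cl = 1.007825, 34.968853   # assumed isotopic masses, amu
mu = (m_H * m_Cl) / (m_H + m_Cl) * amu   # reduced mass, kg

I = h / (8 * math.pi**2 * B)       # moment of inertia from B = h/(8 pi^2 I)
R_pm = math.sqrt(I / mu) * 1e12    # bond length in pm
# R_pm comes out near the accepted HCl bond length (~127 pm).
```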
So far you have seen three different quantum mechanical models (the particle-in-a-box, the harmonic oscillator, and the rigid rotor) that can be used to describe chemically interesting phenomena (absorption of light by cyanine dye molecules, the vibration of molecules to determine bond force constants, and the rotation of molecules to determine bond lengths). For these cases, you should remember the chemical problem, the form of the Hamiltonian, and the characteristics of the wavefunctions (i.e. the names of the functions, and their mathematical and graphical forms). Also remember the associated energy level structure, values for the quantum numbers, and selection rules for electric-dipole transitions.
As we shall see in the following chapter, the selection rules for the rigid rotor also apply to the hydrogen atom and other atoms because the atomic wavefunctions include the same spherical harmonic angular functions, eigenfunctions of the angular momentum operators $\hat {M} ^2$ and $\hat {M} _z$. The selection rules result from transition moment integrals that involve the same angular wavefunctions and therefore are the same for rotational transitions in diatomic molecules and electronic transitions in atoms.
Exercise $1$
Complete the table below. For an example of a completed table, see Chapter 4.
Overview of key concepts and equations for the Rigid Rotor
• Potential energy
• Hamiltonian
• Wavefunctions
• Quantum Numbers
• Energies
• Spectroscopic Selection Rules
• Angular Momentum Properties
7.E: Rotational States (Exercises)
Q7.1
Consider a homonuclear diatomic molecule described by the rotational wavefunction $Y^0_1 (\theta , \varphi )$.
1. Sketch graphical representations of this function by plotting the amplitude of the function vs. some coordinate with all other coordinates held constant.
2. Sketch a three-dimensional polar plot of this function where the three dimensions are x, y, and z.
3. Sketch a picture to show how this molecule is rotating in space.
Q7.2
Consider a homonuclear diatomic molecule of mass M and bond length D described by the rotational wavefunction $Y^{-1}_2 (\theta , \varphi )$.
1. What is the rotational energy of this molecule?
2. What is the rotational angular momentum?
3. What is the z-component of the angular momentum?
4. What angle does the angular momentum vector make with respect to the z-axis?
5. If the molecule is oxygen, what are the numerical answers to 1) – 4)?
Q7.3
Develop an equation for the stimulated emission of a photon. Compare your result to Equation (7-58).
Q7.4
When centrifugal stretching is included in the energy of the rigid-rotor states, the equation for the allowed transition frequencies acquires an extra term: $\nu_{allowed} = 2B (J_i + 1) - 4D(J_i + 1)^3$ (Equation 7-67), where $J_i$ is the quantum number for the initial rotational state, $B$ is the rotational constant, and $D$ is the centrifugal distortion constant. Use the data in Table 7.2 to determine both $B$ and $D$ graphically. Be careful how you use units. Compare the magnitudes of $B$ and $D$. What is the percent difference between $B$ determined without centrifugal stretching and that found here including centrifugal stretching? What would be the corresponding percent error in the bond length computed from $B$?
Q7.5
Write a paragraph explaining why you might expect the same functions involving spherical coordinates to describe both the rigid rotor and the hydrogen atom.
David M. Hanson, Erica Harvey, Robert Sweeney, Theresa Julia Zielinski ("Quantum States of Atoms and Molecules")
The hydrogen atom is of special interest because the hydrogen atom wavefunctions obtained by solving the hydrogen atom Schrödinger equation are a set of functions called atomic orbitals that can be used to describe more complex atoms and even molecules. This feature is particularly useful because, as we shall see in Chapters 9 and 10, the Schrödinger equation for more complex chemical systems cannot be solved analytically. By using the atomic orbitals obtained from the solution of the hydrogen atom Schrödinger equation, we can describe the structure and reactivity of molecules and the nature of chemical bonds. The spacings and intensities of the spectroscopic transitions between the electronic states of the hydrogen atom also are predicted quantitatively by the quantum treatment of this system.
• 8.1: The Schrödinger Equation
The hydrogen atom, consisting of an electron and a proton, is a two-particle system, and the internal motion of two particles around their center of mass is equivalent to the motion of a single particle with a reduced mass.
• 8.2: The Wavefunctions
The solutions to the hydrogen atom Schrödinger equation are functions that are products of a spherical harmonic function and a radial function.
• 8.3: Orbital Energy Levels, Selection Rules, and Spectroscopy
The orbital energy eigenvalues obtained by solving the hydrogen atom Schrödinger equation is negative and approaches zero as the quantum number n approaches infinity. Because the hydrogen atom is used as a foundation for multi-electron systems, it is useful to remember the total energy (binding energy) of the ground state hydrogen atom.
• 8.4: Magnetic Properties and the Zeeman Effect
Electrons in atoms also are moving charges with angular momentum so they too produce a magnetic dipole, which is why some materials are magnetic. A magnetic dipole interacts with an applied magnetic field, and the energy of this interaction is given by the scalar product of the magnetic dipole moment, and the magnetic field.
• 8.5: Discovering Electron Spin
We then have charge moving in a circle, angular momentum, and a magnetic moment, which interacts with the magnetic field and gives us the Zeeman-like effect that we observed. To describe electron spin from a quantum mechanical perspective, we must have spin wavefunctions and spin operators. The properties of the spin states are deduced from experimental observations and by analogy with our treatment of the states arising from the orbital angular momentum of the electron.
• 8.6: Other One-Electron Systems
The quantum mechanical treatment of the hydrogen atom can be extended easily to other one-electron systems such as \(He^+\), \(Li^{2+}\), etc. The Hamiltonian changes in two places. Most importantly, the potential energy term is changed to account for the charge of the nucleus, which is the atomic number of the atom or ion, \(Z\), times the fundamental unit of charge, \(e\).
• 8.7: Spin-Orbitals and Electron Configurations
The wavefunctions obtained by solving the hydrogen atom Schrödinger equation are associated with orbital angular motion and are often called spatial wavefunctions, to differentiate them from the spin wavefunctions. The complete wavefunction for an electron in a hydrogen atom must contain both the spatial and spin components.
• 8.8: Coupling of Angular Momentum and Spectroscopic Term Symbols
The observation of fine structure in atomic hydrogen emission revealed that an orbital energy level diagram does not completely describe the energy levels of atoms. This fine structure also provided key evidence at the time for the existence of electron spin, which was used not only to give a qualitative explanation for the multiplets but also to furnish highly accurate calculations of the multiplet splittings.
• 8.E: The Hydrogen Atom (Exercises)
Exercises for the "Quantum States of Atoms and Molecules" TextMap by Zielinksi et al.
• 8.S: The Hydrogen Atom (Summary)
08: The Hydrogen Atom
The hydrogen atom, consisting of an electron and a proton, is a two-particle system, and the internal motion of two particles around their center of mass is equivalent to the motion of a single particle with a reduced mass. This reduced particle is located at $r$, where $r$ is the vector specifying the position of the electron relative to the position of the proton. The length of $r$ is the distance between the proton and the electron, and the direction of $r$ is given by the orientation of the vector pointing from the proton to the electron. Since the proton is much more massive than the electron, we will assume throughout this chapter that the reduced mass equals the electron mass and the proton is located at the center of mass.
Exercise $1$
1. Assuming the Bohr radius gives the distance between the proton and electron, calculate the distance of the proton from the center of mass, and calculate the distance of the electron from the center of mass.
2. Calculate the reduced mass of the electron-proton system.
3. In view of your calculations in (a) and (b), comment on the validity of a model in which the proton is located at the center of mass and the reduced mass equals the electron mass.
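The magnitudes involved in this approximation can be estimated quickly; the constants below are assumed standard values, and the sketch is a numerical check rather than a substitute for working the exercise symbolically.

```python
m_e = 9.1093837e-31   # electron mass, kg
m_p = 1.6726219e-27   # proton mass, kg
a0_pm = 52.918        # Bohr radius, pm

mu = m_e * m_p / (m_e + m_p)   # reduced mass of the electron-proton system
ratio = mu / m_e               # how close mu is to the electron mass

# Distance of the proton from the center of mass when the separation is a0
r_p_pm = a0_pm * m_e / (m_e + m_p)
# ratio is ~0.9995 and r_p_pm is ~0.03 pm: to good accuracy the proton sits
# at the center of mass and the reduced mass equals the electron mass.
```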
Since the internal motion of any two-particle system can be represented by the motion of a single particle with a reduced mass, the description of the hydrogen atom has much in common with the description of a diatomic molecule that we considered in Chapter 7. The Schrödinger Equation for the hydrogen atom
$\hat {H} (r , \theta , \varphi ) \psi (r , \theta , \varphi ) = E \psi ( r , \theta , \varphi) \label {8-1}$
employs the same kinetic energy operator, $\hat {T}$, written in spherical coordinates as developed in Chapter 7. For the hydrogen atom, however, the distance, r, between the two particles can vary, unlike the diatomic molecule where the bond length was fixed, and the rigid rotor model was used. The hydrogen atom Hamiltonian also contains a potential energy term, $\hat {V}$, to describe the attraction between the proton and the electron. This term is the Coulomb potential energy,
$\hat {V} (r) = - \dfrac {e^2}{4 \pi \epsilon _0 r } \label {8-2}$
where r is the distance between the electron and the proton. The Coulomb potential energy depends inversely on the distance between the electron and the nucleus and does not depend on any angles. Such a potential is called a central potential.
The full expression for $\hat {H}$ in spherical coordinates is
$\hat {H} (r , \theta , \varphi ) = - \dfrac {\hbar ^2}{2 \mu r^2} \left [ \dfrac {\partial}{\partial r} \left (r^2 \dfrac {\partial}{\partial r} \right ) + \dfrac {1}{\sin \theta } \dfrac {\partial}{\partial \theta } \left ( \sin \theta \dfrac {\partial}{\partial \theta} \right ) + \dfrac {1}{\sin ^2 \theta} \dfrac {\partial ^2}{\partial \varphi ^2} \right ] - \dfrac {e^2}{4 \pi \epsilon _0 r } \label {8-3}$
The contributions from rotational and radial components of the motion become clearer if we write out the complete Schrödinger equation,
$\left \{ -\dfrac {\hbar ^2}{2 \mu r^2} \left [ \dfrac {\partial}{\partial r} \left (r^2 \dfrac {\partial}{\partial r} \right ) + \dfrac {1}{\sin \theta } \dfrac {\partial}{\partial \theta } \left ( \sin \theta \dfrac {\partial}{\partial \theta} \right ) + \dfrac {1}{\sin ^2 \theta} \dfrac {\partial ^2}{\partial \varphi ^2} \right ] - \dfrac {e^2}{4 \pi \epsilon _0 r } \right \} \psi (r , \theta , \varphi ) = E \psi (r , \theta , \varphi ) \label {8-4}$
multiply both sides of Equation \ref{8-4} by $-2 \mu r^2$, and rearrange to obtain
$\hbar ^2 \dfrac {\partial}{\partial r} \left ( r^2 \dfrac {\partial}{\partial r} \psi (r , \theta , \varphi ) \right ) + 2 \mu r^2 \left [ E + \dfrac {e^2}{4 \pi \epsilon _0 r } \right ] \psi (r , \theta , \varphi ) =$
$- \hbar^2 \left [ \dfrac {1}{\sin \theta } \dfrac {\partial}{\partial \theta } \left ( \sin \theta \dfrac {\partial}{\partial \theta} \right ) + \dfrac {1}{\sin ^2 \theta} \dfrac {\partial ^2}{\partial \varphi ^2} \right ] \psi (r , \theta , \varphi ) \label {8-5}$
Manipulating the Schrödinger equation in this way helps us recognize the square of the angular momentum operator in Equation \ref{8-5}. The square of the angular momentum operator, which was defined in Chapter 7, is repeated here as Equation \ref{8-6}.
$\hat {M} ^2 = -\hbar ^2 \left [\dfrac {1}{\sin \theta } \dfrac {\partial}{\partial \theta } \left ( \sin \theta \dfrac {\partial}{\partial \theta} \right ) + \dfrac {1}{\sin ^2 \theta} \dfrac {\partial ^2}{\partial \varphi ^2} \right ] \label {8-6}$
Substituting Equation \ref{8-6} into Equation \ref{8-5} produces
$\hbar ^2 \dfrac {\partial}{\partial r } \left ( r^2 \dfrac {\partial}{\partial r} \psi (r , \theta , \varphi ) \right ) + 2 \mu r^2 [ E - \hat {V} ] \psi (r , \theta , \varphi ) = \hat {M} ^2 \psi (r, \theta , \varphi ) \label {8-7}$
Exercise $2$
Show the algebraic steps going from Equation \ref{8-4} to Equation \ref{8-5} and finally to Equation \ref{8-7}. Justify the statement that the rotational and radial motion are separated in Equation \ref{8-7}.
Since the angular momentum operator does not involve the radial variable, $r$, we can separate variables in Equation \ref{8-7} by using a product wavefunction, as we did previously in Chapter 7. From our work on the rigid rotor, Chapter 7, we know that the eigenfunctions of the angular momentum operator are the Spherical Harmonic functions, $Y (\theta , \varphi )$, so a good choice for a product function is
$\psi (r , \theta , \varphi ) = R (r) Y (\theta , \varphi ) \label {8-8}$
The Spherical Harmonic functions provide information about where the electron is located around the proton, and the radial function $R(r)$ describes how far the electron is from the proton.
To separate variables, substitute the product function, Equation \ref{8-8}, into Equation \ref{8-7}, evaluate partial derivatives, divide each side by $R(r) Y (\theta, \varphi )$, and set each side of the resulting equation equal to a constant $\lambda$.
$\dfrac {\hbar ^2}{R (r)} \dfrac {\partial}{\partial r} r^2 \dfrac {\partial}{\partial r} R(r) + \dfrac {2 \mu r^2}{R (r)} [ E - V ] R (r) = \lambda \label {8-9}$
$\dfrac {1}{Y (\theta , \varphi )} \hat {M} ^2 Y (\theta , \varphi ) = \lambda \label {8-10}$
Equations \ref{8-9} and \ref{8-10} represent the radial differential equation and the angular differential equation, respectively. As we describe below, they are solved separately to give the $Y (\theta , \varphi )$ angular functions and the $R(r)$ radial functions.
Exercise $3$
Complete the steps leading from Equation \ref{8-7} to Equation \ref{8-9} and Equation \ref{8-10}.
Rearranging Equation \ref{8-10} yields
$\hat {M} ^2 Y^{m_l}_l (\theta , \varphi ) = \lambda Y^{m_l}_l (\theta , \varphi ) \label {8-11}$
where we have added the indices $l$ and $m_l$ to identify a particular spherical harmonic function. Note that the notation has changed from that used in Chapter 7. It is customary to use $J$ and $m_J$ to represent the angular momentum quantum numbers for rotational states, but for electronic states, it is customary to use $l$ and $m_l$ to represent the same thing. Further, the electronic angular momentum is designated by L and the corresponding operator is called $\hat {L}$. In complete electronic notation, Equation \ref{8-11} is
$\hat {L} ^2 Y^{m_l}_l (\theta , \varphi ) = \lambda Y^{m_l}_l (\theta , \varphi ) \label {8-12}$
Equation \ref{8-12} says that $Y^{m_l}_l (\theta , \varphi )$ must be an eigenfunction of the angular momentum operator $\hat {L} ^2$ with eigenvalue $λ$. We know from the discussion of the Rigid Rotor that the eigenvalue λ is $J(J+1)ħ^2$, or in electronic notation, $l (l + 1) \hbar ^2$. Consequently, Equation \ref{8-12} becomes
$\hat {L} ^2 Y^{m_l}_l (\theta , \varphi ) = l (l + 1) \hbar ^2 Y^{m_l}_l (\theta , \varphi ) \label {8-13}$
Using this value for λ and rearranging Equation \ref{8-9}, we obtain
$- \dfrac {\hbar ^2}{2 \mu r^2} \dfrac {\partial}{\partial r} r^2 \dfrac {\partial}{\partial r} R(r) + \left [ \dfrac {l(l +1) \hbar ^2}{2 \mu r^2} + V (r) - E \right ] R (r) = 0 \label {8-14}$
Exercise $4$
Write the steps leading from Equation \ref{8-9} to Equation \ref{8-14}.
The details for solving Equation \ref{8-14} are provided elsewhere, but the procedure and consequences are similar to previously examined cases. As for the harmonic oscillator, an asymptotic solution (valid at large $r$) is found, and then the complete solutions are written as products of the asymptotic solution and polynomials arising from sequential truncations of a power series.
The asymptotic solution is
$R_{asymptotic} (r) = e^{-\dfrac {r}{n a_0}} \label {8-15}$
where n will turn out to be a quantum number and $a_0$ is the Bohr radius. Note that this function decreases exponentially with distance, in a manner similar to the decaying exponential portion of the harmonic oscillator wavefunctions, but with a different distance dependence, $r$ vs. $r^2$.
Exercise $5$
What happens to the magnitude of $R_{asymptotic}(r)$ as the distance $r$ from the proton approaches infinity? Sketch a graph of the function, $R_{asymptotic}(r)$. Why might this behavior be expected for an electron in a hydrogen atom?
Exercise $6$
Why is $e^{-r/na_0}$ used instead of $e^{+r/na_0}$ as the exponential component of the hydrogen atom radial function?
The polynomials produced by the truncation of the power series are related to the associated Laguerre polynomials, $L_{n,l}(r)$, where the set of $c_i$ are constant coefficients.
$L_{n,l} (r) = \sum _{i=0}^{n-l-1} c_i r^i \label {8-16}$
These polynomials are identified by two indices or quantum numbers, $n$ and $l$. Physically acceptable solutions require that $n$ must be greater than or equal to $l + 1$. The smallest value for $l$ is zero, so the smallest value for $n$ is 1. The angular momentum quantum number affects the solution to the radial equation because it appears in the radial differential equation, Equation \ref{8-14}.
The $R(r)$ functions, Equation \ref{8-17}, that solve the radial differential Equation \ref{8-14}, are products of the associated Laguerre polynomials times the exponential factor, multiplied by a normalization factor $(N_{n,l})$ and $\left (\dfrac {r}{a_0} \right ) ^l$.
$R (r) = N_{n,l} \left ( \dfrac {r}{a_0} \right ) ^l L_{n,l} (r) e^{-\dfrac {r}{n {a_0}}} \label {8-17}$
As in Chapter 6, the decreasing exponential term overpowers the increasing polynomial term so that the overall wavefunction exhibits the desired approach to zero at large values of $r$. The first six radial functions are provided in Table $1$. Note that the functions in the table exhibit a dependence on $Z$, the atomic number of the nucleus. As discussed later in this chapter, other one electron systems have electronic states analogous to those for the hydrogen atom, and inclusion of the charge on the nucleus allows the same wavefunctions to be used for all one-electron systems. For hydrogen, Z = 1.
Table $1$: Radial functions for one-electron atoms and ions. Z is the atomic number of the nucleus, and $\rho = \dfrac {Zr}{a_0}$, where $a_0$ is the Bohr radius and $r$ is the radial variable.
n $l$ $R_{n,l} (\rho)$
1 0 $2 \left (\dfrac {Z}{a_0} \right ) ^{3/2} e^{-\rho}$
2 0 $\dfrac {1}{2 \sqrt {2}}\left (\dfrac {Z}{a_0} \right ) ^{3/2} (2 - \rho) e^{-\rho/2}$
2 1 $\dfrac {1}{2 \sqrt {6}}\left (\dfrac {Z}{a_0} \right ) ^{3/2} \rho e^{-\rho/2}$
3 0 $\dfrac {1}{81 \sqrt {3}}\left (\dfrac {Z}{a_0} \right ) ^{3/2} (27 - 18 \rho + 2\rho ^2) e^{-\rho/3}$
3 1 $\dfrac {1}{81 \sqrt {6}}\left (\dfrac {Z}{a_0} \right ) ^{3/2} (6 \rho - \rho ^2) e^{-\rho/3}$
3 2 $\dfrac {1}{81 \sqrt {30}}\left (\dfrac {Z}{a_0} \right ) ^{3/2} \rho ^2 e^{-\rho/3}$
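The tabulated radial functions are normalized so that $\int_0^\infty R^2 r^2 dr = 1$, and radial functions with the same $l$ but different $n$ are mutually orthogonal. A simple numerical check in atomic units ($Z = 1$, $a_0 = 1$), using a plain trapezoid rule, is sketched below; the function and helper names are our own.

```python
import math

def R10(r):
    """n = 1, l = 0 radial function in atomic units (Z = 1, a0 = 1)."""
    return 2.0 * math.exp(-r)

def R20(r):
    """n = 2, l = 0 radial function in atomic units."""
    return (2.0 - r) * math.exp(-r / 2.0) / (2.0 * math.sqrt(2.0))

def radial_integral(f, g, r_max=40.0, n=40000):
    """Trapezoid-rule approximation to the integral of f(r) g(r) r^2 dr."""
    dr = r_max / n
    total = 0.0
    for i in range(n + 1):
        r = i * dr
        w = 0.5 if i in (0, n) else 1.0
        total += w * f(r) * g(r) * r * r * dr
    return total

norm10 = radial_integral(R10, R10)   # ~1 (normalization)
norm20 = radial_integral(R20, R20)   # ~1 (normalization)
overlap = radial_integral(R10, R20)  # ~0 (orthogonality, same l)
```

The $r^2$ factor in each integral is the radial part of the spherical volume element $d\tau = r^2 \sin\theta \, dr \, d\theta \, d\varphi$.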
The constraint that n be greater than or equal to $l +1$ also turns out to quantize the energy, producing the same quantized expression for hydrogen atom energy levels that was obtained from the Bohr model of the hydrogen atom discussed in Chapter 2.
$E_n = - \dfrac {\mu e^4}{8 \epsilon ^2_0 h^2 n^2}$
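Evaluating this expression with standard values of the constants (and approximating the reduced mass $\mu$ by the electron mass, as discussed in the previous section) reproduces the familiar −13.6 eV ground-state energy and the visible Balmer lines; the constant values below are assumed standard ones.

```python
m_e = 9.1093837e-31    # electron mass, kg (used in place of the reduced mass)
e = 1.60217663e-19     # elementary charge, C
eps0 = 8.8541878e-12   # vacuum permittivity, F/m
h = 6.62607015e-34     # Planck constant, J s
c = 2.99792458e8       # speed of light, m/s

def E_n(n):
    """Energy of hydrogen-atom level n, in joules."""
    return -m_e * e**4 / (8.0 * eps0**2 * h**2 * n**2)

E1_eV = E_n(1) / e            # ground-state energy, ~ -13.6 eV
dE = E_n(3) - E_n(2)          # n = 3 -> 2 (Balmer H-alpha) transition energy
lam_nm = h * c / dE * 1e9     # corresponding wavelength, ~656 nm (red)
```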
It is interesting to compare the results obtained by solving the Schrödinger equation with Bohr’s model of the hydrogen atom. There are several ways in which the Schrödinger model and Bohr model differ. First, and perhaps most strikingly, the Schrödinger model does not produce well-defined orbits for the electron. The wavefunctions only give us the probability for the electron to be at various directions and distances from the proton. Second, the quantization of angular momentum is different from that proposed by Bohr. Bohr proposed that the angular momentum is quantized in integer units of $\hbar$, while the Schrödinger model leads to an angular momentum of $\sqrt{(l (l +1)} \hbar$. Third, the quantum numbers appear naturally during solution of the Schrödinger equation while Bohr had to postulate the existence of quantized energy states. Although more complex, the Schrödinger model leads to a better correspondence between theory and experiment over a range of applications that was not possible for the Bohr model.
Exercise $7$
Explain how the Schrödinger equation leads to the conclusion that the angular momentum of the hydrogen atom can be zero, and explain how the existence of such states with zero angular momentum contradicts Bohr's idea that the electron is orbiting around the proton in the hydrogen atom.
The solutions to the hydrogen atom Schrödinger equation are functions that are products of a spherical harmonic function and a radial function.
$\psi _{n, l, m_l } (r, \theta , \varphi) = R_{n,l} (r) Y^{m_l}_l (\theta , \varphi) \label {8-20}$
The wavefunctions for the hydrogen atom depend upon the three variables r, $\theta$, and $\varphi$ and the three quantum numbers n, $l$, and $m_l$. The variables give the position of the electron relative to the proton in spherical coordinates. The absolute square of the wavefunction, $| \psi (r, \theta , \varphi )|^2$, evaluated at $r$, $\theta$, and $\varphi$ gives the probability density of finding the electron inside a differential volume $d \tau$, centered at the position specified by r, $\theta$, and $\varphi$.
Exercise $1$
What is the value of the integral
$\int \limits _{\text{all space}} | \psi (r, \theta , \varphi )|^2 d \tau \, ? \nonumber$
The quantum numbers have names: $n$ is called the principal quantum number, $l$ is called the angular momentum quantum number, and $m_l$ is called the magnetic quantum number because (as we will see in Section 8.4) the energy in a magnetic field depends upon $m_l$. Often $l$ is called the azimuthal quantum number because it is a consequence of the $\theta$-equation, which involves the angle $\theta$ measured from the zenith.
These quantum numbers have specific values that are dictated by the physical constraints or boundary conditions imposed upon the Schrödinger equation: $n$ must be an integer greater than 0, $l$ can have the values 0 to $n - 1$, and $m_l$ can have $2l + 1$ values ranging from $-l$ to $+l$ in integer steps. The values of the quantum number $l$ usually are coded by a letter: s means 0, p means 1, d means 2, f means 3; the next codes continue alphabetically (e.g., g means $l = 4$). The quantum numbers specify the quantization of physical quantities. The discrete energies of different states of the hydrogen atom are given by $n$, the magnitude of the angular momentum is given by $l$, and one component of the angular momentum (usually chosen by chemists to be the z-component) is given by $m_l$. The total number of orbitals with a particular value of $n$ is $n^2$.
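These counting rules can be verified by direct enumeration; a minimal sketch:

```python
def orbitals(n):
    """All allowed (l, m_l) pairs for a given principal quantum number n."""
    return [(l, m) for l in range(n) for m in range(-l, l + 1)]

counts = [len(orbitals(n)) for n in range(1, 6)]
# counts == [1, 4, 9, 16, 25]: there are n**2 orbitals for each value of n
```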
Exercise $2$
Consider several values for n, and show that the number of orbitals for each n is $n^2$.
Exercise $3$
Construct a table summarizing the allowed values for the quantum numbers $n$, $l$, and $m_l$ for energy levels 1 through 7 of hydrogen.
Exercise $4$
The notation 3d specifies the quantum numbers for an electron in the hydrogen atom. What are the values for n and $l$ ? What are the values for the energy and angular momentum? What are the possible values for the magnetic quantum number? What are the possible orientations for the angular momentum vector?
The hydrogen atom wavefunctions, $\psi (r, \theta , \varphi )$, are called atomic orbitals. An atomic orbital is a function that describes one electron in an atom. The wavefunction with $n = 1$, $l=0$, and $m_l = 0$ is called the 1s orbital, and an electron that is described by this function is said to be “in” the 1s orbital, i.e. to have a 1s orbital state. The constraints on $n$, $l$, and $m_l$ that are imposed during the solution of the hydrogen atom Schrödinger equation explain why there is a single 1s orbital, why there are three 2p orbitals, five 3d orbitals, etc. We will see when we consider multi-electron atoms in Chapter 9 that these constraints explain the features of the Periodic Table. In other words, the Periodic Table is a manifestation of the Schrödinger model and the physical constraints imposed to obtain the solutions to the Schrödinger equation for the hydrogen atom.
Visualizing the variation of an electronic wavefunction with $r$, $\theta$, and $\varphi$ is important because the absolute square of the wavefunction depicts the charge distribution (electron probability density) in an atom or molecule. The charge distribution is central to chemistry because it is related to chemical reactivity. For example, an electron deficient part of one molecule is attracted to an electron rich region of another molecule, and such interactions play a major role in chemical interactions ranging from substitution and addition reactions to protein folding and the interaction of substrates with enzymes.
Visualizing wavefunctions and charge distributions is challenging because it requires examining the behavior of a function of three variables in three-dimensional space. This visualization is made easier by considering the radial and angular parts separately, but plotting the radial and angular parts separately does not reveal the shape of an orbital very well. The shape can be revealed better in a probability density plot. To make such a three-dimensional plot, divide space up into small volume elements, calculate $\psi^* \psi$ at the center of each volume element, and then shade, stipple or color that volume element in proportion to the magnitude of $\psi^* \psi$. Do not confuse such plots with polar plots, which look similar.
Probability densities also can be represented by contour maps, as shown in Figure $1$.
Another representational technique, virtual reality modeling, holds a great deal of promise for representation of electron densities. Imagine, for instance, being able to experience electron density as a force or resistance on a wand that you move through three-dimensional space. Devices such as these, called haptic devices, already exist and are being used to represent scientific information. Similarly, wouldn’t it be interesting to “fly” through an atomic orbital and experience changes in electron density as color changes or cloudiness changes? Specially designed rooms with 3D screens and “smart” glasses that provide feedback about the direction of the viewer’s gaze are currently being developed to allow us to experience such sensations.
Methods for separately examining the radial portions of atomic orbitals provide useful information about the distribution of charge density within the orbitals. Graphs of the radial functions, $R(r)$, for the 1s, 2s, and 2p orbitals are plotted in Figure $2$.
The 1s function in Figure $2$ starts with a high positive value at the nucleus and exponentially decays to essentially zero after 5 Bohr radii. The high value at the nucleus may be surprising, but as we shall see later, the probability of finding an electron at the nucleus is vanishingly small.
Next notice how the radial function for the 2s orbital, Figure $2$, goes to zero and becomes negative. This behavior reveals the presence of a radial node in the function. A radial node occurs when the radial function equals zero other than at $r = 0$ or $r = ∞$. Nodes and limiting behaviors of atomic orbital functions are both useful in identifying which orbital is being described by which wavefunction. For example, all of the s functions have non-zero wavefunction values at $r = 0$, but p, d, f and all other functions go to zero at the origin. It is useful to remember that there are $n-1-l$ radial nodes in a wavefunction, which means that a 1s orbital has no radial nodes, a 2s has one radial node, and so on.
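The location of the 2s radial node can be found numerically from the standard functional form of the 2s radial function, $R_{2s}(r) \propto (2 - r/a_0)\,e^{-r/2a_0}$ (the normalization constant is omitted here because it does not affect where the function crosses zero); a short sketch:

```python
import math

A0 = 1.0  # work in units of the Bohr radius a0

def R_2s(r):
    """Radial part of the 2s wavefunction (normalization constant omitted)."""
    return (2.0 - r / A0) * math.exp(-r / (2.0 * A0))

# Bracket the sign change and bisect to locate the radial node.
lo, hi = 0.1, 5.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if R_2s(lo) * R_2s(mid) <= 0.0:
        hi = mid
    else:
        lo = mid

print(round(0.5 * (lo + hi), 6))  # → 2.0: the 2s node sits at r = 2 a0
```

The node at $r = 2a_0$ is also evident by inspection, since the polynomial factor $(2 - r/a_0)$ vanishes there; this is the single radial node predicted by the $n-l-1$ rule for $n=2$, $l=0$.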
Exercise $5$
Examine the mathematical forms of the radial wavefunctions. What feature in the functions causes some of them to go to zero at the origin while the s functions do not go to zero at the origin?
Exercise $6$
What mathematical feature of each of the radial functions controls the number of radial nodes?
Exercise $7$
At what value of r does the 2s radial node occur?
Exercise $8$
Make a table that provides the energy, number of radial nodes, and the number of angular nodes and total number of nodes for each function with n = 1, 2, and 3. Identify the relationship between the energy and the number of nodes. Identify the relationship between the number of radial nodes and the number of angular nodes.
The quantity $R (r) ^* R(r)$ gives the radial probability density; i.e., the probability density for the electron to be at a point located the distance $r$ from the proton. Radial probability densities for three types of atomic orbitals are plotted in Figure $3$.
When the radial probability density for every value of r is multiplied by the area of the spherical surface represented by that particular value of r, we get the radial distribution function. The radial distribution function gives the probability density for an electron to be found anywhere on the surface of a sphere located a distance r from the proton. Since the area of a spherical surface is $4 \pi r^2$, the radial distribution function is given by $4 \pi r^2 R(r) ^* R(r)$.
Radial distribution functions are shown in Figure $4$. At small values of r, the radial distribution function is low because the small surface area for small radii modulates the high value of the radial probability density function near the nucleus. As we increase $r$, the surface area associated with a given value of r increases, and the $r^2$ term causes the radial distribution function to increase even though the radial probability density is beginning to decrease. At large values of $r$, the exponential decay of the radial function outweighs the increase caused by the $r^2$ term and the radial distribution function decreases.
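The competition between the $r^2$ surface-area factor and the exponential decay described above can be seen numerically. A minimal sketch using the 1s radial distribution function $4\pi r^2 |R_{1s}|^2 \propto r^2 e^{-2r/a_0}$ (names and the crude grid search are ours):

```python
import math

A0 = 1.0  # Bohr radius taken as the unit of length

def radial_distribution_1s(r):
    """Radial distribution function 4*pi*r**2 |R_1s(r)|**2, up to a constant."""
    return r * r * math.exp(-2.0 * r / A0)

# Crude grid search for the maximum of the radial distribution function.
rs = [i * 1e-4 for i in range(1, 100000)]
r_max = max(rs, key=radial_distribution_1s)
print(round(r_max, 3))  # → 1.0: the most probable distance is one Bohr radius
```

The maximum at $r = a_0$ reproduces the familiar result that the most probable electron-proton distance in the 1s state is one Bohr radius, even though the radial probability density itself is largest at the nucleus.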
Exercise $9$
Write a qualitative comparison of the radial function and radial distribution function for the 2s orbital. See Figure $5$.
The orbital energy eigenvalues obtained by solving the hydrogen atom Schrödinger equation are given by
$E_n = -\dfrac {\mu e^4}{8 \epsilon ^2_0 h^2 n^2} \label {8.3.1}$
where $\mu$ is the reduced mass of the proton and electron, $n$ is the principal quantum number and e, $\epsilon _0$ and h are the usual fundamental constants. The energy is negative and approaches zero as the quantum number n approaches infinity. Because the hydrogen atom is used as a foundation for multi-electron systems, it is useful to remember the total energy (binding energy) of the ground state hydrogen atom, $E_H = -13.6\; eV$. The spacing between electronic energy levels for small values of $n$ is very large while the spacing between higher energy levels gets smaller very rapidly. This energy level spacing is a result of the form of the Coulomb potential, and can be understood in terms of the particle in a box model. We saw that as the potential box gets wider, the energy level spacing gets smaller. Similarly in the hydrogen atom as the energy increases, the Coulomb well gets wider and the energy level spacing gets smaller.
The line spectra produced by hydrogen atoms are a consequence of the quantum mechanical energy level expression, Equation \ref{8.3.1}. In Chapter 1 we saw the excellent match between the experimental and calculated spectral lines for the hydrogen atom using the Bohr expression for the energy, which is identical to Equation $\ref{8.3.1}$.
Exercise $1$
Using Equation $\ref{8.3.1}$ and a spreadsheet program or other software of your choice, calculate the energies for the lowest 100 energy levels of the hydrogen atom. Also calculate the differences in energy between successive levels. Do the results from these calculations confirm that the energy levels rapidly get closer together as the principal quantum number $n$ increases? What happens to the energy level spacing as the principal quantum number approaches infinity?
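A spreadsheet is one option; the same calculation can be sketched in a few lines of Python, using the eV form of Equation \ref{8.3.1} (the rounded ground-state energy and the variable names are ours):

```python
E1 = -13.6057  # hydrogen ground-state energy in eV (rounded)

def energy(n):
    """Orbital energy E_n = E1 / n**2 in eV, the eV form of Equation 8.3.1."""
    return E1 / n**2

levels = [energy(n) for n in range(1, 101)]
gaps = [levels[i + 1] - levels[i] for i in range(len(levels) - 1)]

print(round(gaps[0], 4))  # E2 - E1 = 10.2043 eV
print(f"{gaps[-1]:.2e}")  # E100 - E99: a few times 1e-5 eV
assert all(gaps[i + 1] < gaps[i] for i in range(len(gaps) - 1))  # spacings shrink monotonically
```

The first gap is over 10 eV while the gap near $n = 100$ is five orders of magnitude smaller, confirming that the levels crowd together and the spacing goes to zero as $n \rightarrow \infty$.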
The solution of the Schrödinger equation for the hydrogen atom predicts that energy levels with $n > 1$ can have several orbitals with the same energy. In fact, as the energy and n increase, the degeneracy of the orbital energy level increases as well. The number of orbitals with a particular energy and value for $n$ is given by $n^2$. Thus, each orbital energy level is predicted to be $n^2$-fold degenerate. This high degree of orbital degeneracy is predicted only for one-electron systems. For multi-electron atoms, the electron-electron repulsion removes the $l$ degeneracy, so within a given $n$ only orbitals that share the same $l$ quantum number (i.e., differ only in $m_l$) remain degenerate.
Exercise $2$
Use Equation \ref{8.3.1} or the data you generated in Exercise $1$ to draw an energy level diagram to scale for the hydrogen atom showing the first three energy levels and their degeneracy. Indicate on your diagram the transition leading to ionization of the hydrogen atom and the numerical value of the energy required for ionization, in eV, atomic units and kJ/mol.
To understand the hydrogen atom spectrum, we also need to determine which transitions are allowed and which transitions are forbidden. This issue is addressed next by using selection rules that are obtained from the transition moment integral. In previous chapters we determined selection rules for the particle in a box, the harmonic oscillator, and the rigid rotor. Now we will apply those same principles to the hydrogen atom case by starting with the transition moment integral.
The transition moment integral for a transition between an initial ($i$) state and a final ($f$) state of a hydrogen atom is given by
$\left \langle \mu _T \right \rangle = \int \psi ^* _{n_f, l_f, m_{l_f}} (r, \theta , \varphi ) \hat {\mu} \psi _{n_i, l_i, m_{l_i}} (r, \theta , \varphi ) d \tau \label {8.3.2a}$
or in bra ket notation
$\left \langle \mu _T \right \rangle = \langle \psi ^*_{n_f, l_f, m_{l_f}} | \hat {\mu} | \psi _{n_i, l_i, m_{l_i}} \rangle \label{8.3.2b}$
where the dipole moment operator is given by
$\hat {\mu} = - e \hat {r} \label {8.3.3}$
The dipole moment operator expressed in spherical coordinates is
$\hat {\mu} = -er (\bar {x} \sin \theta \cos \varphi + \bar {y} \sin \theta \sin \varphi + \bar {z} \cos \theta ) \label {8.3.4}$
The sum of terms on the right hand side of Equation $\ref{8.3.4}$ shows that there are three components of $\left \langle \mu _T \right \rangle$ to evaluate in Equation $\ref{8.3.2a}$, where each component consists of three integrals: an $r$ integral, a $\theta$ integral, and a $\varphi$ integral.
Evaluation reveals that the $r$ integral always differs from zero so
$\Delta n = n_f - n_i = \text {not restricted} \label {8.3.5}$
There is no restriction on the change in the principal quantum number during a spectroscopic transition; $\Delta n$ can be anything. For absorption, $\Delta n > 0$, for emission $\Delta n < 0$, and $\Delta n = 0$ when the orbital degeneracy is removed by an external field or some other interaction.
The selection rules for $\Delta l$ and $\Delta m_l$ come from the transition moment integrals involving $\theta$ and $\varphi$ in Equation $\ref{8.3.2a}$. These integrals are the same ones that were evaluated for the rotational selection rules, and the resulting selection rules are
$\Delta l = \pm 1$
and
$\Delta m_l = 0, \pm 1 \label {8.3.6}$
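The selection rules above can be packaged into a small predicate for checking whether a given hydrogen-atom transition is electric-dipole allowed; a minimal sketch (function and argument names are ours):

```python
def transition_allowed(n_i, l_i, ml_i, n_f, l_f, ml_f):
    """Hydrogen-atom electric-dipole selection rules:
    delta-n unrestricted, delta-l = +/-1, delta-m_l in {-1, 0, +1}."""
    return abs(l_f - l_i) == 1 and (ml_f - ml_i) in (-1, 0, 1)

print(transition_allowed(2, 1, 0, 1, 0, 0))  # 2p -> 1s: True
print(transition_allowed(2, 0, 0, 1, 0, 0))  # 2s -> 1s: False (delta-l = 0)
```

Note that the principal quantum numbers are accepted but never tested, reflecting the fact that $\Delta n$ is unrestricted; it is $\Delta l$ and $\Delta m_l$ that decide whether the transition moment integral vanishes.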
Exercise $3$
Write the spectroscopic selection rules for the rigid rotor and for the hydrogen atom. Why are these selection rules the same?
Magnetism results from the circular motion of charged particles. This property is demonstrated on a macroscopic scale by making an electromagnet from a coil of wire and a battery. Electrons moving through the coil produce a magnetic field (Figure $1$), which can be thought of as originating from a magnetic dipole or a bar magnet.
Electrons in atoms also are moving charges with angular momentum so they too produce a magnetic dipole, which is why some materials are magnetic. A magnetic dipole interacts with an applied magnetic field, and the energy of this interaction is given by the scalar product of the magnetic dipole moment, and the magnetic field, $\vec{B}$.
$E_B = - \vec{\mu} _m \cdot \vec{B} \label {8.4.1}$
Magnets are acted on by forces and torques when placed within an external applied magnetic field (Figure $2$). In a uniform external field, a magnet experiences no net force, but a net torque. The torque tries to align the magnetic moment ($\vec{\mu} _m$) of the magnet with the external field $\vec{B}$. The magnetic moment of a magnet points from its south pole to its north pole.
In a non-uniform magnetic field a current loop, and therefore a magnet, experiences a net force, which tries to pull an aligned dipole into regions where the magnitude of the magnetic field is larger and push an anti-aligned dipole into regions where magnitude the magnetic field is smaller.
Quantum Effects
As expected, the quantum picture is different. Pieter Zeeman was one of the first to observe the splittings of spectral lines in a magnetic field caused by this interaction. Consequently, such splittings are known as the Zeeman effect. Let’s now use our current knowledge to predict what the Zeeman effect for the 2p to 1s transition in hydrogen would look like, and then compare this prediction with a more complete theory. To understand the Zeeman effect, which uses a magnetic field to remove the degeneracy of different angular momentum states, we need to examine how an electron in a hydrogen atom interacts with an external magnetic field, $\vec{B}$. Since magnetism results from the circular motion of charged particles, we should look for a relationship between the angular momentum $\vec{L}$ and the magnetic dipole moment $\vec{\mu} _m$.
The relationship between the magnetic dipole moment $\vec{\mu} _m$ (also referred to simply as the magnetic moment) and the angular momentum $\vec{L}$ of a particle with mass m and charge $q$ is given by
$\vec{\mu} _m = \dfrac {q}{2m} \vec{L} \label {8.4.2}$
For an electron, this equation becomes
$\vec{\mu} _m = - \dfrac {e}{2m_e} \vec{L} \label {8.4.3}$
where the specific charge and mass of the electron have been substituted for $q$ and $m$. The magnetic moment for the electron is a vector pointing in the direction opposite to $\vec{L}$, both of which classically are perpendicular to the plane of the rotational motion.
Exercise $1$
Will an electron in the ground state of hydrogen have a magnetic moment? Why or why not?
The relationship between the angular momentum of a particle and its magnetic moment is commonly expressed as a ratio, called the gyromagnetic ratio, $\gamma$. Gyro is Greek for turn so gyromagnetic simply relates turning (angular momentum) to magnetism. Now you also know why the Greek sandwiches made with meat cut from a spit turning over a fire are called gyros.
$\gamma = \dfrac {\mu _m}{L} = \dfrac {q}{2m} \label {8.4.4}$
In the specific case of an electron,
$\gamma _e = - \dfrac {e}{2m_e} \label {8.4.5}$
Exercise $2$
Calculate the magnitude of the gyromagnetic ratio for an electron.
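The magnitude asked for in Exercise $2$ follows directly from Equation \ref{8.4.5}; a minimal Python sketch using CODATA-rounded constants (variable names are ours):

```python
e = 1.602176634e-19     # elementary charge, C
m_e = 9.1093837015e-31  # electron rest mass, kg

# Magnitude of the electron's orbital gyromagnetic ratio, Equation 8.4.5.
gamma_e = e / (2.0 * m_e)
print(f"{gamma_e:.4e} C/kg")  # → 8.7941e+10 C/kg (equivalently rad s^-1 T^-1)
```

Multiplying this ratio by $\hbar$ gives the Bohr magneton introduced later in this section.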
To determine the energy of a hydrogen atom in a magnetic field we need to include the operator form of the hydrogen atom Hamiltonian. The Hamiltonian always consists of all the energy terms that are relevant to the problem at hand.
$\hat {H} = \hat {H} ^0 + \hat {H} _m \label {8.4.6}$
where $\hat {H} ^0$ is the Hamiltonian operator in the absence of the field and $\hat {H} _m$ is written using the operator forms of Equations $\ref{8.4.1}$ and $\ref{8.4.3}$),
$\hat {H}_m = - \hat {\mu} _m \cdot \vec{B} = \dfrac {e}{2m_e} \hat {L} \cdot \vec{B} \label {8.4.7}$
The scalar product
$\hat {L} \cdot \vec{B} = \hat {L}_x B_x + \hat {L}_y B_y + \hat {L}_z B_z \label {8.4.8}$
simplifies if the z-axis is defined as the direction of the external field because then $B_x$ and $B_y$ are automatically 0, and Equation \ref{8.4.6} becomes
$\hat {H} = \hat {H}^0 + \dfrac {eB_z}{2m_e} \hat {L} _z \label {8.4.9}$
where $B_z$ is the magnitude of the magnetic field, which is along the z-axis.
We now can ask, “What is the effect of a magnetic field on the energy of the hydrogen atom orbitals?” To answer this question, we will not solve the Schrödinger equation again; we simply calculate the expectation value of the energy, $\left \langle E \right \rangle$, using the existing hydrogen atom wavefunctions and the new Hamiltonian operator.
$\left \langle E \right \rangle = \left \langle \hat {H}^0 \right \rangle + \dfrac {eB_z}{2m_e} \left \langle \hat {L} _z \right \rangle \label {8.4.10}$
where
$\left \langle \hat {H}^0 \right \rangle = \int \psi ^*_{n,l,m_l} \hat {H}^0 \psi _{n,l,m_l} d \tau = E_n \label {8.4.11}$
and
$\left \langle \hat {L}_z \right \rangle = \int \psi ^*_{n,l,m_l} \hat {L}_z \psi _{n,l,m_l} d \tau = m_l \hbar \label {8.4.12}$
Exercise $3$
Show that the expectation value $\left \langle \hat {L}_z \right \rangle = m_l \hbar$.
The expectation value approach provides an exact result in this case because the hydrogen atom wavefunctions are eigenfunctions of both $\hat {H} ^0$ and $\hat {L}_z$. If the wavefunctions were not eigenfunctions of the operator associated with the magnetic field, then this approach would provide a first-order estimate of the energy. First and higher order estimates of the energy are part of a general approach to developing approximate solutions to the Schrödinger equation. This approach, called perturbation theory, is discussed in the next chapter.
The expectation value calculated for the total energy in this case is the sum of the energy in the absence of the field, $E_n$, plus the Zeeman energy, $\dfrac {e \hbar B_z m_l}{2m_e}$:
\begin{align} \left \langle E \right \rangle &= E_n + \dfrac {e \hbar B_z m_l}{2m_e} \[4pt] &= E_n + \mu _B B_z m_l \label {8.4.13} \end{align}
The factor
$\dfrac {e \hbar}{2m_e} = - \gamma _e \hbar = \mu _B \label {8.4.14}$
defines the constant $\mu _B$, called the Bohr magneton, which is taken to be the fundamental magnetic moment. Its value is $9.2732 \times 10^{-21}$ erg/Gauss or $9.2732 \times 10^{-24}$ Joule/Tesla. This factor will help you to relate magnetic fields, measured in Gauss or Tesla, to energies, measured in ergs or Joules, for any particle with the same charge and mass as an electron.
Equation \ref{8.4.13} shows that the $m_l$ quantum number degeneracy of the hydrogen atom is removed by the magnetic field. For example, the three states $\psi _{211}$, $\psi _{21-1}$, and $\psi _{210}$, which are degenerate in zero field, have different energies in a magnetic field, as shown in Figure $3$.
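The size of the splitting in Equation \ref{8.4.13} is easy to evaluate for a representative field; a minimal sketch for the three 2p states at 1 tesla (constants rounded, names ours):

```python
MU_B = 9.2740e-24  # Bohr magneton, J/T
EV = 1.602177e-19  # joules per electron volt
HC = 1.98645e-23   # h*c in J*cm, for converting energies to cm^-1

B = 1.0  # external field, tesla
for m_l in (-1, 0, +1):
    shift = MU_B * B * m_l  # Zeeman term mu_B * B_z * m_l from Equation 8.4.13
    print(f"m_l={m_l:+d}: {shift:.3e} J = {shift/EV:.3e} eV = {shift/HC:.3f} cm^-1")
```

At 1 T the $m_l = \pm 1$ states shift by only about $\pm 0.47$ cm$^{-1}$ (roughly $6 \times 10^{-5}$ eV), tiny compared with the electronic transition energies but easily resolved spectroscopically.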
The $m_l = 0$ state, for which the component of angular momentum and hence also the magnetic moment in the external field direction is zero, experiences no interaction with the magnetic field. The $m_l = +1$ state, for which the angular momentum in the z-direction is +ħ and the magnetic moment is in the opposite direction, against the field, experiences a raising of energy in the presence of a field. Maintaining the magnetic dipole against the external field direction is like holding a small bar magnet with its poles aligned exactly opposite to the poles of a large magnet (Figure $5$). It is a higher energy situation than when the magnetic moments are aligned with each other.
Exercise $4$
Carry out the steps going from Equation $\ref{8.4.10}$ to Equation $\ref{8.4.13}$.
Exercise $5$
Consider the effect of changing the magnetic field on the magnitude of the Zeeman splitting. Sketch a diagram with the magnetic field strength on the x-axis and the energy of the three 2p orbitals on the y-axis to show the trend in splitting magnitudes with increasing magnetic field. Be quantitative: calculate and plot the exact numerical values using a software package of your choice.
Exercise $6$
Based on your calculations in Exercise $2$ sketch a luminescence spectrum for the hydrogen atom in the n = 2 level in a magnetic field of 1 Tesla. Provide the numerical value for each of the transition energies. Use cm$^{-1}$ or electron volts for the energy units.
Imagine doing a hypothetical experiment that would lead to the discovery of electron spin. Your laboratory has just purchased a microwave spectrometer with variable magnetic field capacity. We try the new instrument with hydrogen atoms using a magnetic field of $10^4$ Gauss and look for the absorption of microwave radiation as we scan the frequency of our microwave generator.
Finally we see absorption at a microwave photon frequency of $28 \times 10^9\, Hz$ (28 gigahertz). This result is really surprising from several perspectives. Each hydrogen atom is in its ground state, with the electron in a 1s orbital. The lowest energy electronic transition that we predict based on existing theory (the transition from the ground state $\psi _{100}$ to $\psi _{21m}$) requires an energy that lies in the vacuum ultraviolet, not the microwave, region of the spectrum. Furthermore, when we vary the magnetic field we note that the frequency at which the absorption occurs varies in proportion to the magnetic field. This effect looks like a Zeeman effect, but if you think about the situation, even if the 1s orbital were doubly degenerate, a $1s$ orbital still has zero orbital angular momentum, no magnetic moment, and therefore no predicted Zeeman effect!
To discover new things, experimentalists sometimes must explore new areas in spite of contrary theoretical predictions. Our theory of the hydrogen atom at this point gives no reason to look for absorption in the microwave region of the spectrum. By doing this crazy experiment, we discovered that when an electron is in the $1s$ orbital of the hydrogen atom, there are two different states that have the same energy. When a magnetic field is applied, this degeneracy is removed, and microwave radiation can cause transitions between the two states. In the rest of this section, we see what can be deduced from this experimental observation. This experiment actually could be done with electron spin resonance spectrometers available today.
In order to explain our observations, we need a new idea, a new model for the hydrogen atom. Our original model for the hydrogen atom accounted for the motion of the electron and proton in our three-dimensional world; the new model needs something else that can give rise to an additional Zeeman-like effect. We need a charged particle with angular momentum to produce a magnetic moment, just like that obtained by the orbital motion of the electron. We can postulate that our observation results from a motion of the electron that wasn’t considered in the last section - electron spin. We have a charged particle spinning on its axis. We then have charge moving in a circle, angular momentum, and a magnetic moment, which interacts with the magnetic field and gives us the Zeeman-like effect that we observed.
To describe electron spin from a quantum mechanical perspective, we must have spin wavefunctions and spin operators. The properties of the spin states are deduced from experimental observations and by analogy with our treatment of the states arising from the orbital angular momentum of the electron.
The important feature of the spinning electron is the spin angular momentum vector, which we label $S$ by analogy with the orbital angular momentum $L$. We define spin angular momentum operators with the same properties that we found for the rotational and orbital angular momentum operators. After all, angular momentum is angular momentum.
We found that (in Bra-ket notation)
$\hat {L}^2 | Y^{m_l} _l \rangle= l(l + 1) \hbar^2 | Y^{m_l}_l \rangle \label {8-41}$
so by analogy for the spin states, we must have
$\hat {S}^2 | \sigma ^{m_s} _s \rangle = s( s + 1) \hbar ^2 | \sigma ^{m_s}_s \rangle \label {8-42}$
where $\sigma$ is a spin wavefunction with quantum numbers $s$ and $m_s$ that obey the same rules as the quantum numbers $l$ and $m_l$ associated with the spherical harmonic wavefunction $Y$. We also found
$\hat {L}_z | Y^{m_l}_l \rangle = m_l \hbar | Y^{m_l}_l \rangle \label {8-43}$
so by analogy, we must have
$\hat {S}_z | \sigma ^{m_s}_s \rangle = m_s \hbar | \sigma ^{m_s}_s \rangle\label {8-44}$
Since $m_l$ ranges in integer steps from $-l$ to $+l$, also by analogy $m_s$ ranges in integer steps from $-s$ to $+s$. In our hypothetical experiment, we observed one absorption transition, which means there are two spin states. Consequently, the two values of $m_s$ must be $+s$ and $-s$, and the difference in $m_s$ for the two states, labeled f and i below, must be the smallest integer step, i.e. 1. The result of this logic is that
\begin{align} m_{s,f} - m_{s,i} &= 1 \nonumber \[4pt] (+s) - (-s) &= 1 \nonumber \[4pt] 2s &= 1 \nonumber \[4pt] s &= \dfrac {1}{2} \end{align} \label {8-45}
Therefore our conclusion is that the magnitude of the spin quantum number is 1/2 and the values for $m_s$ are +1/2 and -1/2. The two spin states correspond to spinning clockwise and counter-clockwise with positive and negative projections of the spin angular momentum onto the z-axis. The state with a positive projection, $m_s = +1/2$, is called $\alpha$; the other is called $\beta$. These spin states are arbitrarily labeled $\alpha$ and $\beta$, and the associated spin wavefunctions also are designated by $\alpha$ and $\beta$.
From Equation \ref{8-44} the magnitude of the z-component of spin angular momentum, $S_z$, is given by
$S_z = m_s \hbar \label {8-46}$
so the value of $S_z$ is $+ħ/2$ for spin state $\alpha$ and $-ħ/2$ for spin state $\beta$. Using the same line of reasoning we used for the splitting of the $m_l$ states in Section 8.4, we conclude that the $\alpha$ spin state, where the magnetic moment is aligned against the external field direction, has a higher energy than the $\beta$ spin state.
Even though we don’t know their functional forms, the spin wavefunctions are taken to be normalized and orthogonal to each other.
$\int \alpha ^* \alpha d \tau _s = \int \beta ^* \beta d \tau _s = 1 \label {8-47}$
and
$\int \alpha ^* \beta d \tau _s = \int \beta ^* \alpha d \tau _s = 0 \label {8-48}$
where the integral is over the spin variable $\tau _s$.
Now let's apply these deductions to the experimental observations in our hypothetical microwave experiment. We can account for the frequency of the transition ($\nu$= 28 gigahertz) that was observed in this hypothetical experiment in terms of the magnetic moment of the spinning electron and the strength of the magnetic field. The photon energy, $h \nu$, is given by the difference between the energies of the two states, $E_{\alpha}$ and $E_{\beta}$
$\Delta E = h \nu = E_{\alpha} - E_{\beta} \label {8-49}$
The energies of these two states consist of the sum of the energy of an electron in a 1s orbital, $E_{1s}$, and the energy due to the interaction of the spin magnetic dipole moment of the electron, $\mu _s$, with the magnetic field, B (as in Section 8.4). The two states with distinct values for spin magnetic moment $\mu _s$ are denoted by the subscripts $\alpha$ and $\beta$.
$E_{\alpha} = E_{1s} - \mu _{s,\alpha} \cdot B \label {8-50}$
$E_{\beta} = E_{1s} - \mu _{s,\beta} \cdot B \label {8-51}$
Substituting the two equations above into the expression for the photon energy gives
\begin{align} h \nu &= E_{\alpha} - E_{\beta} \[4pt] &= (E_{1s} - \mu _{s, \alpha} \cdot B) - (E_{1s} - \mu_{s,\beta} \cdot B) \label {8-52} \[4pt] &= ( \mu _{s, \beta} - \mu _{s, \alpha}) \cdot B \label{8-53} \end{align}
Again by analogy with the orbital angular momentum and magnetic moment discussed in Section 8.4, we take the spin magnetic dipole of each spin state, $\mu _{s, \alpha}$ and $\mu _{s, \beta}$, to be related to the total spin angular momentum of each state, $S_{\alpha}$ and $S_{\beta}$, by a constant spin gyromagnetic ratio, $\gamma _s$, as shown below.
$\mu _s = \gamma _s S$
$\mu _{s, \alpha} = \gamma _s S_\alpha$
$\mu _{s, \beta} = \gamma _s S_\beta \label {8-54}$
With the magnetic field direction defined as z, the scalar product in Equation \ref{8-53} becomes a product of the z-components of the spin angular momenta, $S_{z, \alpha}$ and $S_{z, \beta}$, with the external magnetic field.
Inserting the values $S_{z,\alpha} = +\dfrac {1}{2} \hbar$ and $S_{z, \beta} = -\dfrac {1}{2} \hbar$ from Equation \ref{8-46} and rearranging Equation \ref{8-53} yields
$\dfrac {h \nu}{B} = - \gamma _s \hbar \label {8-56}$
Calculating the ratio $\dfrac {h \nu}{B}$ from our experimental results, $\nu = 28 \times 10^9\, Hz$ when $B = 10^4\, gauss$, gives us a value for
$- \gamma_s \hbar = 18.5464 \times 10^{-21}\, erg/gauss.$
This value is about twice the Bohr magneton, $-\gamma _e \hbar$; in fact $\gamma _s \hbar = 2.0023\, \gamma _e \hbar$, or
$\gamma _s = 2.0023 \gamma _e \label {8-57}$
The factor of 2.0023 is called the g-factor and accounts for the deviation of the spin gyromagnetic ratio from the value expected for orbital motion of the electron. In other words, it accounts for the spin transition being observed where it is instead of where it would be if the same ratio between magnetic moment and angular momentum held for both orbital and spin motions. The value 2.0023 applies to a freely spinning electron; the coupling of the spin and orbital motion of electrons can produce other values for $g$.
Exercise $1$
Carry out the calculations that show that the g-factor for electron spin is 2.0023.
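The arithmetic behind Exercise $1$ amounts to forming the ratio $h\nu / (\mu_B B)$; a minimal Python sketch (constants rounded, names ours):

```python
h = 6.62607015e-34       # Planck constant, J*s
mu_B = 9.2740100783e-24  # Bohr magneton, J/T

nu = 28.0e9  # resonance frequency from the hypothetical experiment, Hz
B = 1.0      # 10^4 gauss expressed in tesla

# g-factor from the resonance condition h*nu = g * mu_B * B
g = h * nu / (mu_B * B)
print(round(g, 3))  # → 2.001 with the rounded 28 GHz figure; the accepted value is 2.0023
```

The quoted 28 GHz is itself rounded; running the calculation backwards with $g = 2.0023$ gives a resonance frequency of about 28.02 GHz at 1 T.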
Interestingly, the concept of electron spin and the value g = 2.0023 follow logically from Dirac's relativistic quantum theory, which is beyond the scope of this discussion. Electron spin was introduced here as a postulate to explain experimental observations. Scientists often introduce such postulates parallel to developing the theory from which the property is naturally deduced.
Now that we have discovered electron spin, we need to determine how the electron spin changes when radiation is absorbed or emitted, i.e. what are the selection rules for electron spin of a single electron? Unlike orbital angular momentum, which can have several values, the spin angular momentum can have only the value
$|S| = \sqrt {s (s + 1)}\, \hbar = \dfrac {\sqrt {3}}{2} \hbar \label {8-58}$
Since s = ½, one spin selection rule is
$\Delta s = 0 \label {8-59}$
When a magnetic field is applied along the z-axis to remove the $m_s$ degeneracy, another magnetic field applied in the x or y direction exerts a force or torque on the magnetic dipole to turn it. This transverse field can “flip the spin,” and change the projection on the z-axis from $+\dfrac {1}{2} \hbar$ to $-\dfrac {1}{2} \hbar$ or from $-\dfrac {1}{2} \hbar$ to $+\dfrac {1}{2} \hbar$. So the other spin selection rule for a single electron is
$\Delta m_s = \pm 1 \label {8-60}$ | textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Book%3A_Quantum_States_of_Atoms_and_Molecules_(Zielinksi_et_al)/08%3A_The_Hydrogen_Atom/8.05%3A_Discovering_Electron_Spin.txt |
The quantum mechanical treatment of the hydrogen atom can be extended easily to other one-electron systems such as $\ce{He^{+}}$, $\ce{Li^{2+}}$, etc. The Hamiltonian changes in two places. Most importantly, the potential energy term is changed to account for the charge of the nucleus, which is the atomic number of the atom or ion, $Z$, times the fundamental unit of charge, $e$. As shown in Equation \ref{8.6.1}, the energy of attraction between the electron and the nucleus increases (i.e., $V$ gets more negative) as the nuclear charge increases.
$\hat {V} (r) = - \dfrac {Z e^2}{4 \pi \epsilon _0 r} \label {8.6.1}$
The other effect is a very slight change in the reduced mass included in the kinetic energy operator. In fact, the larger the nucleus, the better the approximation that the reduced mass is given by the mass of the electron.
Exercise $1$
Compare the reduced mass of the $\ce{Li^{2+}}$ ion to that of the hydrogen atom.
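The comparison in Exercise $1$ can be checked numerically. The sketch below computes $\mu = m_e M / (m_e + M)$ for H and $\ce{Li^{2+}}$; the mass values used are approximate literature figures chosen for illustration, not values taken from the text.

```python
# Reduced mass mu = m_e * M / (m_e + M); as M grows, mu approaches m_e.
# The mass values below (in kg) are approximate and are an assumption
# of this sketch.
m_e = 9.1093837e-31     # electron mass
m_p = 1.6726219e-27     # proton mass (H nucleus)
m_Li7 = 1.1651e-26      # mass of a 7Li nucleus (about 7 u)

def reduced_mass(m_nucleus, m_electron=m_e):
    return m_electron * m_nucleus / (m_electron + m_nucleus)

mu_H = reduced_mass(m_p)
mu_Li = reduced_mass(m_Li7)
print(mu_H / m_e)   # ~0.99946: H reduced mass is about 0.05% below m_e
print(mu_Li / m_e)  # ~0.99992: even closer to m_e for the heavier nucleus
```

This illustrates the remark above: the heavier the nucleus, the better the approximation that the reduced mass equals the electron mass.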
The effects of the change in $V$ show up in the wavefunctions and the energy eigenvalues. The expression for the energy becomes
$E_n = - \frac {Z^2 \mu e^4}{8 \epsilon ^2_0 h^2 n^2} = Z^2 E_{n, H} \label {8.6.2}$
where $E_{n, H}$ is the energy of the hydrogen atom. The forms of the wavefunctions are identical to those of the hydrogen atom, except for the fact that $Z$ in the radial functions is no longer equal to 1. The selection rules are unchanged, and the Zeeman effect still occurs.
Exercise $2$
Use the orbital energy level expression in Equation \ref{8.6.2} to predict quantitatively the relative energies (in $cm^{-1}$) of the spectral lines for H and $Li^{2+}$.
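Exercise $2$'s $Z^2$ scaling can be sketched numerically using Equation \ref{8.6.2} in the form $E_n = -Z^2 R_H / n^2$. Reusing the hydrogen value $R_H \approx 109{,}678\ cm^{-1}$ for $\ce{Li^{2+}}$ (i.e., ignoring the small reduced-mass difference) is an approximation made in this sketch.

```python
# Hydrogen-like energy levels E_n = -Z^2 R_H / n^2, in wavenumbers.
# R_H ~ 109,678 cm^-1 is the hydrogen Rydberg constant; applying it to
# Li2+ as well (neglecting the reduced-mass difference) is an
# approximation of this sketch.
R_H = 109678.0  # cm^-1

def E_n(n, Z=1):
    return -Z**2 * R_H / n**2

def line_wavenumber(n_upper, n_lower, Z=1):
    """Transition energy (cm^-1) for the n_upper -> n_lower emission line."""
    return E_n(n_upper, Z) - E_n(n_lower, Z)

nu_H = line_wavenumber(2, 1)        # H Lyman-alpha
nu_Li = line_wavenumber(2, 1, Z=3)  # same transition in Li2+
print(nu_H)          # 82258.5 cm^-1
print(nu_Li / nu_H)  # 9.0 -- corresponding lines scale as Z^2
```

Within this approximation, every $\ce{Li^{2+}}$ line sits at nine times the wavenumber of the corresponding hydrogen line.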
As the plots in Figure $1$ reveal, the increased charge on the nucleus creates a stronger attraction for the electron and thus the electron charge density distributions shift to smaller values of $r$. These other systems look a lot like compressed hydrogen atoms.
Exercise $3$
Determine whether or not the angular momentum values, the spherical harmonic functions, and the spectroscopic selection rules that describe the electron in H are the same or are different for $\ce{Li^{2+}}$. Write a paragraph to justify your answer.
8.07: Spin-Orbitals and Electron Configurations
The wavefunctions obtained by solving the hydrogen atom Schrödinger equation are associated with orbital angular motion and are often called spatial wavefunctions, to differentiate them from the spin wavefunctions. The complete wavefunction for an electron in a hydrogen atom must contain both the spatial and spin components. We refer to the complete one-electron orbital as a spin-orbital and a general form for this orbital is
$| \varphi _{n,l,m_l , m_s} \rangle = | \psi _{n,l,m_l} (r, \theta , \psi ) \rangle | \sigma ^{m_s}_s \rangle \label {8.7.1}$
A spin-orbital for an electron in the $2p_z$ orbital with $m_s = + \frac {1}{2}$, for example, could be written as
$| \psi _{2pz_\alpha} \rangle = | \psi _{2,1,0} (r, \theta , \psi) \rangle | \alpha \rangle \label{8.7.2}$
A common method of depicting electrons in spin-orbitals arranged by energy is shown in Figure $1$, which gives one representation of the ground state electron configuration of the hydrogen atom.
On the energy level diagram in Figure $1$, the horizontal lines labeled 1s, 2s, 2p, etc. denote the spatial parts of the orbitals, and an arrow pointing up for spin $\alpha$ and down for spin $\beta$ denotes the spin part of the wavefunction.
An alternative shorthand notation for electron configuration is the familiar form $1s^1$ to denote an electron in the 1s orbital. Note that this shorthand version contains information only about the spatial wavefunction; information about spin is implied. Two electrons in the same orbital have spin $\alpha$ and $\beta$, e.g. $1s^2$, and one electron in an orbital is assumed to have spin $\alpha$. Hydrogen atoms can absorb energy and the electron can be promoted to higher energy spin-orbitals. Examples of such excited state configurations are $2p^1$, $3d^1$, etc.
Around 1930 several spectroscopists using high resolution instruments found that lines in the hydrogen atom spectrum actually are not single lines but multiplets, as shown for an isotopic mixture of hydrogen ($H^1_{\alpha}$) and deuterium ($H^2_{\alpha}$) in Figure $1$. A multiplet consists of two or more closely spaced lines. Two lines together form a doublet, three a triplet, etc. Multiplets also are called fine structure. The term fine structure means the lines are spaced close together, i.e. finely spaced. Such fine structure also was found in spectra of one-electron ions such as He+.
You should recall that the $H^1_{\alpha}$ line in the Balmer series at 656.279 nm was understood as resulting from a single transition of an electron from the n = 3 energy level to the n = 2 level. The observation of fine structure revealed that an orbital energy level diagram does not completely describe the energy levels of atoms. This fine structure also provided key evidence at the time for the existence of electron spin, which was used not only to give a qualitative explanation for the multiplets but also to furnish highly accurate calculations of the multiplet splittings.
Spin-Orbit Coupling
Specifying the orbital configuration of an atom does not uniquely identify the electronic state of the atom because the orbital angular momentum, the spin angular momentum, and the total angular momentum are not precisely specified. For example, in the hydrogen $2p^1$ configuration, the electron can be in any of the three p-orbitals, $m_l$ = +1, 0, and –1, and have spins with $m_s$ = +1/2 or –1/2. Thus, there are 3 × 2 = 6 different possibilities or states. Also, the orbital and spin angular momentum of the electrons combine in multiple ways to produce angular momentum vectors that are characteristic of the entire atom, not just individual electrons, and these different combinations can have different energies. This coupling of orbital and spin angular momentum occurs because both the electron spin and orbital motion produce magnetic dipole moments. As we have seen previously, the relationship between the angular momentum and the magnetic moment is given by the gyromagnetic ratio. These magnetic dipoles interact just like two tiny bar magnets attracting and repelling each other. This interaction is called spin-orbit interaction. The interaction energy is proportional to the scalar product of the magnetic dipole moments, which are proportional to the angular momentum vectors.
$E_{s-o} = \lambda S \cdot L \label{8.8.1}$
$\hat {H} _{s-o} = \lambda \hat {S} \cdot \hat {L} \label {8.8.2}$
where $\lambda$ represents the constant of proportionality and is called the spin-orbit coupling constant. The spin-orbit interaction couples the spin motion and orbital motion of all the electrons together. This coupling means that exact wavefunctions are not eigenfunctions of the spin and orbital angular momentum operators separately. Rather the total angular momentum J = L+S, the vector sum of the spin and orbital angular momentum, is required to be coupled for a completely accurate description of the system. Trying to describe the coupled system in terms of spin and orbital angular momentum separately is analogous to trying to describe the positions of two coupled bar magnets independently. It cannot be done; their interaction must be taken into account.
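The size of the spin-orbit energy follows from a standard identity that is not derived in this section: since $J = L + S$, squaring gives $J^2 = L^2 + S^2 + 2 S \cdot L$, so $\langle S \cdot L \rangle = \frac{\hbar^2}{2}[J(J+1) - L(L+1) - S(S+1)]$. A sketch evaluating this for the two ways $l = 1$ can couple with $s = 1/2$:

```python
# <S.L> in units of hbar^2, from J^2 = (L + S)^2 = L^2 + S^2 + 2 S.L,
# so <S.L> = (1/2)[J(J+1) - L(L+1) - S(S+1)] hbar^2 (a standard
# identity, not derived in this section).
def spin_orbit_expectation(J, L, S):
    return 0.5 * (J*(J + 1) - L*(L + 1) - S*(S + 1))

# For l = 1 coupled with s = 1/2, J can be 3/2 or 1/2:
print(spin_orbit_expectation(1.5, 1, 0.5))  # 0.5:  E_so = +lambda hbar^2/2
print(spin_orbit_expectation(0.5, 1, 0.5))  # -1.0: E_so = -lambda hbar^2
```

For $\lambda > 0$, the $J = 3/2$ combination thus lies above the $J = 1/2$ combination by $\frac{3}{2} \lambda \hbar^2$.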
We need to be able to identify the electronic states that result from a given electron configuration and determine their relative energies. An electronic state of an atom is characterized by a specific energy, wavefunction (including spin), electron configuration, total angular momentum, and the way the orbital and spin angular momenta of the different electrons are coupled together. There are two descriptions for the coupling of angular momentum. One is called j-j coupling, and the other is called L-S coupling. The j-j coupling scheme is used for heavy elements (Z > 40), and the L-S coupling scheme is used for the lighter elements. L-S coupling also is called R-S or Russell-Saunders coupling.
L-S Coupling
In L-S coupling, the orbital and spin angular momenta of all the electrons are combined separately
$L = \sum _i l_i \label{8.8.3}$
$S = \sum _i S_i \label {8.8.4}$
The total angular momentum vector then is the sum of the total orbital angular momentum vector and the total spin angular momentum vector.
$J = L + S \label {8.8.5}$
The result of these vector sums is specified in a code that is called a Russell-Saunders term symbol, and each term symbol identifies an energy level of the atom. Consequently, the energy levels also are called terms. A term symbol has the form
$\Large ^{2S+1} L_J$
where the code letter that is used for the total orbital angular momentum quantum number L = 0, 1, 2, 3, 4, 5 is S, P, D, F, G, H, respectively. Note how this code matches that used for the atomic orbitals. The superscript $2S+1$ gives the spin multiplicity of the state, where S is the total spin angular momentum quantum number. The spin multiplicity is the number of spin states associated with a given electronic state. In order not to confuse the code letter S for the orbital angular momentum with the spin quantum number S, you must examine the context in which it is used carefully. In the term symbol, the subscript $J$ gives the total angular momentum quantum number. Because of spin-orbit coupling, only $J$ and $M_J$ are valid quantum numbers, but because the spin-orbit coupling is weak $L$, $M_L$, $S$, and $M_S$ still serve to identify and characterize the states for the lighter elements.
For example, the ground state, i.e. the lowest energy state, of the hydrogen atom corresponds to the electron configuration in which the electron occupies the 1s spatial orbital and can have either spin $\alpha$ or spin $\beta$. The term symbol for the ground state is $^2 S_{1/2}$, which is read as “doublet S 1/2”. The spin quantum number is 1/2 so the superscript $2S+1 = 2$, which gives the spin multiplicity of the state, i.e. the number of spin states equals 2 corresponding to $\alpha$ and $\beta$. The $S$ in the term symbol indicates that the total orbital angular momentum quantum number is 0 (For the ground state of hydrogen, there is only one electron and it is in an s-orbital with $l = 0$ ). The subscript ½ refers to the total angular momentum quantum number. The total angular momentum is the sum of the spin and orbital angular momenta for the electrons in an atom. In this case, the total angular momentum quantum number is just the spin angular momentum quantum number, ½, since the orbital angular momentum is zero. The ground state has a degeneracy of two because the total angular momentum can have a z-axis projection of $+\frac {1}{2} \hbar$ or $-\frac {1}{2} \hbar$, corresponding to $m_J$ = +1/2 or -1/2 resulting from the two electron spin states $\alpha$ and $\beta$. We also can say, equivalently, that the ground state term or energy level is two-fold degenerate.
Exercise $1$
Write the term symbol for a state that has 0 for both the spin and orbital angular momentum quantum numbers.
Exercise $2$
Write the term symbol for a state that has 0 for the spin and 1 for the orbital angular momentum quantum numbers.
Russell-Saunders Selection Rules
Higher energy or excited orbital configurations also exist. The hydrogen atom can absorb energy, and the electron can be promoted to a higher energy orbital. The electronic states that result from these excited orbital configurations also are characterized or labeled by term symbols. The details of how to determine the term symbols for multi-electron atoms and for cases where both the orbital and spin angular momentum differ from zero are given elsewhere, along with rules for determining the relative energies of the terms.
We have found that the selection rules for a single electron moving from one atomic orbital to another are
$\Delta l = \pm 1 \label{8.8.6}$
$\Delta m_l = 0, \pm 1 \label {8.8.7}$
For an atom as a whole in the limit of L-S coupling, the Russell-Saunders selection rules are
$\Delta S = 0 \label{8.8.8}$
$\Delta L = 0, \pm 1 \label{8.8.9}$
$\Delta J = 0, \pm 1 \label {8.8.10}$
However, the $J =0$ to $J= 0$ transition is forbidden
$\Delta m_J = 0, \pm 1 \label {8.8.11}$
However, the $m_J = 0$ to $m_J = 0$ transition is forbidden if $\Delta J = 0$.
These selection rules result from the general properties of angular momentum such as the conservation of angular momentum and commutation relations.
Now we want to apply these ideas to understand why multiplet structure is found in the luminescence spectrum of hydrogen and single electron ions. As we have said, the $H_{\alpha}$ line in the Balmer series at 656.279 nm can be understood as due to a transition of an electron from an n = 3 atomic orbital to an n = 2 atomic orbital. When this spectral line was examined using high-resolution instruments, it was found actually to be a doublet, i.e. two lines separated by 0.326 cm$^{-1}$.
There are 9 degenerate orbitals associated with the n = 3 level, and 4 associated with the n = 2 level. Since an electron can be in any orbital with either of two spins, we expect the total number of states to be twice the number of orbitals. The number of orbitals is given by $n^2$, so there should be 8 states associated with n = 2 and 18 states associated with n = 3. Using the ideas of vector addition of angular momentum, the terms that result from having an electron in any one of these orbitals are given in Table $1$.
Table $1$: H Atom Terms Originating from n = 1, 2, and 3

Orbital Configuration    Term Symbols                 Degeneracy
$1s^1$                   $^2S_{1/2}$                  2
$2s^1$                   $^2S_{1/2}$                  2
$2p^1$                   $^2P_{1/2}$, $^2P_{3/2}$     2, 4
$3s^1$                   $^2S_{1/2}$                  2
$3p^1$                   $^2P_{1/2}$, $^2P_{3/2}$     2, 4
$3d^1$                   $^2D_{3/2}$, $^2D_{5/2}$     4, 6
Table $1$ shows that there are 3 terms associated with n = 2, and 5 terms associated with n = 3. In principle, each term can have a different energy. The degeneracy of each term is determined by the number of projections that the total angular momentum vector has on the z-axis. These projections depend on the $m_J$ quantum number, which ranges from $+J$ to $-J$ in integer steps. $J$ is the total angular momentum quantum number, which is given by the subscript in the term symbol. This relationship between $m_J$ and $J$ ($m_J$ varies from $+J$ to $-J$ in integer steps) is true for any angular momentum vector.
Exercise $3$
Confirm that the term symbols in Table $1$ are correct.
Exercise $4$
Confirm that the values for the degeneracy in Table $1$ are correct and that the total number of states add up to 8 for n = 2 and 18 for n = 3.
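Exercises $3$ and $4$ can be checked with a short script. For a single electron with $s = 1/2$ in an orbital with quantum number $l$, the possible $J$ values run from $|l - s|$ to $l + s$, and each term holds $2J + 1$ states ($m_J = -J, \ldots, +J$). The term-label format below is chosen just for display.

```python
# Terms and degeneracies for a one-electron nl^1 configuration.
# J runs from |l - s| to l + s in integer steps; each term 2S+1_L_J
# holds 2J + 1 states.
from fractions import Fraction

L_CODE = "SPDFGH"

def terms(l, s=Fraction(1, 2)):
    """Return [(term_label, degeneracy), ...] for an nl^1 configuration."""
    out = []
    J = abs(l - s)
    while J <= l + s:
        label = f"2{L_CODE[l]}{J}"       # e.g. '2P3/2' for a doublet P term
        out.append((label, int(2 * J + 1)))
        J += 1
    return out

# Total states per shell = sum of all term degeneracies = 2 n^2:
for n in (1, 2, 3):
    total = sum(deg for l in range(n) for _, deg in terms(l))
    print(n, total)   # 1 2 / 2 8 / 3 18
```

Summing the degeneracies over $l = 0$ to $n - 1$ recovers the state counts quoted in the text: 2 for n = 1, 8 for n = 2, and 18 for n = 3.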
The energies of the terms depend upon spin-orbit coupling and relativistic corrections that need to be included in the Hamiltonian operator in order to provide a more complete description of the hydrogen atom. As a consequence of these effects, all terms with the same n and J quantum numbers have the same energy, while terms with different values for n or J have different energies. The theoretical term splittings, as given by H.E. White, Introduction to Atomic Spectra (McGraw-Hill, New York, 1934), pp. 132-137, are shown in Figure $2$.
Figure $2$ shows 5 allowed transitions for the electron in the states associated with n = 3 to the states associated with n = 2. Of these five, two are most intense and are responsible for the doublet structure. These two transitions are indicated by the wide black lines at the bottom of the figure and correspond to the lines observed in the photographic spectrum shown in Figure $1$. The other transitions contribute to the width of these lines or are not observed. The theoretical value for the doublet splitting is 0.328 cm$^{-1}$, which is in excellent agreement with the measured value of 0.326 cm$^{-1}$. The value of 0.328 cm$^{-1}$ is obtained by taking the difference, 0.364 – 0.036 cm$^{-1}$, in the term splittings.
As we have just seen, the electronic states, as identified by the term symbols, are essential in understanding the spectra and energy level structure of atoms, but it also is important to associate the term symbols and states with the orbital electron configurations. The orbital configurations help us understand many of the general or coarse features of spectra and are necessary to produce a physical picture of how the electron density changes because of a spectroscopic transition.
Exercise $5$
Use the Russell-Saunders selection rules to determine which transitions contribute to the $H_{\alpha}$ line in the hydrogen spectrum.
Magnetic Field Effects
The Zeeman effect that was described in Section 8.4 only considered the orbital motion of the electron and did not include spin angular momentum and the spin magnetic moment. For a more complete analysis of the Zeeman effect associated with the n = 2 to n = 1 transition in the hydrogen atom, we need to use the term symbols for the states, examine how the mJ degeneracy is removed by the magnetic field, and determine which transitions between the states are allowed.
The states involved in a transition of an electron from the 2p atomic orbital to the 1s atomic orbital (where the hydrogen atom goes from the $2p^1$ configuration to the $1s^1$ configuration) are identified in Table $1$. The $2p^1$ configuration produces $^2P_{3/2}$ and $^2P_{1/2}$ terms, with the latter being lower in energy by 0.364 cm$^{-1}$ as shown in Figure $2$. The $1s^1$ configuration corresponds to a $^2S_{1/2}$ term, which also is shown in Figure $2$.
The orbital energy in a magnetic field was given in Section 8.4 and is repeated here as Equation \ref{8.8.12}.
$\left \langle E \right \rangle = E_0 + \mu _B B_z m_l \label {8.8.12}$
This equation can be generalized by changing the angular momentum quantum number to J and adding a g-factor to account for different gyromagnetic ratios
$\left \langle E \right \rangle = E_0 + gm_J \mu _B B_z \label {8.8.13}$
While the g-factor equals 2 for a free electron or an electron in an s-orbital, the g-factor of an electron is affected by spin-orbit coupling. For the case of L-S coupling, the g-factor is given by
$g = 1 + \frac {J(J + 1) + S( S + 1) - L (L + 1)}{2J (J + 1)} \label {8.8.14}$
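Equation \ref{8.8.14} can be evaluated directly. The sketch below reproduces the g values used in the analysis that follows: 4/3, 2/3, and 2 for the $^2P_{3/2}$, $^2P_{1/2}$, and $^2S_{1/2}$ terms, respectively.

```python
# Lande g-factor, Equation (8.8.14):
# g = 1 + [J(J+1) + S(S+1) - L(L+1)] / [2 J(J+1)]
def lande_g(J, L, S):
    return 1 + (J*(J + 1) + S*(S + 1) - L*(L + 1)) / (2 * J*(J + 1))

print(lande_g(1.5, 1, 0.5))  # 4/3 ~ 1.333 for a 2P_3/2 term
print(lande_g(0.5, 1, 0.5))  # 2/3 ~ 0.667 for a 2P_1/2 term
print(lande_g(0.5, 0, 0.5))  # 2.0 for a 2S_1/2 term (spin only)

# Zeeman shift per mu_B * B_z unit is g * m_J; the four 2P_3/2 components:
shifts = [lande_g(1.5, 1, 0.5) * m for m in (1.5, 0.5, -0.5, -1.5)]
print(shifts)  # approx [2.0, 0.667, -0.667, -2.0] -- the +/-6/3, +/-2/3 entries
```

These $g m_J$ products are exactly the entries tabulated for the Zeeman effect analysis below.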
To identify how the energies of the states change in a magnetic field, we need only consider the $gm_J$ factor in Equation \ref{8.8.13}, since $\mu _B$, the Bohr magneton, is a constant and the energy changes simply scale with the magnetic field $B_z$. Consequently we can describe the splittings in terms of $gm_J$ units, where one unit is the product $\mu _B B_z$.
Table $3$ lists the quantities needed for the Zeeman effect analysis. The information in this table shows that the $^2P_{3/2}$ term splits into 4 components. Two components move up in energy 6/3 units and 2/3 units, respectively, and two move down –6/3 and –2/3 units. The $^2P_{1/2}$ term splits into two components. One moves up 1/3 unit, and the other moves down 1/3 unit. The $^2S_{1/2}$ also splits into two components, each moving 1 unit up and down, respectively. The energies of these states in a magnetic field along with the allowed transitions between them are illustrated in Figure $3$. The addition of spin angular momentum has made the situation much more complicated. Previously we considered a Zeeman effect that produced 3 spectral lines from one; now 2 lines turn into 10 lines in a magnetic field. These 10 lines correspond to 10 different possible transitions of the electron from the $2p^1$ configuration to the $1s^1$ configuration. These transitions produce two multiplets in the spectrum, one of 6 lines and one of 4 lines.
Table $3$: Items for the Zeeman Effect Analysis

Term           J      L     S      g      $m_J$     $g m_J$
$^2P_{3/2}$    3/2    1     1/2    4/3    3/2       6/3
                                          1/2       2/3
                                          -1/2      -2/3
                                          -3/2      -6/3
$^2P_{1/2}$    1/2    1     1/2    2/3    1/2       1/3
                                          -1/2      -1/3
$^2S_{1/2}$    1/2    0     1/2    2      1/2       1
                                          -1/2      -1
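The 6 + 4 line count quoted above can be verified by enumerating the $m_J \rightarrow m_J$ transitions that obey $\Delta m_J = 0, \pm 1$ (Equation \ref{8.8.11}); a sketch:

```python
# Count Zeeman components by enumerating m_J -> m_J' transitions that
# satisfy Delta m_J = 0, +/-1 (Equation 8.8.11).  The m_J = 0 -> 0
# exclusion for Delta J = 0 never applies here, because half-integer J
# has no m_J = 0 state.
def m_values(J):
    n = int(2*J + 1)              # number of projections, m_J = -J ... +J
    return [-J + k for k in range(n)]

def count_lines(J_upper, J_lower):
    return sum(1 for m1 in m_values(J_upper)
                 for m2 in m_values(J_lower)
                 if abs(m1 - m2) <= 1)

print(count_lines(1.5, 0.5))  # 6 lines: the 2P_3/2 -> 2S_1/2 multiplet
print(count_lines(0.5, 0.5))  # 4 lines: the 2P_1/2 -> 2S_1/2 multiplet
```

Together these give the 10 field-on lines described in the text.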
Exercise $6$
Using the information in Figure $3$:
(a) determine the spacing between the lines in the two multiplets in units of $\mu _B B_z$,
(b) determine the magnitude of $\mu _B B_z$ for a field of 10,000 Gauss and for a field of 1 Tesla,
(c) approximately what is the separation of the two multiplets in wavenumbers,
(d) draw a sketch showing the field-on and field-off spectra you might record in the laboratory for the $2p \rightarrow 1s$ transition, and
(e) from the allowed transitions that are shown in the figure and considering the ones that do not occur, determine what the selection rules must be for $\Delta S$, $\Delta L$, $\Delta J$, and $\Delta m_J$.
Q8.1
Calculate the probability density for a hydrogen 1s electron at a distance $3a_0$ from the proton along the z-axis ($a_0$ is the Bohr radius).
Q8.2
Calculate the radial probability density for a hydrogen 1s electron to be $3a_0$ from the proton.
Q8.3
Calculate the probability that a hydrogen 1s electron is within a distance $3a_0$ from the nucleus.
Q8.4
Calculate and compare the average distances of the electron from the proton for the hydrogen 1s orbital and the 2s orbital. What insight do you gain from this comparison?
Q8.5
What is the percent error in the energy of the 1s orbital if the electron mass is used to calculate the energy rather than the reduced mass?
Q8.6
Calculate the energies (in units of electron volts and wavenumbers) of the three 1s to 2p transitions for a hydrogen atom in a magnetic field of 10 Tesla.
Q8.7
Calculate the frequency of radiation that would be absorbed due to a change in the electron spin state of a hydrogen atom in a magnetic field of 10 Tesla. Compare the energy of this transition to the energy of the 1s to 2p transitions in the previous problem. What insight do you gain from this comparison?
Q8.8
Which is larger for the hydrogen atom, the Zeeman splitting due to spin motion (electron in the 1s orbital) or the Zeeman splitting due to orbital motion (electron in the 2p orbitals neglecting spin)? Why is one larger than the other?
Q8.9
What is the difference between the average value of $r$ and the most probable value of $r$ where $r$ is the distance of the electron from the nucleus?
Q8.10
Show that orbitals directed along the x and y axes can be formed by taking linear combinations of the spherical harmonics $Y^{+1}_1$ and $Y^{-1}_1$. These orbitals are called $p_x$ and $p_y$. Why do you think chemists prefer to use $p_x$ and $p_y$ rather than the angular momentum eigenfunctions?
Q8.11
What are the expectation values of $\hat {L}_x, \hat {L}_y$, and $\hat {L}_z$ for the three 2p wavefunctions?
Q8.12
Why can $\hat {H}, \hat {L} ^2$, and $\hat {L} _z$ have the same eigenfunctions?
Q8.13
Derive the selection rules for electronic transitions in the hydrogen atom. See Section 8.3 above and selection rules in Chapter 7. Use Mathcad to generate the radial probability densities for the 3s, 3p, and 3d atomic orbitals of hydrogen. What insight do you gain by comparing these plots?
Q8.14
Examine the Periodic Table and explain the relationship between the number and types of atomic orbitals, including spin, and the columns and rows.
David M. Hanson, Erica Harvey, Robert Sweeney, Theresa Julia Zielinski ("Quantum States of Atoms and Molecules")
8.0S: 8.S: The Hydrogen Atom (Summary)
The Schrödinger equation for one-electron atoms and ions such as H, $He^+$, $Li^{2+}$, etc. is constructed using a Coulombic potential energy operator and the three-dimensional kinetic energy operator written in spherical coordinates. Because the radial and angular motions are separable, solutions to the Schrödinger equation consist of products $R (r) Y (\theta , \varphi )$ of radial functions $R(r)$ and angular functions $Y (\theta , \varphi )$ that are called atomic orbitals. Three quantum numbers, n, $l$, and $m_l$, are associated with the orbitals. Numerous visualization methods are available to enhance our understanding of the orbital shapes and sizes represented by the modulus squared of the wavefunctions. The orbital energy eigenvalues depend only on the n quantum number and match the energies found using the Bohr model of the hydrogen atom. Because all orbitals with the same principal quantum number have the same energy in one-electron systems, each orbital energy level is $n^2$-degenerate. For example, the n = 3 level contains 9 orbitals (one 3s, three 3p’s, and five 3d’s).
Atomic spectra measured in magnetic fields have more spectral lines than those measured in field-free environments. This Zeeman effect is caused by the interaction of the imposed magnetic field with the magnetic dipole moment of the electrons, which removes the $m_l$ quantum number degeneracy.
In addition to the orbital wavefunctions obtained by solving the Schrödinger equation, electrons in atoms possess a quality called spin that has associated wavefunctions $\sigma$, quantum numbers $s$ and $m_s$, spin angular momentum $S$, and spectroscopic selection rules. Interaction with a magnetic field removes the degeneracy of the two spin states, which are labeled $\alpha$ and $\beta$, and produces additional fine structure in atomic spectra. While spin does not appear during the solution of the hydrogen atom presented in this text, spin is presented as a postulate because it is necessary to explain experimental observations about atoms.
Single-electron wavefunctions that incorporate both the orbital (spatial) and spin wavefunctions are called spin-orbitals. The occupancy of spin-orbitals is called the electron configuration of an atom. The lowest energy configuration is called the ground state configuration and all other configurations are called excited state configurations. To fully understand atomic spectroscopy, it is necessary to specify the total electronic state of an atom, rather than simply specifying the orbital configuration. An electronic state, or term, is characterized by a specific energy, total angular momentum and coupling of the orbital and spin angular momenta, and can be represented by a term symbol of the form $^{2S+1} L_J$ where S is the total spin angular momentum quantum number, L is the total orbital angular momentum quantum number, and J is the total angular momentum quantum number obtained from the vector sum of L and S. One term may include several degenerate electron configurations. The degeneracy of a term is determined by the number of projections of the total angular momentum vector on the z-axis. The degeneracy of a term can be split by interaction with a magnetic field.
Overview of key concepts and equations for the hydrogen atom
• Potential energy
• Hamiltonian
• Wavefunctions
• Quantum Numbers
• Energies
• Spectroscopic Selection Rules
• Angular Momentum Properties | textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Book%3A_Quantum_States_of_Atoms_and_Molecules_(Zielinksi_et_al)/08%3A_The_Hydrogen_Atom/8.0E%3A_8.E%3A_The_Hydrogen_Atom_%28Exercises%29.txt |
Multi-electron systems, including both atoms and molecules, are central to the study of chemistry. While we can write the Schrödinger equations for a two-electron atom and for many-electron atoms, the Schrödinger equations for atoms (and molecules too) with more than one electron cannot be solved because of electron-electron Coulomb repulsion terms in the Hamiltonian. These terms make it impossible to separate the variables and solve the Schrödinger equation. Fortunately, reasonably good approximate solutions can be found, and an active area of research for physical chemists involves finding methods to make them even better.
In this chapter you will learn several key techniques for approximating wavefunctions and energies, and you will apply these techniques to multi-electron atoms such as helium. You also will learn how to use the theoretical treatment of the electronic states of matter to account for experimental observations about multi-electron systems. For example, the periodic trends in ionization potential and atomic size that are presented in introductory chemistry texts and reproduced here in Figure \(1\) arise directly from the nature of the electronic states of the atoms in the periodic table.
• 9.1: The Schrödinger Equation For Multi-Electron Atoms
As with the hydrogen atom, the nuclei for multi-electron atoms are so much heavier than an electron that the nucleus is assumed to be the center of mass. Fixing the origin of the coordinate system at the nucleus allows us to exclude translational motion of the center of mass from our quantum mechanical treatment.
• 9.2: Solution of the Schrödinger Equation for Atoms- The Independent Electron Approximation
In this section we will see a useful method for approaching a problem that cannot be solved analytically and in the process we will learn why a product wavefunction is a logical choice for approximating a multi-electron wavefunction.
• 9.3: Perturbation Theory
Perturbation theory is a method for continuously improving a previously obtained approximate solution to a problem, and it is an important and general method for finding approximate solutions to the Schrödinger equation.
• 9.4: The Variational Method
In this section we introduce the powerful and versatile variational method and use it to improve the approximate solutions we found for the helium atom using the independent electron approximation. One way to take electron-electron repulsion into account is to modify the form of the wavefunction. A logical modification is to change the nuclear charge, \(Z\), in the wavefunctions to an effective nuclear charge \(Z_{eff}\).
• 9.5: Single-electron Wavefunctions and Basis Functions
Finding the most useful single-electron wavefunctions to serve as building blocks for a multi-electron wavefunction is one of the main challenges in finding approximate solutions to the multi-electron Schrödinger Equation. The functions must be different for different atoms because the nuclear charge and number of electrons are different. The attraction of an electron for the nucleus depends on the nuclear charge, and the electron-electron interaction depends upon the number of electrons.
• 9.6: Electron Configurations, The Pauli Exclusion Principle, The Aufbau Principle, and Slater Determinants
The assignment of electrons to orbitals is called the electron configuration of the atom. We extend that idea to constructing multi-electron wavefunctions that obeys the Pauli Exclusion Principle, which requires that each electron in an atom or molecule must be described by a different spin-orbital. The mathematical analog of this process is the construction of the approximate multi-electron wavefunction as a product of the single-electron atomic orbitals.
• 9.7: The Self-Consistent Field Approximation (Hartree-Fock Method)
In this section we consider a method for finding the best possible one-electron wavefunctions that was published by Hartree in 1948 and improved two years later by Fock.
• 9.8: Configuration Interaction
The best energies obtained at the Hartree-Fock level are still not accurate, because they use an average potential for the electron-electron interactions. Configuration interaction (CI) methods help to overcome this limitation. The exact wavefunction must depend upon the coordinates of both electrons simultaneously. This independent-electron approximation can take to account such electron correlation effects. The method for taking correlation into account is called Configuration Interaction.
• 9.9: Chemical Applications of Atomic Structure Theory
In this section we examine how the results of the various approximation methods considered in this chapter can be used to understand and predict the physical properties of multi-electron atoms. Our results include total electronic energies, orbital energies and single-electron wavefunctions that describe the spatial distribution of electron density.
• 9.E: The Electronic States of the Multielectron Atoms (Exercises)
Exercises for the "Quantum States of Atoms and Molecules" TextMap by Zielinski et al.
• 9.S: The Electronic States of the Multielectron Atoms (Summary)
09: The Electronic States of the Multielectron Atoms
In this chapter, we will use the helium atom as a specific example of a multi-electron atom. Figure $1$ shows a schematic representation of a helium atom with two electrons whose coordinates are given by the vectors $r_1$ and $r_2$. The electrons are separated by a distance $r_{12} = |r_1-r_2|$. The origin of the coordinate system is fixed at the nucleus. As with the hydrogen atom, the nuclei for multi-electron atoms are so much heavier than an electron that the nucleus is assumed to be the center of mass. Fixing the origin of the coordinate system at the nucleus allows us to exclude translational motion of the center of mass from our quantum mechanical treatment.
The Hamiltonian operator for the hydrogen atom serves as a reference point for writing the Hamiltonian operator for atoms with more than one electron. Start with the same general form we used for the hydrogen atom Hamiltonian
$\hat {H} = \hat {T} + \hat {V} \label {9-1}$
Include a kinetic energy term for each electron and a potential energy term for the attraction of each negatively charged electron for the positively charged nucleus and a potential energy term for the mutual repulsion of each pair of negatively charged electrons. The He atom Hamiltonian is
$\hat {H} = -\dfrac {\hbar ^2}{2m_e} (\nabla ^2_1 + \nabla ^2_2) + V (r_1) + V (r_2) + V (r_{12}) \label {9-2}$
where
$V(r_1) = -\dfrac {2e^2}{4 \pi \epsilon _0 r_1} \label {9-3}$
$V(r_2) = -\dfrac {2e^2}{4 \pi \epsilon _0 r_2} \label {9-4}$
$V(r_{12}) = +\dfrac {e^2}{4 \pi \epsilon _0 r_{12}} \label {9-5}$
Equation $\ref{9-2}$ can be extended to any atom or ion by including terms for the additional electrons and replacing the He nuclear charge +2 with a general charge $Z$; e.g.
$V(r_1) = -\dfrac {Ze^2}{4 \pi \epsilon _0 r_1} \label {9-6}$
Equation $\ref{9-2}$ then becomes
$\hat {H} = -\dfrac {\hbar ^2}{2m_e} \sum _i \nabla ^2_i + \sum _i V (r_i) + \sum _{i \ne j} V (r_{ij}) \label {9-7}$
Exercise $1$
Referring to Equation $\ref{9-7}$, explain the meaning of the three summations and write expressions for the $V(r_i)$ and $V(r_{ij})$ terms.
Exercise $2$
Write the multi-electron Hamiltonian (e.g. Equation $\ref{9-2}$) for a boron atom.
Each electron has its own kinetic energy term in Equations $\ref{9-2}$ and $\ref{9-7}$. For an atom like sodium there would be $\nabla ^2_1 , \nabla ^2_2 , \cdots , \nabla ^2_{11}$. The other big difference between single electron systems and multi-electron systems is the presence of the $V(r_{ij})$ terms which contain $1/r_{ij}$, where $r_{ij}$ is the distance between electrons $i$ and $j$. These terms account for the electron-electron repulsion that we expect between like-charged particles.
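As a bookkeeping aid for Equation $\ref{9-7}$, the number of terms of each type can be counted for an $N$-electron atom; a minimal sketch (Python is used here purely for illustration):

```python
# Term counts in the multi-electron Hamiltonian (Equation 9-7) for an
# N-electron atom: N kinetic-energy terms, N electron-nucleus attraction
# terms, and N(N-1)/2 distinct electron-electron repulsion pairs.
# (If the double sum runs over all i != j, each pair appears twice.)

def hamiltonian_term_counts(n_electrons):
    kinetic = n_electrons
    nuclear_attraction = n_electrons
    repulsion_pairs = n_electrons * (n_electrons - 1) // 2
    return kinetic, nuclear_attraction, repulsion_pairs

for atom, n in [("He", 2), ("B", 5), ("Na", 11)]:
    t, v_ne, v_ee = hamiltonian_term_counts(n)
    print(f"{atom}: {t} kinetic, {v_ne} attraction, {v_ee} repulsion pair terms")
```

For sodium (11 electrons) there are 55 distinct electron pairs, which is why the $V(r_{ij})$ terms quickly dominate the bookkeeping as atoms get larger.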
Given what we have learned from the previous quantum mechanical systems we’ve studied, we predict that exact solutions to the multi-electron Schrödinger equation in Equation $\ref{9-7}$ would consist of a family of multi-electron wavefunctions, each with an associated energy eigenvalue. These wavefunctions and energies would describe the ground and excited states of the multi-electron atom, just as the hydrogen wavefunctions and their associated energies describe the ground and excited states of the hydrogen atom. We would predict quantum numbers to be involved, as well.
The fact that electrons interact through their Coulomb repulsion means that an exact wavefunction for a multi-electron system would be a single function that depends simultaneously upon the coordinates of all the electrons; i.e., a multi-electron wavefunction, $\psi (r_1, r_2, \cdots r_i)$. The modulus squared of such a wavefunction would describe the probability of finding the electrons (though not specific ones) at a designated location in the atom. Alternatively, $n e |\psi |^2$, where $n$ is the number of electrons and $e$ the electronic charge, would describe the total amount of electron density that would be present at a particular spot in the multi-electron atom. All of the electrons are described simultaneously by a multi-electron wavefunction, so the total amount of electron density represented by the wavefunction equals the number of electrons in the atom.
Unfortunately, the Coulomb repulsion terms make it impossible to find an exact solution to the Schrödinger equation for many-electron atoms and molecules even if there are only two electrons. The most basic approximations to the exact solutions involve writing a multi-electron wavefunction as a simple product of single-electron wavefunctions, and obtaining the energy of the atom in the state described by that wavefunction as the sum of the energies of the one-electron components.
$\psi (r_1, r_2, \cdots , r_i) \approx \varphi _1 (r_1) \varphi _2 (r_2) \cdots \varphi _i(r_i) \label {9-8}$
By writing the multi-electron wavefunction as a product of single-electron functions, we conceptually transform a multi-electron atom into a collection of individual electrons located in individual orbitals whose spatial characteristics and energies can be separately identified. For atoms these single-electron wavefunctions are called atomic orbitals. For molecules, as we will see in the next chapter, they are called molecular orbitals. While a great deal can be learned from such an analysis, it is important to keep in mind that such a discrete, compartmentalized picture of the electrons is an approximation, albeit a powerful one.
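The factorization in Equation $\ref{9-8}$ can be checked numerically: if each one-electron function is normalized, the product is too. A sketch in atomic units ($a_0 = 1$), assuming hydrogenic 1s functions $\varphi(r) = \sqrt{Z^3/\pi}\, e^{-Zr}$:

```python
# Check that a product of normalized one-electron 1s orbitals
# (Equation 9-8) is itself normalized. Atomic units: a0 = 1.
import math

Z = 2.0  # helium nuclear charge

def phi_1s(r):
    # hydrogenic 1s spatial function, sqrt(Z^3/pi) * exp(-Z r)
    return math.sqrt(Z**3 / math.pi) * math.exp(-Z * r)

# One-electron norm: integral of 4*pi*r^2*|phi|^2 dr by the trapezoidal rule.
n, rmax = 200_000, 20.0
h = rmax / n
norm1 = sum(4 * math.pi * (i * h)**2 * phi_1s(i * h)**2
            * (0.5 if i in (0, n) else 1.0) * h for i in range(n + 1))

# The two-electron product's norm factorizes into the one-electron norms.
norm_product = norm1 * norm1
print(norm1, norm_product)  # both close to 1.0
```

Because the product wavefunction separates, its six-dimensional normalization integral factors into two three-dimensional ones, which is what makes this check (and the energy additivity of the next section) work.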
In this section we will see a useful method for approaching a problem that cannot be solved analytically and in the process we will learn why a product wavefunction is a logical choice for approximating a multi-electron wavefunction.
The helium atom Hamiltonian is re-written below with the kinetic and potential energy terms for each electron followed by the potential energy term for the electron-electron interaction. The last term, the electron-electron interaction, is the one that makes the Schrödinger equation impossible to solve.
$\hat {H} = -\dfrac {\hbar ^2}{2m} \nabla^2_1 - \dfrac {2e^2}{4 \pi \epsilon _0 r_1} - \dfrac {\hbar ^2}{2m} \nabla ^2_2 - \dfrac {2e^2}{4 \pi \epsilon _0 r_2} + \dfrac {e^2}{4 \pi \epsilon _0 r_{12}} \label {9-9}$
To solve the Schrödinger Equation using this Hamiltonian, we need to make an assumption that allows us to find an approximate solution. The approximation that we consider in this section is the complete neglect of the electron-electron interaction term. Odd though it seems, this assumption corresponds mathematically to treating the helium atom as two non-interacting helium ions (with one electron each) that happen to share the same nucleus.
This approximation is called the independent-electron assumption. While this assumption might seem very drastic, it is worth trying since it also presents a straightforward path to a solution. A general strategy when solving difficult problems is to make an assumption and see how the results turn out. In this case we can compare the results we obtain using the assumption to what is known experimentally about the quantum states of helium, like the ionization energies. Are we a factor of 10 off? 10000? The latter result would probably indicate that we have hit a dead end with this method, while the former might indicate a method worth refining.
Neglecting the electron repulsion term simplifies the helium atom Hamiltonian to a sum of two hydrogen-like Hamiltonians that can be solved exactly.
$\hat {H}(r_1,r_2) = \hat {H} (r_1) + \hat {H} (r_2) \label {9-10}$
The variables (the positions of the electrons, $r_1$ and $r_2$) in the Schrödinger equation separate, and we end up with two independent Schrödinger equations that are exactly the same as that for the hydrogen atom, except that the nuclear charge is +2e rather than +1e.
$\hat {H} (r_1) \varphi (r_1) = E_1 \varphi (r_1) \label {9-11}$
$\hat {H} (r_2) \varphi (r_2) = E_2 \varphi (r_2) \label {9-12}$
Exercise $1$
What is the specific mathematical form for $\hat {H} (r_1)$ in Equation $\ref{9-10}$?
Using our previous experiences with separation of variables, we realize that the wavefunction can be approximated as a product of two single-electron hydrogen-atom wavefunctions with a nuclear charge $Z = +2e$,
$\psi (r_1 , r_2) \approx \varphi (r_1) \varphi (r_2) \label {9-13}$
Exercise $2$
Write the explicit mathematical expression for the ground state wavefunction for the helium atom shown in Equation $\ref{9-13}$.
Binding Energy
As we will show below, the energy eigenvalue associated with the product wavefunction is the sum of the one-electron energies associated with the component single-electron hydrogen-atom wavefunctions.
$E_{He} = E_1 + E_2 \label {9-14}$
The energy calculated using the Schrödinger equation is also called the total energy or the binding energy. Binding energy is the energy required to separate the particles of a system (in this case the two electrons and the nucleus) to an infinite distance apart. The binding energy should not be confused with the ionization energy, $IP$, which is the energy required to remove only one electron from the helium atom. Binding energies can be measured experimentally by sequentially ionizing the atom and summing all the ionization energies. Hence, for the lithium atom with three electrons, the magnitude of the binding energy is

$|E_{Li}| = IP_1 + IP_2 + IP_3$
The binding energy (or total energy) should not be confused with the ionization energy, $IP$, which is the energy required to remove a single electron from the atom.
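As an illustration of measuring a binding energy by summing successive ionization energies, approximate literature values for lithium's three ionization energies (assumed here: roughly 5.39, 75.64, and 122.45 eV) give:

```python
# Magnitude of the Li binding energy as the sum of its three successive
# ionization energies (approximate literature values, assumed here).
IP1, IP2, IP3 = 5.39, 75.64, 122.45  # eV

binding_magnitude = IP1 + IP2 + IP3
E_Li = -binding_magnitude  # energy of bound Li relative to Li(3+) + 3 e(-)
print(f"|E(Li)| = {binding_magnitude:.2f} eV, so E(Li) = {E_Li:.1f} eV")
```

Note how the ionization energies grow sharply as electrons are removed: each remaining electron feels a less-shielded nuclear charge.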
Exercise $3$
Why was it unnecessary to differentiate the terms binding energy and ionization energy for the hydrogen atom and other one-electron systems?
To calculate binding energies using the approximate Hamiltonian with the missing electron-electron repulsion term, we use the expectation value integral, Equation $\ref{9-15}$. This is a general approach and we’ve used it in earlier chapters. The notation $\int d\tau$ is used to represent integration over the three-dimensional space in spherical coordinates for electrons 1 and 2.
$\left \langle E \right \rangle = \int \varphi ^*_{1s} (r_1) \varphi ^*_{1s} (r_2) [ H(r_1) + H(r_2) ] \varphi _{1s} (r_1)\varphi _{1s} (r_2) d\tau \label {9-15}$
The wavefunctions in Equation $\ref{9-15}$ are the hydrogen atom functions with a nuclear charge of +2e. The resulting energy for the helium ground state is
$E_{approx} = 2Z^2 E_H = - 108.8\, eV \label {9-16}$
where $Z = +2$ and $E_H$ is the binding energy of the hydrogen atom (-13.6 eV). The calculated result for the binding energy can be compared to the experimental value of -79.0 eV. The difference is due to the electron-electron interaction. The experimental and calculated binding and ionization energies are listed in Table $1$.
Table $1$: Spectroscopic and Calculated Energies for Helium

                                                           Experimental    Crude Approximation
$E$ (energy to remove all electrons from nucleus)          -79.0 eV        -108.8 eV
${IP}$ (energy to remove weakest electron from nucleus)    24.6 eV         54.4 eV
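The crude-approximation column follows directly from the independent-electron result $E = 2Z^2E_H$, since He$^+$ is exactly hydrogen-like; a quick numeric check:

```python
# Zero-order (independent-electron) energies for helium.
E_H = -13.6  # hydrogen-atom ground-state energy, eV
Z = 2

E_total = 2 * Z**2 * E_H   # both electrons (Equation 9-16): -108.8 eV
E_He_plus = Z**2 * E_H     # He+ is exactly hydrogen-like: -54.4 eV
IP = E_He_plus - E_total   # energy to remove the first electron: 54.4 eV
print(E_total, IP)
```

In this approximation the ionization energy is exactly half the binding energy, because the two non-interacting electrons contribute equally; the experimental values show this symmetry is broken by electron-electron repulsion.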
Exercise $4$
Start with Equation $\ref{9-15}$ and show that $E$ in fact equals -108.8 eV. Rather than evaluating integrals, derive that
$E = 2 Z^2 E_H \nonumber$
and substitute the value for $E_H$.
The deviation of the calculated binding energy from the experimental value can be recognized as being good or bad depending on your point of view. It is bad because a 38% error is nothing to “brag about”; on the other hand, the comparison is good because the calculated value is close to the experimental value. Both the experiment and the calculation give an answer of about -100 eV for the binding energy of helium. This comparison tells you that although the electron repulsion term is important, the idea that the electrons are independent is reasonable. An independent-electron picture is reasonable because you can completely neglect the electron-electron interaction and you get a reasonable value for the binding energy, although it is not particularly accurate.
This observation is important because we can now feel justified in using the idea of independent electrons as a starting point for improved approximate solutions to the Schrödinger equation for multi-electron atoms and molecules. To find better approximate solutions for multi-electron systems, we start with wavefunctions that depend only on the coordinates of a single electron, and then take into account the electron-electron repulsion to improve the accuracy.
Getting highly accurate energies and computed properties for many-electron systems is not an impossible task. In subsequent sections of this chapter we approximate the helium atom using several additional widely applicable approaches, perturbation theory, the variational method, self-consistent field theory and the Hartree-Fock approach (SCF-HF), and configuration interaction (CI). These basic computational chemistry tools are used to treat other multi-electron systems, both atomic and molecular, for applications ranging from interpretation of spectroscopy to predictions of chemical reactivity.
Perturbation theory is a method for continuously improving a previously obtained approximate solution to a problem, and it is an important and general method for finding approximate solutions to the Schrödinger equation. We discussed a simple application of the perturbation technique previously with the Zeeman effect.
We use perturbation theory to approach the analytically unsolvable helium atom Schrödinger equation by focusing on the Coulomb repulsion term that makes it different from the simplified Schrödinger equation that we have just solved analytically. The electron-electron repulsion term is conceptualized as a correction, or perturbation, to the Hamiltonian that can be solved exactly, which is called a zero-order Hamiltonian. The perturbation term corrects the previous Hamiltonian to make it fit the new problem. In this way the Hamiltonian is built as a sum of terms, and each term is given a name. For example, we call the simplified or starting Hamiltonian, $\hat {H} ^0$, the zero order term, and the correction term $\hat {H} ^1$, the first order term. In the general expression below, there can be an infinite number of correction terms of increasingly higher order,
$\hat {H} = \hat {H} ^0 + \hat {H} ^1 + \hat {H} ^2 + \cdots \label {9-17}$
but usually it is not necessary to have more terms than $\hat {H} ^0$ and $\hat {H} ^1$. For the helium atom,
$\hat {H} ^0 = -\frac {\hbar ^2}{2m} \nabla ^2_1 - \frac {2e^2}{4 \pi \epsilon _0 r_1} - \frac {\hbar ^2}{2m} \nabla ^2_2 - \frac {2e^2}{4 \pi \epsilon _0 r_2} \label {9-18}$
$\hat {H} ^1 = \frac {e^2}{4 \pi \epsilon _0 r_{12}} \label {9-19}$
In the general form of perturbation theory, the wavefunctions are also built as a sum of terms, with the zero-order terms denoting the exact solutions to the zero-order Hamiltonian and the higher-order terms being the corrections.
$\psi = \psi^0 + \psi ^1 + \psi ^2 + \cdots \label {9-20}$
Similarly, the energy is written as a sum of terms of increasing order.
$E = E^0 + E^1 + E^2 + \cdots \label {9-21}$
To solve a problem using perturbation theory, you start by solving the zero-order equation. This provides an approximate solution consisting of $E^0$ and $\psi ^0$. The zero-order perturbation equation for the helium atom is
$\hat {H}^0 \psi ^0 = E^0 \psi ^0 \label {9-22}$
We already solved this equation for the helium atom and found that $E^0 = -108.8\, eV$ by using the product of two hydrogen atom wavefunctions for $\psi ^0$ and omitting the electron-electron interaction from $\hat {H} ^0$.
The next step is to improve upon the zero-order solution by including $\hat {H}^1 , \hat {H} ^2$, etc. and finding $\psi ^1$ and $E^1$, $\psi ^2$ and $E^2$, etc. The solution is improved through the stepwise addition of other functions to the previously found result. These functions are found by solving a series of Schrödinger-like equations, the higher-order perturbation equations.
The first-order perturbation equation includes all the terms in the Schrödinger equation $\hat {H} \psi = E \psi$ that represent the first order approximations to $\hat {H} , \psi$ and E. This equation can be obtained by truncating $\hat {H} , \psi$ and E after the first order terms.
$( \hat {H} ^0 + \hat {H}^1 ) (\psi ^0 + \psi ^1 ) = (E^0 + E^1) (\psi ^0 + \psi ^1 ) \label {9-23}$
Now clear the parentheses to get
$\hat {H} ^0 \psi ^0 + \hat {H} ^0 \psi ^1 + \hat {H} ^1 \psi ^0 + \hat {H} ^1 \psi ^1 = E^0 \psi ^0 + E^0 \psi ^1 + E^1 \psi ^0 + E^1 \psi ^1 \label {9-24}$
The order of the perturbation equation matches the sum of the superscripts for a given term in the equation above. To form the first-order perturbation equation, we can drop the $\hat {H} ^0 \psi ^0$ and $E^0 \psi ^0$ terms because they are zero-order terms and because they cancel each other out, as shown by Equation $\ref{9-22}$. We can also drop the $\hat {H} ^1 \psi ^1$ and $E^1 \psi ^1$ terms because each is a product of two first-order corrections and is therefore second order. The first-order perturbation equation thus is
$\hat {H} ^0 \psi ^1 + \hat {H} ^1 \psi ^0 = E^0 \psi ^1 + E^1 \psi ^0 \label {9-25}$
To find the first order correction to the energy take the first-order perturbation equation, multiply from the left by $\psi ^{0*}$ and integrate over all the coordinates of the problem at hand.
$\int \psi ^{0*} \hat {H} ^0 \psi ^1 d\tau + \int \psi ^{0*} \hat {H} ^1 \psi ^0 d\tau = E^0 \int \psi ^{0*} \psi ^1 d\tau + E^1\int \psi ^{0*} \psi ^0 d\tau \label {9-26}$
The integral in the last term on the right hand side of Equation $\ref{9-26}$ is equal to one because the wavefunctions are normalized. Because $\hat {H} ^0$ is Hermitian, the first integral in Equation $\ref{9-26}$ can be rewritten to make use of Equation $\ref{9-22}$,
$\int \psi ^{0*} \hat {H} ^0 \psi ^1 d\tau = \int (\hat {H} ^0 \psi ^0)^* \psi ^1 d\tau = E^0 \int \psi ^{0*} \psi ^1 d\tau \label {9-27}$
which is the same as and therefore cancels the first integral on the right-hand side. Thus we are left with an expression for the first-order correction to the energy
$E^1 = \int \psi ^{0*} \hat {H} ^1 \psi ^0 d\tau \label {9-28}$
Since the derivation above was completely general, Equation $\ref{9-28}$ is a general expression for the first-order perturbation energy, which provides an improvement or correction to the zero-order energy we already obtained. The integral on the right is in fact an expectation value integral in which the zero-order wavefunctions are operated on by $\hat {H} ^1$, the first-order perturbation term in the Hamiltonian, to calculate the expectation value for the first-order energy. This derivation justifies, for example, the method we used for the Zeeman effect to approximate the energies of the hydrogen atom orbitals in a magnetic field. Recall that we calculated the expectation value for the interaction energy (the first-order correction to the energy) using the exact hydrogen atom wavefunctions (the zero-order wavefunctions) and a Hamiltonian operator representing the magnetic field perturbation (the first-order Hamiltonian term.)
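Because Equation $\ref{9-28}$ is general, it can be exercised on any zero-order problem we can solve. As a hypothetical illustration (not from the text), take the ground state of a particle in a box of length $L$ with a small linear perturbation $\hat{H}^1 = cx$; the analytic first-order energy is $cL/2$:

```python
# First-order perturbation energy (Equation 9-28) for a particle in a
# 1-D box with an illustrative linear perturbation H1 = c*x.
import math

L, c = 1.0, 0.1  # box length and perturbation strength (arbitrary units)

def psi0(x):
    # normalized zero-order ground-state wavefunction of the box
    return math.sqrt(2.0 / L) * math.sin(math.pi * x / L)

# E1 = integral of psi0 * (c x) * psi0 dx, via the trapezoidal rule
n = 100_000
h = L / n
E1 = sum(psi0(i * h)**2 * c * (i * h) * (1.0 if 0 < i < n else 0.5) * h
         for i in range(n + 1))
print(E1)  # analytic value c*L/2 = 0.05
```

This is exactly the recipe described above: the zero-order wavefunction is left untouched, and the perturbation operator is simply averaged over it.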
Exercise $7$
Without using mathematical expressions, explain how you would solve Equation $\ref{9-28}$ for the first-order energy.
For the helium atom, the integral in Equation $\ref{9-28}$ is
$E^1 = \int \int \varphi _{1s}^* (r_1) \varphi _{1s}^* (r_2) \dfrac {e^2}{4 \pi \epsilon _0 r_{12}} \varphi _{1s} (r_1) \varphi _{1s} (r_2) d\tau _1 d\tau _2 \label {9-29}$
where the double integration symbol represents integration over all the spherical polar coordinates of both electrons $r_1, \theta _1, \varphi _1 , r_2 , \theta _2 , \varphi _2$. The evaluation of these six integrals is lengthy. When the integrals are done, the result is $E^1$ = +34.0 eV so that the total energy calculated using our second approximation method, first-order perturbation theory, is
$E_{approx2} = E^0 + E^1 = - 74.8\, eV \label {9-30}$
$E^1$ is the average interaction energy of the two electrons calculated using wavefunctions that assume there is no interaction.
The new approximate value for the binding energy represents a substantial (~30%) improvement over the zero-order energy, so the interaction of the two electrons is an important part of the total energy of the helium atom. We can continue with perturbation theory and find the additional corrections, $E^2$, $E^3$, etc. For example, $E^0 + E^1 + E^2 = -79.2\, eV$. So with two corrections to the energy, the calculated result is within 0.3% of the experimental value of -79.00 eV. It takes thirteenth-order perturbation theory (adding $E^1$ through $E^{13}$ to $E^0$) to compute an energy for helium that agrees with experiment to within the experimental uncertainty.
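The six-dimensional integral in Equation $\ref{9-29}$ has the closed-form value $E^1 = \tfrac{5}{8}Z\,E_h$, where $E_h \approx 27.211$ eV is the hartree; the totals quoted above can be checked numerically:

```python
# Perturbation-theory totals for helium using the closed-form
# first-order result E1 = (5/8) Z E_h for the integral in Equation 9-29.
E_h = 27.211   # hartree, in eV
E_H = -13.6    # hydrogen ground-state energy, eV
Z = 2

E0 = 2 * Z**2 * E_H          # zero-order energy: -108.8 eV
E1 = (5.0 / 8.0) * Z * E_h   # first-order correction: +34.0 eV
print(E0 + E1)               # about -74.8 eV (Equation 9-30)
```

The positive sign of $E^1$ makes physical sense: electron-electron repulsion raises the energy relative to the non-interacting picture.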
Interestingly, while we have improved the calculated energy so that it is much closer to the experimental value, we learn nothing new about the helium atom wavefunction by applying the first-order perturbation theory because we are left with the original zero-order wavefunctions. In the next section we will employ an approximation that modifies zero-order wavefunctions in order to address one of the ways that electrons are expected to interact with each other.
In this section we introduce the powerful and versatile variational method and use it to improve the approximate solutions we found for the helium atom using the independent electron approximation. One way to take electron-electron repulsion into account is to modify the form of the wavefunction. A logical modification is to change the nuclear charge, Z, in the wavefunctions to an effective nuclear charge, from +2 to a smaller value, $\zeta$ (called zeta) or $Z_{eff}$. The rationale for making this modification is that one electron partially shields the nuclear charge from the other electron, as shown in Figure $1$.
A region of negative charge density between one of the electrons and the +2 nucleus makes the potential energy between them more positive (decreases the attraction between them). We can effect this change mathematically by using $\zeta < 2$ in the wavefunction expression. If the shielding were complete, then $\zeta$ would equal 1. If there is no shielding, then $\zeta = 2$. So a way to take into account the electron-electron interaction is by saying it produces a shielding effect. The shielding isn't zero, and it isn't complete, so the effective nuclear charge is between one and two.
In general, a theory should be able to make predictions in advance of knowledge of the experimental result. Consequently, a principle and method for choosing the best value for $\zeta$ or any other adjustable parameter that is to be optimized in a calculation is needed. The Variational Principle provides the required criterion and method. The Variational Principle says that the best value for any variable parameter in an approximate wavefunction is the value that gives the lowest energy for the ground state; i.e., the value that minimizes the energy. The variational method is the procedure that is used to find the lowest energy and the best values for the variable parameters.
The variational principle means that the expectation value for the binding energy obtained using an approximate wavefunction and the exact Hamiltonian operator will be higher than or equal to the true energy for the system. This idea is really powerful. When implemented, it permits us to find the best approximate wavefunction from a given wavefunction that contains one or more adjustable parameters, called a trial wavefunction. A mathematical statement of the variational principle is
$\left \langle E_{trial} \right \rangle \ge E_{true} \label {9-31}$
where
$\left \langle E_{trial} \right \rangle = \dfrac {\int \psi _{trial} ^* \hat {H} \psi _{trial} d \tau}{\int \psi _{trial} ^* \psi _{trial} d\tau } \label {9-32}$
Often the expectation value and normalization integrals in Equation $\ref{9-32}$ can be evaluated analytically. For the case of He described above, the trial wavefunction is the product wavefunction given by Equation \ref{9-13}:
$\psi (r_1 , r_2) \approx \varphi (r_1) \varphi (r_2) \label {9-13}$
the adjustable or variable parameter in the trial wavefunction is the effective nuclear charge $\zeta$, and the Hamiltonian is the complete form, with the true nuclear charge +2, given below.

$\hat {H} = -\dfrac {\hbar ^2}{2m} \nabla^2_1 - \dfrac {2e^2}{4 \pi \epsilon _0 r_1} - \dfrac {\hbar ^2}{2m} \nabla ^2_2 - \dfrac {2e^2}{4 \pi \epsilon _0 r_2} + \dfrac {e^2}{4 \pi \epsilon _0 r_{12}} \label {9-9}$
When the expectation value for the trial energy is calculated for helium, the result is a function that depends on the adjustable parameter, $\zeta$.
$E_{trial} (\zeta) = \dfrac {\mu e^4}{4 \epsilon ^2_0 h^2} \left ( \zeta ^2 - \dfrac {27}{8} \zeta \right ) \label {9-33}$
This function is shown in Figure $2$. According to the variation principle, the minimum value of the energy on this graph is the best approximation of the true energy of the system, and the associated value of $\zeta$ is the best value for the adjustable parameter.
According to the variation principle, the minimum value of the variational energy (Equation $\ref{9-32}$) of a trial wavefunction is the best approximation of the true energy of the system.
Using the mathematical function for the energy of a system, the minimum energy with respect to the adjustable parameter can be found by taking the derivative of the energy with respect to that parameter, setting the resulting expression equal to zero, and solving for the parameter, in this case $\zeta$. This is a standard method in calculus for finding maxima and minima.
Exercise $2$
Find the value for $\zeta$ that minimizes the helium binding energy and compare the binding energy to the experimental value. What is the percent error in the calculated value?
When this procedure is carried out for He, we find $\zeta = 1.6875$ and the approximate energy calculated using this third approximation method is $E_{approx} = -77.483\; eV$. Table $1$ shows that a substantial improvement in the accuracy of the computed binding energy is obtained by using shielding to account for the electron-electron interaction. Including the effect of electron shielding in the wavefunction reduces the error in the binding energy to about 2%. This idea is very simple, elegant, and significant.
Table $1$: Comparison of the results of three approximation methods to experiment.

Method                                   He binding energy (eV)
Neglect repulsion between electrons      -108.8
First-order Perturbation                 -74.8
Variation                                -77.483
Experimental                             -79.0
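The minimization that produced the Variation row can be reproduced with a simple grid search over Equation $\ref{9-33}$ (taking the collection of physical constants to equal one hartree, $E_h \approx 27.211$ eV); a sketch:

```python
# Grid-search minimization of the helium variational energy
# E(zeta) = (zeta^2 - (27/8) zeta) * E_h  (Equation 9-33).
E_h = 27.211  # hartree, in eV

def E_trial(zeta):
    return (zeta**2 - 27.0 * zeta / 8.0) * E_h

zetas = [1.0 + 0.0001 * i for i in range(10_001)]  # scan 1.0 ... 2.0
zeta_best = min(zetas, key=E_trial)
print(zeta_best, E_trial(zeta_best))  # about 1.6875 and -77.5 eV
```

Taking the derivative analytically instead, $2\zeta - 27/8 = 0$ gives $\zeta = 27/16 = 1.6875$ exactly, matching the grid search.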
The improvement we have seen in the total energy calculations using a variable parameter $\zeta$ indicates that an important contribution of electron-electron interaction or repulsion to the total binding energy arises from the fact that each electron shields the nuclear charge from the other electron. It is reasonable to assume the electrons are independent; i.e., that they move independently, but the shielding must be taken into account in order to fine-tune the wavefunctions. The inclusion of optimizable parameters in the wavefunction allows us to develop a clear physical image of the consequences of our variation calculation. Calculating energies correctly is important, and it is also important to be able to visualize electron densities for multi-electron systems. In the next two sections, we take a temporary break from our consideration of approximation methods in order to examine multi-electron wavefunctions more closely.
Finding the most useful single-electron wavefunctions to serve as building blocks for a multi-electron wavefunction is one of the main challenges in finding approximate solutions to the multi-electron Schrödinger Equation. The functions must be different for different atoms because the nuclear charge and number of electrons are different. The attraction of an electron for the nucleus depends on the nuclear charge, and the electron-electron interaction depends upon the number of electrons.
As we saw in our initial approximation methods, the most straightforward place to start in finding reasonable single-electron wavefunctions for multi-electron atoms is with the atomic orbitals produced in the quantum treatment of hydrogen, the so-called “hydrogenic” spin-orbitals. These traditional atomic orbitals, with a few modifications, give quite reasonable calculated results and are still in wide use for conceptually understanding multi-electron atoms. In this section and in Chapter 10 we will explore some of the many other single-electron functions that also can be used as atomic orbitals.
Hydrogenic spin-orbitals used as components of multi-electron systems are identified in the same way as they are for the hydrogen atom. Each spin-orbital consists of a spatial wavefunction, specified by the quantum numbers (n, $l , m_l$) and denoted 1s, 2s, 2p, 3s, 3p, 3d, etc., multiplied by a spin function, specified by the quantum number $m_s$ and denoted $\alpha$ or $\beta$. In our initial approximation methods, we ignored the spin components of the hydrogenic orbitals, but they must be considered in order to develop a complete description of multi-electron systems. The subscript on the argument of the spatial function reveals which electron is being described ($r_1$ is a vector that refers to the coordinates of electron 1, for example.) No argument is given for the spin function. An example of a spin-orbital for electron 2 in a $3p_z$ orbital is:
$| \varphi _{3p_z} \alpha (r_2) \rangle = \varphi _{3,1,0}(r_2) \alpha \label {9.5.1}$
In the alternative shorthand notation for this spin-orbital shown below, the coordinates for electron 2 in the spatial function are abbreviated simply by the number “2,” and the spatial function is represented by “$3p_z$” rather than "$\varphi _{3,1,0}$". The argument “2” given for the spin function refers to the unknown spin variable for electron 2. Many slight variations on these shorthand forms are in use in this and other texts, so flexibility and careful reading are important.
$| \varphi _{3p_z}\alpha (2) \rangle = 3p_z (2) \alpha (2) \label {9.5.2}$
In this chapter we will continue the trend of moving away from writing specific mathematical functions and toward a more symbolic, condensed representation. Your understanding of the material in this and future chapters requires that you keep in mind the form and properties of the specific functions denoted by the symbols used in each equation.
Exercise $1$
Write the full mathematical form of $\varphi _{3p_z}\alpha$ using as much explicit functional detail as possible.
The basic mathematical functions and thus the general shapes and angular momenta for hydrogenic orbitals are the same as those for hydrogen orbitals. The differences between atomic orbitals for the hydrogen atom and those used as components in the wavefunctions for multi-electron systems lie in the radial parts of the wavefunctions and in the energies. Specifically, the differences arise from the replacement of the nuclear charge Z in the radial parts of the wavefunctions by an adjustable parameter $\zeta$ that is allowed to vary in approximation calculations in order to model the interactions between the electrons. We discussed such a procedure for helium in the earlier section on the variational method. The result is that electrons in orbitals with different values for the angular momentum quantum number, $l$, have different energies. Figure $1$ shows the results of a quantum mechanical calculation on argon in which the degeneracy of the 2s and 2p orbitals is found to be removed, as is the degeneracy of the 3s, 3p, and 3d orbitals.
The energy of each electron now depends not only on its principal quantum number, $n$, but also on its angular momentum quantum number, $l$.
The presence of $\zeta$ in the radial portions of the wavefunctions also means that the electron probability distributions associated with hydrogenic atomic orbitals in multi-electron systems are different from the exact atomic orbitals for hydrogen. Figure $2$ compares the radial distribution functions for an electron in a 1s orbital of hydrogen (the ground state), a 2s orbital in hydrogen (an excited configuration of hydrogen) and a 1s orbital in helium that is described by the best variational value of $\zeta$. Our use of hydrogen-like orbitals in quantum mechanical calculations for multi-electron atoms helps us to interpret our results for multi-electron atoms in terms of the properties of a system we can solve exactly.
Exercise $2$
Analyze Figure $2$ and write a paragraph about what you can discern about the relative sizes of ground state hydrogen, excited state hydrogen and ground state helium atoms.
While they provide useful stepping-off points for understanding computational results, nothing requires us to use the hydrogenic functions as the building blocks for multi-electron wavefunctions. In practice, the radial part of the hydrogenic atomic orbital presents a computational difficulty because the radial function has nodes, positive and negative lobes, and steep variations that make accurate evaluation of integrals by a computer slow. Consequently other types of functions are generally used in building multi-electron functions. These usually are related to the hydrogenic orbitals to aid in the analysis of molecular electronic structure. For example, Slater-type atomic orbitals (STO’s), designated below as $S_{nlm} (r, \theta , \varphi )$, avoid the difficulties imposed by the hydrogenic functions. The STO’s, named after their creator, John Slater, were the first alternative functions that were used extensively in computations. STO’s do not have any radial nodes, but still contain a variational parameter $\zeta$ (zeta), that corresponds to the effective nuclear charge in the hydrogenic orbitals. In Equation $\ref{9-36}$ and elsewhere in this chapter, the distance, $r$, is measured in units of the Bohr radius, $a_0$.
$S_{nlm} (r, \theta , \varphi ) = \dfrac {(2 \zeta )^{n+1/2}}{[(2n)!]^{1/2}} r^{n-1} e^{-\zeta r } Y^m_l (\theta , \varphi ) \label {9-36}$
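The normalization prefactor in Equation $\ref{9-36}$ can be checked numerically. The sketch below evaluates the radial part of an STO on a grid and integrates $R_n(r)^2 r^2$ by the trapezoid rule; the function names and grid parameters are arbitrary choices for this illustration, not part of the text.

```python
import math

def sto_radial(n, zeta, r):
    """Radial part of a Slater-type orbital (r in units of the Bohr radius):
    R_n(r) = [(2 zeta)^(n + 1/2) / sqrt((2n)!)] * r^(n-1) * exp(-zeta * r)."""
    norm = (2.0 * zeta) ** (n + 0.5) / math.sqrt(math.factorial(2 * n))
    return norm * r ** (n - 1) * math.exp(-zeta * r)

def radial_norm_integral(n, zeta, r_max=40.0, steps=40000):
    """Trapezoid-rule estimate of the radial normalization integral
    of R_n(r)^2 r^2 from 0 to r_max, which should equal 1; the spherical
    harmonic Y_l^m carries the angular normalization separately."""
    h = r_max / steps
    total = 0.0
    for i in range(steps + 1):
        r = i * h
        f = sto_radial(n, zeta, r) ** 2 * r * r
        total += 0.5 * f if i in (0, steps) else f
    return total * h
```

Running `radial_norm_integral(1, 1.24)` or `radial_norm_integral(2, 0.80)` returns a value very close to 1, confirming that the prefactor normalizes STOs of any $n$.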
Exercise $3$
1. Write the radial parts of the 1s, 2s, and 2p atomic orbitals for hydrogen.
2. Write the radial parts of the n = 1 and n = 2 Slater–type orbitals (STO).
3. Check that the above five functions are normalized.
4. Graph these five functions, measuring r in units of the Bohr radius.
5. Graph the radial probability densities for these orbitals. Put the hydrogen orbital and the corresponding STO on the same graph so they can be compared easily.
6. Adjust the zeta parameter $\zeta$ in each case to give the best match of the radial probability density for the STO with that of the corresponding hydrogen orbital.
7. Comment on the similarities and differences between the hydrogen orbitals and the STOs and the corresponding radial probability densities.
Linear Variational Method
An alternative approach to the general problem of introducing variational parameters into wavefunctions is the construction of a single-electron wavefunction as a linear combination of other functions. For hydrogen, the radial function decays, or decreases in amplitude, exponentially as the distance from the nucleus increases. For helium and other multi-electron atoms, the radial dependence of the total probability density does not fall off as a simple exponential with increasing distance from the nucleus as it does for hydrogen. More complex single-electron functions therefore are needed in order to model the effects of electron-electron interactions on the total radial distribution function. One way to obtain more appropriate single-electron functions is to use a sum of exponential functions in place of the hydrogenic spin-orbitals.
An example of such a wavefunction created from a sum or linear combination of exponential functions is written as
$\varphi _{1s} (r_1) = \sum _j c_j e^{-\zeta _j r_1 /a_0} \label{9-37}$
The linear combination permits weighting of the different exponentials through the adjustable coefficients ($c_j$) for each term in the sum. Each exponential term has a different rate of decay through the zeta-parameter $\zeta _j$. The exponential functions in Equation $\ref{9-37}$ are called basis functions. Basis functions are the functions used in linear combinations to produce the single-electron orbitals that in turn combine to create the product multi-electron wavefunctions. Originally the most popular basis functions used were the STO’s, but today STO’s are not used in most quantum chemistry calculations. However, they are often the functions to which more computationally efficient basis functions are fitted.
Physically, the $\zeta _j$ parameters account for the effective nuclear charge (often denoted $Z_{eff}$). The use of several zeta values in the linear combination essentially allows the effective nuclear charge to vary with the distance of an electron from the nucleus. This variation makes sense physically. When an electron is close to the nucleus, the effective nuclear charge should be close to the actual nuclear charge. When the electron is far from the nucleus, the effective nuclear charge should be much smaller. See Slater's rules for a rule-of-thumb approach to evaluate $Z_{eff}$ values.
A term in Equation $\ref{9-37}$ with a small $\zeta$ will decay slowly with distance from the nucleus. A term with a large $\zeta$ will decay rapidly with distance and not contribute at large distances. The need for such a linear combination of exponentials is a consequence of the electron-electron repulsion and its effect of screening the nucleus for each electron due to the presence of the other electrons.
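This division of labor between tight and diffuse terms can be seen directly by evaluating an equation of the form of Equation $\ref{9-37}$. The sketch below uses three equally weighted exponentials with the same zeta values as Exercise $4$; the equal weights and the helper names are illustrative assumptions, and the combination is left unnormalized.

```python
import math

# Illustrative equal-weight combination with the zeta values of Exercise 4;
# r is measured in Bohr radii.
zetas = [1.0, 2.0, 5.0]
coeffs = [1.0, 1.0, 1.0]

def phi(r):
    """Linear combination of exponential basis functions, Eq. (9-37) form."""
    return sum(c * math.exp(-z * r) for c, z in zip(coeffs, zetas))

def term_fractions(r):
    """Fraction of phi(r) contributed by each basis function at radius r."""
    terms = [c * math.exp(-z * r) for c, z in zip(coeffs, zetas)]
    total = sum(terms)
    return [t / total for t in terms]
```

Near $r = 0$ all three terms contribute equally, while by $r = 3$ the $\zeta = 1.0$ term supplies over 90% of the amplitude: the large-zeta terms shape the function near the nucleus and the small-zeta term controls the long-range tail.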
Exercise $4$
Make plots of $\varphi$ in Equation $\ref{9-37}$ using three equally weighted terms with $\zeta$ = 1.0, 2.0, and 5.0. Also plot each term separately.
Computational procedures in which an exponential parameter like $\zeta$ is varied are more precisely called the Nonlinear Variational Method because the variational parameter is part of the wavefunction and the change in the function and energy caused by a change in the parameter is not linear. The optimum values for the zeta parameters in any particular calculation are determined by doing a variational calculation for each orbital to minimize the ground-state energy. When this calculation involves a nonlinear variational calculation for the zetas, it requires a large amount of computer time. The use of the variational method to find values for the coefficients, $\{c_j\}$, in the linear combination given by Equation $\ref{9-37}$ above is called the Linear Variational Method because the single-electron function whose energy is to be minimized (in this case $\varphi _{1s}$) depends linearly on the coefficients. Although the idea is the same, it usually is much easier to implement the linear variational method in practice.
Nonlinear variational calculations are extremely costly in terms of computer time because each time a zeta parameter is changed, all of the integrals need to be recalculated. In the linear variation, where only the coefficients in a linear combination are varied, the basis functions and the integrals do not change. Consequently, an optimum set of zeta parameters was chosen from variational calculations on many small multi-electron systems, and these values, which are given in Table $1$, generally can be used in the STOs for other and larger systems.
Table $1$: Orbital Exponents for Slater Orbitals
Atom $\zeta _{1s}$ $\zeta _{2s,2p}$
H 1.24
He 1.69
Li 2.69 0.80
Be 3.68 1.15
B 4.68 1.50
C 5.67 1.72
N 6.67 1.95
O 7.66 2.25
F 8.65 2.55
Ne 9.64 2.88
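The He entry in Table $1$ can be rationalized with a one-parameter variational calculation. For helium described by a single 1s-type trial function with exponent $\zeta$, the expectation value of the energy has the well-known closed form $E(\zeta) = \zeta^2 - \frac{27}{8}\zeta$ in hartrees; that formula is a standard textbook result assumed here rather than derived in this section. Minimizing it numerically recovers $\zeta = 27/16 \approx 1.69$.

```python
# Minimize E(zeta) = zeta^2 - (27/8)*zeta, the standard variational energy
# (in hartrees) for helium with a single 1s-type trial orbital.  A dense
# scan avoids any optimizer dependency; the grid range is arbitrary.
def helium_energy(zeta):
    return zeta * zeta - (27.0 / 8.0) * zeta

zeta_grid = [1.0 + 0.0001 * k for k in range(20001)]   # 1.0 ... 3.0
zeta_opt = min(zeta_grid, key=helium_energy)
E_min = helium_energy(zeta_opt)
# zeta_opt ~ 1.6875 = 27/16, in line with the He value of 1.69 in Table 1;
# E_min ~ -2.848 hartree.
```

The scan lands on $\zeta = 1.6875$, matching the tabulated 1.69 to the precision quoted in the table.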
Exercise $5$
Compare the value $\zeta _{1s}$ = 1.24 in Table $1$ for hydrogen with the value you obtained in Exercise $3$, and comment on possible reasons for any difference. Why are the zeta values larger for 1s than for 2s and 2p orbitals? Why do the $\zeta _{1s}$ values increase by essentially one unit for each element from He to Ne while the increase for the $\zeta _{2s, 2p}$ values is much smaller?
The discussion above gives us some new ideas about how to write flexible, useful single-electron wavefunctions that can be used to construct multi-electron wavefunctions for variational calculations. Single-electron functions built from the basis function approach are flexible because they have several adjustable parameters, and useful because the adjustable parameters still have clear physical interpretations. Such functions will be needed in the Hartree-Fock method discussed elsewhere. | textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Book%3A_Quantum_States_of_Atoms_and_Molecules_(Zielinksi_et_al)/09%3A_The_Electronic_States_of_the_Multielectron_Atoms/9.05%3A_Single-electron_Wavefunctions_and_Basis_Functi.txt |
To discuss the electronic states of atoms we need a system of notation for multi-electron wavefunctions. As we saw in Chapter 8, the assignment of electrons to orbitals is called the electron configuration of the atom. One creates an electronic configuration representing the electronic structure of a multi-electron atom or ion in its ground or lowest-energy state as follows. First, obey the Pauli Exclusion Principle, which requires that each electron in an atom or molecule must be described by a different spin-orbital. Second, assign the electrons to the lowest energy spin-orbitals, then to those at higher energy. This procedure is called the Aufbau Principle (which translates from German as build-up principle). The mathematical analog of this process is the construction of the approximate multi-electron wavefunction as a product of the single-electron atomic orbitals.
For example, the configuration of the boron atom, shown schematically in the energy level diagram in Figure $1$, is written in shorthand form as 1s22s22p1. As we saw previously, the degeneracy of the 2s and 2p orbitals is broken by the electron-electron interactions in multi-electron systems.
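The build-up procedure described above is mechanical enough to automate. The sketch below fills subshells in the conventional ($n + l$, then $n$) energy ordering (the Madelung rule, assumed here as a stand-in for the computed orbital orderings shown in Figure $1$) and returns the shorthand configuration string; each subshell holds $2(2l + 1)$ electrons.

```python
# Minimal aufbau filling routine using the Madelung (n + l, n) ordering.
L_LETTERS = "spdf"

def configuration(n_electrons, max_n=5):
    """Return the shorthand ground-state configuration string."""
    subshells = sorted(
        ((n, l) for n in range(1, max_n + 1) for l in range(n)),
        key=lambda nl: (nl[0] + nl[1], nl[0]),   # Madelung rule
    )
    parts, remaining = [], n_electrons
    for n, l in subshells:
        if remaining == 0:
            break
        fill = min(remaining, 2 * (2 * l + 1))   # subshell capacity 2(2l+1)
        parts.append(f"{n}{L_LETTERS[l]}{fill}")
        remaining -= fill
    return "".join(parts)

# configuration(5) reproduces the boron example in the text: "1s22s22p1"
```

Calling `configuration(5)` gives "1s22s22p1", matching the boron shorthand above.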
Rather than showing the individual spin-orbitals in the diagram or in the shorthand notation, we commonly say that up to two electrons can be described by each spatial orbital, one with spin function $\alpha$ (electron denoted by an arrow pointing up) and the other with spin function $\beta$ (arrow pointing down). This restriction is a manifestation of the Pauli Exclusion Principle mentioned above. An equivalent statement of the Pauli Exclusion Principle is that each electron in an atom has a unique set of quantum numbers (n,$l , m_l , m_s$). Since the two spin functions are degenerate in the absence of a magnetic field, the energy of the two electrons with different spin functions in a given spatial orbital is the same, and they are shown on the same line in the energy diagram.
Exercise $1$
Write the electronic configuration of the carbon atom and draw the corresponding energy level diagram.
Exercise $2$
Write the values for the quantum numbers (n, $l , m_l , m_s$) for each of the six electrons in carbon.
We can deepen our understanding of the quantum mechanical description of multi-electron atoms by examining the concepts of electron indistinguishability and the Pauli Exclusion Principle in detail. We will use the following statement as a guide to keep our explorations focused on the development of a clear picture of the multi-electron atom: “When a multi-electron wavefunction is built as a product of single-electron wavefunctions, the corresponding concept is that exactly one electron’s worth of charge density is described by each atomic spin-orbital.”
A subtle, but important part of the conceptual picture is that the electrons in a multi-electron system are not distinguishable from one another by any experimental means. Since the electrons are indistinguishable, the probability density we calculate by squaring the modulus of our multi-electron wavefunction also cannot change when the electrons are interchanged (permuted) between different orbitals. In general, if we interchange two identical particles, the world does not change. As we will see below, this requirement leads to the idea that the world can be divided into two types of particles based on their behavior with respect to permutation or interchange.
For the probability density to remain unchanged when two particles are permuted, the wavefunction itself can change only by a factor of $e^{i\varphi}$, which represents a complex number, when the particles described by that wavefunction are permuted. As we will show below, the $e^{i\varphi}$ factor is possible because the probability density depends on the absolute square of the function and all expectation values involve $\psi \psi ^*$. Consequently $e^{i\varphi}$ disappears in any calculation that relates to the real world because $e^{i\varphi} e^{-i\varphi} = 1$.
We could symbolically write an approximate two-particle wavefunction as $\psi (r_1, r_2)$. This could be, for example, a two-electron wavefunction for helium. To exchange the two particles, we simply substitute the coordinates of particle 1 ($r_l$) for the coordinates of particle 2 ($r_2$) and vice versa, to get the new wavefunction $\psi (r_2, r_1)$. This new wavefunction must have the property that
$|\psi (r_2, r_1)|^2 = \psi (r_2, r_1)^*\psi (r_2, r_1) = \psi (r_1, r_2)^* \psi (r_1, r_2) \label {9-38}$
since the probability density of the electrons in the atom does not change upon permutation of the electrons.
Exercise $3$
Permute the electrons in Equation $\ref{9-13}$ (the product function for the He wavefunction).
Equation $\ref{9-38}$ will be true only if the wavefunctions before and after permutation are related by a factor of $e^{i\varphi}$,
$\psi (r_2, r_1) = e^{i\varphi} \psi (r_1, r_2) \label {9-39}$
so that
$\left ( e^{-i\varphi} \psi (r_1, r_2) ^*\right ) \left ( e^{i\varphi} \psi (r_1, r_2) \right ) = \psi (r_1 , r_2 ) ^* \psi (r_1 , r_2) \label {9-40}$
If we exchange or permute two identical particles twice, we are (by definition) back to the original situation. If each permutation changes the wavefunction by $e^{i \varphi}$, the double permutation must change the wavefunction by $e^{i\varphi} e^{i\varphi}$. Since we then are back to the original state, the effect of the double permutation must equal 1; i.e.,
$e^{i\varphi} e^{i\varphi} = e^{i 2\varphi} = 1 \label {9-41}$
which is true only if $\varphi = 0$ or an integer multiple of π. The requirement that a double permutation reproduce the original situation limits the acceptable values for $e^{i\varphi}$ to either +1 (when $\varphi = 0$) or -1 (when $\varphi = \pi$). Both possibilities are found in nature.
Exercise $4$
Use Euler’s formula to show that $e^{i2\varphi} = 1$ when $\varphi = 0$ or $n \pi$ and consequently that $e^{i \varphi} = \pm 1$.
Wavefunctions for which $e^{i \varphi} = +1$ are defined as symmetric with respect to permutation, because the wavefunction is identical before and after a single permutation. Wavefunctions that are symmetric with respect to interchange of the particles obey the following mathematical relationship:
$\psi (r_2 , r_1) = e^{i\varphi} \psi (r_1, r_2) = + \psi (r_1, r_2) \label {9-42}$
The behavior of some particles requires that the wavefunction be symmetric with respect to permutation. These particles, called bosons, have integer spin; examples include deuterium nuclei, photons, and gluons.
The behavior of other particles requires that the wavefunction be antisymmetric with respect to permutation $(e^{i\varphi} = -1)$. A wavefunction that is antisymmetric with respect to electron interchange is one whose output changes sign when the electron coordinates are interchanged, as shown below:
$\psi (r_2 , r_1) = e^{i\varphi} \psi (r_1, r_2) = - \psi (r_1, r_2) \label {9-43}$
These particles, called fermions, have half-integer spin and include electrons, protons, and neutrinos.
Exercise $5$
Explain without any equations why there are only two kinds of particles in the world: bosons and fermions.
In fact, an elegant statement of the Pauli Exclusion Principle is simply “electrons are fermions.” This statement means that any wavefunction used to describe multiple electrons must be antisymmetric with respect to permutation of the electrons, providing yet another statement of the Pauli Exclusion Principle. The requirement that the wavefunction be antisymmetric applies to all multi-electron functions $\psi (r_1, r_2, \cdots r_i)$, including those written as products of single electron functions $\varphi _1 (r_1) \varphi _2 (r_2) \cdots \varphi _i (r_i)$.
The first statement of the Pauli Exclusion Principle was that two electrons could not be described by the same spin orbital. To see the relationship between this statement and the requirement that the wavefunction be antisymmetric for electrons, try to construct an antisymmetric wavefunction for two electrons that are described by the same spin-orbital. We can try to do so for helium. Write the He approximate two-electron wavefunction as a product of identical 1s spin-orbitals for each electron, $\varphi _{1s\alpha} (r_1)$ and $\varphi _{1s\alpha} (r_2)$:
$\psi (r_1, r_2 ) = \varphi _{1s\alpha} (r_1) \varphi _{1s\alpha} (r_2) \label {9-44}$
To permute the electrons in this two-electron wavefunction, we simply substitute the coordinates of electron 1 ($r_l$) for the coordinates of electron 2 ($r_2$) and vice versa, to get
$\psi (r_2, r_1 ) = \varphi _{1s\alpha} (r_2) \varphi _{1s\alpha} (r_1) \label {9-45}$
This is identical to the original function (Equation $\ref{9-44}$) since the two single-electron component functions commute. The two-electron function has not changed sign, as it must for fermions. We can construct a wavefunction that is antisymmetric with respect to permutation symmetry only if each electron is described by a different function.
Exercise $6$
What is meant by the term permutation symmetry?
Exercise $7$
Explain why the product function $\varphi (r_1) \varphi (r_2)$ could describe two bosons (deuterium nuclei) but can not describe two fermions (e.g. electrons).
Let’s try to construct an antisymmetric function that describes the two electrons in the ground state of helium. Blindly following the first statement of the Pauli Exclusion Principle, that each electron in a multi-electron atom must be described by a different spin-orbital, we try constructing a simple product wavefunction for helium using two different spin-orbitals. Both have the 1s spatial component but one has spin function $\alpha$ and the other has spin function $\beta$ so the product wavefunction matches the form of the ground state electron configuration for He, $1s^2$.
$\psi (r_1, r_2 ) = \varphi _{1s\alpha} (r_1) \varphi _{1s\beta} (r_2) \label {9-46}$
After permutation of the electrons, this becomes
$\psi (r_2, r_1 ) = \varphi _{1s\alpha} (r_2) \varphi _{1s\beta} (r_1) \label {9-47}$
which is different from the starting function (Equation $\ref{9-46}$) since $\varphi _{1s\alpha}$ and $\varphi _{1s\beta}$ are different functions. However, an antisymmetric function must produce the same function multiplied by (–1) after permutation, and that is not the case here. We must try something else.
To avoid getting a totally different function when we permute the electrons, we can make a linear combination of functions. A very simple way of taking a linear combination involves making a new function by simply adding or subtracting functions. The function that is created by subtracting the right-hand side of Equation $\ref{9-47}$ from the right-hand side of Equation $\ref{9-46}$ has the desired antisymmetric behavior.
$\psi (r_1, r_2) = \dfrac {1}{\sqrt {2}} [ \varphi _{1s\alpha}(r_1) \varphi _{1s\beta}(r_2) - \varphi _{1s\alpha}(r_2) \varphi _{1s\beta}(r_1)] \label {9-48}$
The constant on the right-hand side accounts for the fact that the total wavefunction must be normalized.
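A quick numerical spot check makes the permutation behavior of Equation $\ref{9-48}$ concrete. The two toy one-electron functions below are arbitrary stand-ins for the spin-orbitals (any two distinct functions exhibit the same symmetry properties); they are not the real helium orbitals.

```python
import math

# Arbitrary stand-ins for two different spin-orbitals; only their
# distinctness matters for the symmetry check.
def chi_a(x):
    return math.exp(-x)

def chi_b(x):
    return x * math.exp(-0.5 * x)

def psi_minus(x1, x2):
    """Antisymmetric combination, patterned on Eq. (9-48)."""
    return (chi_a(x1) * chi_b(x2) - chi_a(x2) * chi_b(x1)) / math.sqrt(2)

def psi_plus(x1, x2):
    """Symmetric combination (plus sign instead of minus)."""
    return (chi_a(x1) * chi_b(x2) + chi_a(x2) * chi_b(x1)) / math.sqrt(2)

# psi_minus changes sign under exchange of its arguments, psi_plus does not,
# and both give the same probability density |psi|^2 either way.
```

Note also that replacing `chi_b` by `chi_a` makes `psi_minus` vanish identically at every point, which is the Pauli Exclusion Principle in action: no antisymmetric state can be built from two copies of the same spin-orbital.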
Exercise $8$
Show that the linear combination in Equation $\ref{9-48}$ is antisymmetric with respect to permutation of the two electrons. Replace the minus sign with a plus sign (i.e. take the positive linear combination of the same two functions) and show that the resultant linear combination is symmetric.
Exercise $9$
Write a similar linear combination to describe the $1s^12s^1$ excited configuration of helium.
A linear combination that describes an appropriately antisymmetrized multi-electron wavefunction for any desired orbital configuration is easy to construct for a two-electron system. However, interesting chemical systems usually contain more than two electrons. For these multi-electron systems a relatively simple scheme for constructing an antisymmetric wavefunction from a product of one-electron functions is to write the wavefunction in the form of a determinant. John Slater introduced this idea so the determinant is called a Slater determinant.
The Slater determinant for the two-electron wavefunction of helium is
$\psi (r_1, r_2) = \frac {1}{\sqrt {2}} \begin {vmatrix} \varphi _{1s} (1) \alpha (1) & \varphi _{1s} (1) \beta (1) \\ \varphi _{1s} (2) \alpha (2) & \varphi _{1s} (2) \beta (2) \end {vmatrix} \label {9-49}$
and a shorthand notation for this determinant is
$\psi (r_1 , r_2) = 2^{-\frac {1}{2}} Det | \varphi _{1s} (r_1) \varphi _{1s} (r_2) | \label {9-50}$
The determinant is written so the electron coordinate changes in going from one row to the next, and the spin orbital changes in going from one column to the next. The advantage of having this recipe is clear if you try to construct an antisymmetric wavefunction that describes the orbital configuration for uranium! Note that the normalization constant is $(N!)^{-\frac {1}{2}}$ for N electrons.
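The row/column convention and the automatic antisymmetry can be demonstrated numerically: swapping two rows of any determinant flips its sign, so exchanging the coordinates of two electrons flips the sign of the wavefunction. The spin-orbitals below are toy one-dimensional stand-ins chosen only to illustrate the structure, not realistic atomic functions.

```python
import math
import numpy as np

# Toy stand-ins for three spin-orbitals; rows of the matrix index electrons
# (coordinates), columns index spin-orbitals, following the text's recipe.
spin_orbitals = [
    lambda x: math.exp(-x),
    lambda x: x * math.exp(-x),
    lambda x: x * x * math.exp(-x),
]

def slater_determinant(coords):
    """Value of the normalized Slater determinant at the given coordinates;
    the normalization constant is 1/sqrt(N!) for N electrons."""
    n = len(coords)
    matrix = np.array([[orb(x) for orb in spin_orbitals] for x in coords])
    return np.linalg.det(matrix) / math.sqrt(math.factorial(n))

electrons = [0.3, 1.1, 2.4]
swapped = [1.1, 0.3, 2.4]   # exchange electrons 1 and 2
# slater_determinant(swapped) equals -slater_determinant(electrons).
```

If two columns held the same spin-orbital, the determinant would vanish identically, which is how the Slater determinant enforces the Pauli Exclusion Principle without any extra bookkeeping.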
Exercise $10$
Show that the determinant form is the same as the form for the helium wavefunction that is given in Equation $\ref{9-48}$.
Exercise $11$
Expand the Slater determinant in Equation $\ref{9-49}$ for the He atom.
Exercise $12$
Write and expand the Slater determinant for the electronic wavefunction of the Li atom.
Exercise $13$
Write the Slater determinant for the carbon atom. If you expanded this determinant, how many terms would be in the linear combination of functions?
Exercise $14$
Write the Slater determinant for the $1s^12s^1$ excited state orbital configuration of the helium atom.
Now that we have seen how acceptable multi-electron wavefunctions can be constructed, it is time to revisit the “guide” statement of conceptual understanding with which we began our deeper consideration of electron indistinguishability and the Pauli Exclusion Principle. What does a multi-electron wavefunction constructed by taking specific linear combinations of product wavefunctions mean for our physical picture of the electrons in multi-electron atoms? Overall, the antisymmetrized product function describes the configuration (the orbitals, regions of electron density) for the multi-electron atom. Because of the requirement that electrons be indistinguishable, we can’t visualize specific electrons assigned to specific spin-orbitals. Instead, we construct functions that allow each electron’s probability distribution to be dispersed across each spin-orbital. The total charge density described by any one spin-orbital cannot exceed one electron’s worth of charge, and each electron in the system is contributing a portion of that charge density.
Exercise $15$
Critique the energy level diagram and shorthand electron configuration notation from the perspective of the indistinguishability criterion. Can you imagine a way to represent the wavefunction expressed as a Slater determinant in a schematic or shorthand notation that more accurately represents the electrons? (This is not a solved problem!) | textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Book%3A_Quantum_States_of_Atoms_and_Molecules_(Zielinksi_et_al)/09%3A_The_Electronic_States_of_the_Multielectron_Atoms/9.06%3A_Electron_Configurations%2C_The_Pauli_Exclusion.txt |
In this section we consider a method for finding the best possible one-electron wavefunctions that was published by Hartree in 1928 and improved two years later by Fock. For the Schrödinger equation to be solvable, the variables must be separable. The variables are the coordinates of the electrons. In order to separate the variables in a way that retains information about electron-electron interactions, the Coulomb repulsion term, e.g. $\dfrac {e^2}{4 \pi \epsilon _0 r_{12}}$ for helium, must be approximated so it depends only on the coordinates of one electron. Such an approximate Hamiltonian can account for the interaction of the electrons in an average way. The exact one-electron eigenfunctions of this approximate Hamiltonian then can be found by solving the Schrödinger equation. These functions are the best possible one-electron functions.
The best possible one-electron wavefunctions, by definition, will give the lowest possible total energy for a multi-electron system when combined into a Slater determinant and used with the complete multielectron Hamiltonian to calculate the expectation value for the total energy of the system. These wavefunctions are called the Hartree-Fock wavefunctions and the calculated total energy is the Hartree-Fock energy of the system. Application of the variational method to the problem of minimizing the total energy leads to the following set of Schrödinger-like equations called Hartree-Fock equations,
$\hat {F} \varphi _i = \epsilon _i \varphi _i \label {9-51}$
where $\hat {F}$ is called the Fock operator. The Fock operator is a one-electron operator and solving a Hartree-Fock equation gives the energy and Hartree-Fock orbital for one electron. For a system with 2N electrons, the index $i$ will range from 1 to N; i.e., there will be one equation for each orbital. The reason for this is that only the spatial wavefunctions are used in Equation $\ref{9-51}$. Since the spatial portion of an orbital can be used to describe two electrons, each of the energies and wavefunctions found by solving $\ref{9-51}$ will be used to describe two electrons.
The nature of the Fock operator reveals how the Hartree-Fock (HF) or Self-Consistent Field (SCF) Method accounts for the electron-electron interaction in atoms and molecules while preserving the idea of atomic and molecular orbitals. The full antisymmetrized wavefunction written as a Slater determinant of spin-orbitals is necessary to derive the form of the Fock operator, which is
$\hat {F} = \hat {H} ^0 + \sum _{j=1}^N ( 2 \hat {J} _j - \hat {K} _j ) = -\dfrac {\hbar ^2}{2m} \nabla ^2 - \dfrac {Ze^2}{4 \pi \epsilon _0 r} + \sum _{j=1}^N (2\hat {J}_j - \hat {K} _j ) \label {9-52}$
As shown by the expanded version on the far right, the first term in this equation, $\hat {H}^0$, is the familiar hydrogen-like operator that accounts for the kinetic energy of an electron and the potential energy of this electron interacting with the nucleus. For electron 1 in helium, for example,
$\hat {H}^0 (1) = - \dfrac {\hbar ^2}{2m} \nabla ^2_1 - \dfrac {2e^2}{4 \pi \epsilon _0 r_1} \label {9-53}$
The second term in Equation $\ref{9-52}$, $\sum _{j=1}^N (2 \hat {J} _j - \hat {K} _j )$, accounts for the potential energy of one electron in an average field created by all the other electrons in the system. The Fock operator is couched in terms of the coordinates of the one electron whose perspective we are taking (which we’ll call electron 1 throughout the following discussion), and the average field created by all the other electrons in the system is built in terms of the coordinates of a generic “other electron” (which we’ll call electron 2) that is considered to occupy each orbital in turn during the summation over the N spatial orbitals.
The operators $\hat {J}$ and $\hat {K}$ result from the electron-electron repulsion terms in the full Hamiltonian for a multi-electron system. These operators involve the one-electron orbitals as well as the electron-electron interaction energy, $\dfrac {e^2}{4 \pi \epsilon _0 r_{12}}$, which in atomic units simplifies to $1/r_{12}$. Atomic units are used in the rest of this discussion to simplify the notation by removing fundamental constants.
The operators $\hat {J}$ and $\hat {K}$ are most conveniently defined by examining how they operate on a wavefunction, $\varphi _i$, which describes electron 1.
$\hat {J}_j (1) \varphi _i (1) = \left [ \int \varphi ^*_j (2) \dfrac {1}{r_{12}} \varphi _j (2) d \tau _2 \right ] \varphi _i (1) \label {9-54}$
$\hat {K}_j (1) \varphi _i (1) = \left [ \int \varphi ^*_j (2) \dfrac {1}{r_{12}} \varphi _i (2) d \tau _2 \right ] \varphi _j (1) \label {9-55}$
$\hat {J}$ is called a Coulomb operator. As mentioned above, the specific coordinates 1 and 2 are used here to underline the fact that $\hat {J}$ operates on a function of one electron in an orbital (here, electron 1 in $\varphi _i$) using the results of an expectation value integral over the coordinates of a different electron (electron 2 in $\varphi _j$). The second electron can be described by the same spatial orbital (if $i = j$) or by a different spatial orbital (if $i \ne j$). $\hat {J}$ takes the product of an orbital describing electron 2 with its complex conjugate, $\varphi ^*_j (2) \varphi _j (2)$, multiplies by $1/r_{12}$, and integrates over the coordinates of electron 2. The quantity $\varphi ^*_j (2) \varphi _j (2) d \tau _2$ represents the charge distribution in space due to electron 2 in orbital $j$. The quantity $\varphi ^*_j (2) \varphi _j (2) \dfrac {1}{r_{12}} d \tau _2$ thus represents the potential energy at $r_1$ due to the charge density at $r_2$, where $r_{12}$ is the distance between $r_1$ and $r_2$. Evaluation of the integral gives the total potential energy at $r_1$ due to the overall, or average, charge density produced by electron 2 in orbital $j$. Since the part of the Fock operator containing $\hat {J}$ involves a sum over all the orbitals, with a multiplicative factor of 2 to account for the presence of two electrons in each orbital, solution of the Hartree-Fock equation produces a spatial orbital $\varphi _i$ that is determined by the average potential energy or Coulomb field of all the other electrons.
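The averaging performed by the Coulomb operator can be made concrete for a 1s orbital. For a spherically symmetric 1s density the bracketed integral in Equation $\ref{9-54}$ reduces to a one-dimensional radial integral, and the resulting Coulomb energy has the known analytic value $5\zeta /8$ hartree. The closed-form potential used below follows from Gauss's law applied to a 1s charge cloud; both results are standard formulas assumed for this sketch, with $\zeta = 27/16$ chosen as an illustrative helium exponent.

```python
import math

zeta = 27.0 / 16.0   # illustrative 1s exponent for helium (atomic units)

def u_of_r(r):
    """Electrostatic potential at radius r produced by one electron's worth
    of 1s charge density, (zeta^3/pi) * exp(-2 zeta r); the Gauss's-law
    result is U(r) = (1/r) * [1 - (1 + zeta r) * exp(-2 zeta r)]."""
    return (1.0 - (1.0 + zeta * r) * math.exp(-2.0 * zeta * r)) / r

def coulomb_energy(r_max=30.0, steps=60000):
    """J = <phi_1s| U |phi_1s> as a radial trapezoid sum; the analytic
    answer is 5*zeta/8 hartree."""
    h = r_max / steps
    total = 0.0
    for i in range(1, steps + 1):          # skip r = 0, where the integrand -> 0
        r = i * h
        density = (zeta ** 3 / math.pi) * math.exp(-2.0 * zeta * r)
        f = density * u_of_r(r) * 4.0 * math.pi * r * r
        total += 0.5 * f if i == steps else f
    return total * h
```

`coulomb_energy()` returns approximately 1.0547, matching $5\zeta /8$; this is the average repulsion felt by electron 1, with every reference to electron 2's instantaneous position integrated out.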
The other operator under the summation in the Fock operator is $\hat {K}$, the exchange operator. Equation $\ref{9-55}$ reveals that this operator involves a change in the labels on the orbitals. In analogy with the Coulomb operator, $\varphi ^*_j (2) \varphi _i (2) \dfrac {1}{r_{12}} d \tau _2$ represents the potential energy at $r_1$ due to the overlap charge distribution at $r_2$ associated with orbitals i and j. The integral is the potential energy due to the total overlap charge density associated with electron 2. The term exchange operator is used because the electron is exchanged between the two orbitals i and j. This overlap contribution to the charge density and potential energy is a quantum mechanical effect; it is a consequence of the wave-like properties of electrons, i.e. the fact that electrons are described by wavefunctions. While difficult to understand in a concrete physical way, the effects of the exchange operator are important contributors to the total energy of the orbitals and the system as a whole. There is no classical analog to this interaction energy, and a classical theory is unable to calculate correctly the energies of multi-electron systems.
For the ground state of helium, electrons 1 and 2 are both described by spatial orbital $\varphi _i$, so N = 1 and the sum in Equation $\ref{9-52}$ includes only j = 1. Furthermore, since i = j = 1, the exchange and Coulomb integrals are identical in this case. As a result, the summation in the Fock operator takes a very simple form and the complete Fock operator for electron 1 in helium is given by
$\hat {F} (1) = -\dfrac {\hbar ^2}{2m} \nabla ^2_1 - \dfrac {2e^2}{4 \pi \epsilon _0 r_1} + \hat {U} (1) \label {9-56}$
where $\hat {U} (1)$ is given by the summation in the Fock operator.
$\hat {U} (1) = \sum ^1_{j=1} (2 \hat {J} _j - \hat {K} _j ) = \int \varphi ^*_1 (2) \dfrac {1}{r_{12}} \varphi _1 (2) d \tau _2 \label {9-57}$
Exercise $30$
Show that $\sum ^1_{j=1} (2 \hat {J} _j - \hat {K} _j ) = \int \varphi ^*_1 (2) \dfrac {1}{r_{12}} \varphi _1 (2) d \tau _2$ by substituting the definitions of $\hat {J} (1)$ and $\hat {K} (1)$ for helium into the summation and evaluating the summation over the only occupied spatial orbital.
The interaction of electron 1 with electron 2 is averaged over all positions of electron 2 to produce $\hat {U} (1)$. By integrating over the coordinates of electron 2, the explicit dependence of the potential energy on the coordinates of electron 2 is removed. This approach makes it possible to account for the electron-electron repulsion in terms of the spatial distribution of the two electrons using only single-electron terms in the Fock operator and one-electron wavefunctions.
Since both electrons in helium are described by the same spatial wavefunction, the Fock equation given by $\ref{9-58}$ and $\ref{9-56}$ describes either electron equally well. Solving the Fock equation therefore will give us the spatial wavefunction and the one-electron energy associated with either of the electrons.
The energy of an electron in the spatial orbital $\varphi _1$ can be calculated either by solving the Fock equation
$\hat {F} (1) \varphi _1 (1) = \epsilon _1 (1) \varphi _1 (1) \label {9-58}$
or by using an expectation value expression.
$\epsilon _1(1) = \int \varphi ^*_1 (r_1) \hat {F} (r_1) \varphi _1 (r_1) d \tau \label {9-59}$
In order to solve either of these equations for the energy $\epsilon _1$, we need to evaluate the potential energy function $\hat {U} (1)$ that is part of the Fock operator $\hat {F} (1)$. In order to evaluate $\hat {U} (1)$, the forms of all of the occupied spatial orbitals, $\varphi _i (2)$, must be known. For the simple case of helium, only the $\varphi _1 (2)$ function is required, but for larger multi-electron systems, the forms of the occupied orbitals $\varphi _1, \varphi _2$, etc. will be needed to specify $\hat {U} (1)$. For helium, we know that $\varphi _1 (2)$ will have the same form as $\varphi _1 (1)$, and the $\varphi _1$ functions can be obtained by solving the Fock equation, Equation $\ref{9-58}$. However, we are now caught in a circle because the Fock operator itself depends upon the $\varphi _1$ function.
The problem with solving Equation $\ref{9-58}$ to obtain the Fock orbitals is that the Fock operator, as we have seen, depends on the Fock orbitals. In other words, we need to know the solution to this equation in order to solve the equation. We appear to be between a rock and a hard place. A procedure has been invented to wiggle out of this situation. One makes a guess at the orbitals, for example by inserting adjustable parameters into hydrogenic wavefunctions. These orbitals are used to construct the Fock operator, which is used to solve for new orbitals. The new orbitals then are used to construct a new Fock operator, and the process is repeated until no significant change in the orbital energies or functions occurs. At this end point, the orbitals produced by the Fock operator are the same as the orbitals that are used in the Fock operator to describe the average Coulomb and overlap (or exchange) potentials due to the electron-electron interactions. The solution therefore is self-consistent, and the method therefore is called the self-consistent field (SCF) method.
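The iterate-to-self-consistency idea can be sketched numerically. The toy loop below is not a real Hartree-Fock code: it assumes each helium electron occupies a single 1s Slater orbital $e^{-\zeta r}$ (atomic units) and uses the standard closed-form Coulomb integral between two such 1s charge densities, so "solving the Fock equation" collapses to a one-parameter minimization. The point is the structure of the loop: freeze one electron, optimize the other in its average field, update, and repeat until nothing changes.

```python
# Toy SCF loop for helium with a single 1s Slater orbital exp(-zeta*r)
# per electron, in atomic units. Illustrative sketch only.
Z = 2.0

def coulomb_J(za, zb):
    # closed-form Coulomb repulsion between two 1s densities with
    # exponents za and zb (reduces to 5*z/8 when za = zb = z)
    return za * zb * (za**2 + 3*za*zb + zb**2) / (za + zb)**3

def one_electron_energy(z1, z2):
    # kinetic + nuclear attraction of electron 1, plus its average
    # repulsion with electron 2 frozen in an orbital with exponent z2
    return 0.5*z1**2 - Z*z1 + coulomb_J(z1, z2)

grid = [1.0 + 0.0001*k for k in range(15000)]   # trial exponents
zeta = 1.0                                      # initial guess
for iteration in range(50):
    # "solve the Fock equation": minimize electron 1's energy in the
    # frozen field of electron 2 by a simple grid search
    zeta_new = min(grid, key=lambda z1: one_electron_energy(z1, zeta))
    converged = abs(zeta_new - zeta) < 1e-6
    zeta = zeta_new
    if converged:
        break

E_total = 2*(0.5*zeta**2 - Z*zeta) + coulomb_J(zeta, zeta)
print(zeta)     # converges to Z - 5/16 = 1.6875
print(E_total)  # about -2.848 hartree (exact He energy: -2.9037)
```

The loop settles at the familiar screened effective charge $\zeta = Z - 5/16 = 1.6875$ for helium: the self-consistent orbital is less compact than the bare-nucleus $\zeta = 2$ orbital because each electron partially shields the other.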
The objective of the Hartree-Fock method is to produce the best possible one-electron wavefunctions for use in approximating the exact wavefunction for a multi-electron system, which can be an atom or a molecule. So what kind of guess functions should we write to get the best possible one-electron wavefunctions? Answers to this question have spawned a huge area of research in computational chemistry over the past 40 years, including a Nobel Prize in 1998. In Chapter 10 we examine in detail the various alternatives for constructing one-electron wavefunctions from basis functions.
Exercise $31$
Write a paragraph without using any equations that describes the essential features of the Hartree-Fock method. Create a block diagram or flow chart that shows the steps involved in the Hartree-Fock method.
The expectation value of the Fock operator gives us the energy of an electron in a particular orbital.
$\epsilon _i (1) = \int d\tau _1 \varphi ^*_i (1) \hat {F} (1) \varphi _i (1) \label {9-60}$
Using the definition of the Fock operator and representing the integrals with bracket notation gives
$\epsilon _i (1) = \left \langle H^0_i \right \rangle + \sum ^N_{j=1} (2 J_{ij} - K_{ij}) \label {9-61}$
where
$\left \langle H^0_i \right \rangle = \left \langle \varphi _i | - \dfrac {1}{2} \nabla ^2_1 | \varphi _i \right \rangle - \left \langle \varphi _i | \dfrac {Z}{r_1} | \varphi _i \right \rangle \label {9-62}$
The kinetic and potential energy terms in the operator $\hat {H} ^0$, defined in Equation $\ref{9-53}$, are written here in atomic units for simplicity of notation. The sum involving the Coulomb and exchange integrals, $J$ and $K$, accounts for the electron-electron interaction energy between the electron in orbital i and all the other electrons in the system. We now want to examine the meaning and the nature of the sum over all the orbitals in Equation $\ref{9-61}$.
Exercise $32$
Describe in words the contributions to the orbital energy or single-electron energy $\epsilon _i$ as represented by Equation $\ref{9-61}$.
For the case $j = k$, where orbital k is different from orbital i, one has for $2 J_{ik} - K_{ik}$
$2 \left \langle \varphi _i (1) \varphi _k (2) | \dfrac {1}{r_{12}} |\varphi _i (1) \varphi _k (2) \right \rangle - \left \langle \varphi _i (1) \varphi _k (2) | \dfrac {1}{r_{12}} |\varphi _k (1) \varphi _i (2) \right \rangle \label {9-63}$
The first term with the factor of 2 is the average potential energy due to the charge distribution caused by an electron in orbital i with the charge distribution caused by the two electrons in orbital k. The factor of 2 accounts for the two electrons in orbital k. The second term is the average potential energy due to the overlap charge distribution caused by electrons 1 and 2 in orbitals i and k. The second term appears only once, i.e. without a factor of 2, because only one of the electrons in orbital k has the same spin as the electron in orbital i and can exchange with it. The minus sign results from the wavefunction being antisymmetric with respect to electron exchange.
Exercise $33$
Rewrite Equation $\ref{9-63}$ including the spin functions $\alpha$ and $\beta$ explicitly for each electron. Since these two spin functions form an orthonormal set and factor out of the spatial integrals, show that the exchange integral is zero if the two electrons have different spin.
For the case j = i one has
$2 \left \langle \varphi _i (1) \varphi _i (2) | \dfrac {1}{r_{12}} |\varphi _i (1) \varphi _i (2) \right \rangle - \left \langle \varphi _i (1) \varphi _i (2) | \dfrac {1}{r_{12}} |\varphi _i (1) \varphi _i (2) \right \rangle \label {9-64}$
revealing that $J_{ii} = K_{ii}$, and $2 J_{ii} - K_{ii} = J_{ii}$, which corresponds to the Coulomb repulsion between the two electrons in orbital i. Because these electrons have opposite spin, there is no exchange energy. This was the case for our helium example, above.
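The helium case can be made quantitative. For two electrons in the same hydrogenic 1s orbital with nuclear charge Z, the Coulomb integral has the standard closed form $J_{11} = 5Z/8$ hartree, so treating the repulsion as a first-order correction to the two independent-electron energies gives a quick numerical estimate:

```python
# First-order estimate of the helium ground-state energy using the
# standard closed-form Coulomb integral J_11 = 5Z/8 hartree for two
# electrons in the same hydrogenic 1s orbital.
Z = 2
hartree_to_eV = 27.211

E_orbitals = 2 * (-Z**2 / 2)   # two electrons, each at -Z^2/2 hartree
J11 = 5 * Z / 8                # Coulomb repulsion between the 1s electrons
E_first_order = E_orbitals + J11

print(E_first_order)                  # -2.75 hartree
print(E_first_order * hartree_to_eV)  # about -74.8 eV (experiment: -79.0 eV)
```

The remaining roughly 4 eV discrepancy is what variational screening and, ultimately, electron correlation must recover.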
If the single-electron orbital energies are summed to get the total electronic energy, the Coulomb and exchange energies for each pair of electrons are counted twice: once for each member of the pair. These extra $2J - K$ contributions to the single-electron energies must be subtracted from the sum of the single-electron energies to get the total electronic energy, as shown in Equation $\ref{9-65}$. The factor of 2 accounts for the fact that two electrons occupy each spatial orbital, and $\epsilon _i$ is the energy of a single electron in a spatial orbital.
$E_{elec} = \sum ^N_{i=1} \left [ 2 \epsilon _i - \sum ^N_{j=1} (2J_{ij} - K_{ij}) \right ] \label {9-65}$
Exercise $34$
Use Equations $\ref{9-61}$ and $\ref{9-65}$ to show that the total electronic energy also can be expressed in the following
$E_{elec} = \sum ^N_{i=1} \left [ 2\left \langle H^0_i \right \rangle + \sum ^N_{j=1} (2J_{ij} - K_{ij}) \right ]$
$E_{elec} = \sum ^N_{i=1} \left ( \epsilon _i + \left \langle H^0_i \right \rangle \right )$
forms.
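The double-counting bookkeeping behind these equivalent forms is easy to check numerically. The sketch below uses arbitrary made-up values for $\left \langle H^0_i \right \rangle$ and for symmetric Coulomb and exchange matrices (no real integrals are computed) and confirms that the orbital-energy form with the subtracted repulsion, the form written with $\left \langle H^0_i \right \rangle$ plus the repulsion, and the compact form $\sum_i (\epsilon_i + \left \langle H^0_i \right \rangle)$ all give the same total energy.

```python
# Numerical check, with arbitrary made-up integrals, that three
# expressions for the closed-shell total electronic energy agree.
import random

random.seed(1)
N = 3                                   # number of doubly occupied orbitals
H0 = [random.uniform(-5.0, -1.0) for _ in range(N)]   # <H0_i> values
# symmetric Coulomb and exchange matrices (J_ij = J_ji, K_ij = K_ji)
J = [[0.0]*N for _ in range(N)]
K = [[0.0]*N for _ in range(N)]
for i in range(N):
    for j in range(i, N):
        J[i][j] = J[j][i] = random.uniform(0.1, 1.0)
        K[i][j] = K[j][i] = random.uniform(0.0, J[i][j])

# orbital energies: eps_i = <H0_i> + sum_j (2 J_ij - K_ij)
eps = [H0[i] + sum(2*J[i][j] - K[i][j] for j in range(N)) for i in range(N)]

E1 = sum(2*eps[i] - sum(2*J[i][j] - K[i][j] for j in range(N)) for i in range(N))
E2 = sum(2*H0[i] + sum(2*J[i][j] - K[i][j] for j in range(N)) for i in range(N))
E3 = sum(eps[i] + H0[i] for i in range(N))

print(E1, E2, E3)   # all three agree
```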
Exercise $35$
Write out all the terms in Equation $\ref{9-65}$ for the case of 4 electrons in 2 orbitals with different energies.
As we increase the flexibility of wavefunctions by adding additional parameters to the guess orbitals used in Hartree-Fock calculations, we expect to get better and better energies. The variational principle says that any approximate energy calculated using the exact Hamiltonian is an upper bound to the exact energy of a system, so the lowest energy that we calculate using the Hartree-Fock method will be the most accurate. At some point, the improvements in the energy will be very slight. This limiting energy is the lowest that can be obtained with a single Slater determinant wavefunction. This limit is called the Hartree-Fock limit, the energy is the Hartree-Fock energy, the orbitals producing this limit are by definition the best single-electron orbitals that can be constructed and are called Hartree-Fock orbitals, and the Slater determinant is the Hartree-Fock wavefunction.
Contributors and Attributions
David M. Hanson, Erica Harvey, Robert Sweeney, Theresa Julia Zielinski ("Quantum States of Atoms and Molecules")
The best energies obtained at the Hartree-Fock level are still not accurate, because they use an average potential for the electron-electron interactions. Configuration interaction (CI) methods help to overcome this limitation. Because electrons interact and repel each other, their motion in atoms is correlated. When one electron is close to the nucleus, the other tends to be far away. When one is on one side, the other tends to be on the other side. This motion is related to that of two people playing tag around a house. As we said before, the exact wavefunction must depend upon the coordinates of both electrons simultaneously. We have shown that it is a reasonable approximation in calculating energies to neglect this correlation and use wavefunctions that only depend upon the coordinates of one electron, which assumes the electrons move independently. This "orbital approximation" is similar to playing tag without keeping track of the other person. This independent-electron approximation gives reasonable, even good values, for the energy, and correlation can be taken into account to improve this description even more. The method for taking correlation into account is called Configuration Interaction.
In describing electrons in atoms, it is not necessary to be restricted to only a single orbital configuration given by a Slater determinant. We developed the Slater determinant as a way to create correctly antisymmetrized product wavefunctions that approximate the exact multi-electron function for an atom. By using more than one configuration and putting electrons in different orbitals, spatial correlations in the electron motion can be taken into account. This procedure is called Configuration Interaction (CI).
For example, for the two-electron Slater determinant wavefunction of helium, we could write
$\psi (r_1, r_2) = \underbrace{c_1Det | \varphi _{1s} (r_1) \varphi _{1s} (r_2) |}_{\text{ground state: }1s^2} + \underbrace{c_2 Det | \varphi _{1s} (r_1) \varphi _{2s} (r_2) | }_{\text{excited state: }1s^12s^1} \label{9-66}$
where $c_1$ and $c_2$ are coefficients that can be varied in the variational method. This wavefunction adds the excited (higher energy) configuration $1s^12s^1$ to the ground (lowest energy) configuration $1s^2$. The lowest energy configuration corresponds to both electrons being in the same region of space at the same time; the higher energy configuration represents one electron being close to the nucleus and the other electron being further away. This makes sense because the electrons repel each other. When one electron is in one region of space, the other electron will be in another region. Configuration interaction is a way to account for this correlation.

Exercise $1$
Write a paragraph without using any equations that describes the essential features of configuration interaction.
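A minimal sketch of the variational machinery behind Equation $\ref{9-66}$: with two configurations, minimizing the energy with respect to $c_1$ and $c_2$ amounts to diagonalizing a 2×2 Hamiltonian matrix in the configuration basis. The matrix elements below are illustrative made-up numbers (in hartrees), not computed integrals; the point is that mixing in the excited configuration pushes the ground-state energy below the energy of the $1s^2$ configuration alone.

```python
# Schematic two-configuration CI: diagonalize the 2x2 Hamiltonian
# in the basis {1s^2, 1s^1 2s^1}. Matrix elements are illustrative only.
import math

H11, H22, H12 = -2.85, -2.15, 0.05     # hartrees, made-up values

avg = 0.5 * (H11 + H22)
half_gap = 0.5 * (H22 - H11)
E_ground = avg - math.sqrt(half_gap**2 + H12**2)   # lower 2x2 eigenvalue

# ground-state mixing coefficients from (H11 - E) c1 + H12 c2 = 0
ratio = (E_ground - H11) / H12          # c2 / c1
norm = math.sqrt(1.0 + ratio**2)
c1, c2 = 1.0/norm, ratio/norm

print(E_ground)          # slightly below H11: the correlation lowering
print(E_ground < H11)    # True
print(c1, c2)            # dominated by the ground configuration
```

However weak the coupling $H_{12}$, the lower eigenvalue always lies below $H_{11}$, which is why adding configurations can only improve a variational energy.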
9.09: Chemical Applications of Atomic Structure Theory
In this section we examine how the results of the various approximation methods considered in this chapter can be used to understand and predict the physical properties of multi-electron atoms. Our results include total electronic energies, orbital energies and single-electron wavefunctions that describe the spatial distribution of electron density. Physical properties that can be used to describe multi-electron atoms include total energies, atomic sizes and electron density distributions, ionization energies and electron affinities. Trends in these properties as Z increases form the basis of the periodic table and, as we see in Chapter 10, control chemical reactivity. Spectroscopic properties are considered in a link that includes a development of term symbols for multi-electron systems.
• 9.9A: Total Electronic Energies
Using the results of variation calculations, perturbation theory, Hartree-Fock calculations, and/or configuration interaction, we can solve for the total energies of atoms with excellent accuracy.
• 9.9B: Orbital Energies
Orbital energies are not physical properties. They are constructs that arise from our approximate approach to a true multi-electron wavefunction using products of single-electron wavefunctions called atomic orbitals. Nevertheless, a great deal can be learned by considering orbital energies.
• 9.9C: Atomic Sizes and Electron Density Distributions
Knowledge of the relative sizes of atoms is important because their chemistry often correlates with size. For example, substituting one element for another in a crystal to modify the properties of the crystal often works if the two elements have essentially the same atomic size. Understanding electron density distributions is also important in understanding chemical properties.
• 9.9D: Ionization Potentials
The energy it takes to remove an electron from an atom to infinity is called the ionization potential or the ionization energy.
• 9.9E: Electron Affinity
The inverse of ionization, i.e. bringing an electron from infinity to occupy the lowest-energy vacancy in an atomic orbital, produces an energy change called the electron affinity.
Q9.1
In the hydrogen atom, what is the energy of the $2p^1$ configuration? How many states result from this configuration? What are the term symbols for these states?
Q9.2
Determine the term symbols for the $1s^22s^22p^13s^1$ electron configuration of carbon.
Q9.3
Determine the term symbol for the ground state of the sodium atom. Write the term symbols for the excited state where the valence electron is in a 3p orbital. Use this information to account for the appearance of the doublet known as the sodium D-line.
Q9.4
A helium atom is in a $1s^12p^1$ electron configuration. What are the term symbols for the states that result from this configuration? Write antisymmetric (with respect to permutation symmetry) wavefunctions for these states. Show that only a transition from the ground state to one of these states is allowed through the electric-dipole-field interaction.
Q9.5
Rewrite Equation (9-13) so the wavefunction is antisymmetric with respect to permutation of the two electrons. Repeat Exercise 9.6 using your antisymmetric wavefunction to show that the same result is obtained, and that, because the electron-electron interaction is neglected, the energy does not depend on whether the wavefunction is symmetric or antisymmetric.
Q9.6
An ionization energy or ionization potential is the difference in energy between the energy of an atom and the energy of the corresponding ion. It also can be the difference in energy between an ion and the next higher charged ion. These different possibilities are referred to as the first, second, third, etc. ionization energies.
1. Calculate the ground state energy of He+ and show that the calculated value of 54.4 eV in Table 9.1 for the first ionization energy is correct.
2. Use your insight to explain why the first ionization energy is just half the helium binding energy.
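Part 1 can be checked with the hydrogen-like energy formula $E_n = -Z^2 (13.6057\ \text{eV})/n^2$, which is exact for one-electron ions such as He⁺:

```python
# Hydrogen-like ion energies: E_n = -Z^2 * 13.6057 / n^2 eV.
# For He+ (Z = 2, n = 1) the ground state lies at -54.4 eV, so removing
# its single electron (to E = 0 at infinity) costs +54.4 eV.
Ry_eV = 13.6057          # hydrogen ground-state binding energy in eV

def hydrogenic_energy(Z, n):
    return -Z**2 * Ry_eV / n**2

E_He_plus = hydrogenic_energy(2, 1)
print(E_He_plus)          # -54.4 eV
print(-E_He_plus)         # ionization energy of He+: 54.4 eV
```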
Q9.7
Compute the second ionization energy for Li ($Li^+ \rightarrow Li^{2+} + e^-$) neglecting the electron-electron potential energy term in the Hamiltonian. The experimental value is 75.6 eV. Explain why the computed value differs from the experimental value.
Q9.8
For the ground state of hydrogen, the electron is in a 1s orbital where $l = 0$ so the Hamiltonian operator is $\hat {H} = - \frac {\hbar ^2}{2 \mu r^2} \frac {d}{dr} \left ( r^2 \frac {d}{dr} \right ) - \frac {e^2}{4 \pi \epsilon _0 r }$
Q9.9
Obtain an expression for the energy of the ground state of the hydrogen atom as a function of $\alpha$, where $\alpha$ is an adjustable parameter in the trial wavefunction $\varphi (r) = e^{-\alpha r^2}$, which is a Gaussian function.
Q9.10
Find the value of $\alpha$ that minimizes the energy and calculate a value for the energy.
Q9.11
Compare this minimum energy with the exact value. What is the percent error?
Q9.12
Do you consider this Gaussian function to be a reasonable approximation to the exact hydrogen 1s atomic orbital?
Q9.13
What is the difference between this Gaussian function and the exact hydrogen 1s function? Illustrate the difference with a computer-generated graph.
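A numerical sketch for Q9.9 through Q9.11, assuming atomic units and the standard closed-form variational energy $E(\alpha) = \frac{3\alpha}{2} - 2\sqrt{2\alpha/\pi}$ for the normalized Gaussian trial function $e^{-\alpha r^2}$ in the hydrogen atom:

```python
# Variational treatment of the hydrogen atom with a Gaussian trial
# function exp(-alpha r^2), in atomic units. E(alpha) below is the
# standard closed-form expectation value for this trial function.
import math

def E(alpha):
    return 1.5*alpha - 2*math.sqrt(2*alpha/math.pi)

# crude scan for the minimizing alpha
alphas = [0.0001*k for k in range(1, 20000)]
alpha_min = min(alphas, key=E)

print(alpha_min)     # about 8/(9*pi) = 0.2829
print(E(alpha_min))  # about -4/(3*pi) = -0.4244 hartree

error = (E(alpha_min) - (-0.5)) / 0.5
print(error)         # about 0.15: roughly 15% above the exact -0.5 hartree
```

The roughly 15% error reflects the wrong shape of the Gaussian: it lacks the cusp at $r = 0$ and falls off too quickly at large $r$ compared with the exact $e^{-r}$ orbital, which is what the graph in Q9.13 should show.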
Q9.14
Consider a one-dimensional anharmonic oscillator for which the potential function is $V (x) = \frac {1}{2} k x^2 + a x^4$
1. Write the full Hamiltonian for this oscillator.
2. What system would serve as the most reasonable zero-order approximation for this oscillator in order to use perturbation theory most efficiently and effectively?
3. Identify the zero and first order perturbation terms in your Hamiltonian.
4. What is the zero order energy of the lowest energy state for this oscillator?
5. Write the integral for evaluating the first order correction to this energy and compute the first order correction to this energy.
Q9.15
Consider the particle in a box. Use first-order perturbation theory to determine how much the energy levels are shifted by an external electric field of V volts/cm.
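As a check on this problem: for a box running from 0 to L with perturbation $H' = q \varepsilon x$, the first-order shift is $q \varepsilon \langle x \rangle = q \varepsilon L/2$ for every level, because each probability density $|\psi_n|^2$ is symmetric about the center of the box. The sketch below (arbitrary illustrative units) verifies this by direct numerical integration.

```python
# First-order Stark shift for a particle in a box of length L with
# perturbation H' = q * field * x, checked by midpoint-rule integration
# of <psi_n| q*field*x |psi_n> with psi_n = sqrt(2/L) sin(n pi x / L).
import math

L, q, field = 1.0, 1.0, 1.0     # arbitrary illustrative units
npts = 100000
dx = L / npts

def shift(n):
    total = 0.0
    for k in range(npts):
        x = (k + 0.5) * dx
        psi2 = (2/L) * math.sin(n*math.pi*x/L)**2
        total += psi2 * q * field * x * dx
    return total

for n in (1, 2, 3):
    print(n, shift(n))          # each close to q*field*L/2 = 0.5
```

Because every level shifts by the same amount, a uniform field does not change the spacing of the particle-in-a-box levels at first order.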
Q9.16
Consider two electrons in a one-dimensional box.
1. What is the zero order energy of the lowest energy state for these electrons?
2. What is the first order correction to this energy due to the electron-electron interaction?
In this chapter we have used the independent particle model for electrons (i.e, the idea that multi-electron wavefunctions can be approximated as products of single-electron wavefunctions) to approximate multi-electron atoms. State energies are calculated to be as accurate as possible through use of the Variational Method and Perturbation Theory. A large number of basis functions can be used with the SCF method to get the best possible one-electron functions. Although we only considered the helium atom explicitly, the method has been applied to all atoms of the Periodic Table.
We also introduced the idea of configuration interaction to account for electron correlation. Configuration interaction gives us a mathematical way to describe the electrons as they try to avoid each other. In the CI method excited state electron configurations are used to let the electrons avoid each other.
The various ideas presented here require computers to evaluate all the variational parameters and integrals. Many person-years of time and energy were required to write the computer code to do the calculations and assess the results. At this point in time, only the speed and capacity of the computers, our financial resources to pay for them, and the person-time to write the code limit the accuracy of the energies and wavefunctions. At one time these demands were severe, and only dedicated experts could carry out such calculations. Today meaningful calculations can be done with desktop PCs or relatively inexpensive workstations using software that can be obtained commercially at little cost.
As we have seen, the single electron orbitals have associated energies and physically interpretable parameters such as the effective nuclear charge. It is precisely this independent particle picture that leads to an understanding of the ordering of atomic orbitals of atoms, the structure of the periodic table, the periodic trends in ionization energies and other properties of atoms, the chemical properties of the elements, and the nature of the chemical bond.
In modern quantum mechanical calculations in chemistry, the focus of research is primarily on molecules and their electronic properties and how the electronic properties determine chemical reactivity and molecular structure. Undergraduate students can do today with small computers what research scientists were doing a mere 10 years ago.
The following chapter will focus on the quantum mechanical concepts used in calculations to study molecules. We include an older method, the Hückel Molecular Orbital method, because it contains the elements found in more sophisticated computational approaches and provides the insight needed to relate the results of calculations to molecular properties.
Some Key Questions for Self Study
• What is a Slater determinant and why is it useful?
• What is the independent electron approximation?
• Why must each electron in an atom have a different set of quantum numbers (\(n , l , m_l , m_s \))?
• How do you add angular momentum to obtain term symbols?
• How do you use term symbols to decide on the relative energies of states?
• How many states result from the carbon configuration \(1s^22s^22p^14p^1\)? What are the term symbols for these states?
• How many states result from the carbon configuration \(1s^22s^22p^2\)? What is the term symbol for the ground state of carbon?
• What happens to the energies of states of an atom in a magnetic field and why? Describe an example.
Solving the Schrödinger equation for a molecule first requires specifying the Hamiltonian and then finding the wavefunctions that satisfy the equation. Since the wavefunctions involve the coordinates of all the nuclei and electrons that comprise the molecule, the complete molecular Hamiltonian consists of several terms. The nuclear and electronic kinetic energy operators account for the motion of all of the nuclei and electrons. The Coulomb potential energy terms account for the interactions between the nuclei, the electrons, and the nuclei and electrons.
• 10.1: The Born-Oppenheimer Approximation
The Born-Oppenheimer approximation is one of the basic concepts underlying the description of the quantum states of molecules. This approximation makes it possible to separate the motion of the nuclei and the motion of the electrons.
• 10.2: The Orbital Approximation and Orbital Configurations
To describe the electronic states of molecules, we construct wavefunctions for the electronic states by using molecular orbitals. These wavefunctions are approximate solutions to the Schrödinger equation, with each electron described by a spin-orbital. Since electrons are fermions, the electronic wavefunction must be antisymmetric with respect to the permutation of any two electrons. A Slater determinant containing the molecular spin orbitals produces the antisymmetric wavefunction.
• 10.3: Basis Functions
The molecular spin-orbitals that are used in the Slater determinant usually are expressed as a linear combination of some chosen functions, which are called basis functions. This set of functions is called the basis set. The fact that one function can be represented by a linear combination of other functions is a general property. All that is necessary is that the basis functions span-the-space, which means that the functions must form a complete set and must be describing the same thing.
• 10.4: The Case of H₂⁺
One can develop an intuitive sense of molecular orbitals and what a chemical bond is by considering the simplest molecule, H₂⁺, which consists of two protons held together by the electrostatic force of a single electron. Clearly the two protons, two positive charges, repel each other.
• 10.5: Homonuclear Diatomic Molecules
The LCAO-MO method that we used for H₂⁺ can be applied qualitatively to homonuclear diatomic molecules to provide additional insight into chemical bonding. A more quantitative approach also is helpful, especially for more complicated situations, like heteronuclear diatomic molecules and polyatomic molecules. Quantitative theories are described in subsequent sections.
• 10.6: Semi-Empirical Methods- Extended Hückel
Hückel Molecular Orbital Theory is one of the first semi-empirical methods to be developed to describe molecules containing conjugated double bonds. This theory considered only electrons in pi orbitals and ignored all other electrons in a molecule. It was successful because it could address a number of issues associated with a large group of molecules at a time when calculations were done on mechanical calculators.
• 10.7: Mulliken Populations
Mulliken populations can be used to characterize the electronic charge distribution in a molecule and the bonding, antibonding, or nonbonding nature of the molecular orbitals for particular pairs of atoms. To develop the idea of these populations, consider a real, normalized molecular orbital composed from two normalized atomic orbitals.
• 10.8: The Self-Consistent Field and the Hartree-Fock Limit
In a modern ab initio electronic structure calculation on a closed shell molecule, the electronic Hamiltonian is used with a single determinant wavefunction. This wavefunction, Ψ, is constructed from molecular orbitals, ψ, that are written as linear combinations of contracted Gaussian basis functions.
• 10.9: Correlation Energy and Configuration Interaction
The Hartree-Fock energy is not as low as the exact energy. The difference is due to electron correlation effects and is called the correlation energy. The Hartree-Fock wavefunction does not include these correlation effects because it describes the electrons as moving in the average potential field of all the other electrons. The instantaneous influence of electrons that come close together at some point is not taken into account.
• 10.E: Theories of Electronic Molecular Structure (Exercises)
Exercises for the "Quantum States of Atoms and Molecules" TextMap by Zielinksi et al.
• 10.S: Theories of Electronic Molecular Structure (Summary)
In general, electronic wavefunctions for molecules are constructed from approximate one-electron wavefunctions. These one-electron functions are called molecular orbitals. The expectation value expression for the energy is used to optimize these functions, i.e. make them as good as possible. The criterion for quality is the energy of the ground state. According to the Variational Principle, an approximate ground state energy always is higher than the exact energy.
• 10.10: Electronic States
The electronic configuration of an atom or molecule is a concept imposed by the orbital approximation. While a single determinant wavefunction generally is adequate for closed-shell systems (i.e. all electrons are paired in spatial orbitals), the best descriptions of the electronic states, especially for excited states and free radicals that have unpaired electrons, involve configuration interaction using multiple determinants.
10: Theories of Electronic Molecular Structure
The Born-Oppenheimer approximation is one of the basic concepts underlying the description of the quantum states of molecules. This approximation makes it possible to separate the motion of the nuclei and the motion of the electrons. This is not a new idea for us. We already made use of this approximation in the particle-in-a-box model when we explained the electronic absorption spectra of cyanine dyes without considering the motion of the nuclei. Then we discussed the translational, rotational and vibrational motion of the nuclei without including the motion of the electrons. In this chapter we will examine more closely the significance and consequences of this important approximation. Note, in this discussion nuclear refers to the atomic nuclei as parts of molecules not to the internal structure of the nucleus.
The Born-Oppenheimer approximation neglects the motion of the atomic nuclei when describing the electrons in a molecule. The physical basis for the Born-Oppenheimer approximation is the fact that the mass of an atomic nucleus in a molecule is much larger than the mass of an electron (more than 1000 times). Because of this difference, the nuclei move much more slowly than the electrons. In addition, due to their opposite charges, there is a mutual attractive force of
$\dfrac{Ze^2}{4 \pi \epsilon _0 r^2}$
acting on an atomic nucleus and an electron. This force causes both particles to be accelerated. Since the magnitude of the acceleration is inversely proportional to the mass, $a = F/m$, the acceleration of the electrons is large and the acceleration of the atomic nuclei is small; the difference is a factor of more than 1000. Consequently, the electrons are moving and responding to forces very quickly, and the nuclei are not. You can imagine running a 100-yard dash against someone whose acceleration is a 1000 times greater than yours. That person could literally run circles around you. So a good approximation is to describe the electronic states of a molecule by thinking that the nuclei aren't moving, i.e. that they are stationary. The nuclei, however, can be stationary at different positions so the electronic wavefunction can depend on the positions of the nuclei even though their motion is neglected.
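The mass argument is easy to quantify: since the Coulomb forces on the electron and the nucleus are equal and opposite, $a = F/m$ makes the ratio of their accelerations simply the inverse ratio of their masses, even for the lightest possible nucleus.

```python
# Equal and opposite Coulomb forces, very different accelerations:
# a = F/m, so the acceleration ratio is just the mass ratio.
m_e = 9.109e-31      # electron mass, kg
m_p = 1.673e-27      # proton mass, kg (the lightest nucleus)

ratio = m_p / m_e
print(ratio)         # about 1836: the electron responds ~1836x faster
```

For heavier nuclei the ratio is larger still, which is why treating the nuclei as stationary while solving for the electrons is such a good approximation.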
Now we look at the mathematics to see what is done in solving the Schrödinger equation after making the Born-Oppenheimer approximation. For a diatomic molecule as an example, the Hamiltonian operator is grouped into three terms
$\hat {H} (r, R) = \hat {T}_{nuc} (R) + \dfrac {e^2}{4\pi \epsilon _0} \dfrac {Z_A Z_B}{R} + \hat {H} _{elec} (r,R) \label {10-1}$
where
$\hat {T}_{nuc} (R) = -\dfrac {\hbar^2}{2m_A} \nabla ^2_A - \dfrac {\hbar ^2}{2m_B} \nabla ^2_B \label {10-2}$
and
$\hat {H} _{elec} (r, R) = \dfrac {- \hbar ^2}{2m} \sum \limits _i \nabla ^2_i + \dfrac {e^2}{4 \pi \epsilon _0} \left ( -\sum \limits _i \dfrac {Z_A}{r_{Ai}} - \sum \limits _i \dfrac {Z_B}{r_{Bi}} + \dfrac {1}{2} \sum \limits _i \sum \limits _{j \ne i} \dfrac {1}{r_{ij}}\right ) \label {10-3}$
In Equation \ref{10-1}, the first term represents the kinetic energy of the nuclei, the second term represents the Coulomb repulsion of the two nuclei, and the third term represents the contribution to the energy from the electrons, which consists of their kinetic energy, mutual repulsion for each other, and attraction for the nuclei. Bold-face type is used to represent that $r$ and $R$ are vectors specifying the positions of all the electrons and all the nuclei, respectively.
Exercise $1$
Define all the symbols in Equations \ref{10-1} through \ref{10-3}.
Exercise $2$
Explain why the factor of 1/2 appears in the last term in Equation \ref{10-3}.
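As a numerical illustration of that factor of 1/2 (using made-up electron positions, not a real calculation), the double sum over $i$ and $j \ne i$ in Equation \ref{10-3} visits each electron pair twice, once in each order, so half of it equals the sum over distinct pairs:

```python
import numpy as np

rng = np.random.default_rng(0)
pos = rng.random((4, 3))   # made-up positions of 4 electrons (arbitrary units)

n = len(pos)
# Double sum over ordered pairs i != j, as written in Equation 10-3
double_sum = sum(1.0 / np.linalg.norm(pos[i] - pos[j])
                 for i in range(n) for j in range(n) if j != i)

# Sum over distinct (unordered) pairs i < j -- each pair counted once
pair_sum = sum(1.0 / np.linalg.norm(pos[i] - pos[j])
               for i in range(n) for j in range(i + 1, n))

# The ordered double sum counts every pair twice, hence the factor of 1/2
print(np.isclose(0.5 * double_sum, pair_sum))  # True
```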
The Born-Oppenheimer approximation says that the nuclear kinetic energy terms in the complete Hamiltonian, Equation \ref{10-1}, can be neglected in solving for the electronic wavefunctions and energies. Consequently, the electronic wavefunction $\varphi _e (r,R)$ is found as a solution to the electronic Schrödinger equation:
$\hat {H} _{elec} (r, R) \varphi _e (r, R) = E_e (R) \varphi _e (r, R) \label {10-4}$
Even though the nuclear kinetic energy terms are neglected, the Born-Oppenheimer approximation still takes into account the variation in the positions of the nuclei in determining the electronic energy and the resulting electronic wavefunction depends upon the nuclear positions, $R$.
As a result of the Born-Oppenheimer approximation, the molecular wavefunction can be written as a product:
$\psi _{ne} (r, R) = X_{ne} (R) \varphi _e (r, R) \label {10-5}$
This product wavefunction is called the Born-Oppenheimer wavefunction. The function $X_{ne} (R)$ is the vibrational wavefunction, which is a function of the nuclear coordinates $R$ and depends upon both the vibrational and electronic quantum numbers or states, $n$ and $e$, respectively. The electronic function, $\varphi _e (r, R)$, is a function of both the nuclear and electronic coordinates, but only depends upon the electronic quantum number or electronic state, $e$. Translational and rotational motion is not included here. The translational and rotational wavefunctions simply multiply the vibrational and electronic functions in Equation \ref{10-5} to give the complete molecular wavefunction when the translational and rotational motions are not coupled to the vibrational and electronic motion.
Crude Born-Oppenheimer Approximation
In the Crude Born-Oppenheimer Approximation, $R$ is set equal to $R_o$, the equilibrium separation of the nuclei, and the electronic wavefunctions are taken to be the same for all positions of the nuclei.
The electronic energy, $E_e (R)$, in Equation \ref{10-4} combines with the repulsive Coulomb energy of the two nuclei, to form the potential energy function that controls the nuclear motion as shown in Figure $1$.
$V_e (R) = E_e (R) + \dfrac {e^2}{4\pi \epsilon _0} \dfrac {Z_A Z_B}{R} \label {10-6}$
Consequently the Schrödinger equation for the vibrational motion is
$\left ( \hat {T} _{nuc} (R) + V_e (R) \right ) X_{ne} (R) = E_{ne} X_{ne} (R) \label {10-7}$
In Chapter 6, the potential energy was approximated as a harmonic potential depending on the displacement, $Q$, of the nuclei from their equilibrium positions.
In practice the electronic Schrödinger equation is solved using approximations at particular values of $R$ to obtain the wavefunctions $\varphi _e (r,R)$ and potential energies $V_e (R)$. The potential energies can be graphed as illustrated in Figure $1$.
The graph in Figure $1$ is the energy of a diatomic molecule as a function of internuclear separation, which serves as the potential energy function for the nuclei. When $R$ is very large there are two atoms that are weakly interacting. As $R$ becomes smaller, the interaction becomes stronger, the energy becomes a large negative value, and we say a bond is formed between the atoms. At very small values of $R$, the internuclear repulsion is very large so the energy is large and positive. This energy function controls the motion of the nuclei. Previously, we approximated this function by a harmonic potential to obtain the description of vibrational motion in terms of the harmonic oscillator model. Other approximate functional forms could be used as well, e.g. the Morse potential. The equilibrium position of the nuclei is where this function is a minimum, i.e. at $R = R_0$. If we obtain the wavefunction at $R = R_0$, and use this function for all values of $R$, we have employed the Crude Born-Oppenheimer approximation.
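A short sketch of such a potential function, using a Morse potential $V(R) = D_e \left( 1 - e^{-a(R - R_0)} \right)^2$ with hypothetical parameters (illustrative units only, not fit to any real molecule). The minimum sits at $R = R_0$, and the curvature there gives the force constant $k = 2 D_e a^2$ of the harmonic approximation used previously:

```python
import numpy as np

# Hypothetical Morse parameters (arbitrary units), not fit to a real molecule
De, a, R0 = 4.0, 1.0, 1.4

def V(R):
    # Morse potential: V(R0) = 0 at the minimum, V -> De as R -> infinity
    return De * (1.0 - np.exp(-a * (R - R0)))**2

# Locate the minimum on a grid
R = np.linspace(0.5, 6.0, 2001)
R_min = R[np.argmin(V(R))]

# Curvature at the minimum gives the harmonic force constant k = 2*De*a**2
h = 1e-4
k_numeric = (V(R0 + h) - 2.0 * V(R0) + V(R0 - h)) / h**2

print(round(float(R_min), 2))       # ~1.4, the equilibrium separation R0
print(round(float(k_numeric), 3))   # ~8.0 = 2*De*a**2
```

The depth parameter $D_e$ plays the role of the dissociation energy measured from the bottom of the well, which the harmonic approximation lacks entirely.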
Exercise $3$
Relate Equation \ref{10-7} to the one previously used in our description of molecular vibrations in terms of the harmonic oscillator model.
In this section we started with the Schrödinger equation for a diatomic molecule and separated it into two equations, an electronic Schrödinger equation and a nuclear Schrödinger equation. In order to make the separation, we had to make an approximation. We had to neglect the effect of the nuclear kinetic energy on the electrons. The fact that this assumption works can be traced to the fact that the nuclear masses are much larger than the electron mass. We then used the solution of the electronic Schrödinger equation to provide the potential energy function for the nuclear motion. The solution to the nuclear Schrödinger equation provides the vibrational wavefunctions and energies.
Exercise $4$
Explain the difference between the Born-Oppenheimer approximation and the Crude Born-Oppenheimer approximation. | textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Book%3A_Quantum_States_of_Atoms_and_Molecules_(Zielinksi_et_al)/10%3A_Theories_of_Electronic_Molecular_Structure/10.01%3A_The_Born-Oppenheimer_Approximation.txt |
You should be able to recognize from the form of the electronic Hamiltonian, Equation \ref{10-3}, that the electronic Schrödinger equation, Equation \ref{10-4}, cannot be solved exactly. The problem, as for the case of atoms, is the electron-electron repulsion terms. Approximations must be made, and these approximations are based on the idea of using one-electron wavefunctions to describe multi-electron systems, in this case molecules, just as is done for multi-electron atoms. Initially two different approaches were developed. Heitler and London originated one in 1927, called the Valence Bond Method, and Robert Mulliken and others developed the other somewhat later, called the Molecular Orbital Method. By using configuration interaction, both methods can provide equivalent electronic wavefunctions and descriptions of bonding in molecules, although the basic concepts of the two methods are different. We will develop only the molecular orbital method because this is the method that is predominantly employed now. The wavefunction for a single electron in a molecule is called a molecular orbital, in analogy with the one-electron wavefunctions for atoms being called atomic orbitals.
To describe the electronic states of molecules, we construct wavefunctions for the electronic states by using molecular orbitals. These wavefunctions are approximate solutions to the Schrödinger equation. A mathematical function for a molecular orbital is constructed, $\psi _i$, as a linear combination of other functions, $\varphi _j$, which are called basis functions because they provide the basis for representing the molecular orbital.
$\psi _i = \sum _j c_{ij} \varphi _j \label {10-8}$
The variational method is used to find values for parameters in the basis functions and for the constant coefficients in the linear combination that optimize these functions, i.e. make them as good as possible. The criterion for quality in the variational method is making the ground state energy of the molecule as low as possible. Here and in the rest of this chapter, the following notation is used: $\sigma$ is a general spin function (can be either $\alpha$ or $\beta$), $\varphi$ is a basis function (this usually represents an atomic orbital), $\psi$ is a molecular orbital, and $\Psi$ is the electronic state wavefunction (representing a single Slater determinant or linear combination of Slater determinants).
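In a non-orthogonal basis, the linear variational step amounts to solving the generalized eigenvalue problem $\mathbf{H}\mathbf{c} = E\,\mathbf{S}\mathbf{c}$ for the coefficients and orbital energies. A minimal numpy sketch with made-up $2 \times 2$ Hamiltonian and overlap matrices (illustrative numbers only), solved by symmetric (Löwdin) orthogonalization:

```python
import numpy as np

# Made-up 2x2 Hamiltonian and overlap matrices in a two-function basis
H = np.array([[-1.0, -0.5],
              [-0.5, -1.0]])
S = np.array([[1.0, 0.4],
              [0.4, 1.0]])

# Lowdin orthogonalization: turn H c = E S c into an ordinary symmetric
# eigenvalue problem using S^(-1/2)
s_val, s_vec = np.linalg.eigh(S)
S_inv_half = s_vec @ np.diag(s_val**-0.5) @ s_vec.T

E, C_ortho = np.linalg.eigh(S_inv_half @ H @ S_inv_half)
C = S_inv_half @ C_ortho     # coefficients back in the original basis

# For this symmetric 2x2 case the lowest root is (H11 + H12)/(1 + S12)
print(np.isclose(E[0], (H[0, 0] + H[0, 1]) / (1 + S[0, 1])))  # True
```

The closed form of the lowest root for this symmetric two-function case, $(H_{11} + H_{12})/(1 + S_{12})$, is exactly the pattern that reappears in the LCAO treatment of $\ce{H_2^{+}}$ later in this chapter.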
The ultimate goal is a mathematical description of electrons in molecules that enables chemists and other scientists to develop a deep understanding of chemical bonding and reactivity, to calculate properties of molecules, and to make predictions based on these calculations. For example, an active area of research in industry involves calculating changes in chemical properties of pharmaceutical drugs as a result of changes in chemical structure.
Just as for atoms, each electron in a molecule can be described by a product of a spatial orbital and a spin function. These product functions are called spin orbitals. Since electrons are fermions, the electronic wavefunction must be antisymmetric with respect to the permutation of any two electrons. A Slater determinant containing the molecular spin orbitals produces the antisymmetric wavefunction. For example for two electrons,
$\psi (r_1, r_2) = \dfrac{1}{\sqrt{2}} \begin {vmatrix} \psi _A (r_1) \alpha (1) & \psi _B (r_1) \beta (1) \\ \psi _A (r_2) \alpha (2) & \psi _B (r_2) \beta (2) \end {vmatrix} \label {10-9}$
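The antisymmetry of this determinant form can be checked numerically. The one-particle functions below are made-up stand-ins for the spin orbitals (position and spin collapsed into a single coordinate $x$ purely for illustration); exchanging the two electrons flips the sign of the wavefunction, and placing both electrons at the same combined coordinate makes it vanish:

```python
import numpy as np

# Made-up one-particle "spin orbitals"; position and spin are collapsed
# into a single coordinate x purely for illustration
def chi_A(x):
    return np.exp(-x**2)

def chi_B(x):
    return x * np.exp(-x**2)

def slater_2e(x1, x2):
    # Equation 10-9: (1/sqrt 2) times the determinant of spin orbitals
    M = np.array([[chi_A(x1), chi_B(x1)],
                  [chi_A(x2), chi_B(x2)]])
    return np.linalg.det(M) / np.sqrt(2.0)

psi_12 = slater_2e(0.3, 1.1)
psi_21 = slater_2e(1.1, 0.3)
print(np.isclose(psi_12, -psi_21))        # True: sign flips on exchange

# Both electrons at the same coordinate: identical rows, determinant ~ 0
print(abs(slater_2e(0.5, 0.5)) < 1e-12)   # True
```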
Solving the Schrödinger equation in the orbital approximation will produce a set of spatial molecular orbitals, each with a specific energy, $\epsilon$. Following the Aufbau Principle, 2 electrons with different spins ($\alpha$ and $\beta$, consistent with the Pauli Exclusion Principle) are assigned to each spatial molecular orbital in order of increasing energy. For the ground state of the 2n electron molecule, the n lowest energy spatial orbitals will be occupied, and the electron configuration will be given as $\psi ^2_1 \psi ^2_2 \psi ^2_3 \dots \psi ^2_n$. The electron configuration also can be specified by an orbital energy level diagram as shown in Figure $1$. Higher energy configurations exist as well, and these configurations produce excited states of molecules. Some examples are shown in Figure $1$.
Molecular orbitals usually are identified by their symmetry or angular momentum properties. For example, a typical symbol used to represent an orbital in an electronic configuration of a diatomic molecule is $2\sigma ^2_g$. The superscript in the symbol means that this orbital is occupied by two electrons; the prefix 2 means that it is the second sigma orbital with gerade symmetry.
Diatomic molecules retain a component of angular momentum along the internuclear axis. The molecular orbitals of a diatomic molecule therefore can be identified in terms of this angular momentum. A Greek letter, e.g. $\sigma$ or $\pi$, encodes this information, as well as information about the symmetry of the orbital. A $\sigma$ means the component of angular momentum is 0, and there is no node in any plane containing the internuclear axis, so the orbital must be symmetric with respect to reflection in such a plane. A $\pi$ means there is a node and the wavefunction is antisymmetric with respect to reflection in a plane containing the internuclear axis. For homonuclear diatomic molecules, a g or a u is added as a subscript to designate whether the orbital is symmetric or antisymmetric with respect to the center of inversion of the molecule.
A homonuclear diatomic molecule has a center of inversion in the middle of the bond. This center of inversion means that $\psi (x, y, z) = \pm \psi (-x, -y, -z)$ with the origin at the inversion center. Inversion takes you from $(x, y, z )$ to $(-x, -y, -z )$. For a heteronuclear diatomic molecule, there is no center of inversion so the symbols g and u are not used. A prefix 1, 2, 3, etc. simply means the first, second, third, etc. orbital of that type. We can specify an electronic configuration of a diatomic molecule by these symbols by using a superscript to denote the number of electrons in that orbital, e.g. the lowest energy configuration of $\ce{N2}$ is
$1 \sigma ^2_g 1 \sigma ^2_u 2 \sigma ^2_g 2 \sigma ^2_u 1 \pi ^4_u 3 \sigma ^2_g \nonumber$ | textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Book%3A_Quantum_States_of_Atoms_and_Molecules_(Zielinksi_et_al)/10%3A_Theories_of_Electronic_Molecular_Structure/10.02%3A_The_Orbital_Approximation_and_Orbital_Configuration.txt |
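A quick consistency check on this configuration: the occupation superscripts must add up to the 14 electrons of $\ce{N2}$. A trivial sketch (the ASCII orbital labels below are ad-hoc stand-ins for the symbols above):

```python
# Ad-hoc ASCII labels for the orbitals in the N2 configuration above
occupation = {"1sigma_g": 2, "1sigma_u": 2, "2sigma_g": 2,
              "2sigma_u": 2, "1pi_u": 4, "3sigma_g": 2}

total_electrons = sum(occupation.values())
print(total_electrons)   # 14, the electron count of N2 (7 per nitrogen atom)
```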
The molecular spin-orbitals that are used in the Slater determinant usually are expressed as a linear combination of some chosen functions, which are called basis functions. This set of functions is called the basis set. The fact that one function can be represented by a linear combination of other functions is a general property. All that is necessary is that the basis functions span the space, which means that the functions must form a complete set and must depend on the same coordinates as the function being represented. For example, spherical harmonics cannot be used to describe a hydrogen atom radial function because they do not involve the distance r, but they can be used to describe the angular properties of anything in three-dimensional space.
This span-the-space property of functions is just like the corresponding property of vectors. The unit vectors $(\overrightarrow {x}, \overrightarrow {y}, \overrightarrow {z})$ describe points in space and form a complete set since any position in space can be specified by a linear combination of these three unit vectors. These unit vectors also could be called basis vectors.
Exercise $1$
Explain why the unit vectors $(\overrightarrow {x}, \overrightarrow {y})$ do not form a complete set to describe your classroom.
Just as we discussed for atoms, parameters in the basis functions and the coefficients in the linear combination can be optimized in accord with the Variational Principle to produce a self-consistent field (SCF) for the electrons. This optimization means that the ground state energy calculated with the wavefunction is minimized with respect to variation of the parameters and coefficients defining the function. As a result, that ground state energy is larger than the exact energy, but is the best value that can be obtained with that wavefunction.
Slater-type atomic orbitals (STOs)
Intuitively one might select hydrogenic atomic orbitals as the basis set for molecular orbitals. After all, molecules are composed of atoms, and hydrogenic orbitals describe atoms exactly if the electron-electron interactions are neglected. At a better level of approximation, the nuclear charge that appears in these functions can be used as a variational parameter to account for the shielding effects due to the electron-electron interactions. Also, the use of atomic orbitals allows us to interpret molecular properties and charge distributions in terms of atomic properties and charges, which is very appealing since we picture molecules as composed of atoms. As described in the previous chapter, calculations with hydrogenic functions were not very efficient so other basis functions, Slater-type atomic orbitals (STOs), were invented.
A minimal basis set of STOs for a molecule includes only those STOs that would be occupied by electrons in the atoms forming the molecule. A larger basis set, however, improves the accuracy of the calculations by providing more variable parameters to produce a better approximate wavefunction, but at the expense of increased computational time.
For example, one can use more than one STO to represent one atomic orbital, as shown in Equation \ref{10.11}, and rather than doing a nonlinear variational calculation to optimize each zeta, use two STOs with different values for zeta. The linear variation calculation then will produce the coefficients ($C_1$ and $C_2$) for these two functions in the linear combination that best describes the charge distribution in the molecule. The function with the large zeta accounts for charge near the nucleus, while the function with the smaller zeta accounts for the charge distribution at larger values of the distance from the nucleus. This expanded basis set is called a double-zeta basis set.
$R_{2s} (r) = C_1re^{-\zeta _1r} + C_2 r e^{-\zeta _2 r} \label {10.11}$
Example $1$
• Plot the normalized radial probability density for a 2s hydrogenic orbital for lithium using an effective nuclear charge of 1.30.
• Fit that radial probability density with the radial probability density for 1 STO by varying the zeta parameter in the STO.
• Also fit the radial probability density for the hydrogenic orbital with that for the sum of 2 STOs, as in Equation ($\ref{10.11}$), by varying the zeta parameters for each and their coefficients in the sum.
• Report your values for the zeta parameters and the coefficients and provide graphs of these functions and the corresponding radial probability densities. What are your conclusions regarding the utility of using STOs with single or double zeta values to describe the charge distributions in atoms and molecules?
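Below is a sketch of one way to do the single-STO part of this example numerically (atomic units assumed; a grid search over $\zeta$ stands in for a fitting routine, and the plotting steps are omitted). The normalized hydrogenic 2s radial function and the $n = 2$ STO normalization used here are standard results:

```python
import numpy as np

Z = 1.30                              # effective nuclear charge from the example
r = np.linspace(1e-4, 20.0, 4000)     # radial grid (bohr)
dr = r[1] - r[0]

# Normalized hydrogenic 2s radial function (atomic units) and radial density
R2s = (Z**1.5 / (2.0 * np.sqrt(2.0))) * (2.0 - Z * r) * np.exp(-Z * r / 2.0)
P_h = (r * R2s)**2

def P_sto(zeta):
    # Radial density of a normalized n = 2 STO, N r exp(-zeta r)
    N2 = (2.0 * zeta)**5 / 24.0       # from  integral r^2 (N r e^(-zeta r))^2 dr = 1
    return N2 * r**4 * np.exp(-2.0 * zeta * r)

# Simple grid search over zeta for the best single-STO fit
zetas = np.linspace(0.2, 2.0, 361)
errors = [np.sum((P_h - P_sto(z))**2) * dr for z in zetas]
zeta_best = zetas[int(np.argmin(errors))]
print(round(float(zeta_best), 2))     # best single zeta for the 2s density
```

Extending the search to two zetas and two coefficients, as in Equation \ref{10.11}, handles the double-zeta part of the example.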
The use of double zeta functions in basis sets is especially important because without them orbitals of the same type are constrained to be identical even though in the molecule they may be chemically inequivalent. For example, in acetylene the pz orbital along the internuclear axis is in a quite different chemical environment and is being used to account for quite different bonding than the px and py orbitals. With a double zeta basis set the pz orbital is not constrained to be the same size as the px and py orbitals.
Example $2$
Explain why the $p_x$, $p_y$, and $p_z$ orbitals in a molecule might be constrained to be the same in a single-zeta basis set calculation, and how the use of a double-zeta basis set would allow the $p_x$, $p_y$, and $p_z$ orbitals to differ.
The use of a minimal basis set with fixed zeta parameters severely limits how much the electronic charge can be changed from the atomic charge distribution in order to describe molecules and chemical bonds. This limitation is removed if STOs with larger n values and different spherical harmonic functions, the $Y^m_l (\theta , \varphi )$ in the definition of STOs in Chapter 9, are included. Adding such functions is another way to expand the basis set and obtain more accurate results. Such functions are called polarization functions because they allow for charge polarization away from the atomic distribution to occur.
Gaussian Basis Function
While the STO basis set was an improvement over hydrogenic orbitals in terms of computational efficiency, representing the STOs with Gaussian functions produced further improvements that were needed to accurately describe molecules. A Gaussian basis function has the form shown in Equation \ref{10.12}. Note that in all the basis sets, only the radial part of the orbital changes, and the spherical harmonic functions are used in all of them to describe the angular part of the orbital.
$G_{nlm} (r, \theta , \varphi ) = N_n r^{n-1} e^{-\alpha r^2} Y^m_l (\theta, \varphi) \label {10.12}$
Unfortunately Gaussian functions do not match the shape of an atomic orbital very well. In particular, they are flat rather than steep near the atomic nucleus at r = 0, and they fall off more rapidly at large values of r.
Example $4$
Make plots of the following two functions
• $y=(1.108) e^{-r^2/3}$
• $y=(2.000) e^{-r}$
to illustrate how Gaussian functions differ from hydrogenic orbitals and Slater-type orbitals. The constants multiplying the exponentials normalize these functions. Describe the differences you observe between a Gaussian and a Slater-type function.
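These two observations can be quantified directly: the Gaussian has zero slope at $r = 0$ (no cusp at the nucleus), while the Slater-type function has a finite negative slope there, and at large $r$ the Slater-type function is orders of magnitude larger. A sketch using the two functions from this example:

```python
import numpy as np

def g(r):   # the Gaussian from this example
    return 1.108 * np.exp(-r**2 / 3.0)

def s(r):   # the Slater-type function from this example
    return 2.000 * np.exp(-r)

# Behavior at the nucleus: the Gaussian is flat (zero slope, no cusp),
# while the Slater-type function has a finite cusp (slope -2 here)
h = 1e-6
slope_g = (g(h) - g(0.0)) / h
slope_s = (s(h) - s(0.0)) / h
print(abs(slope_g) < 1e-3)          # True: flat at r = 0
print(abs(slope_s + 2.0) < 1e-3)    # True: cusp at r = 0

# Behavior far from the nucleus: the Gaussian dies off much faster
print(s(6.0) / g(6.0) > 100)        # True
```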
To compensate for this problem, each STO is replaced with a number of Gaussian functions with different values for the exponential parameter $\alpha$. These Gaussian functions form a primitive Gaussian basis set. Linear combinations of the primitive Gaussians are formed to approximate the radial part of an STO. This linear combination is not optimized further in the energy variational calculation but rather is frozen and treated as a single function. The linear combination of primitive Gaussian functions is called a contracted Gaussian function. Although more functions and more integrals now are part of the calculation, the integrals involving Gaussian functions are quicker to compute than those involving exponentials so there is a net gain in the efficiency of the calculation.
Gaussian basis sets are identified by abbreviations such as N-MPG*. N is the number of Gaussian primitives used for each inner-shell orbital. The hyphen indicates a split-basis set where the valence orbitals are double zeta. The M indicates the number of primitives that form the large zeta function (for the inner valence region), and P indicates the number that form the small zeta function (for the outer valence region). G identifies the set as being Gaussian. The addition of an asterisk to this notation means that a single set of Gaussian 3d polarization functions is included. A double asterisk means that a single set of Gaussian 2p functions is included for each hydrogen atom.
For example, 3G means each STO is represented by a linear combination of three primitive Gaussian functions. 6-31G means each inner shell (1s orbital) STO is a linear combination of 6 primitives and each valence shell STO is split into an inner and outer part (double zeta) using 3 and 1 primitive Gaussians, respectively.
Example $5$
The 1s Slater-type orbital $S_1 (r) = \sqrt {\dfrac {\zeta _1^3}{\pi}} e^{-\zeta _1 r}$ with $\zeta _1 = 1.24$ is represented as a sum of three primitive Gaussian functions,
$S_G (r) = \sum _{j=1}^3 C_j e^{-\alpha _j r^2} \nonumber$
This sum is the contracted Gaussian function for the STO.
1. Make plots of the STO and the contracted Gaussian function on the same graph so they can be compared easily. All distances should be in units of the Bohr radius. Use the following values for the coefficients, $C_j$, and the exponential parameters, $\alpha _j$.

| index $j$ | $\alpha _j$ | $C_j$ |
|---|---|---|
| 1 | 0.1688 | 0.4 |
| 2 | 0.6239 | 0.7 |
| 3 | 3.425 | 1.3 |

2. Change the values of the coefficients and exponential parameters to see if a better fit can be obtained.
4. Comment on the ability of a linear combination of Gaussian functions to accurately describe a STO. | textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Book%3A_Quantum_States_of_Atoms_and_Molecules_(Zielinksi_et_al)/10%3A_Theories_of_Electronic_Molecular_Structure/10.03%3A_Basis_Functions.txt |
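A numerical starting point for part 1 (plots omitted), assuming the standard normalized 1s STO form $\sqrt{\zeta_1^3/\pi}\, e^{-\zeta_1 r}$ and using the table's values as the starting guess:

```python
import numpy as np

zeta = 1.24
alphas = np.array([0.1688, 0.6239, 3.425])   # exponents alpha_j from the table
coeffs = np.array([0.4, 0.7, 1.3])           # starting coefficients C_j

def sto(r):
    # Normalized 1s STO (standard form, assumed here)
    return np.sqrt(zeta**3 / np.pi) * np.exp(-zeta * r)

def contracted(r):
    # The contracted Gaussian S_G(r): a frozen sum of three primitives
    return float(np.sum(coeffs * np.exp(-alphas * r**2)))

r = np.linspace(0.0, 4.0, 401)
dr = r[1] - r[0]
sg = np.array([contracted(x) for x in r])
mismatch = np.sum((sg - sto(r))**2) * dr     # integrated squared error

print(round(contracted(0.0), 6))   # 2.4 = 0.4 + 0.7 + 1.3
print(round(float(mismatch), 3))   # large for this starting guess
```

The printed mismatch is the integrated squared error; the point of the remaining parts is to see how much it can be reduced by adjusting the $C_j$ and $\alpha_j$.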
One can develop an intuitive sense of molecular orbitals and what a chemical bond is by considering the simplest molecule, $\ce{H_2^{+}}$. This ion consists of two protons held together by the electrostatic force of a single electron. Clearly the two protons, two positive charges, repel each other. The protons must be held together by an attractive Coulomb force that opposes the repulsive Coulomb force. A negative charge density between the two protons would produce the required counter-acting Coulomb force needed to pull the protons together. So intuitively, to create a chemical bond between two protons or two positively charged nuclei, a high density of negative charge between them is needed. We expect the molecular orbitals that we find to reflect this intuitive notion.
The electronic Hamiltonian for $\ce{H_2^{+}}$ is
$\hat {H}_{elec} (r, R) = -\dfrac {\hbar ^2}{2m} \nabla ^2 - \dfrac {e^2}{4 \pi \epsilon _0 r_A} - \dfrac {e^2}{4 \pi \epsilon _0 r_B} + \dfrac {e^2}{4 \pi \epsilon _0 R} \label {10.13}$
where $r$ gives the coordinates of the electron, and $R$ is the distance between the two protons. Although the Schrödinger equation for $\ce{H_2^{+}}$ can be solved exactly because there is only one electron, we will develop approximate solutions in a manner applicable to other diatomic molecules that have more than one electron.
For the case where the protons in $\ce{H_2^{+}}$ are infinitely far apart, we have a hydrogen atom and an isolated proton when the electron is near one proton or the other. The electronic wavefunction would just be $1s_A(r)$ or $1s_B(r)$ depending upon which proton, labeled A or B, the electron is near. Here $1s_A$ denotes a 1s hydrogen atomic orbital with proton A serving as the origin of the spherical polar coordinate system in which the position $r$ of the electron is specified. Similarly $1s_B(r)$ has proton B as the origin. A useful approximation for the molecular orbital when the protons are close together therefore is a linear combination of the two atomic orbitals. The general method of using
$\psi (r) = C_A 1s_A (r) + C_B1s_B (r) \label {10.14}$
i.e. of finding molecular orbitals as linear combinations of atomic orbitals is called the Linear Combination of Atomic Orbitals - Molecular Orbital (LCAO-MO) Method. In this case we have two basis functions in our basis set, the hydrogenic atomic orbitals $1s_A$ and $1s_B$.
For $\ce{H_2^{+}}$, the simplest molecule, the starting function is given by Equation $\ref{10.14}$. We must determine values for the coefficients, $C_A$ and $C_B$. We could use the variational method to find a value for these coefficients, but for the case of $\ce{H_2^{+}}$ evaluating these coefficients is easy. Since the two protons are identical, the probability that the electron is near A must equal the probability that the electron is near B. These probabilities are given by $|C_A|^2$ and $|C_B|^2$, respectively. Consider two possibilities that satisfy the condition $|C_A|^2 = |C_B|^2$; namely, $C_A = C_B = C_{+}$ and $C_A = -C_B = C_{-}$. These two cases produce two molecular orbitals:
$\psi _+ = C_+(1s_A + 1s_B)$
$\psi _{-} = C_{-}(1s_A - 1s_B) \label {10.15}$
The probability density for finding the electron at any point in space is given by $|\psi|^2$ and the electronic charge density is just $e|\psi|^2$. The important difference between $\psi _+$ and $\psi _{-}$ is that the charge density for $\psi _+$ is enhanced between the two protons, whereas it is diminished for $\psi _{-}$ as shown in Figures $1$. $\psi _{-}$ has a node in the middle while $\psi _+$ corresponds to our intuitive sense of what a chemical bond must be like. The electronic charge density is enhanced in the region between the two protons. So $\psi _+$ is called a bonding molecular orbital. If the electron were described by $\psi _{-}$, the low charge density between the two protons would not balance the Coulomb repulsion of the protons, so $\psi _{-}$ is called an antibonding molecular orbital.
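This difference shows up immediately at the bond midpoint, where $r_A = r_B$ so $1s_A = 1s_B$: the antibonding combination vanishes exactly (its node), while the bonding density exceeds the incoherent sum of the same two atomic amplitudes (constructive interference). A sketch in atomic units, with the common normalization constants $C_\pm$ omitted since they do not affect either conclusion:

```python
import numpy as np

R = 2.0     # internuclear separation (bohr), illustrative choice

def one_s(r):
    # hydrogen 1s orbital in atomic units
    return np.exp(-r) / np.sqrt(np.pi)

# Electron at the bond midpoint: r_A = r_B = R/2
a = one_s(R / 2.0)
b = one_s(R / 2.0)

plus_sq = (a + b)**2      # proportional to |psi_+|^2 (C_+^2 omitted)
minus_sq = (a - b)**2     # proportional to |psi_-|^2 (C_-^2 omitted)
incoherent = a**2 + b**2  # no-interference sum of the same amplitudes

print(minus_sq)                 # 0.0: the node of the antibonding orbital
print(plus_sq > incoherent)     # True: constructive charge buildup
```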
Now we want to evaluate $C_+$ and $C_-$ and then calculate the energy. The bonding and antibonding character of $\psi _+$ and $\psi _{-}$ also should be reflected in the energy. If $\psi _+$ indeed describes a bonding orbital, then the energy of this state should be less than that of a proton and hydrogen atom that are separated. The calculation of the energy will tell us whether this simple theory predicts $\ce{H_2^{+}}$ to be stable or not and also how much energy is required to dissociate this molecule.
Exercise $1$
From the information in Figure $1$ for $\ce{H_2^{+}}$, calculate the difference in the electronic charge density (C/pm3) at a point halfway between the two nuclei for an electron in the bonding molecular orbital compared to one in the antibonding molecular orbital.
The constants $C_+$ and $C_-$ are evaluated from the normalization condition. Bracket notation, $\langle \, | \, \rangle$, is used in Equation $\ref{10.16}$ to represent integration over all the coordinates of the electron for both functions $\psi _+$ and $\psi _-$. The right bracket represents a function, the left bracket represents the complex conjugate of the function, and the two together mean integrate over all the coordinates.
$\int \psi ^*_{\pm} \psi _{\pm} d\tau = \left \langle \psi _{\pm} | \psi _{\pm} \right \rangle = 1 \label {10.16}$
$\left \langle C_{\pm} [ 1s_A \pm 1s_B ] | C_{\pm} [ 1s_A \pm 1s_B ]\right \rangle = 1 \label {10.17}$
$|C_\pm|^2 [ \left \langle 1s_A | 1s_A \right \rangle + \left \langle 1s_B | 1s_B \right \rangle \pm \left \langle 1s_B | 1s_A \right \rangle \pm \left \langle 1s_A | 1s_B \right \rangle ] = 1 \label {10.18}$
Since the atomic orbitals are normalized, the first two integrals are just 1. The last two integrals are called overlap integrals and are symbolized by S and S*, respectively, since one is the complex conjugate of the other.
Exercise $2$
Show that for two arbitrary functions $\left \langle \varphi _B | \varphi _A \right \rangle$ is the complex conjugate of $\left \langle \varphi _A | \varphi _B \right \rangle$ and that these two integrals are equal if the functions are real.
The overlap integrals are telling us to take the value of $1s_B$ at a point, multiply it by the value of $1s_A$ at that point, and sum (integrate) such products over all of space. If the functions don't overlap, i.e. if one is zero when the other one isn't and vice versa, these integrals then will be zero. It also is possible in general for such integrals to be zero even if the functions overlap because of the cancellation of positive and negative contributions, as was discussed in Section 4.4.
If the overlap integral is zero, for whatever reason, the functions are said to be orthogonal. Notice that the overlap integral ranges from 0 to 1 as the separation between the protons varies from $R = \infty$ to $R = 0$. Clearly when the protons are an infinite distance apart, there is no overlap, and when $R = 0$ both functions are centered on one nucleus and $\left \langle 1s_A | 1s_B \right \rangle$ becomes identical to $\left \langle 1s_A | 1s_A \right \rangle$, which is normalized to 1, because then $1s_A = 1s_B$.
With these considerations and using the fact that $1s$ wavefunctions are real so
$\left \langle 1s_A | 1s_B \right \rangle = \left \langle 1s_B | 1s_A \right \rangle = S \label {10.19}$
Equation $\ref{10.18}$ becomes
$|C_{\pm}|^2 (2 \pm 2S ) = 1 \label {10.20}$
The solution to Equation $\ref{10.20}$ is given by
$C_{\pm} = [2(1 \pm S )]^{-1/2} \label {10.21}$
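For two 1s orbitals separated by $R$, the overlap integral has a well-known closed form in atomic units, $S(R) = e^{-R}\left(1 + R + R^2/3\right)$, a standard result quoted here without derivation. This makes it easy to verify the limits discussed above and to evaluate the normalization constants of Equation \ref{10.21}:

```python
import numpy as np

def S(R):
    # Closed-form 1s-1s overlap in atomic units (standard result, not derived here)
    return np.exp(-R) * (1.0 + R + R**2 / 3.0)

def C(R, sign):
    # Normalization constants of Equation 10.21: [2(1 +/- S)]^(-1/2)
    return (2.0 * (1.0 + sign * S(R)))**-0.5

print(S(0.0))                    # 1.0: complete overlap when the nuclei coincide
print(round(S(8.0), 4))          # ~0.01: overlap nearly gone at large separation
print(C(2.0, +1) < C(2.0, -1))   # True: C+ < C- whenever S > 0
```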
The energy is calculated from the expectation value integral,
$E_{\pm} = \left \langle \psi _{\pm} | \hat {H} _{elec} | \psi _{\pm} \right \rangle \label {10.22}$
which expands to give
$E_{\pm} = \dfrac {1}{2(1 \pm S)} [ \left \langle 1s_A |\hat {H} _{elec} | 1s_A \right \rangle + \left \langle 1s_B |\hat {H} _{elec} | 1s_B \right \rangle \pm \left \langle 1s_A |\hat {H} _{elec} | 1s_B \right \rangle \pm \left \langle 1s_B |\hat {H} _{elec} | 1s_A \right \rangle ] \label {10.23}$
Exercise $3$
Show that Equation $\ref{10.22}$ expands to give Equation $\ref{10.23}$.
The four integrals in Equation $\ref{10.23}$ can be represented by $H_{AA}$, $H_{BB}$, $H_{AB}$, and $H_{BA}$, respectively. Notice that A and B appear equivalently in the Hamiltonian operator, Equation $\ref{10.13}$. This equivalence means that integrals involving $1s_A$ must be the same as corresponding integrals involving $1s_B$, i.e.
$H_{AA} = H_{BB} \label {10.24}$
and since the wavefunctions are real,
$H_{AB} = H_{BA} \label {10.25}$
giving
$E_{\pm} = \dfrac {1}{1 \pm S} (H_{AA} \pm H_{AB}) \label {10.26}$
Now examine the details of $H_{AA}$ after inserting Equation $\ref{10.13}$ for the Hamiltonian operator.
$H_{AA} = \left \langle 1s_A | - \dfrac {\hbar ^2}{2m} \nabla ^2 - \dfrac {e^2}{4\pi \epsilon _0 r_A}| 1s_A \right \rangle + \dfrac {e^2}{4\pi \epsilon _0 R} \left \langle 1s_A | 1s_A \right \rangle - \left \langle 1s_A | \dfrac {e^2}{4 \pi \epsilon _0 r_B } | 1s_A \right \rangle \label {10.27}$
The first term is just the integral for the energy of the hydrogen atom, $E_H$. The second integral is equal to 1 by normalization; the prefactor is just the Coulomb repulsion of the two protons. The last integral, including the minus sign, is represented by $J$ and is called the Coulomb integral. Physically $J$ is the potential energy of interaction of the electron located around proton A with proton B. It is negative because it is an attractive interaction. It is the average interaction energy of an electron described by the $1s_A$ function with proton B.
Now consider $H_{AB}$.
$H_{AB} = \left \langle 1s_A | - \dfrac {\hbar ^2}{2m} \nabla ^2 - \dfrac {e^2}{4\pi \epsilon _0 r_B}| 1s_B \right \rangle + \dfrac {e^2}{4\pi \epsilon _0 R} \left \langle 1s_A | 1s_B \right \rangle - \left \langle 1s_A | \dfrac {e^2}{4 \pi \epsilon _0 r_A } | 1s_B \right \rangle \label {10.28}$
In the first integral we have the hydrogen atom Hamiltonian and the H atom function $1s_B$. The function $1s_B$ is an eigenfunction of the operator with eigenvalue $E_H$. Since $E_H$ is a constant it factors out of the integral, which then becomes the overlap integral, $S$. The first integral therefore reduces to $E_H S$. The second term is just the Coulomb energy of the two protons times the overlap integral. The third term, including the minus sign, is given the symbol $K$ and is called the exchange integral. It is called an exchange integral because the electron is described by the $1s_A$ orbital on one side and by the $1s_B$ orbital on the other side of the operator. The electron changes or exchanges position in the molecule. In a Coulomb integral the electron always is in the same orbital; whereas, in an exchange integral, the electron is in one orbital on one side of the operator and in a different orbital on the other side.
Using the expressions for $H_{AA}$ and $H_{AB}$ and substituting into Equation $\ref{10.26}$ produces:
\begin{align} E_{\pm} &= \dfrac {1}{1 \pm S} \left[ (E_H + \dfrac {e^2}{4\pi \epsilon_0 R}) (1 \pm S ) + J \pm K \right] \label {10.29} \[4pt] &= \underbrace{E_H}_{\text{H Atom Energy}} + \underbrace{\dfrac {e^2}{4\pi \epsilon _0 R}}_{\text{Proton-Proton repulsion}} + \underbrace{\dfrac {J \pm K}{1 \pm S}}_{\text{Bonding Energy}} \label {10.30} \end{align}
The difference in energies of the two states $\Delta E_{\pm}$ is then:
\begin{align} \Delta E_{\pm} &= E_{\pm} - E_H \label {10.30B} \[4pt] &= \dfrac {e^2}{4\pi \epsilon _0 R} + \dfrac {J \pm K}{1 \pm S} \label {10.31}\end{align}
Equation $\ref{10.30}$ tells us that the energy of the $\ce{H_2^{+}}$ molecule is the energy of a hydrogen atom plus the repulsive energy of two protons plus some additional electrostatic interactions of the electron with the protons. These additional interactions are given by
$\dfrac {J \pm K}{1 \pm S}$
If the protons are infinitely far apart then only $E_H$ is nonzero. To get a chemical bond and a stable $\ce{H_2^{+}}$ molecule, $\Delta E_{\pm}$ (Equation \ref{10.30B}) must be less than zero and have a minimum, i.e.
$\dfrac {J \pm K}{1 \pm S}$
must be sufficiently negative to overcome the positive repulsive energy of the two protons
$\dfrac {e^2}{4 \pi \epsilon _0R }$
for some value of $R$. For large $R$ these terms are zero, and for small $R$, the Coulomb repulsion of the protons rises to infinity.
Exercise $4$
Show that Equation $\ref{10.13}$ follows from Equation $\ref{10.26}$.
We will examine more closely how the Coulomb repulsion term and the integrals $J$, $K$, and $S$ depend on the separation of the protons, but first we want to discuss the physical significance of $J$, the Coulomb integral, and $K$, the exchange integral.
Both $J$ and $K$ have been defined as
$J = \left \langle 1s_A | \dfrac {-e^2}{4 \pi \epsilon _0 r_B } |1s_A \right \rangle = - \int \varphi ^*_{1s_A} (r) \varphi _{1s_A} (r) \dfrac {e^2}{4 \pi \epsilon _0 r_B } d\tau \label {10.32}$
$K = \left \langle 1s_A | \dfrac {-e^2}{4 \pi \epsilon _0 r_A } |1s_B \right \rangle = - \int \varphi ^*_{1s_A} (r) \varphi _{1s_B} (r) \dfrac {e^2}{4 \pi \epsilon _0 r_A } d\tau \label {10.33}$
Note that both integrals are negative since all quantities in the integrand are positive. In the Coulomb integral, $e \varphi ^*_{1s_A} (r) \varphi _{1s_A} (r)$ is the charge density of the electron around proton A, since r represents the coordinates of the electron relative to proton A. Since $r_B$ is the distance of this electron to proton B, the Coulomb integral gives the potential energy of the charge density around proton A interacting with proton B. $J$ can be interpreted as an average potential energy of this interaction because $e \varphi ^*_{1s_A} (r) \varphi _{1s_A} (r)$ is the probability density for the electron at point r, and $\dfrac {e^2}{4 \pi \epsilon _0 r_B }$ is the potential energy of the electron at that point due to the interaction with proton B. Essentially, $J$ accounts for the attraction of proton B to the electron density of hydrogen atom A. As the two protons move farther apart, this integral goes to zero because all values for $r_B$ become very large and all values for $1/r_B$ become very small.
In the exchange integral, $K$, the product of the two functions is nonzero only in the regions of space where the two functions overlap. If one function is zero or very small at some point then the product will be zero or small. The exchange integral also approaches zero as the internuclear distance increases because both the overlap and the $1/r$ values become zero. The product $e \varphi ^*_{1s_A} (r) \varphi _{1s_B} (r)$ is called the overlap charge density. Since the overlap charge density is significant in the region of space between the two nuclei, it makes an important contribution to the chemical bond. The exchange integral, $K$, is the potential energy due to the interaction of the overlap charge density with one of the protons. While $J$ accounts for the attraction of proton B to the electron density of hydrogen atom A, $K$ accounts for the added attraction of the proton due to the build-up of electron charge density between the two protons.
Exercise $5$
Write a paragraph describing in your own words the physical significance of the Coulomb and exchange integrals for $\ce{H2^{+}}$.
Figure $2$ shows graphs of the terms contributing to the energy of $\ce{H_2^{+}}$. In this figure you can see that as the internuclear distance $R$ approaches zero, the Coulomb repulsion of the two protons goes from near zero to a large positive number, the overlap integral goes from zero to one, and $J$ and $K$ become increasingly negative.
Figure $3$ shows the energy of $\ce{H_2^{+}}$ relative to the energy of a separated hydrogen atom and a proton as given by Equation $\ref{10.30}$. For the electron in the antibonding orbital, the energy of the molecule, $E_-(R)$, always is greater than the energy of the separated atom and proton.
For the electron in the bonding orbital, you can see that the big effect for the energy of the bonding orbital, $E_+(R)$, is the balance between the repulsion of the two protons $\dfrac {e^2}{4 \pi \epsilon _0R }$ and $J$ and $K$, which are both negative. $J$ and $K$ manage to compensate for the repulsion of the two protons until their separation is less than 100 pm (i.e. the energy is negative up to this point), and a minimum in the energy is produced at 134 pm. This minimum represents the formation of a chemical bond. The effect of S is small. It only causes the denominator in Equation $\ref{10.30}$ to increase from 1 to 2 as $R$ approaches 0.
For the antibonding orbital, $-K$ is a positive quantity and essentially cancels $J$ so there is not sufficient compensation for the Coulomb repulsion of the protons. The effect of the $-K$ in the expression, Equation $\ref{10.30}$, for $E_-$ is to account for the absence of overlap charge density and the enhanced repulsion because the charge density between the protons for $\psi _-$ is even lower than that given by the atomic orbitals.
This picture of bonding in $\ce{H_2^{+}}$ is very simple but gives reasonable results when compared to an exact calculation. The equilibrium bond distance is 134 pm compared to 106 pm (exact), and a dissociation energy is 1.8 eV compared to 2.8 eV (exact).
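These LCAO numbers can be checked with a short calculation. The closed-form expressions for $S$, $J$, and $K$ for 1s orbitals in atomic units ($\zeta = 1$) are standard two-center results that the text plots but does not list, so they are supplied here as an outside assumption; a sketch:

```python
import math

def delta_E_plus(R):
    """Delta E_+ of Equation 10.31 in hartree (atomic units, zeta = 1).
    S, J, K are the standard closed-form two-center 1s integrals."""
    S = math.exp(-R) * (1 + R + R**2 / 3)        # overlap integral
    J = -(1/R - math.exp(-2*R) * (1/R + 1))      # Coulomb integral (attractive)
    K = -math.exp(-R) * (1 + R)                  # exchange integral (attractive)
    return 1/R + (J + K) / (1 + S)               # proton repulsion + bonding term

# locate the minimum of the bonding curve on a fine grid
grid = [0.5 + 0.001 * i for i in range(7500)]
R_min = min(grid, key=delta_E_plus)
E_min = delta_E_plus(R_min)

print(R_min * 52.918)   # equilibrium bond length in pm
print(-E_min * 27.211)  # dissociation energy in eV
```

The minimum lands near 132 pm with a well depth near 1.8 eV, in line with the LCAO values quoted above to within rounding.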
Exercise $6$
Write the final expressions for the energy of $\psi _+$ and $\psi _-$, explain what these expressions mean, and explain why one describes the chemical bond in $\ce{H2^{+}}$ and the other does not.
Exercise $7$
Figure $2$ shows that $S = 1$ and $J = K = -1$ hartree when $R = 0$. Explain why $S$ equals 1 and $J$ and $K$ equal -1 hartree when $R = 0$.
The LCAO-MO method that we used for H2+ can be applied qualitatively to homonuclear diatomic molecules to provide additional insight into chemical bonding. A more quantitative approach also is helpful, especially for more complicated situations, like heteronuclear diatomic molecules and polyatomic molecules. Quantitative theories are described in subsequent sections.
First consider diatomic carbon, C2. The first question to ask is, “Are the electrons paired or unpaired?” For example, if we start with acetylene and remove 2 hydrogen atoms, we get C2 with an unpaired electron on each carbon. On the other hand, it might be possible for these electrons to pair up and give C2 with a quadruple bond. Let's examine the molecular orbital theory of C2 to see what that theory predicts.
Just as for the hydrogen molecule, we combine the two corresponding atomic orbitals from each atom. We are using the smallest possible basis set for this discussion. From each combination, we get a bonding molecular orbital and an antibonding molecular orbital. We expect the pz orbitals on the two atoms to have more overlap than the px and py orbitals. We therefore expect the exchange integrals to be larger and the resulting molecular orbital $2p_z\sigma _g$ to have a lower energy, i.e. be more bonding, than the $2p_x\pi _u$ and $2p_y\pi _u$, which are degenerate since the x and y directions are equivalent. Using the Aufbau Principle, we assign 2 electrons to each orbital as shown in Figure $1$, and end up with two electrons to put in two degenerate orbitals. Because of electron-electron repulsion, the lowest energy state will have each electron in a different degenerate orbital where they can be further apart than if they were in the same orbital. This separation reduces the repulsive Coulomb potential energy. Thus in C2 we have 2 unpaired electrons, each in a bonding molecular orbital. The bond order, which is given by the number of electrons in bonding molecular orbitals minus the number of electrons in antibonding molecular orbitals divided by 2, is however 2, and each unpaired electron is not localized on a single C atom. So we see that the electronic structure of
$C_2 (1s\sigma ^2_g, 1s\sigma ^2_u, 2s\sigma ^2_g, 2s\sigma ^2_u, 2p\sigma ^2_g, 2p\pi ^2_u) \nonumber$
is quite different from what we would expect by thinking it is acetylene without the two H atoms. The acetylene structure naively predicts a triple bond and two nonbonded electrons on each carbon atom.
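The bond-order bookkeeping used here is simple enough to mechanize. A minimal sketch (the classification of $\sigma_g$/$\pi_u$ orbitals as bonding and $\sigma_u$/$\pi_g$ as antibonding follows the text; the dictionary labels are my own):

```python
def bond_order(config):
    """Bond order = (bonding electrons - antibonding electrons) / 2.
    config maps an MO label to its occupancy; for these homonuclear
    diatomics, sigma_g and pi_u MOs are bonding while sigma_u and
    pi_g MOs are antibonding."""
    bonding = sum(n for mo, n in config.items() if mo.endswith(("sigma_g", "pi_u")))
    anti = sum(n for mo, n in config.items() if mo.endswith(("sigma_u", "pi_g")))
    return (bonding - anti) / 2

# C2 in the simple scheme above: one electron in each 2p pi_u orbital
c2 = {"1s sigma_g": 2, "1s sigma_u": 2, "2s sigma_g": 2,
      "2s sigma_u": 2, "2p sigma_g": 2, "2p pi_u": 2}

# the alternative all-paired configuration discussed next: 2p pi_u^4, 2p sigma_g^0
c2_singlet = dict(c2, **{"2p sigma_g": 0, "2p pi_u": 4})

print(bond_order(c2), bond_order(c2_singlet))  # bond order 2 in both cases
```

Shifting the two electrons from $2p\sigma_g$ into $2p\pi_u$ changes the pairing, not the bond order.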
The two unpaired electrons in the two $2p\pi_u$ orbitals of C2 predicted by this simple theory produce a singlet or a triplet ground state. The singlet state results if the electron spins are antiparallel, with spin function $\frac{1}{\sqrt{2}}(\alpha \beta - \beta \alpha)$, and the triplet state results if the electron spins are parallel (the three triplet spin functions are $\alpha \alpha$, $\frac{1}{\sqrt{2}}(\alpha \beta + \beta \alpha)$, and $\beta \beta$). Hund’s rules predict the triplet state to have the lower energy, but the ground state of C2 is known experimentally to be a singlet state. The singlet state results from a configuration where the $2p\sigma_g$ orbital has a higher energy than the $2p\pi_u$ orbitals, and all electrons are paired $(1s\sigma ^2_g, 1s\sigma ^2_u, 2s\sigma ^2_g, 2s\sigma ^2_u, 2p\pi ^4_u, 2p\sigma ^0_g)$. The bond order is still 2, but there are no unpaired electrons. All the molecular orbitals are doubly occupied.
This configuration is accounted for theoretically by a more complete theory that allows the molecular orbitals to be written as linear combinations of all the valence atomic orbitals, not just a pair of atomic orbitals. The $2\sigma _g$ molecular orbital, which in the simple scheme is $2s_A+2s_B$, is stabilized by mixing with $2p_{zA}+2p_{zB}$, which is the $3\sigma _g$ orbital. As a result of this mixing, the $3\sigma _g$ orbital is destabilized and pushed to higher energy, above the $2p\pi _u$ orbitals. This mixing is just an example of hybridization. Better wavefunctions and better energies are obtained by using hybrid functions, which in this case are linear combinations of 2s and $2p_z$ functions. The relative energies of these hybrid orbitals also are shown on the right hand side of Figure $1$.
For such mixing to be important, the orbitals must have the same symmetry and be close to each other in energy. Because of these constraints, this mixing is most important for the $2s\sigma _g$ and $2p\sigma _g$ orbitals, both of which have $\sigma _g$ symmetry. This ordering is found for all the diatomic molecules of the first row elements except O2 and F2. For these two molecules, the energy separation between the 2s and 2p orbitals is larger, and consequently the mixing is not strong enough to alter the energy level structure from that predicted by using the simple two-function basis set.
An electronic structure calculation from first principles (ab initio) presents a number of challenges. Many integrals must be evaluated followed by a self-consistent process for assessing the electron-electron interaction and then electron correlation effects must be taken into account. Semi-empirical methods do not proceed analytically in addressing these issues, but rather uses experimental data to facilitate the process. Several such methods are available. These methods are illustrated here by the approaches built on the work of Hückel.
Extended Hückel Molecular Orbital Method (EH)
One of the first semi-empirical methods to be developed was Hückel Molecular Orbital Theory (HMO). HMO was developed to describe molecules containing conjugated double bonds. HMO considered only electrons in $\pi$ orbitals and ignored all other electrons in a molecule. It was successful because it could address a number of issues associated with a large group of molecules at a time when calculations were done on mechanical calculators.
The Extended Hückel Molecular Orbital Method (EH) grew out of the need to consider all valence electrons in a molecular orbital calculation. By considering all valence electrons, chemists could determine molecular structure, compute energy barriers for rotation about bonds, and even determine energies and structures of transition states for reactions. The computed energies could be used to choose between proposed transitions states to clarify reaction mechanisms.
In the EH method, only the $n$ valence electrons are considered. The total valence electron wavefunction is described as a product of the one-electron wavefunctions.
$\psi _{valence} = \psi _1(1) \psi _2(2) \psi _3(3) \dots \psi _j(n) \label {10.34}$
where $n$ is the number of electrons and $j$ identifies the molecular orbital. Each molecular orbital is written as an linear combination of atomic orbitals (LCAO).
$\psi _j = \sum \limits ^N_{r = 1} c_{jr} \varphi_r \label {10.35}$
where now the $\varphi _r$ are the valence atomic orbitals chosen to include the 2s, 2px, 2py, and 2pz of the carbons and heteroatoms in the molecule and the 1s orbitals of the hydrogen atoms. These orbitals form the basis set. Since this basis set contains only the atomic-like orbitals for the valence shell of the atoms in a molecule, it is called a minimal basis set.
Each $\psi _j$, with $j = 1…N$, represents a molecular orbital, i.e. a wavefunction for one electron moving in the electrostatic field of the nuclei and the other electrons. Two electrons with different spins are placed in each molecular orbital so that the number of occupied molecular orbitals N is half the number of electrons, $n$, i.e. $N = n/2$.
The number of molecular orbitals that one obtains by this procedure is equal to the number of atomic orbitals. Consequently, the indices j and r both run from 1 to N. The $c_{jr}$ are the weighting coefficients for the atomic orbitals in the molecular orbital. These coefficients are not necessarily equal, or in other words, the orbital on each atom is not used to the same extent to form each molecular orbital. Different values for the coefficients give rise to different net charges at different positions in a molecule. This charge distribution is very important when discussing spectroscopy and chemical reactivity.
The energy of the jth molecular orbital is given by a one-electron Schrödinger equation using an effective one-electron Hamiltonian, $h_{eff}$, which expresses the interaction of an electron with the rest of the molecule.
$h_{eff} \psi _j = \epsilon _j \psi _j \label {10.36}$
Here $\epsilon _j$ is the energy eigenvalue of the jth molecular orbital, corresponding to the eigenfunction $\psi _j$. The beauty of this method, as we will see later, is that the exact form of $h_{eff}$ is not needed. The total energy of the molecule is the sum of the single electron energies.
$E = \sum \limits _{j} n_j \epsilon _j \label {10.37}$
where $n_j$ is the number of electrons in orbital $j$.
The expectation value expression for the energy of each molecular orbital is used to find $\epsilon _j$ and then the total energy.
$\epsilon _j = \dfrac {\int \psi ^*_j h_{eff} \psi _j d\tau}{\int \psi ^*_j \psi _j d\tau} = \dfrac {\left \langle \psi _j | h_{eff} | \psi _j \right \rangle}{\left \langle \psi _j | \psi _j \right \rangle} \label {10.38}$
The notation $\left \langle | | \right \rangle$, which is called a bra-ket, just simplifies writing the expression for the integral. Note that the complex conjugate now is identified by the left-side position and the bra notation $< |$ and not by an explicit *.
After substituting Equation $\ref{10.35}$ into $\ref{10.38}$, we obtain for each molecular orbital
$\displaystyle \epsilon _j = \dfrac {\left \langle \sum \limits ^N_{r = 1} c_{jr}\psi _r | h_{eff} | \sum \limits ^N_{s = 1} c_{js} \psi _s\right \rangle}{\left \langle \sum \limits ^N_{r = 1} c_{jr}\psi _r | \sum \limits ^N_{s = 1} c_{js}\psi _s \right \rangle} \label {10.39}$
which can be rewritten as
$\displaystyle \epsilon = \dfrac {\sum \limits ^N_{r=1} \sum \limits ^N_{s=1} c^*_r c_s \left \langle \psi _r |h_{eff}| \psi _s \right \rangle}{\sum \limits ^N_{r=1} \sum \limits ^N_{s=1} c^*_r c_s \left \langle \psi _r | \psi _s \right \rangle} \label {10.40}$
where the index $j$ for the molecular orbital has been dropped because this equation applies to any of the molecular orbitals.
Exercise $1$
Consider a molecular orbital made up of three atomic orbitals, e.g. the three carbon $2p_z$ orbitals of the allyl radical, where the internuclear axes lie in the xy-plane. Write the LCAO for this MO. Derive the full expression, starting with Equation $\ref{10.38}$ and writing each term explicitly, for the energy expectation value for this LCAO in terms of heff. Compare your result with Equation $\ref{10.40}$ to verify that Equation $\ref{10.40}$ is the general representation of your result.
Exercise $2$
Write a paragraph describing how the Variational Method could be used to find values for the coefficients cjr in the linear combination of atomic orbitals.
To simplify the notation we use the following definitions. The integrals in the denominator of Equation $\ref{10.40}$ represent the overlap between two atomic orbitals used in the linear combination. The overlap integral is written as $S_{rs}$. The integrals in the numerator of Equation $\ref{10.40}$ are called either resonance integrals or Coulomb integrals depending on the atomic orbitals on either side of the operator $h_{eff}$, as described below.
• $S_{rs} = \left \langle \psi _r |\psi _s \right \rangle$ is the overlap integral. $S_{rr} = 1$ because we use normalized atomic orbitals. For atomic orbitals r and s on different atoms, $S_{rs}$ has some value between 0 and 1: the farther apart the two atoms, the smaller the value of $S_{rs}$.
• $H_{rr} = \left \langle \psi _r |h_{eff}| \psi _r \right \rangle$ is the Coulomb Integral. It is the kinetic and potential energy of an electron in, or described by, an atomic orbital, $\varphi _r$, experiencing the electrostatic interactions with all the other electrons and all the positive nuclei.
• $H_{rs} = \left \langle \psi _r |h_{eff} |\psi _s\right \rangle$ is the Resonance Integral or Bond Integral. This integral gives the energy of an electron in the region of space where the functions $\varphi _r$ and $\varphi _s$ overlap. This energy sometimes is referred to as the energy of the overlap charge. If $r$ and $s$ are on adjacent bonded atoms, this integral has a finite value. If the atoms are not adjacent, the value is smaller, and assumed to be zero in the Hückel model.
In terms of this notation, Equation $\ref{10.40}$ can be written as
$\epsilon = \dfrac {\sum ^N_{r=1} \sum ^N_{s=1} c ^*_r c_s H_{rs}}{\sum ^N_{r=1} \sum ^N_{s=1} c ^*_r c_s S_{rs}} \label {10.41}$
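Equation $\ref{10.41}$ is a ratio of two quadratic forms and can be evaluated directly for any trial coefficient vector. A sketch with numpy; the 2×2 $H$ and $S$ values are hypothetical illustrations, not taken from the text:

```python
import numpy as np

def orbital_energy(c, H, S):
    """Equation 10.41: epsilon = (c* H c) / (c* S c) for a real trial vector c."""
    c = np.asarray(c, dtype=float)
    return (c @ H @ c) / (c @ S @ c)

# hypothetical two-orbital problem (energies in eV)
H = np.array([[-11.4, -6.0],
              [-6.0, -11.4]])
S = np.array([[1.0, 0.25],
              [0.25, 1.0]])

# symmetric and antisymmetric combinations give (H11 +/- H12)/(1 +/- S12)
print(orbital_energy([1, 1], H, S))    # bonding combination
print(orbital_energy([1, -1], H, S))   # antibonding combination
```

Any other trial vector gives an energy above the bonding value, which is what the variational principle invoked next guarantees.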
We now must find the coefficients, the c's. One must have a criterion for finding the coefficients. The criterion used is the Variational Principle. Since the energy depends linearly on the coefficients in Equation $\ref{10.41}$, the method we use to find the best set of coefficients is called the Linear Variational Method.
Linear Variational Method
The task is to minimize the energy with respect to all the coefficients by solving the N simultaneous equations produced by differentiating Equation $\ref{10.41}$ with respect to each coefficient.
$\dfrac {\partial \epsilon}{\partial c_t} = 0 \label {10.42}$
for $t = 1, 2, 3, \dots N$
Actually we also should differentiate Equation $\ref{10.41}$ with respect to the $c^*_t$, but this second set of N equations is just the complex conjugate of the first and produces no new information or constants.
To carry out this task, rewrite Equation $\ref{10.41}$ to obtain Equation $\ref{10.43}$ and then take the derivative of Equation $\ref{10.43}$ with respect to each of the coefficients.
$\epsilon \sum \limits _r \sum \limits _s c^*_r c_s S_{rs} = \sum \limits _r \sum \limits _s c^*_r c_s H_{rs} \label {10.43}$
Actually we do not want to do this differentiation N times, so consider the general case where the coefficient is $c_t$. Here t represents any number between 1 and N.

This differentiation is relatively easy, and the result, which is shown by Equation $\ref{10.44}$, is relatively simple because some terms in Equation $\ref{10.43}$ do not involve $c_t$ and others depend linearly on $c_t$. The derivative of the terms that do not involve $c_t$ is zero (e.g.

$\dfrac {\partial c^*_3 c_4 H_{34}}{\partial c_2} = 0).$

The derivative of terms that contain $c_t$ is just the constant factor that multiplies $c_t$, (e.g. $\dfrac {\partial c^*_3 c_2 H_{32}}{\partial c_2} = c^*_3 H_{32}$ ). Consequently, only terms in Equation $\ref{10.43}$ that contain $c_t$ contribute to the result, and whenever a term contains $c_t$, that term appears in Equation $\ref{10.44}$ without the $c_t$ because we are differentiating with respect to $c_t$. The result after differentiating is
$\epsilon \sum \limits _r c^*_r S_{rt} = \sum \limits _r c^*_r H_{rt} \label {10.44}$
If we take the complex conjugate of both sides, we obtain

$\epsilon ^* \sum \limits _r c_r S^*_{rt} = \sum \limits _r c_r H^*_{rt} \label {10.45}$

Since

$\epsilon = \epsilon ^*, S^*_{rt} = S_{tr}$

and

$H^*_{rt} = H_{tr},$

then Equation $\ref{10.45}$ can be reversed and written as
$\sum \limits _r c_r H_{tr} = \epsilon \sum \limits _r c_r S_{tr} \label {10.46}$
or upon rearranging as
$\sum \limits _r c_r (H_{tr} - S_{tr}\epsilon ) = 0 \label {10.47}$
There are $N$ simultaneous equations that look like this general one; N is the number of coefficients in the LCAO. Each equation is obtained by differentiating Equation $\ref{10.43}$ with respect to one of the coefficients.
Exercise $3$
Explain why the energy $\epsilon = \epsilon^*$, show that $S^*_{rt} = S_{tr}$ (write out the integral expressions and take the complex conjugate of $S_{rt}$), and show that $H^*_{rt} = H_{tr}$ (write out the integral expressions, take the complex conjugate of $H_{rt}$, and use the Hermitian property of quantum mechanical operators).
Exercise $4$
Rewrite your solution to Exercise $3$ for the 3-carbon $\pi$ system found in the allyl radical in the form of Equation $\ref{10.43}$ and then derive the set of three simultaneous equations for the coefficients. Compare your result with Equation $\ref{10.47}$ to verify that Equation $\ref{10.47}$ is a general representation of your result.
This method is called the linear variational method because the variable parameters enter the trial wavefunction linearly, unlike the shielding parameter in the wavefunction that was discussed in Chapter 9. The shielding parameter appears in the exponential part of the wavefunction, and its effect on the energy is nonlinear. A nonlinear variational calculation is more laborious than a linear variational calculation.
Equations \ref{10.46} and \ref{10.47} represent a set of homogeneous linear equations. As we discussed for the case of normal mode analysis in Chapter 6, a number of methods can be used for solving these equations to obtain values for the energies, $\epsilon ' s$, and the coefficients, the $c'_r s$.
Matrix methods are the most convenient and powerful. First we write more explicitly the set of simultaneous equations that is represented by Equation $\ref{10.47}$. The first equation has $t = 1$, the second $t = 2$, etc. $N$ represents the index of the last atomic orbital in the linear combination.
\begin{align} c_1H_{11} + c_2H_{12} + \dots c_NH_{1N} &= c_1S_{11}\epsilon +c_2S_{12}\epsilon + \dots c_NS_{1N}\epsilon \nonumber \[6pt] c_1H_{21} + c_2H_{22} + \dots c_NH_{2N} &= c_1S_{21}\epsilon +c_2S_{22}\epsilon + \dots c_NS_{2N}\epsilon \nonumber \[6pt] \vdots \vdots &= \vdots \vdots \nonumber \[6pt] c_1H_{N1} + c_2H_{N2} + \dots c_NH_{NN} &= c_1S_{N1}\epsilon +c_2S_{N2}\epsilon + \dots c_NS_{NN}\epsilon \nonumber \end{align} \label {10.48}
This set of equations can be represented in matrix notation.
$HC' = SC' \epsilon \label {10.49}$
Here we have square matrix H and S multiplying a column vector C' and a scalar $\epsilon$. Rearranging produces
$HC' - SC' \epsilon = 0$
$(H - S\epsilon )C' = 0 \label {10.50}$
Exercise $5$
For the three atomic orbitals you used in Exercises $1$ and $4$, write the Hamiltonian matrix H, the overlap matrix S, and the vector C'. Show by matrix multiplication according to Equation $\ref{10.49}$ that you produce the same equations that you obtained in Exercise $4$.
The problem is to solve these simultaneous equations, or the matrix equation, and find the orbital energies, which are the $\epsilon ' s$, and the atomic orbital coefficients, the $c's$, that define the molecular orbitals.
Exercise $6$
Identify two methods for solving simultaneous equations and list the steps in each.
In the EH method we use an effective one electron Hamiltonian, and then proceed to determine the energy of a molecular orbital where $H_{rs} = \left \langle \psi _r |h_{eff} |\psi _s\right \rangle$ and $S_{rs} = \left \langle \psi _r |\psi _s\right \rangle$.
Minimization of the energy with respect to each of the coefficients again yields a set of simultaneous equations just like Equation $\ref{10.47}$.
$\sum \limits _r c_r (H_{tr} - S_{tr}\epsilon) =0 \label {10.52}$
As before, these equations can be written in matrix form, as in Equation $\ref{10.49}$.
Equation $\ref{10.49}$ accounts for one molecular orbital. It has energy $\epsilon$, and it is defined by the elements in the C' column vector, which are the coefficients that multiply the atomic orbital basis functions in the linear combination of atomic orbitals.
We can write one matrix equation for all the molecular orbitals.
$HC = SCE \label {10.53}$
where H is a square matrix containing the Hrs, the one electron energy integrals, and C is the matrix of coefficients for the atomic orbitals. Each column in C is the C' that defines one molecular orbital in terms of the basis functions. In extended Hückel theory, the overlap is not neglected, and S is the matrix of overlap integrals. E is the diagonal matrix of orbital energies. All of these are square matrices with a size that equals the number of atomic orbitals used in the LCAO for the molecule under consideration.
Equation $\ref{10.53}$ represents an eigenvalue problem. For any extended Hückel calculation, we need to set up these matrices and then find the eigenvalues and eigenvectors. The eigenvalues are the orbital energies, and the eigenvectors are the atomic orbital coefficients that define the molecular orbital in terms of the basis functions.
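Equation $\ref{10.53}$ is a generalized eigenvalue problem; its solutions are ordinary eigenpairs of $S^{-1}H$ (in practice a generalized symmetric eigensolver would be used instead). A numpy sketch with hypothetical two-orbital numbers, checked against the analytic result $\epsilon_\pm = (H_{11} \pm H_{12})/(1 \pm S_{12})$ for identical atoms:

```python
import numpy as np

# hypothetical symmetric two-orbital H and S (energies in eV)
H = np.array([[-13.6, -8.0],
              [-8.0, -13.6]])
S = np.array([[1.0, 0.4],
              [0.4, 1.0]])

# HC = SCE  <=>  (S^-1 H) C = C E, an ordinary eigenvalue problem
evals, C = np.linalg.eig(np.linalg.inv(S) @ H)
order = np.argsort(evals.real)
evals, C = evals.real[order], C[:, order]

# analytic check for identical diagonal elements
bonding = (H[0, 0] + H[0, 1]) / (1 + S[0, 1])
antibonding = (H[0, 0] - H[0, 1]) / (1 - S[0, 1])
print(evals)               # orbital energies, lowest first
print(bonding, antibonding)  # same values from the 2x2 formula
```

The columns of `C` are the eigenvectors, i.e. the atomic orbital coefficients of each molecular orbital.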
Exercise $7$
What is the size of the H matrix for HF? Write out the matrix elements in the H matrix using symbols for the wavefunctions appropriate to the HF molecule. Consider this matrix and determine if it is symmetric by examining pairs of off-diagonal elements. In a symmetric matrix, pairs of elements located by reflection across the diagonal are equal, i.e. Hrc = Hcr where r and c represent the row and column, respectively. Why are such pairs of elements equal? Write out the S matrix in terms of symbols, showing the diagonal and the upper right portion of the matrix. This matrix also is symmetric, so if you compute the diagonal and the upper half of it, you know the values for the elements in the lower half. Why are pairs of S matrix elements across the diagonal equal?
The elements of the H matrix are assigned using experimental data. This approach makes the extended Hückel method a semi-empirical molecular orbital method. The basic structure of the method is based on the principles of physics and mathematics while the values of certain integrals are assigned by using educated guesses and experimental data. The Hrr are chosen as valence state ionization potentials with a minus sign to indicate binding. The values used by R. Hoffmann when he developed the extended Hückel technique were those of H.A. Skinner and H.O. Pritchard (Trans. Faraday Soc. 49 (1953), 1254). These values for C and H are listed in Table $1$. The values for the heteroatoms (N, O, and F) are taken from Pople and Beveridge (Approximate Molecular Orbital Theory, McGraw-Hill Book Company, New York, 1970).
Table $1$: Ionization potentials of various atomic orbitals.
| Atomic orbital | Ionization potential (eV) |
|---|---|
| H 1s | 13.6 |
| C 2s | 21.4 |
| C 2p | 11.4 |
| N 2s | 25.58 |
| N 2p | 13.9 |
| O 2s | 32.38 |
| O 2p | 15.85 |
| F 2s | 40.20 |
| F 2p | 18.66 |
The Hrs values are computed from the ionization potentials according to
$H_{rs} = \dfrac {1}{2} K (H_{rr} + H_{ss})S_{rs} \label {10.54}$
The rationale for this expression is that the energy should be proportional to the energy of the atomic orbitals, and should be greater when the overlap of the atomic orbitals is greater. The contribution of these effects to the energy is scaled by the parameter $K$. Hoffmann assigned the value of $K$ after a study of the effect of this parameter on the energies of the occupied orbitals of ethane. The conclusion was that a good value for K is $K = 1.75$.
Exercise $8$
Fill in numerical values for the diagonal elements of the Extended Hückel Hamiltonian matrix for HF using the ionization potentials given in Table $1$.
The overlap matrix also must be determined. The matrix elements are computed using the definition $S_{rs} = \left \langle \psi _r |\psi _s\right \rangle$ where $\psi _r$ and $\psi _s$ are the atomic orbitals. Slater-type orbitals (STO’s) are used for the atomic orbitals rather than hydrogenic orbitals because integrals involving STO's can be computed more quickly on computers. Slater type orbitals have the form
$\phi _{1s} (r) = 2\zeta ^{3/2} \exp (- \zeta r) \nonumber$

$\phi _{2s} (r) = \phi _{2p} (r) = \left (\dfrac {4\zeta ^5}{3} \right )^{1/2} r \exp (- \zeta r) \label {10.55}$
where zeta, $\zeta$, is a parameter describing the screened nuclear charge. In the extended Hückel calculations done by Hoffmann, the Slater orbital parameter $\zeta$ was 1.0 for the H1s and 1.652 for the C2s and C2p orbitals.
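The prefactors in Equation $\ref{10.55}$ make the radial functions normalized, $\int_0^\infty \phi^2 r^2 dr = 1$, which is easy to confirm numerically. A sketch using a trapezoid-rule quadrature:

```python
import math

def radial_norm(phi, zeta, r_max=40.0, n=50000):
    """Trapezoid-rule estimate of the radial norm integral of phi(r)^2 r^2."""
    h = r_max / n
    total = 0.0
    for i in range(n + 1):
        r = i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * phi(r, zeta) ** 2 * r * r
    return total * h

# the two radial functions of Equation 10.55
sto_1s = lambda r, z: 2 * z ** 1.5 * math.exp(-z * r)
sto_2s = lambda r, z: math.sqrt(4 * z ** 5 / 3) * r * math.exp(-z * r)

print(radial_norm(sto_1s, 1.0))    # H 1s, zeta = 1.0; should integrate to 1
print(radial_norm(sto_2s, 1.652))  # C 2s/2p, zeta = 1.652; should integrate to 1
```

Both integrals come out to 1 for any $\zeta > 0$, confirming that $\zeta$ changes the size of the orbital, not its normalization.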
Exercise $9$
Describe the difference between Slater-type orbitals and hydrogenic orbitals.
Overlap integrals involve two orbitals on two different atoms or centers. Such integrals are called two-center integrals. In such integrals there are two variables to consider, corresponding to the distances from each of the atomic centers, rA and rB. Such integrals can be represented as
$S_{A_{2s}B_{2s}} = \left (\dfrac {4\zeta ^5}{3}\right ) \int r_A \text {exp} (- \zeta r_A) r_B \text {exp} (- \zeta r_B) d\tau \label {10.56}$
but elliptical coordinates must be used for the actual integration. Fortunately the software that does extended Hückel calculations contains the programming code to do overlap integrals. The interested reader will find sufficient detail on the evaluation of overlap integrals and the creation of the programmable mathematical form for any pair of Slater orbitals in Appendix B4 (pp. 199 - 200) of the book Approximate Molecular Orbital Theory by Pople and Beveridge. The values of the overlap integrals for HF are given in Table $2$.
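For two 1s Slater orbitals with $\zeta = 1$, the elliptical-coordinate change of variables mentioned here collapses Equation $\ref{10.56}$ to a one-dimensional integral, because the $\mu$ and $\phi$ integrations can be done analytically. The closed form $S(R) = e^{-R}(1 + R + R^2/3)$ used as a check below is a standard outside result, not given in the text. A sketch:

```python
import math

def overlap_1s(R, n=200000, lam_max=60.0):
    """S(R) for two 1s STOs with zeta = 1, distances in bohr.
    In prolate spheroidal coordinates the mu and phi integrations are
    analytic, leaving S = (R^3/4) * Int_1^inf exp(-R*lam)(2*lam^2 - 2/3) dlam,
    which is evaluated here by the trapezoid rule."""
    h = (lam_max - 1.0) / n
    total = 0.0
    for i in range(n + 1):
        lam = 1.0 + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * math.exp(-R * lam) * (2 * lam ** 2 - 2.0 / 3.0)
    return (R ** 3 / 4) * total * h

R = 2.0
print(overlap_1s(R))                        # numerical value
print(math.exp(-R) * (1 + R + R ** 2 / 3))  # closed-form check
```

The two values agree to several decimal places, and the overlap falls off from 1 at $R = 0$ toward 0 at large $R$, as described earlier.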
Exercise $10$
Using the information in Table $2$, identify which axis (x, y, or z) has been defined as the internuclear axis. Fill in the missing values in Table $2$. This requires no calculation, only insight.
Table $2$: Overlap Integrals for HF
            F 2s    F 2px    F 2py    F 2pz    H 1s
    F 2s                                       0.47428
    F 2px                                      0
    F 2py                                      0.38434
    F 2pz                                      0
    H 1s
Exercise $11$: Hydrogen Fluoride
Using the information in Tables $1$ and $2$, write the full Hückel $H$ matrix and the $S$ matrix that appears in Equation $\ref{10.53}$ for HF.
Our goal is to find the coefficients in the linear combinations of atomic orbitals and the energies of the molecular orbitals. For these results, we need to transform Equation $\ref{10.53}$
$HC = SCE \nonumber$
into a form that allows us to use matrix diagonalization techniques. We are hampered here by the fact that the overlap matrix is not diagonal because the orbitals are not orthogonal. Mathematical methods do exist that can be used to transform a set of functions into an orthogonal set. Essentially these methods apply a transformation of the coordinates from the local coordinate system describing the molecule into one where the atomic orbitals in the LCAO are all orthogonal. Such a transformation can be accomplished through matrix algebra, and computer algorithms for this procedure are part of all molecular orbital programs. The following paragraph describes how this transformation can be accomplished.
If the matrix $M$ has an inverse $M^{-1}$, then
$MM^{-1} = 1 \label{10.57}$
and we can place this product in a matrix equation without changing the equation. When this is done for Equation $\ref{10.53}$, we obtain
$HMM^{-1}C = SMM^{-1} CE \label {10.58}$
Next multiply on the left by the transpose $M^T$ and determine $M$ so that the product $M^TSM$ is the identity matrix, i.e. a matrix that has 1's on the diagonal and 0's off the diagonal, as is the case for an orthonormal basis set. One standard choice with this property is the symmetric orthogonalization matrix $M = S^{-1/2}$.
$M^THMM^{-1}C = M^TSMM^{-1}CE \label {10.59}$
which then can be written as
$H'C' = C'E \label {10.60}$
where
$H' = M^THM \quad \text {and} \quad C' = M^{-1}C \label {10.61}$
The identity matrix $M^TSM$ is not written explicitly because multiplying by the identity matrix is just like multiplying by the number 1; it doesn't change anything. The $H'$ matrix can be diagonalized to find the energies of the molecular orbitals in the resulting diagonal matrix $E$.
$E = C'^{-1}H'C' \label {10.62}$
The matrix $C'$ obtained in the diagonalization step is finally back-transformed to the original coordinate system with the $M$ matrix, $C = MC'$, since $C' = M^{-1}C$.
Fortunately this process is automated in some computer software. For example, in Mathcad, the command genvals(H,S) returns a list of the eigenvalues for Equation $\ref{10.53}$. These eigenvalues are the diagonal elements of $E$. The command genvecs(H,S) returns a matrix of the normalized eigenvectors corresponding to the eigenvalues. The ith eigenvalue in the list goes with the ith column in the eigenvector matrix. This problem, where $S$ is not the identity matrix, is called a general eigenvalue problem, and gen in the Mathcad commands refers to general.
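The same transformation can be sketched in a few lines with NumPy. Here $M$ is taken to be $S^{-1/2}$ (Löwdin symmetric orthogonalization), one standard choice that makes the transformed overlap matrix the identity; SciPy users can obtain the same eigenvalues and eigenvectors in one call with `scipy.linalg.eigh(H, S)`, analogous to Mathcad's genvals/genvecs. The matrices below are illustrative numbers, not the HF matrices.

```python
import numpy as np

def general_eig(H, S):
    """Solve the general eigenvalue problem HC = SCE via M = S^(-1/2)."""
    s, U = np.linalg.eigh(S)              # S = U diag(s) U^T
    M = U @ np.diag(s ** -0.5) @ U.T      # M = S^(-1/2), symmetric
    Hp = M @ H @ M                        # transformed Hamiltonian H'
    E, Cp = np.linalg.eigh(Hp)            # ordinary eigenvalue problem H'C' = C'E
    C = M @ Cp                            # back-transform: C = M C'
    return E, C

# Illustrative 2x2 problem (not HF data)
H = np.array([[-1.0, -0.5], [-0.5, -0.8]])
S = np.array([[1.0, 0.4], [0.4, 1.0]])
E, C = general_eig(H, S)
```

The returned coefficient vectors satisfy $HC = SCE$ and are orthonormal with respect to the overlap matrix, $C^TSC = 1$, which is the appropriate normalization for a non-orthogonal basis.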
Exercise $12$
Using your solution to Exercise $11$, find the orbital energies and wavefunctions for HF given by an extended Hückel calculation. Construct an orbital energy level diagram, including both the atomic and molecular orbitals, and indicate the atomic orbital composition of each energy level. Draw lines from the atomic orbital levels to the molecular orbital levels to show which atomic orbitals contribute to which molecular orbitals. What insight does your calculation provide regarding the ionic or covalent nature of the chemical bond in HF?
Mulliken populations (R.S. Mulliken, J. Chem. Phys. 23, 1833, 1841, 2338, 2343 (1955)) can be used to characterize the electronic charge distribution in a molecule and the bonding, antibonding, or nonbonding nature of the molecular orbitals for particular pairs of atoms. To develop the idea of these populations, consider a real, normalized molecular orbital composed from two normalized atomic orbitals.
$\psi _i = c_{ij}\phi _j + c_{ik}\phi_k \label{10-63}$
The charge distribution is described as a probability density by the square of this wavefunction.
$\psi ^2_i = c^2_{ij} \phi^2_j + c^2_{ik} \phi^2_k + 2c_{ij}c_{ik} \phi_j \phi_k \label{10-64}$
Integrating over all the electronic coordinates and using the fact that the molecular orbital and atomic orbitals are normalized produces
$1 = c^2_{ij} + c^2_{ik} + 2c_{ij}c_{ik}S_{jk} \label{10-65}$
where $S_{jk}$ is the overlap integral involving the two atomic orbitals.
Mulliken's interpretation of this result is that one electron in molecular orbital $\psi _i$ contributes $c^2_{ij}$ to the electronic charge in atomic orbital $\phi _j$, $c^2_{ik}$ to the electronic charge in atomic orbital $\phi_k$, and $2c_{ij}c_{ik}S_{jk}$ to the electronic charge in the overlap region between the two atomic orbitals. He therefore called $c^2_{ij}$ and $c^2_{ik}$ the atomic-orbital populations, and $2c_{ij}c_{ik}S_{jk}$ the overlap population. The overlap population is >0 for a bonding molecular orbital, <0 for an antibonding molecular orbital, and 0 for a nonbonding molecular orbital.
It is convenient to tabulate these populations in matrix form for each molecular orbital. Such a matrix is called the Mulliken population matrix. If there are two electrons in the molecular orbital, then these populations are doubled. Each column and each row in a population matrix corresponds to an atomic orbital; the diagonal elements give the atomic-orbital populations, and the off-diagonal elements give the overlap populations. For our example, Equation $\ref{10-63}$, the population matrix is
$P_i = \begin {pmatrix} c^2_{ij} & 2c_{ij}c_{ik}S_{jk} \\ 2c_{ij}c_{ik}S_{jk} & c^2_{ik} \end {pmatrix} \label{10-66}$
Since there is one population matrix for each molecular orbital, it generally is difficult to deal with all the information in the population matrices. Forming the net population matrix decreases the amount of data. The net population matrix is the sum of all the population matrices for the occupied orbitals.
$NP = \sum \limits_{i = occupied} P_i \label{10-67}$
The net population matrix gives the atomic-orbital populations and overlap populations resulting from all the electrons in all the molecular orbitals. The diagonal elements give the total charge in each atomic orbital, and the off-diagonal elements give the total overlap population, which characterizes the total contribution of the two atomic orbitals to the bond between the two atoms.
The gross population matrix condenses the data in a different way. The net population matrix combines the contributions from all the occupied molecular orbitals. The gross population matrix combines the overlap populations with the atomic orbital populations for each molecular orbital. The columns of the gross population matrix correspond to the molecular orbitals, and the rows correspond to the atomic orbitals. A matrix element specifies the amount of charge, including the overlap contribution, that a particular molecular orbital contributes to a particular atomic orbital. Values for the matrix elements are obtained by dividing each overlap population in half and adding each half to the atomic-orbital populations of the participating atomic orbitals. The matrix elements provide the gross charge that a molecular orbital contributes to the atomic orbital. Gross means that overlap contributions are included. The gross population matrix therefore also is called the charge matrix for the molecular orbitals. An element of the gross population matrix (in the jth row and ith column) is given by
$GP_{ji} = (P_i)_{jj} + \frac {1}{2} \sum \limits _{k \ne j} (P_i)_{jk} \label{10-68}$
where $P_i$ is the population matrix for the ith molecular orbital, $(P_i)_{jj}$ is the atomic-orbital population, and $(P_i)_{jk}$ is the overlap population for atomic orbitals j and k in the ith molecular orbital.
Further condensation of the data can be obtained by considering atomic and overlap populations by atoms rather than by atomic orbitals. The resulting matrix is called the reduced-population matrix. The reduced population is obtained from the net population matrix by adding the atomic orbital populations and the overlap populations of all the atomic orbitals of the same atom. The rows and columns of the reduced population matrix correspond to the atoms.
Atomic-orbital charges are obtained by adding the elements in the rows of the gross population matrix for the occupied molecular orbitals. Atomic charges are obtained from the atomic orbital charges by adding the atomic-orbital charges on the same atom. Finally, the net charge on an atom is obtained by subtracting the atomic charge from the nuclear charge adjusted for complete shielding by the 1s electrons.
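The bookkeeping above is compact in matrix form. In the sketch below (our own helper functions, with illustrative numbers), each occupied MO's population matrix is built as the element-by-element product of the coefficient outer product $c_i c_i^T$ with $S$, scaled by the occupation number; its diagonal holds the atomic-orbital populations, and each symmetric pair of off-diagonal elements sums to the overlap population $2c_{ij}c_{ik}S_{jk}$. Row sums then give the gross populations of Equation 10-68.

```python
import numpy as np

def population_matrices(C_occ, S, occ):
    """One Mulliken population matrix per occupied MO.

    C_occ: columns are occupied-MO coefficient vectors.
    occ:   occupation numbers (e.g., 2 for a doubly occupied MO).
    """
    mats = []
    for i, n in enumerate(occ):
        c = C_occ[:, i]
        mats.append(n * np.outer(c, c) * S)   # elementwise product with S
    return mats

def gross_populations(P):
    """Gross population of each atomic orbital in one MO (row sums):
    the atomic-orbital population plus half of each overlap population."""
    return P.sum(axis=1)

# Illustrative two-orbital example: one doubly occupied MO
S = np.array([[1.0, 0.25], [0.25, 1.0]])
c = np.array([1.0, 1.0])
c = c / np.sqrt(c @ S @ c)                    # normalize so c^T S c = 1
P = population_matrices(c[:, None], S, [2.0])[0]
q = gross_populations(P)
```

For this symmetric example the two electrons split evenly, one gross electron per atomic orbital, and the total population equals the electron count, as it must.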
Exercise $1$
Using your results from Exercise $29$ for HF, determine the Mulliken population matrix for each molecular orbital, the net population matrix, the charge matrix for the molecular orbitals, the reduced population matrix, the atomic orbital charges, the atomic charges, the net charge on each atom, and the dipole moment. Note: The bond length for HF is 91.7 pm and the experimental value for the dipole moment is $6.37 \times 10^{-30}\, C \cdot m$.
In a modern ab initio electronic structure calculation on a closed shell molecule, the electronic Hamiltonian is used with a single determinant wavefunction. This wavefunction, $\psi$, is constructed from molecular orbitals, $\psi_j$, that are written as linear combinations of contracted Gaussian basis functions, $\varphi_k$
$\psi _j = \sum \limits _k c_{jk} \varphi _k \label {10.69}$
The contracted Gaussian functions are composed from primitive Gaussian functions to match Slater-type orbitals (STOs). The exponential parameters in the STOs are optimized by calculations on small molecules using the nonlinear variational method and then those values are used with other molecules. The problem is to calculate the electronic energy from
$E = \dfrac { \displaystyle \int \psi ^* \hat {H} \psi d \tau }{\displaystyle \int \psi ^* \psi d \tau} \label {10.70}$
and find the optimum coefficients $c_{jk}$ for each molecular orbital in Equation $\ref{10.69}$ by using the Self Consistent Field Method and the Linear Variational Method to minimize the energy as was described in the previous chapter for the case of atoms.
To obtain the total energy of the molecule, we need to add the internuclear repulsion energy
$V_{nuc} = \sum \limits _{r=1}^{N-1} \sum \limits _{s=r+1}^{N} \dfrac {Z_r Z_s}{r_{rs}} \label {10.71}$
to the electronic energy calculated by this procedure. The total energy of the molecule can be calculated for different geometries (i.e. bond lengths and angles) to find the minimum energy configuration. Also, the total energies of possible transition states can be calculated to find the lowest energy pathway to products in chemical reactions.
Exercise $1$
For a molecule with three nuclei, show that the sums in Equation $\ref{10.71}$ correctly include all the pairwise potential energy terms without including any twice.
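The double sum in Equation 10.71 counts each pair exactly once, which is easy to verify in code. This sketch (atomic units; the function name and the three-nucleus geometry are our own illustration) loops r from the first nucleus to the next-to-last and s from r+1 to the last, just as the summation limits prescribe.

```python
import math

def nuclear_repulsion(Z, R):
    """Internuclear repulsion: sum over pairs r < s of Z_r Z_s / r_rs."""
    V = 0.0
    N = len(Z)
    for r in range(N - 1):
        for s in range(r + 1, N):
            V += Z[r] * Z[s] / math.dist(R[r], R[s])
    return V

# Three nuclei: the sum has exactly 3 pair terms (1-2, 1-3, 2-3)
Z = [1.0, 1.0, 8.0]                                  # illustrative charges
R = [(0.0, 1.4, 1.1), (0.0, -1.4, 1.1), (0.0, 0.0, 0.0)]  # illustrative positions
V = nuclear_repulsion(Z, R)
```

Comparing the loop result with the three pair terms written out by hand confirms that no interaction is missed and none is double-counted.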
As we improve the basis set used in calculations by adding more and better functions, we expect to get better and better energies. The variational principle says an approximate energy is an upper bound to the exact energy, so the lowest energy that we calculate is the most accurate. At some point, the improvements in the energy will be very slight. This limiting energy is the lowest that can be obtained with a single determinant wavefunction. This limit is called the Hartree-Fock limit, the energy is the Hartree-Fock energy, the molecular orbitals producing this limit are called Hartree-Fock orbitals, and the determinant is the Hartree-Fock wavefunction.
Exercise $2$
Write a one-sentence definition of the Hartree-Fock wavefunction that captures all the essential features of this function.
Restricted vs. Unrestricted Hartree-Fock
You may encounter the terms restricted and unrestricted Hartree-Fock. The above discussion pertains to a restricted HF calculation. In a restricted HF calculation, electrons with $\alpha$ spin are restricted or constrained to occupy the same spatial orbitals as electrons with $\beta$ spin. This constraint is removed in an unrestricted calculation. For example, the spin orbital for electron 1 could be $\psi _A (r_1) \alpha (1)$, and the spin orbital for electron 2 in a molecule could be $\psi _B (r_2) \beta (2)$, where both the spatial molecular orbital and the spin function differ for the two electrons. Such spin orbitals are called unrestricted. If both electrons are constrained to have the same spatial orbital, e.g. $\psi _A (r_1) \alpha (1)$ and $\psi _A (r_2) \beta (2)$, then the spin orbital is said to be restricted. While unrestricted spin orbitals can provide a better description of the electrons, twice as many spatial orbitals are needed, so the demands of the calculation are much higher. Using unrestricted orbitals is particularly beneficial when a molecule contains an odd number of electrons because there are more electrons in one spin state than in the other.
Carbon Monoxide
Now consider the results of a self-consistent field calculation for carbon monoxide, $\ce{CO}$. It is well known that carbon monoxide is a poison that acts by binding to the iron in hemoglobin and preventing oxygen from binding. As a result, oxygen is not transported by the blood to cells. Which end of carbon monoxide, carbon or oxygen, do you think binds to iron by donating electrons? We all know that oxygen is more electron-rich than carbon (8 vs 6 electrons) and more electronegative. A reasonable answer to this question therefore is oxygen, but experimentally it is carbon that binds to iron.
A quantum mechanical calculation done by Winifred M. Huo, published in J. Chem. Phys. 43, 624 (1965), provides an explanation for this counter-intuitive result. The basis set used in the calculation consisted of 10 functions: the 1s, 2s, 2px, 2py, and 2pz atomic orbitals of C and O. Ten molecular orbitals (mo’s) were defined as linear combinations of the ten atomic orbitals, which are written as
$\psi _k = \sum \limits _{j=1}^{10} C_{kj} \varphi _j \label {10.72}$
where $k$ identifies the mo and $j$ identifies the atomic orbital basis function. The ground state wavefunction $\psi$ is written as the Slater Determinant of the five lowest energy molecular orbitals $\psi _k$. Equation $\ref{10.73}$ gives the energy of the ground state,
$E = \dfrac {\left \langle \psi |\hat {H} | \psi \right \rangle}{\left \langle \psi | \psi \right \rangle} \label {10.73}$
where the denominator accounts for the normalization requirement. The coefficients $C_{kj}$ in the linear combination are determined by the variational method to minimize the energy. The solution of this problem gives the following equations for the molecular orbitals. Only the largest terms have been retained here. These functions are listed and discussed in order of increasing energy.
• $1\sigma \approx 0.94\, 1s_O$. The 1 says this is the first $\sigma$ orbital. The $\sigma$ says it is symmetric with respect to reflection in any plane containing the internuclear axis. The large coefficient, 0.94, means this is essentially the 1s atomic orbital of oxygen. The oxygen 1s orbital should have a lower energy than that of carbon because the positive charge on the oxygen nucleus is greater.
• $2\sigma \approx 0.92\, 1s_C$. This orbital is essentially the 1s atomic orbital of carbon. Both the $1\sigma$ and $2 \sigma$ are “nonbonding” orbitals since they are localized on a particular atom and do not directly determine the charge density between atoms.
• $3\sigma \approx (0.72\, 2s_O + 0.18\, 2p_{zO}) + (0.28\, 2s_C + 0.16\, 2p_{zC})$. This orbital is a “bonding” molecular orbital because the electrons are delocalized over C and O in a way that enhances the charge density between the atoms. The 3 means this is the third $\sigma$ orbital. This orbital also illustrates the concept of hybridization. One can say the 2s and 2p orbitals on each atom are hybridized and the molecular orbital is formed from these hybrids, although the calculation just obtains the linear combination of the four orbitals directly without the a priori introduction of hybridization. In other words, hybridization just falls out of the calculation. The hybridization in this bonding LCAO increases the amplitude of the function in the region of space between the two atoms and decreases it in the region of space outside of the bonding region of the atoms.
• $4\sigma \approx (0.37\, 2s_C + 0.1\, 2p_{zC}) + (0.54\, 2p_{zO} - 0.43\, 2s_O)$. This molecular orbital also can be thought of as being a hybrid formed from atomic orbitals. The hybridization of the oxygen atomic orbitals, because of the negative coefficient with $2s_O$, decreases the electron density between the nuclei and enhances electron density on the side of oxygen facing away from the carbon atom. If we follow how this function varies along the internuclear axis, we see that near carbon the function is positive whereas near oxygen it is negative or possibly small and positive. This change means there must be a node between the two nuclei or at the oxygen nucleus. Because of the node, the electron density between the two nuclei is low, so the electrons in this orbital do not serve to shield the two positive nuclei from each other. This orbital therefore is called an “antibonding” mo and the electrons assigned to it are called antibonding electrons. This orbital is the antibonding partner to the $3 \sigma$ orbital.
• $1\pi \approx 0.32\, 2p_{xC} + 0.44\, 2p_{xO}$ and $2\pi \approx 0.32\, 2p_{yC} + 0.44\, 2p_{yO}$. These two orbitals are degenerate and correspond to bonding orbitals made up from the $p_x$ and $p_y$ atomic orbitals from each atom. These orbitals are degenerate because the x and y directions are equivalent in this molecule. The $\pi$ tells us that these orbitals are antisymmetric with respect to reflection in a plane containing the nuclei.
• $5\sigma \approx 0.38\, 2s_C - 0.38\, 2p_{zC} - 0.29\, 2p_{zO}$. This orbital is the sp hybrid of the carbon atomic orbitals. The negative coefficient for $2p_{zC}$ puts the largest amplitude on the side of carbon away from oxygen. There is no node between the atoms. We conclude this is a nonbonding orbital with the nonbonding electrons on carbon. This is not a “bonding” orbital because the electron density between the nuclei is lowered by hybridization. It also is not an antibonding orbital because there is no node between the nuclei. When carbon monoxide binds to Fe in hemoglobin, the bond is made between the C and the Fe. This bond involves the donation of the $5\sigma$ nonbonding electrons on C to empty d orbitals on Fe. Thus mo theory allows us to understand why the C end of the molecule is involved in this electron donation when we might naively expect O to be more electron-rich and capable of donating electrons to iron.
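Squaring the quoted coefficients gives a rough measure of how each orbital is shared between the two atoms. This simple sum of squares (our own bookkeeping, using the coefficients from Huo's calculation quoted above) ignores the overlap terms, so it is only a qualitative guide:

```python
# Approximate atomic-orbital coefficients from the CO calculation above;
# overlap contributions are neglected in these weights.
mos = {
    "3sigma": {"O": [0.72, 0.18], "C": [0.28, 0.16]},
    "5sigma": {"O": [0.29], "C": [0.38, 0.38]},
}

def weight(coeffs):
    """Sum of squared coefficients: a crude measure of atomic character."""
    return sum(c * c for c in coeffs)

w3_O, w3_C = weight(mos["3sigma"]["O"]), weight(mos["3sigma"]["C"])
w5_O, w5_C = weight(mos["5sigma"]["O"]), weight(mos["5sigma"]["C"])
```

The bonding $3\sigma$ is weighted toward oxygen, while the nonbonding $5\sigma$ lone pair sits mainly on carbon, consistent with carbon being the electron-donating end of the molecule.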
Exercise $3$
Summarize how Quantum Mechanics is used to describe bonding and the electronic structure of molecules.
Exercise $4$
Construct an energy level diagram for CO that shows both the atomic orbitals and the molecular orbitals. Show which atomic orbitals contribute to each molecular orbital by drawing lines to connect the mo’s to the ao’s. Label the molecular orbitals in a way that reveals their symmetry. Use this energy level diagram to explain why it is the carbon end of the molecule that binds to hemoglobin rather than the oxygen end.
The Hartree-Fock energy is not as low as the exact energy. The difference is due to electron correlation effects and is called the correlation energy. The Hartree-Fock wavefunction does not include these correlation effects because it describes the electrons as moving in the average potential field of all the other electrons. The instantaneous influence of electrons that come close together at some point is not taken into account. Electrons repel each other, and they will try to stay away from each other. Their motion therefore is correlated, and this correlation reduces the energy of the system because it reduces the electron-electron repulsion. The Hartree-Fock wavefunction does not account for this correlation and therefore produces an energy that is too high.
One method for accounting for these correlation effects and the correlation energy is called configuration interaction (CI). In configuration interaction, Slater determinants are formed from two or more orbital occupation configurations. The CI wavefunction then is written as a linear combination of these determinants, and the coefficients are determined to minimize the energy.
$\psi _{CI} = c_1D_1 + c_2D_2 \label {10-72}$
Good quality one-electron molecular orbitals are obtained by using a large basis set, by optimizing the parameters in the functions with the variational method, and by accounting for the electron-electron repulsion using the self-consistent field method. Electron correlation effects are taken into account with configuration interaction. The CI methodology means that a wavefunction is written as a series of Slater Determinants involving different configurations, just as we discussed for the case of atoms. The limitation in this approach is that computer speed and capacity limit the size of the basis set and the number of configurations that can be used.
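The energy lowering produced by mixing configurations can be seen in a minimal two-determinant model. In the basis of the two determinants, the Hamiltonian is a 2×2 matrix with the configuration energies on the diagonal and a coupling matrix element off the diagonal; diagonalizing it gives a ground state below the lower configuration energy whenever the coupling is nonzero. The numbers here are illustrative, not from a real calculation.

```python
import numpy as np

E1, E2 = -2.85, -2.10    # energies of determinants D1 and D2 (illustrative)
V = -0.05                # coupling matrix element <D1|H|D2> (illustrative)

H_CI = np.array([[E1, V],
                 [V, E2]])
energies, coeffs = np.linalg.eigh(H_CI)   # eigenvalues in ascending order
E_ground = energies[0]
c1, c2 = coeffs[:, 0]                     # CI coefficients of the ground state
```

The ground state is dominated by the lower-energy determinant but mixes in some of the second, and its energy lies below $E_1$: the mixing is what recovers a portion of the correlation energy.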
Exercise $1$
Define correlation energy and explain why it is omitted in a SCF calculation and how it is included in a CI calculation.
Exercise $2$
Write a CI wavefunction for helium using Slater determinants for the $1s^2$ and $1s^12s^1$ configurations. Explain how addition of the $1s^12s^1$ configuration to the wavefunction accounts for electron correlation in terms of keeping the electrons apart in different regions of space.
10.E: Theories of Electronic Molecular Structure (Exercises)
Q10.1
Plot the amplitude of the atomic and molecular orbitals along the internuclear axis (defined as the z-axis) of the H2+ molecule.
1. Plot the four basis functions for the H2+ molecule: $\varphi _{1sA}, \varphi _{1sB}, \varphi _{2p_zA}, \varphi _{2p_zB}$
2. Construct and graph a bonding molecular orbital using these basis functions with a parameter $\lambda$ multiplying the 2pz functions, for a few values of the parameter $\lambda$ between 0 and 1. Determine the normalization constant N for each value of $\lambda$ by assuming that the atomic overlap integrals are either 0 or 1. $\psi = \frac {1}{N} [ \varphi _{1sA} + \varphi _{1sB} + \lambda (\varphi _{2p_zA} + \varphi _{2p_zB}) ]$
3. Explain why the molecular orbital you graphed is a bonding orbital.
4. Explain why a value for $\lambda$ greater than 0 should improve the description of a bonding orbital.
Q10.2
Construct energy level diagrams for B2 and O2 that show both the atomic orbitals and the molecular orbitals and use these diagrams to explain why both molecules are paramagnetic. Label the molecular orbitals to reveal both their symmetry and their atomic orbital parentage. Note: one diagram and labeling does not apply to both molecules.
Q10.3
Defend or shoot down the following statement. The Born-Oppenheimer approximation predicts that vibrational frequencies, vibrational force constants, and bond dissociation energies should be independent of isotopic substitution.
Q10.4
Explain in terms of both the electronic charge density and the electronic energy, why chemists describe the overlap of atomic orbitals as being important for bond formation.
Q10.5
Compare the extended Hückel calculation on HF with the SCF calculation reported in B.j. Ransil, Rev. Mod. Phys. 32, 239, 245 (1960) in J.A. Pople and D.L. Beveridge, Approximate Molecular Orbital Theory (McGraw-Hill, 1970) pp. 46-51.
Q10.6
From the following bond lengths and dipole moments, compute the charges on the hydrogen atom and the halide atom. Compare the results with the electronegativities predicted from the order of these elements in the Periodic Table. What do these charges tell you about the contribution of the hydrogen 1s atomic orbital to the molecular orbitals for each molecule? Use the insight you gained from this problem, to define ionic and covalent bonding.
Molecule    $R_0$ (pm)    $\mu$ ($10^{-30}$ C·m)
HF          91.7          6.37
HCl         127.5         3.44
HBr         141.4         2.64
HI          160.9         1.40
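A sketch of the arithmetic behind this problem: modeling each molecule as point charges $+q$ and $-q$ separated by $R_0$ gives $q = \mu / R_0$, and dividing by the elementary charge gives the fraction of an electron transferred. (The function and variable names below are ours.)

```python
e = 1.602176634e-19  # elementary charge, C

data = {  # molecule: (R0 in pm, mu in 1e-30 C m), from the table above
    "HF": (91.7, 6.37),
    "HCl": (127.5, 3.44),
    "HBr": (141.4, 2.64),
    "HI": (160.9, 1.40),
}

def fractional_charge(R0_pm, mu_1e30):
    """Fraction of an electronic charge transferred, q = mu / R0 divided by e."""
    q = (mu_1e30 * 1e-30) / (R0_pm * 1e-12)  # point-charge model, in coulombs
    return q / e

fractions = {m: fractional_charge(*v) for m, v in data.items()}
```

The fraction decreases steadily from HF to HI, tracking the decreasing electronegativity of the halogen down the group.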
Other terms account for the interactions between all the magnetic dipole moments and the interactions with any external electric or magnetic fields. The charge distribution of an atomic nucleus is not always spherical and, when appropriate, this asymmetry must be taken into account, as well as the relativistic effect that a moving electron experiences as a change in mass. This complete Hamiltonian is too complicated and is not needed for many situations. In practice, only the terms that are essential for the purpose at hand are included. Consequently, in the absence of external fields, and when spin-spin and spin-orbit interactions and electron and nuclear magnetic resonance spectroscopies (ESR and NMR) are not of interest, the molecular Hamiltonian usually is considered to consist only of the kinetic and potential energy terms, and the Born-Oppenheimer approximation is made in order to separate the nuclear and electronic motion.
In general, electronic wavefunctions for molecules are constructed from approximate one-electron wavefunctions. These one-electron functions are called molecular orbitals. The expectation value expression for the energy is used to optimize these functions, i.e. make them as good as possible. The criterion for quality is the energy of the ground state. According to the Variational Principle, an approximate ground state energy always is higher than the exact energy, so the best energy in a series of approximations is the lowest energy. In this chapter we describe how the variational method, perturbation theory, the self-consistent field method, and configuration interaction all are used to describe the electronic states of molecules. The ultimate goal is a mathematical description of electrons in molecules that enables chemists and other scientists to develop a deep understanding of chemical bonding, to calculate properties of molecules, and to make predictions based on these calculations. For example, an active area of research in industry involves calculating changes in chemical properties of pharmaceutical drugs as a result of changes in their chemical structure.
Study Guide
• What is meant by the expression ab initio calculation?
• List all the terms in a complete molecular Hamiltonian.
• Why are calculations on closed-shell systems more easily done than on open-shell systems?
• How is it possible to reduce a multi-electron Hamiltonian operator to a single-electron Fock operator?
• Why is the calculation with the Fock operator called a self-consistent field calculation?
• What is the physical meaning of a SCF one-electron energy?
• Why is the nonlinear variational method not used in every case to optimize basis functions, and what usually is done instead?
• Why is it faster for a computer to use the variational principle to determine the coefficients in a linear combination of functions than to determine the parameters in the functions?
• Identify the characteristics of hydrogenic, Slater, and Gaussian basis sets.
• What is meant by the Hartree-Fock wavefunction and energy?
• What is the difference between a restricted and unrestricted Hartree-Fock calculation?
• What is neglected that makes the Hartree-Fock energy necessarily greater than the exact energy?
• What is meant by correlation energy?
• What purpose is served by including configuration interaction in a calculation?
David M. Hanson, Erica Harvey, Robert Sweeney, Theresa Julia Zielinski ("Quantum States of Atoms and Molecules")
The electronic configuration of an atom or molecule is a concept imposed by the orbital approximation. Spectroscopic transitions and other properties of atoms and molecules result from the states and not from the configurations, although it is useful to think about both the configuration and the state whenever possible. While a single determinant wavefunction generally is adequate for closed-shell systems (i.e. all electrons are paired in spatial orbitals), the best descriptions of the electronic states, especially for excited states and free radicals that have unpaired electrons, involve configuration interaction using multiple determinants. In these descriptions different configurations are mixed together and the picture of an orbital configuration disintegrates, and other properties, such as orbital and spin angular momentum and symmetry, are needed to identify and characterize the electronic states of molecules.
While a component of orbital angular momentum is preserved along the axis of a linear molecule, generally orbital angular momentum is quenched due to the irregular shapes of molecules. Angular momentum is quenched because circular motion is not possible when the potential energy function does not have circular symmetry.
The spin orbitals, however, still can be eigenfunctions of the spin angular momentum operators because the spin-orbit coupling usually is small. The resulting spin state depends on the orbital configuration. For a closed-shell configuration, the spin state is a singlet and the spin angular momentum is 0 because the contributions from the $\alpha$ and $\beta$ spins cancel. For an open-shell configuration, which is characteristic of free radicals, there is an odd number of electrons and the spin quantum number $s = \frac {1}{2}$. This configuration produces a doublet spin state since $2S +1 = 2$. Excited configurations result when electromagnetic radiation or exposure to other sources of energy promotes an electron from an occupied orbital to a previously unoccupied orbital. An excited configuration for a closed-shell system produces two states, a singlet state $(2S + 1 = 1)$ and a triplet state $(2S + 1 = 3)$, depending on how the electron spins are paired. The z-components of the spin angular momentum for the two electrons in a triplet state can add to give +1, 0, or –1 in units of ħ. The three spin functions for a triplet state are
$\alpha (1) \alpha (2)$
$\dfrac {1}{\sqrt {2}} [ \alpha (1) \beta (2) + \alpha (2) \beta (1)]$
$\beta (1) \beta (2) \label {10-75}$
and the singlet spin function is
$\dfrac {1}{\sqrt {2}} [ \alpha (1) \beta (2) - \alpha (2) \beta (1)] \label {10-76}$
The singlet and triplet states differ in energy even though the electron configuration is the same. This difference results from the antisymmetry condition imposed on the wavefunctions. The antisymmetry condition reduces the electron-electron repulsion for triplet states, so triplet states have the lower energy.
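The symmetry claims above can be checked directly by representing $\alpha$ and $\beta$ as basis vectors and the exchange of electrons 1 and 2 as the swap operator on the two-spin product space. In this small numerical sketch (our own construction), the three triplet functions are symmetric under exchange and the minus-sign combination, the singlet, is antisymmetric:

```python
import numpy as np

alpha, beta = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# Exchange operator: swaps the two factors of the tensor product
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=float)

def two_spin(f1, f2):
    """Product function f1(1) f2(2) as a vector in the 4-dim product space."""
    return np.kron(f1, f2)

triplet0 = (two_spin(alpha, beta) + two_spin(beta, alpha)) / np.sqrt(2)
singlet = (two_spin(alpha, beta) - two_spin(beta, alpha)) / np.sqrt(2)
```

Applying SWAP leaves $\alpha\alpha$, $\beta\beta$, and the symmetric combination unchanged, while the singlet changes sign, which is why it must be paired with a symmetric spatial function to keep the total wavefunction antisymmetric.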
The electronic states of molecules therefore are labeled and identified by their spin and orbital angular momentum and symmetry properties, as appropriate. For example, the ground state of the hydrogen molecule is designated as $X\, ^1\Sigma ^+_g$. In this symbol, the $X$ identifies the state as the ground state, the superscript 1 identifies it as a singlet state, the $\Sigma$ says the orbital angular momentum along the internuclear axis is 0, and the g identifies the wavefunction as symmetric with respect to inversion. Other states with the same symmetry and angular momentum properties are labeled as A, B, C, etc. in order of increasing energy or order of discovery. States with different spin multiplicities from that of the ground state are labeled with lower-case letters, a, b, c, etc.
For polyatomic molecules the symmetry designation and spin multiplicity are used. For example, an excited state of naphthalene is identified as $^1B_{1u}$. The superscript 1 identifies it as a singlet state, The letter $B$ and subscript 1 identifies the symmetry with respect to rotations, and the subscript u says the wavefunction is antisymmetric with respect to inversion.
Good quality descriptions of the electronic states of molecules are obtained by using a large basis set, by optimizing the parameters in the functions with the variational method, and by accounting for the electron-electron repulsion using the self-consistent field method. Electron correlation effects are taken into account with configuration interaction (CI). The CI methodology means that a wavefunction is written as a series of Slater Determinants involving different configurations, just as we discussed for the case of atoms. The limitation in this approach is that computer speed and capacity limit the size of the basis set and the number of configurations that can be used.
• 1.1: Non-Ideal Gas Behavior
Gas molecules possess a finite volume and experience forces of attraction for one another. Consequently, gas behavior is not necessarily described well by the ideal gas law. Under conditions of low pressure and high temperature, these factors are negligible, the ideal gas equation is an accurate description of gas behavior, and the gas is said to exhibit ideal behavior. The van der Waals equation is a modified version of the ideal gas law that can be used to account for the non-ideal behavior.
• 1.2: Virial Equations
Expanding the compressibility factor to a polynomial in the pressure results in a better description of real gas behavior. The values for the parameters of this expansion are often tabulated for each gas independently.
01: Fundamental 1 - Measurable Properties
Learning Objectives
• Describe the physical factors that lead to deviations from ideal gas behavior
• Explain how these factors are represented in the van der Waals equation
• Define compressibility (Z) and describe how its variation with pressure reflects non-ideal behavior
• Quantify non-ideal behavior by comparing computations of gas properties using the ideal gas law and the van der Waals equation
The ideal gas law, PV = nRT, can be applied to a variety of different types of problems, ranging from reaction stoichiometry and empirical and molecular formula problems to determining the density and molar mass of a gas. However, the behavior of a gas is often non-ideal, meaning that the observed relationships between its pressure, volume, and temperature are not accurately described by the gas laws. In this section, the reasons for these deviations from ideal gas behavior are considered.
One way in which the accuracy of PV = nRT can be judged is by comparing the actual volume of 1 mole of gas (its molar volume, $\bar{V}$) to the molar volume of an ideal gas at the same temperature and pressure. This ratio is called the compressibility factor (Z) with:
$\mathrm{Z=\dfrac{molar\: volume\: of\: gas\: at\: same\:\mathit{T}\:and\:\mathit{P}}{molar\: volume\: of\: ideal\: gas\: at\: same\:\mathit{T}\:and\:\mathit{P}}}=\left(\dfrac{P \bar{V}}{RT}\right)_\ce{measured}$
Ideal gas behavior is therefore indicated when this ratio is equal to 1, and any deviation from 1 is an indication of non-ideal behavior. Figure $1$ shows plots of Z over a large pressure range for several common gases.
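The definition of Z can be turned into a short illustrative sketch (not part of the original text). The measured molar volume below is an assumed example value, not a tabulated datum.

```python
# Sketch: compressibility factor Z = P*Vbar/(R*T); Z = 1 for an ideal gas.
R = 0.08206  # L atm mol^-1 K^-1

def compressibility(P_atm, Vbar_L, T_K):
    return P_atm * Vbar_L / (R * T_K)

# An ideal gas at 1 atm and 273.15 K has Vbar = RT/P, so Z = 1 exactly.
print(compressibility(1.0, R * 273.15 / 1.0, 273.15))  # 1.0

# A gas whose measured molar volume (assumed value here) is smaller than the
# ideal one gives Z < 1, signalling net intermolecular attraction.
print(round(compressibility(1.0, 22.26, 273.15), 3))  # 0.993
```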
As is apparent from Figure $1$, the ideal gas law does not describe gas behavior well at relatively high pressures. To determine why this is, consider the differences between real gas properties and what is expected of a hypothetical ideal gas.
Particles of a hypothetical ideal gas have no significant volume and do not attract or repel each other. In general, real gases approximate this behavior at relatively low pressures and high temperatures. However, at high pressures, the molecules of a gas are crowded closer together, and the amount of empty space between the molecules is reduced. At these higher pressures, the volume of the gas molecules themselves becomes appreciable relative to the total volume occupied by the gas (Figure $2$). The gas therefore becomes less compressible at these high pressures, and although its volume continues to decrease with increasing pressure, this decrease is not proportional as predicted by Boyle’s law.
At relatively low pressures, gas molecules have practically no attraction for one another because they are (on average) so far apart, and they behave almost like particles of an ideal gas. At higher pressures, however, the force of attraction is also no longer insignificant. This force pulls the molecules a little closer together, slightly decreasing the pressure (if the volume is constant) or decreasing the volume (at constant pressure) (Figure $3$). This change is more pronounced at low temperatures because the molecules have lower KE relative to the attractive forces, and so they are less effective in overcoming these attractions after colliding with one another.
There are several different equations that better approximate gas behavior than does the ideal gas law. The first, and simplest, of these was developed by the Dutch scientist Johannes van der Waals in 1873. The van der Waals equation improves upon the ideal gas law by adding two terms: one to account for the volume of the gas molecules and another for the attractive forces between them.

$\left(P+\dfrac{n^2a}{V^2}\right)×(V−nb)=nRT$
The constant a corresponds to the strength of the attraction between molecules of a particular gas, and the constant b corresponds to the size of the molecules of a particular gas. The “correction” to the pressure term in the ideal gas law is $\dfrac{n^2a}{V^2}$, and the “correction” to the volume is nb. Note that when V is relatively large and n is relatively small, both of these correction terms become negligible, and the van der Waals equation reduces to the ideal gas law, PV = nRT. Such a condition corresponds to a gas in which a relatively low number of molecules is occupying a relatively large volume, that is, a gas at a relatively low pressure. Experimental values for the van der Waals constants of some common gases are given in Table $1$.
Table $1$: Values of van der Waals Constants for Some Common Gases
Gas a (L2 atm/mol2) b (L/mol)
N2 1.39 0.0391
O2 1.36 0.0318
CO2 3.59 0.0427
H2O 5.46 0.0305
He 0.0342 0.0237
CCl4 20.4 0.1383
At low pressures, the correction for intermolecular attraction, a, is more important than the one for molecular volume, b. At high pressures and small volumes, the correction for the volume of the molecules becomes important because the molecules themselves are incompressible and constitute an appreciable fraction of the total volume. At some intermediate pressure, the two corrections have opposing influences and the gas appears to follow the relationship given by PV = nRT over a small range of pressures. This behavior is reflected by the “dips” in several of the compressibility curves shown in Figure $1$. The attractive force between molecules initially makes the gas more compressible than an ideal gas, as pressure is raised (Z decreases with increasing P). At very high pressures, the gas becomes less compressible (Z increases with P), as the gas molecules begin to occupy an increasingly significant fraction of the total gas volume.
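This competition between the two corrections can be made concrete with a sketch (not from the text): solve the van der Waals equation for the molar volume at several pressures using the CO2 constants from Table $1$, and compute Z at each point. The temperature and the simple bisection solver are illustrative choices; 313 K is above the CO2 critical temperature, so each isotherm has a single molar volume per pressure.

```python
# Sketch: Z(P) from the van der Waals equation for CO2 (constants from Table 1).
R = 0.08206          # L atm mol^-1 K^-1
a, b = 3.59, 0.0427  # CO2
T = 313.0            # K, an assumed illustrative (supercritical) temperature

def vdw_pressure(V):
    """van der Waals pressure for one mole at molar volume V (L/mol)."""
    return R * T / (V - b) - a / V**2

def molar_volume(P, lo=None, hi=100.0):
    """Bisect for the molar volume at pressure P (supercritical isotherm,
    so pressure decreases monotonically with V on this branch)."""
    lo = b * 1.001 if lo is None else lo
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if vdw_pressure(mid) > P:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for P in (1.0, 10.0, 40.0):
    V = molar_volume(P)
    print(P, round(P * V / (R * T), 3))  # Z sinks below 1: the attractive dip
```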
Strictly speaking, the ideal gas equation functions well when intermolecular attractions between gas molecules are negligible and the gas molecules themselves do not occupy an appreciable part of the whole volume. These criteria are satisfied under conditions of low pressure and high temperature. Under such conditions, the gas is said to behave ideally, and deviations from the gas laws are small enough that they may be disregarded—this is, however, very often not the case.
Example $1$: Comparison of Ideal Gas Law and van der Waals Equation
A 4.25-L flask contains 3.46 mol CO2 at 229 °C. Calculate the pressure of this sample of CO2:
1. from the ideal gas law
2. from the van der Waals equation
3. Explain the reason(s) for the difference.
Solution
(a) From the ideal gas law:
$P=\dfrac{nRT}{V}=\mathrm{\dfrac{3.46\cancel{mol}×0.08206\cancel{L}atm\cancel{mol^{−1}}\cancel{K^{−1}}×502\cancel{K}}{4.25\cancel{L}}=33.5\:atm}$
(b) From the van der Waals equation:
$\left(P+\dfrac{n^2a}{V^2}\right)×(V−nb)=nRT⟶P=\dfrac{nRT}{(V−nb)}−\dfrac{n^2a}{V^2}$
$P=\mathrm{\dfrac{3.46\:mol×0.08206\:L\:atm\:mol^{−1}\:K^{−1}×502\: K}{(4.25\:L−3.46\:mol×0.0427\:L\:mol^{−1})}−\dfrac{(3.46\:mol)^2×3.59\:L^2\:atm\:mol^{−2}}{(4.25\:L)^2}}$
This finally yields P = 32.4 atm.
(c) This is not very different from the value from the ideal gas law because the pressure is not very high and the temperature is not very low. The value is somewhat different because CO2 molecules do have some volume and attractions between molecules, and the ideal gas law assumes they do not have volume or attractions.
Exercise $1$
A 560-mL flask contains 21.3 g N2 at 145 °C. Calculate the pressure of N2:
1. from the ideal gas law
2. from the van der Waals equation
3. Explain the reason(s) for the difference.
Answer a
46.562 atm
Answer b
46.594 atm
Answer c
The van der Waals equation takes into account the volume of the gas molecules themselves as well as intermolecular attractions.
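Both the worked example and the exercise above can be checked numerically. This is an illustrative sketch, using the constants from Table $1$ and M(N2) = 28.014 g/mol.

```python
# Sketch: ideal-gas vs van der Waals pressures for Example 1 and Exercise 1.
R = 0.08206  # L atm mol^-1 K^-1

def p_ideal(n, V, T):
    """Ideal gas pressure in atm (n in mol, V in L, T in K)."""
    return n * R * T / V

def p_vdw(n, V, T, a, b):
    """van der Waals pressure in atm."""
    return n * R * T / (V - n * b) - n**2 * a / V**2

# Example 1: 3.46 mol CO2 in 4.25 L at 502 K
print(round(p_ideal(3.46, 4.25, 502), 1))              # 33.5
print(round(p_vdw(3.46, 4.25, 502, 3.59, 0.0427), 1))  # 32.4

# Exercise 1: 21.3 g N2 in 0.560 L at 418 K
n = 21.3 / 28.014  # mol
print(round(p_ideal(n, 0.560, 418), 1))                # 46.6
print(round(p_vdw(n, 0.560, 418, 1.39, 0.0391), 1))    # 46.6
```

For N2 under these conditions the two answers nearly coincide: the a and b corrections pull in opposite directions and almost cancel.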
Summary
Gas molecules possess a finite volume and experience forces of attraction for one another. Consequently, gas behavior is not necessarily described well by the ideal gas law. Under conditions of low pressure and high temperature, these factors are negligible, the ideal gas equation is an accurate description of gas behavior, and the gas is said to exhibit ideal behavior. However, at lower temperatures and higher pressures, corrections for molecular volume and molecular attractions are required to account for finite molecular size and attractive forces. The van der Waals equation is a modified version of the ideal gas law that can be used to account for the non-ideal behavior of gases under these conditions.
Key Equations
• $\mathrm{Z=\dfrac{molar\:volume\: of\: gas\: at\: same\:\mathit{T}\:and\:\mathit{P}}{molar\: volume\: of\: ideal\: gas\: at\: same\:\mathit{T}\:and\:\mathit{P}}}=\left(\dfrac{P×\bar{V}}{R×T}\right)_\ce{measured}$
• $\left(P+\dfrac{n^2a}{V^2}\right)×(V−nb)=nRT$
Glossary
compressibility factor (Z)
ratio of the experimentally measured molar volume for a gas to its molar volume as computed from the ideal gas equation
van der Waals equation
modified version of the ideal gas equation containing additional terms to account for non-ideal gas behavior
1.02: Virial Equations
It is often useful to fit accurate pressure-volume-temperature data to polynomial equations. The experimental data can be used to compute a quantity called the compressibility factor, $Z$, which is defined as the pressure–volume product for the real gas divided by the pressure–volume product for an ideal gas at the same temperature.
We have
${\left(PV\right)}_{ideal\ gas}=nRT \nonumber$
Letting P and V represent the pressure and volume of the real gas, and introducing the molar volume, $\overline{V}={V}/{n}$, we have
$Z=\frac{\left(PV\right)_{real\ gas}}{\left(PV\right)_{ideal\ gas}}=\frac{PV}{nRT}=\frac{P\overline{V}}{RT} \nonumber$
Since $Z=1$ if the real gas behaves exactly like an ideal gas, experimental values of Z will tend toward unity under conditions in which the density of the real gas becomes low and its behavior approaches that of an ideal gas. At a given temperature, we can conveniently capture this behavior by fitting the Z values to a polynomial in P or a polynomial in ${\overline{V}}^{-1}$ whose leading term is 1. The coefficients are functions of temperature. If the data are fit to a polynomial in the pressure, the equation is
$Z=1+B^*\left(T\right)P+C^*\left(T\right)P^2+D^*\left(T\right)P^3+\dots \nonumber$
For a polynomial in ${\overline{V}}^{-1}$, the equation is
$Z=1+\frac{B\left(T\right)}{\overline{V}}+\frac{C\left(T\right)}{\overline{V}^2}+\frac{D\left(T\right)}{\overline{V}^3}+\dots \nonumber$
These empirical equations are called virial equations. As indicated, the parameters are functions of temperature. The values of $B^*\left(T\right)$, $C^*\left(T\right)$, $D^*\left(T\right)$, …, and $B\left(T\right)$, $C\left(T\right)$, $D\left(T\right)$, …, must be determined for each real gas at every temperature. (Note also that $B^*\left(T\right)\neq B\left(T\right)$, $C^*\left(T\right)\neq C\left(T\right)$, $D^*\left(T\right)\neq D\left(T\right)$, etc. However, it is true that $B^*={B}/{RT}$.) Values for these parameters are tabulated in various compilations of physical data. In these tabulations, $B\left(T\right)$ and $C\left(T\right)$ are called the second virial coefficient and third virial coefficient, respectively.
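The relation $B^* = B/RT$ can be illustrated with a short sketch (not from the text). The second virial coefficient below is an assumed round number of the right magnitude for N2 near room temperature, not a tabulated datum; truncating both series after the second term, the two expansions give nearly the same Z.

```python
# Sketch: the two truncated virial expansions agree when B* = B/(RT).
R = 0.08206   # L atm mol^-1 K^-1
T = 300.0     # K
B = -0.0045   # L/mol, assumed illustrative value (roughly N2-like)
B_star = B / (R * T)   # atm^-1, from the relation B* = B/RT

Vbar = 2.0                        # L/mol, a moderately dense state
Z_volume = 1 + B / Vbar           # series in 1/Vbar, truncated
P = Z_volume * R * T / Vbar       # pressure consistent with this Vbar
Z_pressure = 1 + B_star * P       # series in P, truncated

print(round(Z_volume, 5), round(Z_pressure, 5))  # nearly equal
```

The small residual difference is of the order of the neglected third-virial terms.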
• 2.1: Kinetic Molecular Theory
The behavior of ideal gases is explained by the kinetic molecular theory of gases. Molecular motion, which leads to collisions between molecules and the container walls, explains pressure, and the large intermolecular distances in gases explain their high compressibility. Although all gases have the same average kinetic energy at a given temperature, they do not all possess the same root mean square speed. The actual values of speed and kinetic energy are not the same for all gas particles.
02: Extension 1.1 - Kinetic Molecular Theory
Learning Objectives
• To understand the significance of the kinetic molecular theory of gases.
The laws that describe the behavior of gases were well established long before anyone had developed a coherent model of the properties of gases. In this section, we introduce a theory that describes why gases behave the way they do. The theory we introduce can also be used to derive laws such as the ideal gas law from fundamental principles and the properties of individual particles.
A Molecular Description
The kinetic molecular theory of gases explains the laws that describe the behavior of gases. Developed during the mid-19th century by several physicists, including the Austrian Ludwig Boltzmann (1844–1906), the German Rudolf Clausius (1822–1888), and the Scotsman James Clerk Maxwell (1831–1879), who is also known for his contributions to electricity and magnetism, this theory is based on the properties of individual particles as defined for an ideal gas and the fundamental concepts of physics. Thus the kinetic molecular theory of gases provides a molecular explanation for observations that led to the development of the ideal gas law. The kinetic molecular theory of gases is based on the following four postulates:
four postulates of Kinetic Molecular Theory
1. A gas is composed of a large number of particles called molecules (whether monatomic or polyatomic) that move randomly in straight-line, continuous motion.
2. Gas molecules collide with one another and with the walls of the container, but these collisions are perfectly elastic; that is, they do not change the average kinetic energy of the molecules.
3. Because the distance between gas molecules is much greater than the size of the molecules, the volume of the molecules is negligible. They are considered "point" particles.
4. Intermolecular interactions, whether repulsive or attractive, are so weak that they are also negligible.
Although the molecules of real gases have nonzero volumes and exert both attractive and repulsive forces on one another, for the moment we will focus on how the kinetic molecular theory of gases relates to the properties of gases we have been discussing. In the following sections, we explain how this theory must be modified to account for the behavior of real gases.
Postulates 1 and 2 state that gas molecules are in constant motion and collide frequently with the walls of their containers. The collision of molecules with their container walls results in a momentum transfer (impulse) from molecules to the walls (Figure $2$).
The momentum transfer to the wall perpendicular to $x$ axis as a molecule with an initial velocity $v_x$ in $x$ direction hits is expressed as:
$\Delta p_x=2mv_x \label{10.7.1}$
The collision frequency, a number of collisions of the molecules to the wall per unit area and per second, increases with the molecular speed and the number of molecules per unit volume.
$f\propto (v_x) \times \Big(\dfrac{N}{V}\Big) \label{10.7.2}$
The pressure the gas exerts on the wall is expressed as the product of impulse and the collision frequency.
$P\propto (2mv_x)\times(v_x)\times\Big(\dfrac{N}{V}\Big)\propto \Big(\dfrac{N}{V}\Big)mv_x^2 \label{10.7.3}$
At any instant, however, the molecules in a gas sample are traveling at different speeds. Therefore, we must replace $v_x^2$ in the expression above with the average value of $v_x^2$, which is denoted by $\left\langle v_x^{2}\right\rangle$. The angle brackets designate an average over all molecules.
The exact expression for pressure is given as :
$P=\dfrac{N}{V}m\left\langle v_x^{2}\right\rangle\label{10.7.4}$
Finally, we must consider that there is nothing special about $x$ direction. We should expect that
$\left\langle v_x^{2}\right\rangle= \left\langle v_y^{2}\right\rangle=\left\langle v_z^{2}\right\rangle=\dfrac{1}{3}\left\langle v^{2}\right\rangle$
Here the quantity $\left\langle v^{2}\right\rangle$ is called the mean-square speed defined as the average value of square-speed ($v^2$) over all molecules. Since
$v^2=v_x^2+v_y^2+v_z^2$
for each molecule, then
$\left\langle v^{2}\right\rangle=\left\langle v_x^{2}\right\rangle+\left\langle v_y^{2}\right\rangle+\left\langle v_z^{2}\right\rangle$
By substituting $\dfrac{1}{3}\left\langle v^{2}\right\rangle$ for $\left\langle v_x^{2}\right\rangle$ in the expression above, we can get the final expression for the pressure:
$P=\dfrac{1}{3}\dfrac{N}{V}m\left\langle v^{2}\right\rangle \label{10.7.5}$
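As a numerical sanity check (an illustrative sketch, not part of the original text), substituting $\left\langle v^{2}\right\rangle = 3RT/M$ into the kinetic-theory pressure above reproduces the ideal-gas pressure $nRT/V$:

```python
# Sketch: P = (1/3)(N/V) m <v^2> equals nRT/V when <v^2> = 3RT/M.
R = 8.3145        # J mol^-1 K^-1
N_A = 6.022e23    # mol^-1
M = 0.028014      # kg/mol (N2)
T = 298.0         # K
n, V = 1.0, 0.0224  # mol, m^3 (about one mole near the STP molar volume)

m = M / N_A                  # mass of one molecule, kg
msv = 3 * R * T / M          # mean-square speed <v^2>, m^2/s^2
P_kinetic = (n * N_A / V) * m * msv / 3.0  # Pa, from kinetic theory
P_ideal = n * R * T / V                    # Pa, from the ideal gas law
print(round(P_kinetic), round(P_ideal))    # identical
```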
Because volumes and intermolecular interactions are negligible, postulates 3 and 4 state that all gaseous particles behave identically, regardless of the chemical nature of their component molecules. This is the essence of the ideal gas law, which treats all gases as collections of particles that are identical in all respects except mass. Postulate 3 also explains why it is relatively easy to compress a gas; you simply decrease the distance between the gas molecules.
Additionally, the average kinetic energy of the molecules of any gas depends only on the temperature, and at a given temperature, all gaseous molecules have exactly the same average kinetic energy. This is sometimes considered Postulate 5 of Kinetic Molecular Theory. This postulate provides a molecular explanation for the temperature of a gas: it refers to the average translational kinetic energy of the molecules of a gas, $\left\langle {E_k}\right\rangle$, and states that at a given Kelvin temperature $(T)$, all gases have the same value of
$\left\langle {E_k}\right\rangle=\dfrac{1}{2}m\left\langle v^{2}\right\rangle=\dfrac{3}{2}\dfrac{R}{N_A}T \label{10.7.6}$
where $N_A$ is the Avogadro's constant. The total translational kinetic energy of 1 mole of molecules can be obtained by multiplying the equation by $N_A$:
$N_A\left\langle {E_k}\right\rangle=\dfrac{1}{2}M\left\langle v^{2}\right\rangle=\dfrac{3}{2}RT \label{10.7.7}$
where $M$ is the molar mass of the gas molecules and is related to the molecular mass by $M=N_Am$. By rearranging the equation, we can get the relationship between the root-mean square speed ($v_{\rm rms}$) and the temperature. The rms speed ($v_{\rm rms}$) is the square root of the sum of the squared speeds divided by the number of particles:
$v_{\rm rms}=\sqrt{ \left\langle v^{2}\right\rangle }=\sqrt{\dfrac{v_1^2+v_2^2+\cdots v_N^2}{N}} \label{10.7.8}$
where $N$ is the number of particles and $v_i$ is the speed of particle $i$.
The relationship between $v_{\rm rms}$ and the temperature is given by:
$v_{\rm rms}=\sqrt{\dfrac{3RT}{M}} \label{10.7.9}$
In Equation $\ref{10.7.9}$, $v_{\rm rms}$ has units of meters per second; consequently, the units of molar mass $M$ are kilograms per mole, temperature $T$ is expressed in kelvins, and the ideal gas constant $R$ has the value 8.3145 J/(K•mol). Equation $\ref{10.7.9}$ shows that $v_{\rm rms}$ of a gas is proportional to the square root of its Kelvin temperature and inversely proportional to the square root of its molar mass. The root mean-square speed of a gas increase with increasing temperature. At a given temperature, heavier gas molecules have slower speeds than do lighter ones.
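These trends can be checked with a short sketch (not part of the original text), using M(N2) = 28.014 g/mol and M(He) = 4.0026 g/mol:

```python
# Sketch: rms speeds from v_rms = sqrt(3RT/M), with M in kg/mol so that
# the result comes out in m/s.
from math import sqrt

R = 8.3145  # J K^-1 mol^-1

def v_rms(M_kg_per_mol, T_K):
    return sqrt(3 * R * T_K / M_kg_per_mol)

print(round(v_rms(0.028014, 298)))   # N2 at 298 K: ~515 m/s
print(round(v_rms(0.0040026, 298)))  # He at 298 K: ~1363 m/s (lighter -> faster)
print(round(v_rms(0.028014, 596)))   # doubling T raises v_rms by sqrt(2)
```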
The rms speed and the average speed do not differ greatly (typically by less than 10%). The distinction is important, however, because the rms speed is the speed of a gas particle that has average kinetic energy. Particles of different gases at the same temperature have the same average kinetic energy, not the same average speed. In contrast, the most probable speed (vp) is the speed at which the greatest number of particles is moving. Because the average kinetic energy of the particles of a gas increases linearly with increasing temperature, Equation $\ref{10.7.6}$ tells us that the rms speed must also increase with temperature because the mass of the particles is constant. At higher temperatures, therefore, the molecules of a gas move more rapidly than at lower temperatures, and vp increases.
At a given temperature, all gaseous particles have the same average kinetic energy but not the same average speed.
Example $1$
The speeds of eight particles were found to be 1.0, 4.0, 4.0, 6.0, 6.0, 6.0, 8.0, and 10.0 m/s. Calculate their average speed ($v_{\rm av}$), root mean square speed ($v_{\rm rms}$), and most probable speed ($v_{\rm m}$).
Given: particle speeds
Asked for: average speed ($v_{\rm av}$), root mean square speed ($v_{\rm rms}$), and most probable speed ($v_{\rm m}$)
Strategy:
Calculate the average speed as the arithmetic mean of the particle speeds, and use Equation $\ref{10.7.8}$ to calculate the rms speed. Find the most probable speed by determining the speed at which the greatest number of particles is moving.
Solution:
The average speed is the sum of the speeds divided by the number of particles:
$v_{\rm av}=\rm\dfrac{(1.0+4.0+4.0+6.0+6.0+6.0+8.0+10.0)\;m/s}{8}=5.6\;m/s \nonumber$
The rms speed is the square root of the sum of the squared speeds divided by the number of particles:
$v_{\rm rms}=\rm\sqrt{\dfrac{(1.0^2+4.0^2+4.0^2+6.0^2+6.0^2+6.0^2+8.0^2+10.0^2)\;m^2/s^2}{8}}=6.2\;m/s \nonumber$
The most probable speed is the speed at which the greatest number of particles is moving. Of the eight particles, three have speeds of 6.0 m/s, two have speeds of 4.0 m/s, and the other three particles have different speeds. Hence $v_{\rm m}=6.0$ m/s. The $v_{\rm rms}$ of the particles, which is related to the average kinetic energy, is greater than their average speed.
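The three quantities in this example can also be computed with a short sketch (not part of the original text):

```python
# Sketch: average, rms, and most probable speeds for the eight particles.
from math import sqrt
from collections import Counter

speeds = [1.0, 4.0, 4.0, 6.0, 6.0, 6.0, 8.0, 10.0]  # m/s

v_av = sum(speeds) / len(speeds)                        # arithmetic mean
v_rms = sqrt(sum(v * v for v in speeds) / len(speeds))  # Equation 10.7.8
v_mp = Counter(speeds).most_common(1)[0][0]             # most common speed

print(round(v_av, 1), round(v_rms, 1), v_mp)  # 5.6 6.2 6.0
```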
Distributions of Molecular Speeds
At any given time, what fraction of the molecules in a particular sample has a given speed? Some of the molecules will be moving more slowly than average, and some will be moving faster than average, but how many in each situation? Answers to questions such as these can have a substantial effect on the amount of product formed during a chemical reaction. This problem was solved mathematically by Maxwell in 1866; he used statistical analysis to obtain an equation that describes the distribution of molecular speeds at a given temperature. Typical curves showing the distributions of speeds of molecules at several temperatures are displayed in Figure $3$. Increasing the temperature has two effects. First, the peak of the curve moves to the right because the most probable speed increases. Second, the curve becomes broader because of the increased spread of the speeds. Thus increased temperature increases the value of the most probable speed but decreases the relative number of molecules that have that speed. Although the mathematics behind curves such as those in Figure $3$ were first worked out by Maxwell, the curves are almost universally referred to as Boltzmann distributions, after one of the other major figures responsible for the kinetic molecular theory of gases.
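The two effects of temperature described above can be seen in a sketch (not from the text) that evaluates the Maxwell speed distribution, $f(v) = 4\pi\left(\frac{M}{2\pi RT}\right)^{3/2} v^2 e^{-Mv^2/2RT}$, for N2 at two temperatures. The analytic peak of $f(v)$ is the most probable speed, $v_p = \sqrt{2RT/M}$.

```python
# Sketch: Maxwell speed distribution for N2; higher T shifts the peak right
# and lowers its height (the curve broadens).
from math import pi, exp, sqrt

R, M = 8.3145, 0.028014  # J K^-1 mol^-1; kg/mol (N2)

def f(v, T):
    """Probability density of speed v (m/s) at temperature T (K)."""
    a = M / (2 * pi * R * T)
    return 4 * pi * a**1.5 * v * v * exp(-M * v * v / (2 * R * T))

def v_most_probable(T):
    return sqrt(2 * R * T / M)  # analytic maximum of f(v)

for T in (300.0, 900.0):
    vp = v_most_probable(T)
    print(round(vp), f(vp, T))  # peak position grows ~sqrt(T), height drops
```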
The Relationships among Pressure, Volume, and Temperature
We now describe how the kinetic molecular theory of gases explains some of the important relationships we have discussed previously.
• Pressure versus Volume: At constant temperature, the kinetic energy of the molecules of a gas and hence the rms speed remain unchanged. If a given gas sample is allowed to occupy a larger volume, then the speed of the molecules does not change, but the density of the gas (number of particles per unit volume) decreases, and the average distance between the molecules increases. Hence the molecules must, on average, travel farther between collisions. They therefore collide with one another and with the walls of their containers less often, leading to a decrease in pressure. Conversely, increasing the pressure forces the molecules closer together and increases the density, until the collective impact of the collisions of the molecules with the container walls just balances the applied pressure.
• Volume versus Temperature: Raising the temperature of a gas increases the average kinetic energy and therefore the rms speed (and the average speed) of the gas molecules. Hence as the temperature increases, the molecules collide with the walls of their containers more frequently and with greater force. This increases the pressure, unless the volume increases to reduce the pressure, as we have just seen. Thus an increase in temperature must be offset by an increase in volume for the net impact (pressure) of the gas molecules on the container walls to remain unchanged.
• Pressure of Gas Mixtures: Postulate 4 of the kinetic molecular theory of gases states that gas molecules exert no attractive or repulsive forces on one another. If the gaseous molecules do not interact, then the presence of one gas in a gas mixture will have no effect on the pressure exerted by another, and Dalton’s law of partial pressures holds.
Example $2$
The temperature of a 4.75 L container of N2 gas is increased from 0°C to 117°C. What is the qualitative effect of this change on the
1. average kinetic energy of the N2 molecules?
2. rms speed of the N2 molecules?
3. average speed of the N2 molecules?
4. impact of each N2 molecule on the wall of the container during a collision with the wall?
5. total number of collisions per second of N2 molecules with the walls of the entire container?
6. number of collisions per second of N2 molecules with each square centimeter of the container wall?
7. pressure of the N2 gas?
Given: temperatures and volume
Asked for: effect of increase in temperature
Strategy:
Use the relationships among pressure, volume, and temperature to predict the qualitative effect of an increase in the temperature of the gas.
Solution:
1. Increasing the temperature increases the average kinetic energy of the N2 molecules.
2. An increase in average kinetic energy can be due only to an increase in the rms speed of the gas particles.
3. If the rms speed of the N2 molecules increases, the average speed also increases.
4. If, on average, the particles are moving faster, then they strike the container walls with more energy.
5. Because the particles are moving faster, they collide with the walls of the container more often per unit time.
6. The number of collisions per second of N2 molecules with each square centimeter of container wall increases because the total number of collisions has increased, but the volume occupied by the gas and hence the total area of the walls are unchanged.
7. The pressure exerted by the N2 gas increases when the temperature is increased at constant volume, as predicted by the ideal gas law.
Exercise $2$
A sample of helium gas is confined in a cylinder with a gas-tight sliding piston. The initial volume is 1.34 L, and the temperature is 22°C. The piston is moved to allow the gas to expand to 2.12 L at constant temperature. What is the qualitative effect of this change on the
1. average kinetic energy of the He atoms?
2. rms speed of the He atoms?
3. average speed of the He atoms?
4. impact of each He atom on the wall of the container during a collision with the wall?
5. total number of collisions per second of He atoms with the walls of the entire container?
6. number of collisions per second of He atoms with each square centimeter of the container wall?
7. pressure of the He gas?
Answer a
no change
Answer b
no change
Answer c
no change
Answer d
no change
Answer e
decreases
Answer f
decreases
Answer g
decreases
Summary
• The kinetic molecular theory of gases provides a molecular explanation for the observations that led to the development of the ideal gas law.
• Average kinetic energy: $\left\langle {E_k}\right\rangle =\dfrac{1}{2}m{v_{\rm rms}}^2=\dfrac{3}{2}\dfrac{R}{N_A}T \nonumber$
• Root mean square speed: $v_{\rm rms}=\sqrt{\dfrac{v_1^2+v_2^2+\cdots v_N^2}{N}} \nonumber$
• Kinetic molecular theory of gases: $v_{\rm rms}=\sqrt{\dfrac{3RT}{M}} \nonumber$
The behavior of ideal gases is explained by the kinetic molecular theory of gases. Molecular motion, which leads to collisions between molecules and the container walls, explains pressure, and the large intermolecular distances in gases explain their high compressibility. Although all gases have the same average kinetic energy at a given temperature, they do not all possess the same root mean square (rms) speed (vrms). The actual values of speed and kinetic energy are not the same for all particles of a gas but are given by a Boltzmann distribution, in which some molecules have higher or lower speeds (and kinetic energies) than average.
• 3.1: Van der Waals' Equation
We often assume that gas molecules do not interact with one another, but simple arguments show that this can be only approximately true. Real gas molecules must interact with one another. At short distances they repel one another. At somewhat longer distances, they attract one another. Van der Waals’ equation fits pressure-volume-temperature data for a real gas better than the ideal gas equation does. The improved fit is obtained by introducing two experimentally determined parameters.
03: Extension 1.2 - Microscopic Gas Models
An equation due to van der Waals extends the ideal gas equation in a straightforward way. Van der Waals’ equation is
$\left(P+\frac{an^2}{V^2}\right)\left(V-nb\right)=nRT \nonumber$
It fits pressure-volume-temperature data for a real gas better than the ideal gas equation does. The improved fit is obtained by introducing two parameters (designated “$a$” and “$b$”) that must be determined experimentally for each gas. Van der Waals’ equation is particularly useful in our effort to understand the behavior of real gases, because it embodies a simple physical picture for the difference between a real gas and an ideal gas.
In deriving Boyle’s law from Newton’s laws, we assume that the gas molecules do not interact with one another. Simple arguments show that this can be only approximately true. Real gas molecules must interact with one another. At short distances they repel one another. At somewhat longer distances, they attract one another. The ideal gas equation can also be derived from the basic assumptions that we make in §10 by an application of the theory of statistical thermodynamics. By making different assumptions about molecular properties, we can apply statistical thermodynamics to derive${}^{5}$ van der Waals’ equation. The required assumptions are that the molecules occupy a finite volume and that they attract one another with a force that varies as the inverse of a power of the distance between them. (The attractive force is usually assumed to be proportional to $r^{-6}$.)
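A common model that combines the $r^{-6}$ attraction mentioned above with a steeper short-range repulsion is the Lennard-Jones 12-6 potential. The following sketch is illustrative and is not taken from the text; the well depth $\epsilon$ and length scale $\sigma$ are assumed round numbers, not fitted constants for any particular gas.

```python
# Sketch: Lennard-Jones 12-6 potential, V(r) = 4*eps*((sigma/r)^12 - (sigma/r)^6).
# The r^-12 term repels at short range; the r^-6 term attracts at long range.
def lj(r, epsilon=1.0, sigma=1.0):
    sr6 = (sigma / r) ** 6
    return 4 * epsilon * (sr6 * sr6 - sr6)

r_min = 2 ** (1 / 6)        # analytic minimum: attraction balances repulsion
print(round(lj(r_min), 6))  # -1.0: the well depth equals -epsilon
print(lj(0.9) > 0)          # True: strong repulsion at short range
print(-0.01 < lj(3.0) < 0)  # True: weak attraction at long range
```

The minimum at $r = 2^{1/6}\sigma$ plays the role of the balance point described below: it sets the characteristic intermolecular distance, and hence the molar volume, of a condensed phase.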
To recognize that real gas molecules both attract and repel one another, we need only remember that any gas can be liquefied by reducing its temperature and increasing the pressure applied to it. If we cool the liquid further, it freezes to a solid. Now, two distinguishing features of a solid are that it retains its shape and that it is almost incompressible. We attribute the incompressibility of a solid to repulsive forces between its constituent molecules; they have come so close to one another that repulsive forces between them have become important. To compress the solid, the molecules must be pushed still closer together, which requires inordinate force. On the other hand, if we throw an ice cube across the room, all of its constituent water molecules fly across the room together. Evidently, the water molecules in the solid are attracted to one another, otherwise they would all go their separate ways—throwing the ice cube would be like throwing a handful of dry sand. But water molecules are the same molecules whatever the temperature or pressure, so if there are forces of attraction and repulsion between them in the solid, these forces must be present in the liquid and gas phases also.
In the gas phase, molecules are far apart; in the liquid or the solid phase, they are packed together. At its boiling point, the volume of a liquid is much less than the volume of the gas from which it is condensed. At the freezing point, the volume of a solid is only slightly different from the volume of the liquid from which it is frozen, and it is certainly greater than zero. These commonplace observations are readily explained by supposing that any molecule has a characteristic volume. We can understand this, in turn, to be a consequence of the nature of the intermolecular forces; evidently, these forces become stronger as the distance between a pair of molecules decreases. Since a liquid or a solid occupies a definite volume, the repulsive force must increase more rapidly than the attractive force when the intermolecular distance is small. Often it is useful to talk about the molar volume of a condensed phase. By molar volume, we mean the volume of one mole of a pure substance. The molar volume of a condensed phase is determined by the intermolecular distance at which there is a balance between intermolecular forces of attraction and repulsion.
Evidently molecules are very close to one another in condensed phases. If we suppose that the empty spaces between molecules are negligible, the volume of a condensed phase is approximately equal to the number of molecules in the sample multiplied by the volume of a single molecule. Then the molar volume is Avogadro’s number times the volume occupied by one molecule. If we know the density, D, and the molar mass, $\overline{M}$, we can find the molar volume, $\overline{V}$, as
$\overline{V}=\frac{\overline{M}}{D} \nonumber$
The volume occupied by a molecule, V${}_{molecule}$, becomes
$V_{molecule}=\frac{\overline{V}}{\overline{N}} \nonumber$
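The two formulas above combine in a few lines. Liquid water (molar mass about 18.0 g/mol, density about 1.00 g/cm³) is used here purely as a familiar example.

```python
N_AVOGADRO = 6.022e23   # molecules per mole

M = 18.0    # g/mol, molar mass of water
D = 1.00    # g/cm^3, density of liquid water

V_molar = M / D                     # molar volume, cm^3/mol
V_molecule = V_molar / N_AVOGADRO   # volume per molecule, cm^3

print(V_molar)       # 18.0 cm^3/mol
print(V_molecule)    # ~3.0e-23 cm^3, i.e. roughly a 0.3 nm cube
```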
The pressure and volume appearing in van der Waals’ equation are the pressure and volume of the real gas. We can relate the terms in van der Waals’ equation to the ideal gas equation: It is useful to think of the terms $\left(P+{{an}^2}/{V^2}\right)$ and $\left(V-nb\right)$ as the pressure and volume of a hypothetical ideal gas. That is
\begin{align*} P_{ideal\ gas}V_{ideal\ gas} &=\left(P_{real\ gas}+\frac{an^2}{V^2_{real\ gas}}\right)\left(V_{real\ gas}-nb\right) \\[4pt] &=nRT \end{align*}
Then we have
$V_{real\ gas}=V_{ideal\ gas}+nb \nonumber$
We derive the ideal gas equation from a model in which the molecules are non-interacting point masses. So the volume of an ideal gas is the volume occupied by a gas whose individual molecules have zero volume. If the individual molecules of a real gas effectively occupy a volume ${b}/{\overline{N}}$, then $n$ moles of them effectively occupy a volume
$\left({b}/{\overline{N}}\right)\left(n\overline{N}\right)=nb. \nonumber$
Van der Waals’ equation says that the volume of a real gas is the volume that would be occupied by non-interacting point masses, $V_{ideal\ gas}$, plus the effective volume of the gas molecules themselves. (When data for real gas molecules are fit to the van der Waals’ equation, the value of $b$ is usually somewhat greater than the volume estimated from the liquid density and molecular weight. See problem 24.)
Similarly, we have
$P_{\text{real gas}}=P_{\text{ideal gas}}-\frac{an^2}{V^2_{\text{real gas}}} \nonumber$
We can understand this as a logical consequence of attractive interactions between the molecules of the real gas. With $a>0$, it says that the pressure of the real gas is less than the pressure of the hypothetical ideal gas, by an amount that is proportional to ${\left({n}/{V}\right)}^2$. The proportionality constant is $a$. Since ${n}/{V}$ is the molar density (moles per unit volume) of the gas molecules, it is a measure of concentration. The number of collisions between molecules of the same kind is proportional to the square of their concentration. (We consider this point in more detail in Chapters 4 and 5.) So ${\left({n}/{V}\right)}^2$ is a measure of the frequency with which the real gas molecules come into close contact with one another. If they attract one another when they come close to one another, the effect of this attraction should be proportional to ${\left({n}/{V}\right)}^2$. So van der Waals’ equation is consistent with the idea that the pressure of a real gas is different from the pressure of the hypothetical ideal gas by an amount that is proportional to the frequency and strength of attractive interactions.
But why should attractive interactions have this effect; why should the pressure of the real gas be less than that of the hypothetical ideal gas? Perhaps the best way to develop a qualitative picture is to recognize that attractive intermolecular forces tend to cause the gas molecules to clump up. After all, it is these attractive forces that cause the molecules to aggregate to a liquid at low temperatures. Above the boiling point, the ability of gas molecules to go their separate ways limits the effects of this tendency; however, even in the gas, the attractive forces must act in a way that tends to reduce the volume occupied by the molecules. Since the volume occupied by the gas is dictated by the size of the container—not by the properties of the gas itself—this clumping-up tendency finds expression as a decrease in pressure.
It is frequently useful to describe the interaction between particles or chemical moieties in terms of a potential energy versus distance diagram. The van der Waals’ equation corresponds to the case that the repulsive interaction between molecules is non-existent until the molecules come into contact. Once they come into contact, the energy required to move them still closer together becomes arbitrarily large. Often this is described by saying that they behave like “hard spheres”. The attractive force between two molecules decreases as the distance between them increases. When they are very far apart the attractive interaction is very small. We say that the energy of interaction is zero when the molecules are infinitely far apart. If we initially have two widely separated, stationary, mutually attracting molecules, they will spontaneously move toward one another, gaining kinetic energy as they go. Their potential energy decreases as they approach one another, reaching its smallest value when the molecules come into contact. Thus, van der Waals’ equation implies the potential energy versus distance diagram sketched in Figure 5.
• 4.1: The Distribution Function as a Summary of Experimental Results
As we collect increasing amounts of data, the accumulation quickly becomes unwieldy unless we can reduce it to a mathematical model. We call the mathematical model we develop a distribution function, because it is a function that expresses what we are able to learn about the data source—the distribution. A distribution function is an equation that summarizes the results of many measurements; it is a mathematical model for a real-world source of data.
• 4.2: Outcomes, Events, and Probability
We also need to introduce the idea that a function that successfully models the results of past experiments can be used to predict some of the characteristics of future results.
• 4.3: Some Important Properties of Events
If we know the probabilities of the possible outcomes of a trial, we can calculate the probabilities for combinations of outcomes. These calculations are based on two rules, which we call the laws of probability. If we partition the outcomes into exhaustive and mutually exclusive events, the laws of probability also apply to events. Since, as we define them, “events” is a more general term than “outcomes,” we call them the law of the probability of alternative events and the law of the probability of compound events.
• 4.4: Applying the Laws of Probability
The laws of probability apply to events that are independent. If the result of one trial depends on the result of another trial, we may still be able to use the laws of probability. However, to do so, we must know the nature of the interdependence.
• 4.5: Combinatorics and Multiplicity
Combinatorics is the branch of math related to counting events and outcomes, while multiplicity is the statistical thermodynamics variable equal to the number of possible outcomes. They are intricately connected.
04: Fundamental 2 - Counting Configurations
In Section 2.10, we derive Boyle’s law from Newton’s laws using the assumption that all gas molecules move at the same speed at a given temperature. This is a poor assumption. Individual gas molecules actually have a wide range of velocities. In Chapter 4, we derive the Maxwell–Boltzmann distribution law for the distribution of molecular velocities. This law gives the fraction of gas molecules having velocities in any range of velocities. Before developing the Maxwell–Boltzmann distribution law, we need to develop some ideas about distribution functions. Most of these ideas are mathematical. We discuss them in a non-rigorous way, focusing on understanding what they mean rather than on proving them.
The overriding idea is that we have a real-world source of data. We call this source of data the distribution. We can collect data from this source to whatever extent we please. The datum that we collect is called the distribution’s random variable. We call each possible value of the random variable an outcome. The process of gathering a set of particular values of the random variable from a distribution is often called sampling or drawing a sample. The set of values that is collected is called the sample. The set of values that comprise the sample is often called “the data.” In scientific applications, the random variable is usually a number that results from making a measurement on a physical system. Calling this process “drawing a sample” can be inappropriate. Often we call the process of getting a value for the random variable “doing an experiment”, “doing a test”, or “making a trial”.
As we collect increasing amounts of data, the accumulation quickly becomes unwieldy unless we can reduce it to a mathematical model. We call the mathematical model we develop a distribution function, because it is a function that expresses what we are able to learn about the data source—the distribution. A distribution function is an equation that summarizes the results of many measurements; it is a mathematical model for a real-world source of data. Specifically, it models the frequency of an event with which we obtain a particular outcome. We usually believe that we can make our mathematical model behave as much like the real-world data source as we want if we use enough experimental data in developing it.
Often we talk about statistics. By a statistic, we mean any mathematical entity that we can calculate from data. Broadly speaking a distribution function is a statistic, because it is obtained by fitting a mathematical function to data that we collect. Two other statistics are often used to characterize experimental data: the mean and the variance. The mean and variance are defined for any distribution. We want to see how to estimate the mean and variance from a set of experimental data collected from a particular distribution.
We distinguish between discrete and continuous distributions. A discrete distribution is a real-world source of data that can produce only particular data values. A coin toss is a good example. It can produce only two outcomes—heads or tails. A continuous distribution is a real-world source of data that can produce data values in a continuous range. The speed of an automobile is a good example. An automobile can have any speed within a rather wide range of speeds. For this distribution, the random variable is automobile speed. Of course we can generate a discrete distribution by aggregating the results of sampling a continuous distribution; if we lump all automobile speeds between 20 mph and 30 mph together, we lose the detailed information about the speed of each automobile and retain only the total number of automobiles with speeds in this interval.
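The aggregation step in the last sentence can be sketched directly: draw many values from a continuous source and lump them into 10-mph-wide bins. The speeds below are drawn from a uniform range purely as a hypothetical stand-in for real traffic data.

```python
import random

random.seed(1)
speeds = [random.uniform(0, 80) for _ in range(10_000)]  # mph, illustrative

# Aggregate the continuous values into 10-mph-wide bins: [0,10), [10,20), ...
counts = {}
for s in speeds:
    lo = 10 * int(s // 10)
    counts[lo] = counts.get(lo, 0) + 1

# The binned counts form a discrete summary of the continuous data.
for lo in sorted(counts):
    print(f"{lo:2d}-{lo + 10:2d} mph: {counts[lo]}")
```

Each automobile’s exact speed is lost; only the number of automobiles in each interval is retained.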
We also need to introduce the idea that a function that successfully models the results of past experiments can be used to predict some of the characteristics of future results.
We reason as follows: We have results from drawing many samples of a random variable from some distribution. We suppose that a mathematical representation has been found that adequately summarizes the results of these experiences. If the underlying distribution—the physical system in scientific applications—remains the same, we expect that a long series of future results would give rise to essentially the same mathematical representation. If 25% of many previous results have had a particular characteristic, we expect that 25% of a large number of future trials will have the same characteristic. We also say that there is one chance in four that the next individual result will have this characteristic; when we say this, we mean that 25% of a large number of future trials will have this characteristic, and the next trial has as good a chance as any other to be among those that do. The probability that an outcome will occur in the future is equal to the frequency with which that outcome has occurred in the past.
Given a distribution, the possible outcomes must be mutually exclusive; in any given trial, the random variable can have only one of its possible values. Consequently, a discrete distribution is completely described when the probability of each of its outcomes is specified. Many distributions are comprised of a finite set of N mutually exclusive possible outcomes. If each of these outcomes is equally likely, the probability that we will observe any particular outcome in the next trial is $1/N$.
We often find it convenient to group the set of possible outcomes into subsets in such a way that each outcome is in one and only one of the subsets. We say that such assignments of outcomes to subsets are exhaustive, because every possible outcome is assigned to some subset; we say that such assignments are mutually exclusive, because no outcome belongs to more than one subset. We call each such subset an event. When we partition the possible outcomes into exhaustive and mutually exclusive events, we can say the same things about the probabilities of events that we can say about the probabilities of outcomes. In our discussions, the term “events” will always refer to an exhaustive and mutually exclusive partitioning of the possible outcomes. Distinguishing between outcomes and events just gives us some language conventions that enable us to create alternative groupings of the same set of real world observations.
Suppose that we define a particular event to be a subset of outcomes that we denote as U. If in a large number of trials, the fraction of outcomes that belong to this subset is F, we say that the probability is F that the outcome of the next trial will belong to this event. To express this in more mathematical notation, we write $P\left(U\right)=F$. When we do so, we mean that the fraction of a large number of future trials that belong to this subset will be F, and the next trial has as good a chance as any other to be among those that do. In a sample comprising M observations, the best forecast we can make of the number of occurrences of U is $M\times P(U)$, and we call this the expected number of occurrences of U in a sample of size M.
The idea of grouping real world observations into either outcomes or events is easy to remember if we keep in mind the example of tossing a die. The die has six faces, which are labeled with 1, 2, 3, 4, 5, or 6 dots. The dots distinguish one face from another. On any given toss, one face of the die must land on top. Therefore, there are six possible outcomes. Since each face has as good a chance as any other of landing on top, the six possible outcomes are equally probable. The probability of any given outcome is ${1}/{6}$. If we ask about the probability that the next toss will result in one of the even-numbered faces landing on top, we are asking about the probability of an event—the event that the next toss will have the characteristic that an even-numbered face lands on top. Let us call this event $X$. That is, event $X$ occurs if the outcome is a 2, a 4, or a 6. These are three of the six equally likely outcomes. Evidently, the probability of this event is ${3}/{6}={1}/{2}$.
Having defined event $X$ as the probability of an even-number outcome, we still have several alternative ways to assign the odd-number outcomes to events. One assignment would be to say that all of the odd-number outcomes belong to a second event—the event that the outcome is odd. The events “even outcome” and “odd outcome” are exhaustive and mutually exclusive. We could create another set of events by assigning the outcomes 1 and 3 to event $Y$, and the outcome 5 to event $Z$. Events $X$, $Y$, and $Z$ are also exhaustive and mutually exclusive.
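The partition of the six die outcomes into events $X$, $Y$, and $Z$ can be checked mechanically: the events must be exhaustive, must be mutually exclusive, and each event’s probability is the sum of its outcomes’ probabilities.

```python
outcomes = {1, 2, 3, 4, 5, 6}
p = {o: 1 / 6 for o in outcomes}   # equally likely outcomes

X = {2, 4, 6}   # even outcome
Y = {1, 3}
Z = {5}

def P(event):
    # Probability of an event = sum over its outcomes
    return sum(p[o] for o in event)

assert X | Y | Z == outcomes           # exhaustive
assert not (X & Y or Y & Z or X & Z)   # mutually exclusive
print(P(X), P(Y), P(Z))                # 1/2, 1/3, 1/6
```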
We have a great deal of latitude in the way we assign the possible outcomes to events. If it suits our purposes, we can create many different exhaustive and mutually exclusive partitionings of the outcomes of a given distribution. We require that each partitioning of outcomes into events be exhaustive and mutually exclusive, because we want to apply the laws of probability to events.
If we know the probabilities of the possible outcomes of a trial, we can calculate the probabilities for combinations of outcomes. These calculations are based on two rules, which we call the laws of probability. If we partition the outcomes into exhaustive and mutually exclusive events, the laws of probability also apply to events. Since, as we define them, “events” is a more general term than “outcomes,” we call them the law of the probability of alternative events and the law of the probability of compound events. These laws are valid so long as three conditions are satisfied. We have already discussed the first two of these conditions, which are that the outcomes possible in any individual trial must be exhaustive and mutually exclusive. The third condition is that, if we make more than one trial, the outcomes must be independent; that is, the outcome of one trial must not be influenced by the outcomes of the others.
We can view the laws of probability as rules for inferring information about combinations of events. The law of the probability of alternative events applies to events that belong to the same distribution. The law of the probability of compound events applies to events that can come from one or more distributions. An important special case occurs when the compound events are $N$ successive samplings of a given distribution that we identify as the parent distribution. If the random variable is a number, and we average the numbers that we obtain from $N$ successive samplings of the parent distribution, these “averages-of-$N$” themselves constitute a distribution. If we know certain properties of the parent distribution, we can calculate corresponding properties of the “distribution of averages-of-$N$ values obtained by sampling the parent distribution.” These calculations are specified by the central limit theorem, which we discuss in Section 3.11.
In general, when we combine events from two distributions, we can view the result as an event that belongs to a third distribution. At first encounter, the idea of combining events and distributions may seem esoteric. A few examples serve to show that what we have in mind is very simple.
Since an event is a set of outcomes, an event occurs whenever any of the outcomes in the set occurs. Partitioning the outcomes of tossing a die into “even outcomes” and “odd outcomes” illustrates this idea. The event “even outcome” occurs whenever the outcome of a trial is $2$, $4,$ or $6$. The probability of an event can be calculated from the probabilities of the underlying outcomes. We call the rule for this calculation the law of the probabilities of alternative events. (We create the opportunity for confusion here because we are illustrating the idea of alternative events by using an example in which we call the alternatives “alternative outcomes” rather than “alternative events.” We need to remember that “event” is a more general term than “outcome.” One possible partitioning is that which assigns every outcome to its own event.) We discuss the probabilities of alternative events further below.
To illustrate the idea of compound events, let us consider a first distribution that comprises “tossing a coin” and a second distribution that comprises “drawing a card from a poker deck.” The first distribution has two possible outcomes; the second distribution has $52$ possible outcomes. If we combine these distributions, we create a third distribution that comprises “tossing a coin and drawing a card from a poker deck.” The third distribution has $104$ possible outcomes. If we know the probabilities of the outcomes of the first distribution and the probabilities of the outcomes of the second distribution, and these probabilities are independent of one another, we can calculate the probability of any outcome that belongs to the third distribution. We call the rule for this calculation the law of the probability of compound events. We discuss it further below.
A similar situation occurs when we consider the outcomes of tossing two coins. We assume that we can tell the two coins apart. Call them coin $1$ and coin $2$. We designate heads and tails for coins $1$ and $2$ as $H_1$, $T_1$, $H_2$, and $T_2$, respectively. There are four possible outcomes in the distribution we call “tossing two coins:” $H_1H_2$, $H_1T_2$, $T_1H_2$, and $T_1T_2$. (If we could not tell the coins apart, $H_1T_2$ would be the same thing as $T_1H_2$; there would be only three possible outcomes.) We can view the distribution “tossing two coins” as being a combination of the two distributions that we can call “tossing coin $1$” and “tossing coin$\ 2$.” We can also view the distribution “tossing two coins” as a combination of two distributions that we call “tossing a coin a first time” and “tossing a coin a second time.” We view the distribution “tossing two coins” as being equivalent to the distribution “tossing one coin twice.” This is an example of repeated trials, which is a frequently encountered type of distribution. In general, we call such a distribution a “distribution of events from a trial repeated N times,” and we view this distribution as being completely equivalent to N simultaneous trials of the same kind. Chapter 19 considers the distribution of outcomes when a trial is repeated many times. Understanding the properties of such distributions is the single most essential element in understanding the theory of statistical thermodynamics. The central limit theorem relates properties of the repeated-trials distribution to properties of the parent distribution.
The Probability of Alternative Events
If we know the probability of each of two mutually exclusive events that belong to an exhaustive set, the probability that one or the other of them will occur in a single trial is equal to the sum of the individual probabilities. Let us call these mutually exclusive events A and B, and represent their probabilities as $P(A)$ and $P(B)$, respectively. The probability that one of these events occurs is the same thing as the probability that either A occurs or B occurs. We can represent this probability as $P(A\ or\ B)$. The probability of this combination of events is the sum: $P(A)+P(B)$. That is,
$P\left(A\ or\ B\right)=P\left(A\right)+P(B) \nonumber$
Above we define Y as the event that a single toss of a die comes up either $1$ or $3$. Because each of these outcomes is one of six mutually exclusive, equally likely outcomes, the probability of either of them is ${1}/{6}$: $P\left(tossing\ a\ 1\right)=P\left(tossing\ a\ 3\right)={1}/{6}$. From the law of the probability of alternative events, we have
\begin{align*} P\left(event\ Y\right) &=P\left(tossing\ a\ 1\ or\ tossing\ a\ 3\right) \\[4pt] &=P\left(tossing\ a\ 1\right)+P\left(tossing\ a\ 3\right) \\[4pt] &= {1}/{6}+{1}/{6} \\[4pt] &={2}/{6} \end{align*}
We define $X$ as the event that a single toss of a die comes up even. From the law of the probability of alternative events, we have
\begin{align*} P\left(event\ X\right) &=P\left(tossing\ 2\ or\ 4\ or\ 6\right) \\[4pt] &=P\left(tossing\ a\ 2\right)+P\left(tossing\ a\ 4\right)+P\left(tossing\ a\ 6\right) \\[4pt] &={3}/{6} \end{align*}
We define $Z$ as the event that a single toss comes up $5$.
$P\left(event\ Z\right)=P\left(tossing\ a\ 5\right)=1/6 \nonumber$
If there are $\omega$ mutually exclusive events (denoted $E_1,E_2,\dots ,E_i,\dots ,E_{\omega }$), the law of the probability of alternative events asserts that the probability that one of these events will occur in a single trial is
\begin{align*} P\left(E_1\ or\ E_2\ or\dots E_i\dots or\ E_{\omega }\right) &=P\left(E_1\right)+P\left(E_2\right)+\dots +P\left(E_i\right)+\dots +P\left(E_{\omega }\right) \\[4pt] &=\sum^{\omega }_{i=1} P\left(E_i\right) \end{align*}
If these $\omega$ mutually exclusive events encompass all of the possible outcomes, the sum of their individual probabilities must be unity.
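The sum law can also be compared against simulated frequencies: in many tosses of a (simulated, hypothetical) fair die, the frequency of each event should approach its probability, and the frequencies over an exhaustive partition sum to exactly one.

```python
import random

random.seed(7)
N = 100_000
tosses = [random.randint(1, 6) for _ in range(N)]

def freq(event):
    # Observed fraction of tosses whose outcome belongs to the event
    return sum(1 for t in tosses if t in event) / N

f_X = freq({2, 4, 6})   # should approach 1/2
f_Y = freq({1, 3})      # should approach 1/3
f_Z = freq({5})         # should approach 1/6

print(f_X, f_Y, f_Z, f_X + f_Y + f_Z)
```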
The Probability of Compound Events
Let us now suppose that we make two trials in circumstances where event $A$ is possible in the first trial and event $B$ is possible in the second trial. We represent the probabilities of these events by $P\left(A\right)$ and $P(B)$ and stipulate that they are independent of one another; that is, the probability that $B$ occurs in the second trial is independent of the outcome of the first trial. Then, the probability that $A$ occurs in the first trial and $B$ occurs in the second trial, $P(A\ and\ B)$, is equal to the product of the individual probabilities.
$P\left(A\ and\ B\right)=P\left(A\right)\times P(B) \nonumber$
To illustrate this using outcomes from die-tossing, let us suppose that event $A$ is tossing a $1$ and event $B$ is tossing a $3$. Then, $P\left(A\right)={1}/{6}$ and $P\left(B\right)={1}/{6}$. The probability of tossing a 1 in a first trial and tossing a $3$ in a second trial is then
\begin{align*} P\left( \text{tossing a 1 first and tossing a 3 second}\right) &=P\left(\text{tossing a 1}\right)\times P\left(\text{tossing a 3}\right) \\[4pt] &={1}/{6}\times {1}/{6} \\[4pt] &={1}/{36} \end{align*}
If we want the probability of getting one $1$ and one $3$ in two tosses, we must add to this the probability of tossing a $3$ first and a $1$ second.
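A short check of the product law, and of the two-order sum just described, using exact rational arithmetic:

```python
from fractions import Fraction

p_face = Fraction(1, 6)          # probability of any single face

p_1_then_3 = p_face * p_face     # product law: 1 first, then 3
p_one_1_one_3 = 2 * p_1_then_3   # either order: (1 then 3) or (3 then 1)

print(p_1_then_3, p_one_1_one_3)   # 1/36 1/18
```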
If there are $\omega$ independent events (denoted $E_1,E_2,\dots ,E_i,\dots ,E_{\omega }$), the law of the probability of compound events asserts that the probability that $E_1$ will occur in a first trial, and $E_2$ will occur in a second trial, etc., is
\begin{align*} P\left(E_1\ and\ E_2\ and\dots E_i\dots and\ E_{\omega }\right) &=P\left(E_1\right)\times P\left(E_2\right)\times \dots \times P\left(E_i\right)\times \dots \times P\left(E_{\omega }\right)\\[4pt] &=\prod^{\omega }_{i=1}{P(E_i)} \end{align*}
The laws of probability apply to events that are independent. If the result of one trial depends on the result of another trial, we may still be able to use the laws of probability. However, to do so, we must know the nature of the interdependence.
If the activity associated with event C precedes the activity associated with event D, the probability of D may depend on whether C occurs. Suppose that the first activity is tossing a coin and that the second activity is drawing a card from a deck; however, the deck we use depends on whether the coin comes up heads or tails. If the coin is heads, we draw a card from an ordinary deck; if the coin is tails, we draw a card from a deck with the face cards removed. Now we ask about the probability of drawing an ace. If the coin is heads, the probability of drawing an ace is ${4}/{52}={1}/{13}$. If the coin is tails, the probability of drawing an ace is ${4}/{40}={1}/{10}$. The combination coin is heads and card is ace has probability $\left({1}/{2}\right)\left({1}/{13}\right)={1}/{26}$. The combination coin is tails and card is ace has probability $\left({1}/{2}\right)\left({1}/{10}\right)={1}/{20}$. In this case, the probability of drawing an ace depends on the modification we make to the deck based on the outcome of the coin toss.
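The coin-and-deck calculation can be reproduced with exact fractions; the overall probability of an ace is the sum of the two compound-event probabilities.

```python
from fractions import Fraction

P_heads = Fraction(1, 2)
P_tails = Fraction(1, 2)
P_ace_given_heads = Fraction(4, 52)   # ordinary 52-card deck
P_ace_given_tails = Fraction(4, 40)   # 40 cards after removing face cards

P_heads_and_ace = P_heads * P_ace_given_heads   # 1/26
P_tails_and_ace = P_tails * P_ace_given_tails   # 1/20
P_ace = P_heads_and_ace + P_tails_and_ace

print(P_heads_and_ace, P_tails_and_ace, P_ace)   # 1/26 1/20 23/260
```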
Applying the laws of probability is straightforward. An example that illustrates the application of these laws in a transparent way is provided by villages First, Second, Third, and Fourth, which are separated by rivers. (See Figure 1.) Bridges $1$, $2$, and $3$ span the river between First and Second. Bridges $a$ and $b$ span the river between Second and Third. Bridges $A$, $B$, $C$, and $D$ span the river between Third and Fourth. A traveler from First to Fourth who is free to take any route he pleases has a choice from among $3\times 2\times 4=24$ possible combinations. Let us consider the probabilities associated with various events:
• There are 24 possible routes. If a traveler chooses his route at random, the probability that he will take any particular route is ${1}/{24}$. This illustrates our assumption that each event in a set of $N$ exhaustive and mutually exclusive events occurs with probability ${1}/{N}$.
• If he chooses a route at random, the probability that he goes from First to Second by either bridge $1$ or bridge $2$ is $P\left(1\right)+P\left(2\right)=\ {1}/{3}+{1}/{3}={2}/{3}$. This illustrates the calculation of the probability of alternative events.
• The probability of the particular route $2\to a\to C$ is $P\left(2\right)\times P\left(a\right)\times P\left(C\right)=\left({1}/{3}\right)\left({1}/{2}\right)\left({1}/{4}\right)={1}/{24}$, and we calculate the same probability for any other route from First to Fourth. This illustrates the calculation of the probability of a compound event.
• If he crosses bridge $1$, the probability that his route will be $2\to a\to C$ is zero, of course. The probability of an event that has already occurred is 1, and the probability of any alternative is zero. If he crosses bridge $1,$ $P\left(1\right)=1$, and $P\left(2\right)=P\left(3\right)=0$.
• Given that a traveler has used bridge $1$, the probability of the route $1\to a\to C$ becomes the probability of path $a\to C$, which is $P\left(a\right)\times P\left(C\right)=\left({1}/{2}\right)\left({1}/{4}\right)={1}/{8}$. Since $P\left(1\right)=1$, the probability of the compound event $1\to a\to C$ is the probability of the compound event $a\to C$.
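The route probabilities above can be verified by brute-force enumeration. A minimal sketch in Python (the bridge names are those used in the text):

```python
from itertools import product

# Enumerate every route from First to Fourth, one bridge per river.
bridges_first_second = ["1", "2", "3"]
bridges_second_third = ["a", "b"]
bridges_third_fourth = ["A", "B", "C", "D"]

routes = list(product(bridges_first_second,
                      bridges_second_third,
                      bridges_third_fourth))
assert len(routes) == 24                 # 3 x 2 x 4 possible routes

p_route = 1 / len(routes)                # each route equally likely: 1/24
# P(crossing bridge 1 or bridge 2 first) = 1/3 + 1/3 = 2/3
p_1_or_2 = sum(p_route for r in routes if r[0] in ("1", "2"))
assert abs(p_1_or_2 - 2/3) < 1e-9
# P(the particular route 2 -> a -> C) = (1/3)(1/2)(1/4) = 1/24
assert abs(p_route - (1/3) * (1/2) * (1/4)) < 1e-9
```

Counting favorable routes and multiplying per-bridge probabilities give the same answers, as the laws of probability require.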
The outcomes of rolling dice provide more illustrations. If we roll two dice, we can classify the possible outcomes according to the sums of the outcomes for the individual dice. There are thirty-six possible outcomes. They are displayed in Table 1.
Table 1: Outcomes from tossing two dice
| Second die \ First die | 1 | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|---|---|---|
| 1 | 2 | 3 | 4 | 5 | 6 | 7 |
| 2 | 3 | 4 | 5 | 6 | 7 | 8 |
| 3 | 4 | 5 | 6 | 7 | 8 | 9 |
| 4 | 5 | 6 | 7 | 8 | 9 | 10 |
| 5 | 6 | 7 | 8 | 9 | 10 | 11 |
| 6 | 7 | 8 | 9 | 10 | 11 | 12 |
Let us consider the probabilities associated with various dice-throwing events:
• The probability of any given outcome, say the first die shows $2$ and the second die shows $3$, is ${1}/{36}$.
• Since the probability that the first die shows $3$ while the second die shows $2$ is also ${1}/{36}$, the probability that one die shows $2$ and the other shows $3$ is $P\left(3\right)\times P\left(2\right)+P\left(2\right)\times P\left(3\right) =\left({1}/{36}\right)+\left({1}/{36}\right) ={1}/{18}. \nonumber$
• Four different outcomes correspond to the event that the score is $5$. Therefore, the probability of rolling $5$ is $P\left(1\right)\times P\left(4\right)+P\left(2\right)\times P\left(3\right) +P\left(3\right)\times P\left(2\right)+P\left(4\right)\times P\left(1\right) ={1}/{9} \nonumber$
• The probability of rolling a score of three or less is the probability of rolling $2$ plus the probability of rolling $3$, which is $\left({1}/{36}\right)+\left({2}/{36}\right)={3}/{36}={1}/{12}$.
• Suppose we roll the dice one at a time and that the first die shows $2$. The probability of rolling $7$ when the second die is thrown is now ${1}/{6}$, because only rolling a $5$ can make the score 7, and there is a probability of ${1}/{6}$ that a $5$ will come up when the second die is thrown.
• Suppose the first die is red and the second die is green. The probability that the red die comes up $2$ and the green die comes up $3$ is $\left({1}/{6}\right)\left({1}/{6}\right)={1}/{36}$.
Above we looked at the number of outcomes associated with a score of $3$ to find that the probability of this event is ${1}/{18}$. We can use another argument to get this result. The probability that two dice roll a score of three is equal to the probability that the first die shows $1$ or $2$ times the probability that the second die shows whatever score is necessary to make the total equal to three. This is:
\begin{align*} P\left(\text{first die shows 1 or 2}\right)\times \left({1}/{6}\right) &= \left[\left({1}/{6}\right)+\left({1}/{6}\right)\right]\times {1}/{6} \\[4pt] &={2}/{36} \\[4pt] &={1}/{18} \end{align*}
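All of the dice probabilities discussed above can be confirmed by enumerating the thirty-six equally likely outcomes. A quick check in Python:

```python
from itertools import product

rolls = list(product(range(1, 7), repeat=2))   # 36 equally likely (die1, die2) pairs

def p(event):
    """Probability of an event: favorable outcomes over total outcomes."""
    return sum(1 for roll in rolls if event(roll)) / len(rolls)

assert p(lambda r: r == (2, 3)) == 1 / 36        # first die 2, second die 3
assert p(lambda r: set(r) == {2, 3}) == 1 / 18   # one die shows 2, the other 3
assert p(lambda r: sum(r) == 5) == 1 / 9         # four outcomes give a score of 5
assert p(lambda r: sum(r) <= 3) == 1 / 12        # score of three or less
assert p(lambda r: sum(r) == 3) == 1 / 18        # score of exactly three
```

Each assertion matches one of the results derived in the text.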
Application of the laws of probability is frequently made easier by recognizing a simple restatement of the requirement that events be mutually exclusive. In a given trial, either an event occurs or it does not. Let the probability that an event A occurs be $P\left(A\right)$. Let the probability that event A does not occur be $P\left(\sim A\right)$. Since in any given trial, the outcome must belong either to event A or to event $\sim A$, we have
$P\left(A\right)+P\left(\sim A\right)=1 \nonumber$
For example, if the probability of success in a single trial is ${2}/{3}$, the probability of failure is ${1}/{3}$. If we consider the outcomes of two successive trials, we can group them into four events.
• Event SS: First trial is a success; second trial is a success.
• Event SF: First trial is a success; second trial is a failure.
• Event FS: First trial is a failure; second trial is a success.
• Event FF: First trial is a failure; second trial is a failure.
Using the laws of probability, we have
\begin{align*} 1 &=P\left(\text{Event SS}\right)+P\left(\text{Event SF}\right)+P\left(\text{Event FS}\right)+P\left(\text{Event FF}\right) \\[4pt] &=P_1\left(S\right)\times P_2\left(S\right)+P_1\left(S\right)\times P_2\left(F\right)+P_1\left(F\right)\times P_2\left(S\right)+P_1\left(F\right)\times P_2\left(F\right) \end{align*}
where $P_1\left(X\right)$ and $P_2\left(X\right)$ are the probability of event $X$ in the first and second trials, respectively.
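The two-trial bookkeeping above is easy to verify numerically. A short Python check using the example probabilities of ${2}/{3}$ for success and ${1}/{3}$ for failure:

```python
p_success, p_failure = 2/3, 1/3
# Complement rule: P(A) + P(~A) = 1
assert abs(p_success + p_failure - 1) < 1e-12

# The four mutually exclusive, exhaustive outcomes of two successive trials:
p_SS = p_success * p_success
p_SF = p_success * p_failure
p_FS = p_failure * p_success
p_FF = p_failure * p_failure
assert abs(p_SS + p_SF + p_FS + p_FF - 1) < 1e-12
```

The four products correspond to the four areas of the unit square described next.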
This situation can be mapped onto a simple diagram. We represent the possible outcomes of the first trial by line segments on one side of a unit square $P_1\left(S\right)+P_1\left(F\right)=1$. We represent the outcomes of the second trial by line segments along an adjoining side of the unit square. The four possible events are now represented by the areas of four mutually exclusive and exhaustive portions of the unit square as shown in Figure 2. | textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Chemical_Thermodynamics_(Supplement_to_Shepherd_et_al.)/04%3A_Fundamental_2_-_Counting_Configurations/4.04%3A_Applying_the_Laws_of_Probability.txt |
Definitions
Combinatorics: a branch of mathematics that deals with the rules for combining different outcomes and events and calculating the probabilities of these combinations.
Probability: The probability of an outcome is a measure of the likelihood that the outcome will occur in comparison to all possible outcomes.
Permutation: one “way” in which all or part of a set of objects can be ordered or arranged.
Combination: one “way” of selecting all or part of a set of objects without regard to the order in which the objects are selected.
Multiplicity: the multiplicity of events is the total number of “ways” in which different outcomes can possibly occur. Represented by the symbol W, and also called ways, permutations, sequences, degeneracy, weight, arrangements, thermodynamic probability, etc. depending on the context.
As you can see from these definitions, combinatorics is the branch of math related to counting events and outcomes, while multiplicity is the statistical thermodynamics variable equal to the number of possible outcomes. They are intricately connected.
Depending on the situation, this number of possible outcomes (multiplicity) could be calculated using the fundamental principles of counting, permutation formulas, or combination formulas from the field of combinatorics. Below is a detailed explanation and example of each of these counting methods and when they can be applied.
The Fundamental Principles of Counting
The Multiplication Principle
During restaurant week, you go out to eat for dinner. The restaurant week menu gives you the option to choose between three appetizers, four entrées, and two desserts. You feel a bit overwhelmed by the number of possibilities; how many different meal options are there?
• Each of these selection events has a different number of possible outcomes or options:
• Appetizer selection = 3 options
• Entrée selection = 4 options
• Dessert selection = 2 options
• Because you will choose one appetizer AND one entrée AND one dessert, the total number of different ways your meal could be prepared is: $W=3 \times 4 \times 2=24$
• One of the Fundamental Principles of Counting, the Multiplication Principle, states that if there are $n_i$ possible outcomes for each event type, $i$, in a sequence of $t$ events, then the total number of possible outcomes is equal to the values of $n_i$ multiplied together: $W=n_{1} n_{2} \cdots n_{t}=\prod_{i=1}^{t} n_{i}$ where the $\prod$ symbol is the product operator (analogous to the $\sum$ symbol for the sum operator).
• In this context, each $n_i$ represents the number of possible outcomes for each event. Therefore, the multiplicity of each event type, $W_i$, is equal to $n_i$, and the total multiplicity, $W_{\text{total}}$, can be determined by: $W_{\text{total}}=W_{1} W_{2} \cdots W_{t}=\prod_{i=1}^{t} W_{i}$
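The restaurant-week example can be checked by generating every meal explicitly. A sketch in Python (the menu item names are made up; only the counts 3, 4, and 2 come from the text):

```python
from itertools import product
from math import prod

# Hypothetical menu items; only the counts (3, 4, 2) are from the example.
appetizers = ["soup", "salad", "bruschetta"]
entrees = ["pasta", "steak", "salmon", "risotto"]
desserts = ["cake", "sorbet"]

meals = list(product(appetizers, entrees, desserts))
# Multiplication Principle: W = 3 * 4 * 2 = 24
assert len(meals) == prod(len(course) for course in (appetizers, entrees, desserts)) == 24
```

`itertools.product` builds exactly the AND-sequences that the Multiplication Principle counts.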
The Addition Principle
You are looking to buy a new binder to store your class notes. You’re torn between a 1” binder and a 1.5” binder. The 1” binder comes in 5 colors and the 1.5” binder comes in 3. How many total binder options are you considering?
• The two binder sizes each have a different number of possible outcomes:
• 1” binder = 5 outcomes
• 1.5” binder = 3 outcomes
• Because you will choose a 1” binder OR a 1.5” binder, the total number of different outcomes you are considering is: $W=5+3=8$
• One of the Fundamental Principles of Counting, the Addition Principle, states that if there are $n_i$ possible outcomes for each mutually exclusive event, $i$ (we cannot choose both at the same time), then the total number of possible outcomes is equal to the values of $n_i$ added together: $W=n_{1}+n_{2}+n_{3}+\cdots=\sum_{i=1}^{t} n_{i}$
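The binder example can be sketched the same way (the color names are made up; only the counts 5 and 3 come from the text):

```python
# Hypothetical colors; only the counts (5 and 3) are from the example.
colors_1_inch = ["red", "blue", "green", "black", "white"]
colors_1_5_inch = ["red", "blue", "black"]

# Addition Principle: the choices are mutually exclusive (you buy ONE binder),
# so the option counts add rather than multiply.
total_options = len(colors_1_inch) + len(colors_1_5_inch)
assert total_options == 8
```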
Permutations
Permutations of distinguishable outcomes without repetition: ALL outcomes selected
You decide to take on the challenge of trying each of CookOut’s 40 flavors of milkshakes! If you have a different milkshake each day until you’ve tried them all (no repeats!), how many different sequences of milkshakes (i.e., orders or arrangements) are possible?
• Since you are not repeating any milkshakes during this time, you will choose from 40 milkshakes on day one AND 39 on day two AND 38 on day three, etc. According to the Multiplication Principle above, the total number of sequences is: $W=40 \times 39 \times 38 \times 37 \times \cdots \times 2 \times 1=40 !=8.16 \times 10^{47}$
• The order of the milkshakes matters in this question but no milkshakes are repeated; this is called a permutation without repetition: $W=N !$ where $N$ is the total number of possible outcomes and all possible outcomes are sampled (i.e., you will keep selecting milkshakes until you’ve tried all the milkshakes). Because you can identify which milkshake you are trying each day, the outcomes or options are considered distinguishable.
Permutations of distinguishable outcomes without repetition: SOME outcomes only
On second thought, having a different milkshake every day for 40 days may be a bit much… Instead, you decide to have a different milkshake every day for a week. (Then you’ll take a break and come back to tackle the rest of the menu in the future!) How many different arrangements or sequences are possible during the first week?
• The math is similar to the previous question, except that we only need to multiply the first seven of the numbers in the factorial for the first seven days: $W=40 \times 39 \times 38 \times 37 \times 36 \times 35 \times 34=93963542400$
• A more general expression for this permutation without repetition includes the total number of possible outcomes, N, and the total number of selection events, r, which is expressed as “$N$, take $r$” (or you may want to think of it as “$N$, arrange $r$”): $W=\ _{N}P_{r}=\frac{N !}{(N-r) !}$ where $_{N}P_{r}$ is a common notation for permutation without repetition.
• For our example, this would be: $W={ }_{40} P_{7}=\frac{40 !}{(40-7) !}=\frac{40 !}{33 !}=\frac{40 \times 39 \times 38 \times 37 \times 36 \times 35 \times 34 \times 33 !}{33 !}=93963542400$
• The previous example (with all 40 milkshakes) can also be depicted this way, however, since all items are selected, $N$ and $r$ are equal: $W={ }_{40} P_{40}=\frac{40 !}{(40-40) !}=\frac{40 !}{0 !}=40!$
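Python's standard library computes these permutation counts directly, which makes it easy to confirm the milkshake numbers:

```python
from math import factorial, perm

# All 40 milkshakes, one per day: 40 P 40 = 40!
assert perm(40, 40) == factorial(40)
# One week of different milkshakes: 40 P 7 = 40!/33!
assert perm(40, 7) == 93_963_542_400
assert perm(40, 7) == factorial(40) // factorial(40 - 7)
```

`math.perm(N, r)` (Python 3.8+) implements exactly the $_{N}P_{r}=N!/(N-r)!$ formula above.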
Permutations of distinguishable outcomes with repetition
To help power you through finals, you decide to have a milkshake every day during finals week, but you are not going to bother with trying different ones; you may just decide to have the same one every day! What would the total number of milkshake sequences be in this case?
• If every day you have the option of 40 different milkshakes, the number of possible sequences is: $W=40 \times 40 \times 40 \times 40 \times 40 \times 40 \times 40=40^{7}=163840000000$
• This is a permutation with repetition, and the equation gives the number of possible sequences for r events that each have N possible outcomes: $W=N^{r}$
• The term repetition indicates that an outcome or object is not removed from the available pool after selection. Another way to refer to this concept is as permutation with replacement; after selecting a particular outcome, that outcome is returned to the selection pool so that the available selection options are always the same.
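The $W=N^{r}$ count for permutations with repetition can be cross-checked by direct enumeration on a small case, alongside the milkshake number from the text:

```python
from itertools import product

# A milkshake every day of finals week, repeats allowed: 40**7 sequences.
assert 40 ** 7 == 163_840_000_000

# Cross-check N**r on a small case by enumerating every sequence:
N, r = 4, 3
sequences = list(product(range(N), repeat=r))
assert len(sequences) == N ** r == 64
```

Passing `repeat=r` to `itertools.product` models selection with replacement: every day the full pool of $N$ options is available again.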
Permutations of indistinguishable outcomes
Over winter break, you purchase an 11 oz bag of Holiday Milk Chocolate Hershey’s Kisses. In the bag of 72 kisses, you have 25 red, 23 silver, and 24 green kisses. If you pull the kisses out of the bag one at a time, how many different sequences of holiday colors are possible?
• It may seem that this description also refers to calculating the number of permutations for items that repeat (since there are multiple kisses of each wrapper color), and in fact, some resources do refer to the equation that way. However, this scenario does not repeat in the same way as the previous example.
• The term repetition is used for a series of events or set of objects where the outcomes are allowed to repeat after being selected (i.e., the selection pool does not change because the selected outcome is replaced by another outcome of the same type).
• In the current scenario, the selected individual kisses do not “repeat” because they are not returned to the bag; as each kiss is removed, the available selection pool decreases. Instead, there are simply multiple indistinguishable items in the selection pool before the selections begin. This makes a difference in terms of how many items of each type (i.e., kisses of each color) are available to be selected.
• If our kisses were labeled so that each kiss was distinguishable from the others, then our total number of permutations would be calculated as described previously: $W=72 !$ However, this calculation will count $red_A$ followed by $red_B$ as a different sequence from $red_B$ followed by $red_A$. These two outcomes are indistinguishable without the A and B labels, so the number of unique sequences must be determined by factoring out the number of identical or redundant arrangements.
• The number of possible arrangements for each individual type of kiss is: $n_{r e d} =25 !$ $n_{s i l v e r} =23 !$ $n_{g r e e n} =24 !$ Therefore, our number of unique sequences or arrangements is: $W=\frac{N !}{n_{red} ! n_{silver} ! n_{green} !}=\frac{72 !}{25 ! 23 ! 24 !}$
• Another way to refer to this type of permutation is a multiset permutation because the overall set is composed of smaller subsets of indistinguishable outcomes. The general expression for a multiset permutation is: $W=\frac{N !}{n_{1} ! n_{2} ! \cdots n_{t} !}$ where each $n_i$ is the number of possible outcomes for each selection type, $i$, and the total number of outcomes or objects, $N$, is: $N=\sum_{i=1}^{t} n_{i}$
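The multiset permutation formula can be wrapped in a small helper and checked against the Hershey's Kisses counts (the function name is illustrative):

```python
from math import factorial

def multiset_permutations(*group_sizes):
    """Distinct orderings of a multiset: N! / (n1! n2! ... nt!)."""
    result = factorial(sum(group_sizes))
    for n in group_sizes:
        result //= factorial(n)   # each division is exact
    return result

# Sanity check: the multiset {A, A, B} has 3 orderings (AAB, ABA, BAA).
assert multiset_permutations(2, 1) == 3

# 25 red, 23 silver, and 24 green kisses (72 total):
w = multiset_permutations(25, 23, 24)
assert w == factorial(72) // (factorial(25) * factorial(23) * factorial(24))
```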
Combinations
Combinations without repetition
In the spring semester, you decide you’ve had enough CookOut milkshakes for a while. You buy Blue Bell ice cream from the grocery store instead! You stock 5 flavors: cookies and cream (CC), mint chocolate chip (M), strawberry (S), Dutch chocolate (DC), and banana pudding (B). If you always get three scoops of different flavors, how many different combinations are possible?
• Let’s list our five available flavors of ice cream:
Assume we choose B for the first scoop. If we cannot have each flavor more than once (i.e., without repetition), then B is no longer an option for the remaining scoops. Therefore, the second scoop only has four choices available to it:
We choose S for the second scoop, and remove it from the available flavors:
Lastly, we choose DC for the third scoop:
• Thus far, this process looks like a permutation without repeats, and the permutation equation would yield: ${ }_{5} P_{3}=\frac{5 !}{(5-3) !}=\frac{5 !}{2 !}=5 \times 4 \times 3=60$
However, the combination {B, S, DC} is indistinguishable from other combinations of B, S, and DC scooped in a different order. Once in the bowl, order does not matter. The permutation equation would count each sequence of B, S, and DC separately, so we need to correct for the number of duplicate or indistinguishable combinations.
We can illustrate this point further by looking at the example of B, S, and DC in more details:
There are six distinguishable sequences that produce indistinguishable combinations. We could also calculate this value using the permutation without repetition for the number of scoops: $\ _{3} P_{3}=\frac{3 !}{(3-3) !}=\frac{3 !}{0 !}=3 !=6$
• To determine the number of unique or distinguishable combinations (where order does not matter), we need to divide our number of distinguishable sequences by the number of sequences that produce indistinguishable combinations: $W=\frac{60}{6}=10$ $W=\frac{ _{5} P_{3}}{3 !}=\frac{\frac{5 !}{(5-3) !}}{3 !}=\frac{5 !}{(5-3) ! 3 !}=\frac{5 !}{2 ! 3 !}=10$
• The general equation for this combination without repetition, which is referred to as “$N$, choose $r$” (or in our case “5 ice cream flavors, choose 3”) is:
$W=\ _{N} C_{r}=\left(\begin{array}{l} N \\ r \end{array}\right)=\frac{ _{N} P_{r}}{r !}=\frac{N !}{(N-r) ! r !}$
where $\ _{N} C_{r}$ and $\left(\begin{array}{l} N \\ r \end{array}\right)$ are common notations for combination without repetition.
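The ice cream counts can be confirmed with the standard library, including the "divide out the orderings" step:

```python
from itertools import combinations
from math import comb, factorial, perm

# 5 flavors, choose 3 scoops of different flavors, order irrelevant:
assert comb(5, 3) == 10
# 60 ordered selections, divided by the 3! = 6 orderings of each bowl:
assert comb(5, 3) == perm(5, 3) // factorial(3)

# Cross-check by enumerating the unordered selections directly:
flavors = ["CC", "M", "S", "DC", "B"]
assert len(list(combinations(flavors, 3))) == 10
```

`math.comb(N, r)` (Python 3.8+) implements the $\frac{N!}{(N-r)!\,r!}$ formula above.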
Combinations with repetition
Instead of always doing three different scoops, how many different combinations are possible if you do include repetitions (i.e. combinations with more than one scoop of a particular flavor)?
• This concept is called combination with repetition or combination with replacement. In this case, it’s easier to think about as replacement.
Let’s list our five flavors of ice cream again:
Assume we choose B for the first scoop. However, unlike before, after B has been taken from the available flavors, we will replace it with another B, leaving the selection options the same:
We choose a second scoop of B next, replacing it again with another B:
Lastly, we choose DC for the last scoop:
• What you can see is that, if you have repetition or replacements, you do not end up with “5, choose 3.” Instead, because of the replacements after the first two scoops, you had 7 total available choices during the selection process, making “7, choose 3.” $W=\ _{7} C_{3}=\frac{7 !}{4 ! 3 !}=35$
• Considering this more generally: regardless of the number of $N_{\text{initial}}$ options you started with, if you allow replacements, you will have added $(r-1)$ additional options (one fewer than the number of selections) by your final choice. Your final number of available options therefore becomes: $N_{\text {final}}=N_{\text {initial}}+(r-1)$ We can take this expression for the final number of available choices and substitute it into the combination formula from the previous example: $\ _{N} C_{r}=\frac{N !}{(N-r) ! r !}=\frac{N_{\text {final}} !}{\left(N_{\text {final}}-r\right) ! r !}=\frac{\left(N_{\text {initial}}+(r-1)\right) !}{\left(N_{\text {initial}}+(r-1)-r\right) ! r !}=\frac{\left(N_{\text {initial}}+(r-1)\right) !}{\left(N_{\text {initial}}-1\right) ! r !}$ We can rearrange this equation as: $\left(\left(\begin{array}{l} N \\ r \end{array}\right)\right)=\frac{\left(\left(N_{\text {initial}}-1\right)+r\right) !}{\left(N_{\text {initial}}-1\right) ! r !}$ where the notation $\left(\left(\begin{array}{l} N \\ r \end{array}\right)\right)$ denotes a combination with replacement and is read "$N$, multichoose $r$."
• This new expression resembles a multiset permutation for two types of indistinguishable items: $(N_{\text{initial}}-1)$ indistinguishable outcomes of one type and $r$ of the other. This is where the line/dot representation comes from.
We will develop the system as having $(N_{\text {initial}}-1 )$ lines and $r$ dots. Therefore, for our 5 ice cream flavors and 3 scoops, we will use 4 lines and 3 dots: $W=\frac{(4+3) !}{4 ! 3 !}=35$
• While this equation gives the correct answer, it may still seem strange to recast the problem in this way. Let’s look at it one more time: pretend you had an ice cream scooping machine, and you gave this machine instructions in a code of lines and dots. Each dot represents one scoop of ice cream, and each line separates (or partitions) one flavor from another. In order to communicate the combination of {B, B, DC} to the machine, you send the following code (remembering that order does not matter):
If you wanted {CC, M, DC}, it would be:
And if you wanted all CC:
To demonstrate that the line/dot method works generally, recognize that, for $N_{\text {initial}}$ options, there will always be $(N_{\text {initial}}-1 )$ lines needed to separate them from one another. For $r$ choices, you will always need a total of $r$ dots, one for each choice. It works to use three dots placed among four lines that separate the five flavors!
Therefore, the "$N$, multichoose $r$" combination with replacement equation is equal to the multiset permutation for $(N_{\text {initial}}-1 )$ lines and $r$ dots, where the lines and dots are indistinguishable: $\left(\left(\begin{array}{l} N \\ r \end{array}\right)\right)=\frac{((N-1)+r) !}{(N-1) ! r !}$
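The line/dot ("stars and bars") count can be checked two ways: through the formula, and by enumerating every possible bowl directly (the helper function name is illustrative):

```python
from itertools import combinations_with_replacement
from math import comb

def multichoose(n, r):
    """Combinations with replacement: C((n - 1) + r, r), i.e. lines and dots."""
    return comb((n - 1) + r, r)

flavors = ["CC", "M", "S", "DC", "B"]
# 5 flavors, 3 scoops, repeats allowed, order irrelevant: C(7, 3) = 35
assert multichoose(len(flavors), 3) == 35

# Cross-check by enumerating every unordered 3-scoop bowl with repeats:
bowls = list(combinations_with_replacement(flavors, 3))
assert len(bowls) == 35
```

`itertools.combinations_with_replacement` generates exactly the multisets that "$N$, multichoose $r$" counts.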
• 5.1: Energy Basics
Energy is the capacity to do work (applying a force to move matter). Heat is energy that is transferred between objects at different temperatures; it flows from a high to a low temperature. Chemical and physical processes can absorb heat (endothermic) or release heat (exothermic). The SI unit of energy, heat, and work is the joule (J). Specific heat and heat capacity are measures of the energy needed to change the temperature of a substance or object.
05: Fundamental 4 - Heat Transfer
Learning Objectives
• Define energy, distinguish types of energy, and describe the nature of energy changes that accompany chemical and physical changes
• Distinguish the related properties of heat, thermal energy, and temperature
• Define and distinguish specific heat and heat capacity, and describe the physical implications of both
• Perform calculations involving heat, specific heat, and temperature change
Chemical changes and their accompanying changes in energy are important parts of our everyday world (Figure $1$). The macronutrients in food (proteins, fats, and carbohydrates) undergo metabolic reactions that provide the energy to keep our bodies functioning. We burn a variety of fuels (gasoline, natural gas, coal) to produce energy for transportation, heating, and the generation of electricity. Industrial chemical reactions use enormous amounts of energy to produce raw materials (such as iron and aluminum). Energy is then used to manufacture those raw materials into useful products, such as cars, skyscrapers, and bridges.
Over 90% of the energy we use comes originally from the sun. Every day, the sun provides the earth with almost 10,000 times the amount of energy necessary to meet all of the world’s energy needs for that day. Our challenge is to find ways to convert and store incoming solar energy so that it can be used in reactions or chemical processes that are both convenient and nonpolluting. Plants and many bacteria capture solar energy through photosynthesis. We release the energy stored in plants when we burn wood or plant products such as ethanol. We also use this energy to fuel our bodies by eating food that comes directly from plants or from animals that got their energy by eating plants. Burning coal and petroleum also releases stored solar energy: These fuels are fossilized plant and animal matter.
This chapter will introduce the basic ideas of an important area of science concerned with the amount of heat absorbed or released during chemical and physical changes—an area called thermochemistry. The concepts introduced in this chapter are widely used in almost all scientific and technical fields. Food scientists use them to determine the energy content of foods. Biologists study the energetics of living organisms, such as the metabolic combustion of sugar into carbon dioxide and water. The oil, gas, and transportation industries, renewable energy providers, and many others endeavor to find better methods to produce energy for our commercial and personal needs. Engineers strive to improve energy efficiency, find better ways to heat and cool our homes, refrigerate our food and drinks, and meet the energy and cooling needs of computers and electronics, among other applications. Understanding thermochemical principles is essential for chemists, physicists, biologists, geologists, every type of engineer, and just about anyone who studies or does any kind of science.
Energy
Energy can be defined as the capacity to supply heat or do work. One type of work (w) is the process of causing matter to move against an opposing force. For example, we do work when we inflate a bicycle tire—we move matter (the air in the pump) against the opposing force of the air surrounding the tire.
Like matter, energy comes in different types. One scheme classifies energy into two types: potential energy, the energy an object has because of its relative position, composition, or condition, and kinetic energy, the energy that an object possesses because of its motion. Water at the top of a waterfall or dam has potential energy because of its position; when it flows downward through generators, it has kinetic energy that can be used to do work and produce electricity in a hydroelectric plant (Figure $2$). A battery has potential energy because the chemicals within it can produce electricity that can do work.
Energy can be converted from one form into another, but all of the energy present before a change occurs always exists in some form after the change is completed. This observation is expressed in the law of conservation of energy: during a chemical or physical change, energy can be neither created nor destroyed, although it can be changed in form. (This is also one version of the first law of thermodynamics, as you will learn later.)
When one substance is converted into another, there is always an associated conversion of one form of energy into another. Heat is usually released or absorbed, but sometimes the conversion involves light, electrical energy, or some other form of energy. For example, chemical energy (a type of potential energy) is stored in the molecules that compose gasoline. When gasoline is combusted within the cylinders of a car’s engine, the rapidly expanding gaseous products of this chemical reaction generate mechanical energy (a type of kinetic energy) when they move the cylinders’ pistons.
According to the law of conservation of matter (seen in an earlier chapter), there is no detectable change in the total amount of matter during a chemical change. When chemical reactions occur, the energy changes are relatively modest and the mass changes are too small to measure, so the laws of conservation of matter and energy hold well. However, in nuclear reactions, the energy changes are much larger (by factors of a million or so), the mass changes are measurable, and matter-energy conversions are significant. This will be examined in more detail in a later chapter on nuclear chemistry. To encompass both chemical and nuclear changes, we combine these laws into one statement: The total quantity of matter and energy in the universe is fixed.
Thermal Energy, Temperature, and Heat
Thermal energy is kinetic energy associated with the random motion of atoms and molecules. Temperature is a quantitative measure of “hot” or “cold.” When the atoms and molecules in an object are moving or vibrating quickly, they have a higher average kinetic energy (KE), and we say that the object is “hot.” When the atoms and molecules are moving slowly, they have lower KE, and we say that the object is “cold” (Figure $3$). Assuming that no chemical reaction or phase change (such as melting or vaporizing) occurs, increasing the amount of thermal energy in a sample of matter will cause its temperature to increase. And, assuming that no chemical reaction or phase change (such as condensation or freezing) occurs, decreasing the amount of thermal energy in a sample of matter will cause its temperature to decrease.
Most substances expand as their temperature increases and contract as their temperature decreases. This property can be used to measure temperature changes, as shown in Figure $4$. The operation of many thermometers depends on the expansion and contraction of substances in response to temperature changes.
Heat (q) is the transfer of thermal energy between two bodies at different temperatures. Heat flow (a redundant term, but one commonly used) increases the thermal energy of one body and decreases the thermal energy of the other. Suppose we initially have a high temperature (and high thermal energy) substance (H) and a low temperature (and low thermal energy) substance (L). The atoms and molecules in H have a higher average KE than those in L. If we place substance H in contact with substance L, the thermal energy will flow spontaneously from substance H to substance L. The temperature of substance H will decrease, as will the average KE of its molecules; the temperature of substance L will increase, along with the average KE of its molecules. Heat flow will continue until the two substances are at the same temperature (Figure $5$).
Matter undergoing chemical reactions and physical changes can release or absorb heat. A change that releases heat is called an exothermic process. For example, the combustion reaction that occurs when using an oxyacetylene torch is an exothermic process—this process also releases energy in the form of light as evidenced by the torch’s flame (Figure $\PageIndex{6a}$). A reaction or change that absorbs heat is an endothermic process. A cold pack used to treat muscle strains provides an example of an endothermic process. When the substances in the cold pack (water and a salt like ammonium nitrate) are brought together, the resulting process absorbs heat, leading to the sensation of cold.
Measuring Energy and Heat Capacity
Historically, energy was measured in units of calories (cal). A calorie is the amount of energy required to raise one gram of water by 1 degree C (1 kelvin). However, this quantity depends on the atmospheric pressure and the starting temperature of the water. The ease of measurement of energy changes in calories has meant that the calorie is still frequently used. The Calorie (with a capital C), or large calorie, commonly used in quantifying food energy content, is a kilocalorie. The SI unit of heat, work, and energy is the joule. A joule (J) is defined as the amount of energy used when a force of 1 newton moves an object 1 meter. It is named in honor of the English physicist James Prescott Joule. One joule is equivalent to 1 kg m2/s2, which is also called 1 newton–meter. A kilojoule (kJ) is 1000 joules. To standardize its definition, 1 calorie has been set to equal 4.184 joules.
We now introduce two concepts useful in describing heat flow and temperature change. The heat capacity (C) of a body of matter is the quantity of heat (q) it absorbs or releases when it experiences a temperature change (ΔT) of 1 degree Celsius (or equivalently, 1 kelvin)
$C=\dfrac{q}{ΔT} \label{5.2.1}$
Heat capacity is determined by both the type and amount of substance that absorbs or releases heat. It is therefore an extensive property—its value is proportional to the amount of the substance.
For example, consider the heat capacities of two cast iron frying pans. The heat capacity of the large pan is five times greater than that of the small pan because, although both are made of the same material, the mass of the large pan is five times greater than the mass of the small pan. More mass means more atoms are present in the larger pan, so it takes more energy to make all of those atoms vibrate faster. The heat capacity of the small cast iron frying pan is found by observing that it takes 18,140 J of energy to raise the temperature of the pan by 50.0 °C:
$C_{\text{small pan}}=\mathrm{\dfrac{18,140\; J}{50.0\; °C} =363\; J/°C} \label{5.2.2}$
The larger cast iron frying pan, while made of the same substance, requires 90,700 J of energy to raise its temperature by 50.0 °C. The larger pan has a (proportionally) larger heat capacity because the larger amount of material requires a (proportionally) larger amount of energy to yield the same temperature change:
$C_{\text{large pan}}=\mathrm{\dfrac{90,700\; J}{50.0\;°C}=1814\; J/°C} \label{5.2.3}$
The specific heat capacity (c) of a substance, commonly called its “specific heat,” is the quantity of heat required to raise the temperature of 1 gram of a substance by 1 degree Celsius (or 1 kelvin):
$c = \dfrac{q}{\mathrm{m\Delta T}} \label{5.2.4}$
Specific heat capacity depends only on the kind of substance absorbing or releasing heat. It is an intensive property—the type, but not the amount, of the substance is all that matters. For example, the small cast iron frying pan has a mass of 808 g. The specific heat of iron (the material used to make the pan) is therefore:
$c_\ce{iron}=\mathrm{\dfrac{18,140\; J}{(808\; g)(50.0\;°C)} = 0.449\; J/g\; °C} \label{5.2.5}$
The large frying pan has a mass of 4040 g. Using the data for this pan, we can also calculate the specific heat of iron:
$c_\ce{iron}=\mathrm{\dfrac{90,700\; J}{(4,040\; g)(50.0\;°C)}=0.449\; J/g\; °C} \label{5.2.6}$
Although the large pan is more massive than the small pan, since both are made of the same material, they both yield the same value for specific heat (for the material of construction, iron). Note that specific heat is measured in units of energy per temperature per mass and is an intensive property, being derived from a ratio of two extensive properties (heat and mass). The molar heat capacity, also an intensive property, is the heat capacity per mole of a particular substance and has units of J/mol °C (Figure $7$).
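The extensive/intensive distinction can be checked numerically with the frying-pan data above (a minimal sketch; the function names are ours):

```python
# Heat capacity C = q/ΔT is extensive; specific heat c = q/(m·ΔT) is intensive.
def heat_capacity(q_joules, delta_t):
    """C = q / ΔT, in J/°C."""
    return q_joules / delta_t

def specific_heat(q_joules, mass_g, delta_t):
    """c = q / (m·ΔT), in J/g·°C."""
    return q_joules / (mass_g * delta_t)

C_small = heat_capacity(18_140, 50.0)         # 362.8 J/°C
C_large = heat_capacity(90_700, 50.0)         # 1814.0 J/°C (five times larger)
c_small = specific_heat(18_140, 808, 50.0)    # ≈ 0.449 J/g·°C
c_large = specific_heat(90_700, 4_040, 50.0)  # ≈ 0.449 J/g·°C
print(round(c_small, 3), round(c_large, 3))   # both 0.449: c is intensive
```

The two pans give very different heat capacities but the same specific heat, because dividing by the mass removes the dependence on the amount of iron.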
Liquid water has a relatively high specific heat (about 4.2 J/g °C); most metals have much lower specific heats (usually less than 1 J/g °C). The specific heat of a substance varies somewhat with temperature. However, this variation is usually small enough that we will treat specific heat as constant over the range of temperatures that will be considered in this chapter. Specific heats of some common substances are listed in Table $1$.
Table $1$: Specific Heats of Common Substances at 25 °C and 1 bar
Substance Symbol (state) Specific Heat (J/g °C)
helium He(g) 5.193
water H2O(l) 4.184
ethanol C2H6O(l) 2.376
ice H2O(s) 2.093 (at −10 °C)
water vapor H2O(g) 1.864
nitrogen N2(g) 1.040
air 1.007
oxygen O2(g) 0.918
aluminum Al(s) 0.897
carbon dioxide CO2(g) 0.853
argon Ar(g) 0.522
iron Fe(s) 0.449
copper Cu(s) 0.385
lead Pb(s) 0.130
gold Au(s) 0.129
silicon Si(s) 0.712
If we know the mass of a substance and its specific heat, we can determine the amount of heat, q, entering or leaving the substance by measuring the temperature change before and after the heat is gained or lost:
\begin{align*} q &= \ce{(specific\: heat)×(mass\: of\: substance)×(temperature\: change)} \label{5.2.7} \\[4pt] &=c×m×ΔT \\[4pt] &=c×m×(T_\ce{final}−T_\ce{initial}) \end{align*}
In this equation, $c$ is the specific heat of the substance, $m$ is its mass, and $ΔT$ (which is read “delta T”) is the temperature change, $T_\ce{final}−T_\ce{initial}$. If a substance gains thermal energy, its temperature increases, its final temperature is higher than its initial temperature, $T_\ce{final}−T_\ce{initial}$ has a positive value, and the value of $q$ is positive. If a substance loses thermal energy, its temperature decreases, the final temperature is lower than the initial temperature, $T_\ce{final}−T_\ce{initial}$ has a negative value, and the value of $q$ is negative.
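The sign convention can be demonstrated with a small sketch (the masses and temperatures here are illustrative values, not from the text):

```python
def heat_flow(c, mass, t_initial, t_final):
    """q = c·m·(T_final − T_initial); q > 0 means the substance absorbs heat,
    q < 0 means it releases heat."""
    return c * mass * (t_final - t_initial)

# 100 g of water (c = 4.184 J/g·°C) warming from 20 °C to 25 °C absorbs heat:
print(heat_flow(4.184, 100.0, 20.0, 25.0))   # +2092.0 J
# The same water cooling back from 25 °C to 20 °C releases the same amount:
print(heat_flow(4.184, 100.0, 25.0, 20.0))   # -2092.0 J
```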
Example $1$: Measuring Heat
A flask containing $\mathrm{8.0 \times 10^2\; g}$ of water is heated, and the temperature of the water increases from 21 °C to 85 °C. How much heat did the water absorb?
Solution
To answer this question, consider these factors:
• the specific heat of the substance being heated (in this case, water)
• the amount of substance being heated (in this case, 800 g)
• the magnitude of the temperature change (in this case, from 21 °C to 85 °C).
The specific heat of water is 4.184 J/g °C, so to heat 1 g of water by 1 °C requires 4.184 J. We note that since 4.184 J is required to heat 1 g of water by 1 °C, we will need 800 times as much to heat 800 g of water by 1 °C. Finally, we observe that since 4.184 J are required to heat 1 g of water by 1 °C, we will need 64 times as much to heat it by 64 °C (that is, from 21 °C to 85 °C).
This can be summarized using the equation:
\begin{align*} q&=c×m×ΔT \\[4pt] &=c×m×(T_\ce{final}−T_\ce{initial}) \\[4pt] &=\mathrm{(4.184\:J/\cancel{g}°C)×(800\:\cancel{g})×(85−21)°C} \\[4pt] &=\mathrm{(4.184\:J/\cancel{g}°\cancel{C})×(800\:\cancel{g})×(64)°\cancel{C}} \\[4pt] &=\mathrm{210,000\: J(=210\: kJ)} \end{align*} \nonumber
Because the temperature increased, the water absorbed heat and $q$ is positive.
Exercise $1$
How much heat, in joules, must be added to a $\mathrm{5.00 \times 10^2 \;g}$ iron skillet to increase its temperature from 25 °C to 250 °C? The specific heat of iron is 0.449 J/g °C (Table $1$).
Answer
$\mathrm{5.05 \times 10^4\; J}$
Note that the relationship between heat, specific heat, mass, and temperature change can be used to determine any of these quantities (not just heat) if the other three are known or can be deduced.
Example $2$: Determining Other Quantities
A piece of unknown metal weighs 348 g. When the metal piece absorbs 6.64 kJ of heat, its temperature increases from 22.4 °C to 43.6 °C. Determine the specific heat of this metal (which might provide a clue to its identity).
Solution
Since mass, heat, and temperature change are known for this metal, we can determine its specific heat using the relationship:
\begin{align*} q&=c \times m \times \Delta T \\[4pt] &=c \times m \times (T_\ce{final}−T_\ce{initial}) \end{align*} \nonumber
Substituting the known values:
$6,640\; \ce J=c \times \mathrm{(348\; g) \times (43.6 − 22.4)\; °C} \nonumber$
Solving:
$c=\mathrm{\dfrac{6,640\; J}{(348\; g) \times (21.2°C)} =0.900\; J/g\; °C} \nonumber$
Comparing this value with the values in Table $1$, this value matches the specific heat of aluminum, which suggests that the unknown metal may be aluminum.
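The table-lookup step of this identification can be sketched as follows (only a subset of Table 1 is hard-coded here, an assumption for brevity):

```python
# A few specific heats (J/g·°C) taken from Table 1 above.
SPECIFIC_HEATS = {"aluminum": 0.897, "iron": 0.449, "copper": 0.385,
                  "lead": 0.130, "gold": 0.129}

def identify_metal(q_joules, mass_g, t_initial, t_final):
    """Compute c = q/(m·ΔT) and return it with the closest tabulated metal."""
    c = q_joules / (mass_g * (t_final - t_initial))
    best = min(SPECIFIC_HEATS, key=lambda m: abs(SPECIFIC_HEATS[m] - c))
    return round(c, 3), best

print(identify_metal(6_640, 348, 22.4, 43.6))  # (0.9, 'aluminum')
```

Running the same function on the data of the exercise below points to iron, consistent with the answer given there.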
Exercise $2$
A piece of unknown metal weighs 217 g. When the metal piece absorbs 1.43 kJ of heat, its temperature increases from 24.5 °C to 39.1 °C. Determine the specific heat of this metal, and predict its identity.
Answer
$c = \mathrm{0.45 \;J/g \;°C}$; the metal is likely to be iron from checking Table $1$.
Solar Thermal Energy Power Plants
The sunlight that reaches the earth contains thousands of times more energy than we presently capture. Solar thermal systems provide one possible solution to the problem of converting energy from the sun into energy we can use. Large-scale solar thermal plants have different design specifics, but all concentrate sunlight to heat some substance; the heat “stored” in that substance is then converted into electricity.
The Solana Generating Station in Arizona’s Sonora Desert produces 280 megawatts of electrical power. It uses parabolic mirrors that focus sunlight on pipes filled with a heat transfer fluid (HTF) (Figure $8$). The HTF then does two things: It turns water into steam, which spins turbines, which in turn produce electricity, and it melts and heats a mixture of salts, which functions as a thermal energy storage system. After the sun goes down, the molten salt mixture can then release enough of its stored heat to produce steam to run the turbines for 6 hours. Molten salts are used because they possess a number of beneficial properties, including high heat capacities and thermal conductivities.
The 377-megawatt Ivanpah Solar Generating System, located in the Mojave Desert in California, is the largest solar thermal power plant in the world (Figure $9$). Its 170,000 mirrors focus huge amounts of sunlight on three water-filled towers, producing steam at over 538 °C that drives electricity-producing turbines. It produces enough energy to power 140,000 homes. Water is used as the working fluid because of its large heat capacity and heat of vaporization.
Summary
Energy is the capacity to do work (applying a force to move matter). Kinetic energy (KE) is the energy of motion; potential energy is energy due to relative position, composition, or condition. When energy is converted from one form into another, energy is neither created nor destroyed (law of conservation of energy or first law of thermodynamics). Matter has thermal energy due to the KE of its molecules and temperature that corresponds to the average KE of its molecules. Heat is energy that is transferred between objects at different temperatures; it flows from a high to a low temperature. Chemical and physical processes can absorb heat (endothermic) or release heat (exothermic). The SI unit of energy, heat, and work is the joule (J). Specific heat and heat capacity are measures of the energy needed to change the temperature of a substance or object. The amount of heat absorbed or released by a substance depends directly on the type of substance, its mass, and the temperature change it undergoes.
Key Equations
• $q=c×m×ΔT=c×m×(T_\ce{final}−T_\ce{initial})$
Glossary
calorie (cal)
unit of heat or other energy; the amount of energy required to raise the temperature of 1 gram of water by 1 degree Celsius; 1 cal is defined as 4.184 J
endothermic process
chemical reaction or physical change that absorbs heat
energy
capacity to supply heat or do work
exothermic process
chemical reaction or physical change that releases heat
heat (q)
transfer of thermal energy between two bodies
heat capacity (C)
extensive property of a body of matter that represents the quantity of heat required to increase its temperature by 1 degree Celsius (or 1 kelvin)
joule (J)
SI unit of energy; 1 joule is the kinetic energy of an object with a mass of 2 kilograms moving with a velocity of 1 meter per second, 1 J = 1 kg m²/s² and 4.184 J = 1 cal
kinetic energy
energy of a moving body, in joules, equal to $\dfrac{1}{2}mv^2$ (where m = mass and v = velocity)
potential energy
energy of a particle or system of particles derived from relative position, composition, or condition
specific heat capacity (c)
intensive property of a substance that represents the quantity of heat required to raise the temperature of 1 gram of the substance by 1 degree Celsius (or 1 kelvin)
temperature
intensive property of matter that is a quantitative measure of “hotness” and “coldness”
thermal energy
kinetic energy associated with the random motion of atoms and molecules
thermochemistry
study of measuring the amount of heat absorbed or released during a chemical reaction or a physical change
work (w)
energy transfer due to changes in external, macroscopic variables such as pressure and volume; or causing matter to move against an opposing force
• 6.1: Entropy
Entropy (S) is a state function that can be related to the number of microstates for a system (the number of ways the system can be arranged) and to the ratio of reversible heat to kelvin temperature. It may be interpreted as a measure of the dispersal or distribution of matter and/or energy in a system, and it is often described as representing the “disorder” of the system. For a given substance at a given temperature, \(S_{solid} < S_{liquid} < S_{gas}\).
06: Fundamental 5 - Entropy
Learning Objectives
• Define entropy
• Explain the relationship between entropy and the number of microstates
• Predict the sign of the entropy change for chemical and physical processes
In 1824, at the age of 28, Nicolas Léonard Sadi Carnot (Figure $2$) published the results of an extensive study regarding the efficiency of steam heat engines. In a later review of Carnot’s findings, Rudolf Clausius introduced a new thermodynamic property that relates the spontaneous heat flow accompanying a process to the temperature at which the process takes place. This new property was expressed as the ratio of the reversible heat ($q_{rev}$) and the kelvin temperature ($T$). The term reversible process refers to a process that takes place at such a slow rate that it is always at equilibrium and its direction can be changed (it can be “reversed”) by an infinitesimally small change in some condition. Note that the idea of a reversible process is a formalism required to support the development of various thermodynamic concepts; no real processes are truly reversible, rather they are classified as irreversible.
Similar to other thermodynamic properties, this new quantity is a state function, and so its change depends only upon the initial and final states of a system. In 1865, Clausius named this property entropy (S) and defined its change for any process as the following:
$ΔS=\dfrac{q_\ce{rev}}{T} \label{Eq1}$
The entropy change for a real, irreversible process is then equal to that for the theoretical reversible process that involves the same initial and final states.
Entropy and Microstates
Following the work of Carnot and Clausius, Ludwig Boltzmann developed a molecular-scale statistical model that related the entropy of a system to the number of microstates possible for the system. A microstate is a specific configuration of the locations and energies of the atoms or molecules that comprise a system. Boltzmann expressed the relationship between entropy and the number of microstates as:
$S=k \ln W \label{Eq2}$
Here $k$ is the Boltzmann constant, which has a value of $1.38 \times 10^{−23}\, J/K$, and $W$ is the number of microstates possible for the system.
As for other state functions, the change in entropy for a process is the difference between its final ($S_f$) and initial ($S_i$) values:
\begin{align} ΔS &=S_\ce{f}−S_\ce{i} \nonumber \\[4pt] &=k \ln W_\ce{f} − k \ln W_\ce{i} \nonumber \\[4pt] &=k \ln\dfrac{W_\ce{f}}{W_\ce{i}} \label{Eq2a} \end{align}
For processes involving an increase in the number of microstates of the system, $W_f > W_i$, the entropy of the system increases, $ΔS > 0$. Conversely, processes that reduce the number of microstates in the system, $W_f < W_i$, yield a decrease in system entropy, $ΔS < 0$. This molecular-scale interpretation of entropy provides a link to the probability that a process will occur as illustrated in the next paragraphs.
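The sign rule can be checked directly with a minimal sketch (using the Boltzmann constant value quoted above; the microstate counts 6 and 1 anticipate the worked example later in this section):

```python
import math

k_B = 1.38e-23  # Boltzmann constant, J/K (value used in this chapter)

def entropy_change(W_final, W_initial):
    """ΔS = k ln(W_f / W_i), from the microstate counts of the two states."""
    return k_B * math.log(W_final / W_initial)

print(f"{entropy_change(6, 1):.3e}")  # 2.473e-23 J/K: more microstates, ΔS > 0
print(entropy_change(1, 6) < 0)       # True: fewer microstates, ΔS < 0
```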
Consider the general case of a system comprised of N particles distributed among n boxes. The number of microstates possible for such a system is $n^N$. For example, distributing four particles among two boxes will result in $2^4 = 16$ different microstates as illustrated in Figure $2$. Microstates with equivalent particle arrangements (not considering individual particle identities) are grouped together and are called distributions (sometimes called macrostates or configurations). The probability that a system will exist with its components in a given distribution is proportional to the number of microstates within the distribution. Since entropy increases logarithmically with the number of microstates, the most probable distribution is therefore the one of greatest entropy.
For this system, the most probable configuration is one of the six microstates associated with distribution (c) where the particles are evenly distributed between the boxes, that is, a configuration of two particles in each box. The probability of finding the system in this configuration is
$\dfrac{6}{16} = \dfrac{3}{8}$
The least probable configuration of the system is one in which all four particles are in one box, corresponding to distributions (a) and (e), each with a probability of
$\dfrac{1}{16}$
The probability of finding all particles in only one box (either the left box or right box) is then
$\left(\dfrac{1}{16}+\dfrac{1}{16}\right)=\dfrac{2}{16} = \dfrac{1}{8}$
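These probabilities can be confirmed by brute-force enumeration of the 16 microstates (a short sketch; the labels L/R for the two boxes are our convention):

```python
from itertools import product
from collections import Counter

# Each of 4 distinguishable particles independently occupies box L or box R.
microstates = list(product("LR", repeat=4))
by_distribution = Counter(state.count("L") for state in microstates)

print(len(microstates))                         # 16 microstates in all
print(by_distribution[2])                       # 6 -> P(even split) = 6/16 = 3/8
print(by_distribution[4] + by_distribution[0])  # 2 -> P(all in one box) = 2/16 = 1/8
```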
As you add more particles to the system, the number of possible microstates increases exponentially ($2^N$ for the two-box case). A macroscopic (laboratory-sized) system would typically consist of moles of particles ($N \sim 10^{23}$), and the corresponding number of microstates would be staggeringly huge. Regardless of the number of particles in the system, however, the distributions in which roughly equal numbers of particles are found in each box are always the most probable configurations.
The most probable distribution is therefore the one of greatest entropy.
The previous description of an ideal gas expanding into a vacuum is a macroscopic example of this particle-in-a-box model. For this system, the most probable distribution is confirmed to be the one in which the matter is most uniformly dispersed or distributed between the two flasks. The spontaneous process whereby the gas contained initially in one flask expands to fill both flasks equally therefore yields an increase in entropy for the system.
A similar approach may be used to describe the spontaneous flow of heat. Consider a system consisting of two objects, each containing two particles, and two units of energy (represented as “*”) in Figure $3$. The hot object is comprised of particles A and B and initially contains both energy units. The cold object is comprised of particles C and D and initially has no energy units. Distribution (a) shows the three microstates possible for the initial state of the system, with both units of energy contained within the hot object. If one of the two energy units is transferred, the result is distribution (b) consisting of four microstates. If both energy units are transferred, the result is distribution (c) consisting of three microstates. And so, we may describe this system by a total of ten microstates. The probability that the heat does not flow when the two objects are brought into contact, that is, that the system remains in distribution (a), is $\frac{3}{10}$. More likely is the flow of heat to yield one of the other two distributions, the combined probability being $\frac{7}{10}$. The most likely result is the flow of heat to yield the uniform dispersal of energy represented by distribution (b), the probability of this configuration being $\frac{4}{10}$. As for the previous example of matter dispersal, extrapolating this treatment to macroscopic collections of particles dramatically increases the probability of the uniform distribution relative to the other distributions. This supports the common observation that placing hot and cold objects in contact results in spontaneous heat flow that ultimately equalizes the objects’ temperatures. And, again, this spontaneous process is also characterized by an increase in system entropy.
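The ten microstates of this two-object model can likewise be enumerated (a sketch; treating the two energy units as indistinguishable, a microstate is an unordered pair of particle labels):

```python
from itertools import combinations_with_replacement
from collections import Counter

# Two indistinguishable energy units placed on particles A, B (hot object)
# or C, D (cold object); a microstate is an unordered pair of particle labels.
microstates = list(combinations_with_replacement("ABCD", 2))

def units_in_hot(state):
    """Count how many energy units sit on the hot object (particles A or B)."""
    return sum(p in "AB" for p in state)

dist = Counter(units_in_hot(s) for s in microstates)
print(len(microstates))           # 10 microstates in total
print(dist[2], dist[1], dist[0])  # 3 4 3 -> probabilities 3/10, 4/10, 3/10
```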
Example $1$: Determination of ΔS
Consider the system shown here. What is the change in entropy for a process that converts the system from distribution (a) to (c)?
Solution
We are interested in the following change:
The initial number of microstates is one, the final six:
\begin{align} ΔS &=k \ln\dfrac{W_\ce{c}}{W_\ce{a}} \nonumber \\[4pt] &= 1.38×10^{−23}\:J/K × \ln\dfrac{6}{1} \nonumber \\[4pt] &= 2.47×10^{−23}\:J/K \nonumber \end{align} \nonumber
The sign of this result is consistent with expectation; since there are more microstates possible for the final state than for the initial state, the change in entropy should be positive.
Exercise $1$
Consider the system shown in Figure $3$. What is the change in entropy for the process where all the energy is transferred from the hot object (AB) to the cold object (CD)?
Answer
0 J/K
Predicting the Sign of ΔS
The relationships between entropy, microstates, and matter/energy dispersal described previously allow us to make generalizations regarding the relative entropies of substances and to predict the sign of entropy changes for chemical and physical processes. Consider the phase changes illustrated in Figure $4$. In the solid phase, the atoms or molecules are restricted to nearly fixed positions with respect to each other and are capable of only modest oscillations about these positions. With essentially fixed locations for the system’s component particles, the number of microstates is relatively small. In the liquid phase, the atoms or molecules are free to move over and around each other, though they remain in relatively close proximity to one another. This increased freedom of motion results in a greater variation in possible particle locations, so the number of microstates is correspondingly greater than for the solid. As a result, Sliquid > Ssolid and the process of converting a substance from solid to liquid (melting) is characterized by an increase in entropy, ΔS > 0. By the same logic, the reciprocal process (freezing) exhibits a decrease in entropy, ΔS < 0.
Now consider the vapor or gas phase. The atoms or molecules occupy a much greater volume than in the liquid phase; therefore each atom or molecule can be found in many more locations than in the liquid (or solid) phase. Consequently, for any substance, Sgas > Sliquid > Ssolid, and the processes of vaporization and sublimation likewise involve increases in entropy, ΔS > 0. Likewise, the reciprocal phase transitions, condensation and deposition, involve decreases in entropy, ΔS < 0.
According to kinetic-molecular theory, the temperature of a substance is proportional to the average kinetic energy of its particles. Raising the temperature of a substance will result in more extensive vibrations of the particles in solids and more rapid translations of the particles in liquids and gases. At higher temperatures, the distribution of kinetic energies among the atoms or molecules of the substance is also broader (more dispersed) than at lower temperatures. Thus, the entropy for any substance increases with temperature (Figure $5$ ).
The entropy of a substance is influenced by the structure of the particles (atoms or molecules) that comprise the substance. With regard to atomic substances, heavier atoms possess greater entropy at a given temperature than lighter atoms, which is a consequence of the relation between a particle’s mass and the spacing of quantized translational energy levels (which is a topic beyond the scope of our treatment). For molecules, greater numbers of atoms (regardless of their masses) increase the ways in which the molecules can vibrate and thus the number of possible microstates and the system entropy.
Finally, variations in the types of particles affect the entropy of a system. Compared to a pure substance, in which all particles are identical, the entropy of a mixture of two or more different particle types is greater. This is because of the additional orientations and interactions that are possible in a system comprised of nonidentical components. For example, when a solid dissolves in a liquid, the particles of the solid experience both a greater freedom of motion and additional interactions with the solvent particles. This corresponds to a more uniform dispersal of matter and energy and a greater number of microstates. The process of dissolution therefore involves an increase in entropy, ΔS > 0.
Considering the various factors that affect entropy allows us to make informed predictions of the sign of ΔS for various chemical and physical processes as illustrated in Example $2$.
Example $2$: Predicting the Sign of ∆S
Predict the sign of the entropy change for the following processes. Indicate the reason for each of your predictions.
1. One mole liquid water at room temperature $⟶$ one mole liquid water at 50 °C
2. $\ce{Ag+}(aq)+\ce{Cl-}(aq)⟶\ce{AgCl}(s)$
3. $\ce{C6H6}(l)+\dfrac{15}{2}\ce{O2}(g)⟶\ce{6CO2}(g)+\ce{3H2O}(l)$
4. $\ce{NH3}(s)⟶\ce{NH3}(l)$
Solution
1. positive, temperature increases
2. negative, reduction in the number of ions (particles) in solution, decreased dispersal of matter
3. negative, net decrease in the amount of gaseous species
4. positive, phase transition from solid to liquid, net increase in dispersal of matter
Exercise $2$
Predict the sign of the entropy change for the following processes. Give a reason for your prediction.
1. $\ce{NaNO3}(s)⟶\ce{Na+}(aq)+\ce{NO3-}(aq)$
2. the freezing of liquid water
3. $\ce{CO2}(s)⟶\ce{CO2}(g)$
4. $\ce{CaCO3}(s)⟶\ce{CaO}(s)+\ce{CO2}(g)$
Answer a
Positive; The solid dissolves to give an increase of mobile ions in solution.
Answer b
Negative; The liquid becomes a more ordered solid.
Answer c
Positive; The relatively ordered solid becomes a gas
Answer d
Positive; There is a net production of one mole of gas.
Summary
Entropy ($S$) is a state function that can be related to the number of microstates for a system (the number of ways the system can be arranged) and to the ratio of reversible heat to kelvin temperature. It may be interpreted as a measure of the dispersal or distribution of matter and/or energy in a system, and it is often described as representing the “disorder” of the system. For a given substance at a given temperature, $S_{solid} < S_{liquid} \ll S_{gas}$; in a given physical state, entropy is typically greater for heavier atoms or more complex molecules. Entropy increases when a system is heated and when solutions form. Using these guidelines, the sign of entropy changes for some chemical reactions may be reliably predicted.
Key Equations
• $ΔS=\dfrac{q_\ce{rev}}{T}$
• $S = k \ln W$
• $ΔS=k\ln\dfrac{W_\ce{f}}{W_\ce{i}}$
Glossary
entropy (S)
state function that is a measure of the matter and/or energy dispersal within a system, determined by the number of system microstates; often described as a measure of the disorder of the system
microstate (W)
possible configuration or arrangement of matter and energy within a system
reversible process
process that takes place so slowly as to be capable of reversing direction in response to an infinitesimally small change in conditions; hypothetical construct that can only be approximated by real processes
Quantization of the motional energy of molecules
Early in their discussion of kinetic-molecular theory, most general chemistry texts include a figure showing that the distribution of molecular speeds in a gas is greatly broadened at higher temperatures compared to moderate temperatures.
When the temperature of a gas is raised (by transfer of energy from the surroundings of the system), there is a great increase in the velocity, $v$, of many of the gas molecules (Figure $1$). From $\frac{1}{2}mv^2$, this means that there has also been a great increase in the translational energies of those faster moving molecules. Finally, we can see that an input of energy not only causes the gas molecules in the system to move faster — but also to move at very many different fast speeds. (Thus, the energy in a heated system is more dispersed, spread out in being in many separate speeds rather than more localized in fewer moderate speeds.)
A symbolic indication of the different distributions of the translational energy of each molecule of a gas on low to high energy levels in a 36-molecule system is in Figure $2$, with the lower temperature gas as Figure $\PageIndex{2A}$ and the higher temperature gas as Figure $\PageIndex{2B}$.
These and later Figures in this section are symbolic because, in actuality, this small number of molecules is not enough to exhibit thermodynamic temperature. For further simplification, rotational energies that range from zero in monatomic molecules to about half the total translational energy of di- and tri-atomic molecules (and more for most polyatomic) at 300 K are not shown in the Figures. If those rotational energies were included, they would constitute a set of energy levels (corresponding to a spacing of ~$10^{-23}$ J) each with translational energy distributions of the 36 molecules (corresponding to a spacing of ~$10^{-37}$ J). These numbers show why translational levels, though quantized, are considered virtually continuous compared to the separation of rotational energies. The details of vibrational energy levels — two at moderate temperatures (on the ground state of which would be almost all the rotational and translational levels populated by the molecules of a symbolic or real system) — can also be postponed until physical chemistry. At this point in the first year course, depending on the instructor's preference, only a verbal description of rotational and vibrational motions and energy level spacing need be introduced.
By the time in the beginning course that students reach thermodynamics, five to fifteen chapters later than kinetic theory, they can accept the concept that the total motional energies of molecules includes not just translational but also rotational and vibrational movements (that can be sketched simply below).
A microstate is one of many arrangements of the molecular energies (i.e., ‘the molecules on each particular energy level') for the total energy of a system. Thus, Figure $\PageIndex{2A}$ is one microstate for a system with a given energy and Figure $\PageIndex{2B}$ is a microstate of the same system but with a greater total energy. Figure $\PageIndex{3A}$ (just a repeat of Figure $\PageIndex{2A}$, for convenience) is a different microstate than the microstate for the same system shown in Figure $\PageIndex{3B}$; the total energy is the same in $\PageIndex{3A}$ and $\PageIndex{3B}$, but in Figure $\PageIndex{3B}$ the arrangement of energies has been changed because two molecules have changed their energy levels, as indicated by the arrows.
A possible scenario for that different microstate in Figure $3$ is that these two molecules on the second energy level collided at a glancing angle such that one gained enough energy to be on the third energy level, while the other molecule lost the same amount of energy and dropped down to the lowest energy level. In the light of that result of a single collision and the billions of collisions of molecules per second in any system at room temperature, there can be a very large number of microstates even for this system of just 36 molecules in Figures $2$ and $3$. (This is true despite the fact that not every collision would change the energy of the two molecules involved, and thus not change the numbers on a given energy level. Glancing collisions could occur with no change in the energy of either participant.) For any real system involving $6 \times 10^{23}$ molecules, however, the number of microstates becomes humanly incomprehensible for any system, even though we can express it in numbers, as will now be developed.
The quantitative entropy change in a reversible process is given by
$ΔS = \dfrac{q_{rev}}{T}$
(Irreversible processes involving temperature or volume change or mixing can be treated by calculations from incremental steps that are reversible.) According to the Boltzmann entropy relationship,
$ΔS = k \ln \dfrac{W_{Final}}{W_{Initial}}$
where $k$ is Boltzmann's constant and $W_{Final}$ or $W_{Initial}$ is the count of how many microstates correspond to the Final or Initial macrostates, respectively.
The number of microstates for a system determines the number of ways in any one of which that the total energy of a macrostate can be at one instant. Thus, an increase in entropy means a greater number of microstates for the Final state than for the Initial. In turn, this means that there are more choices for the arrangement of a system's total energy at any one instant, far less possibility of localization (such as cycling back and forth between just 2 microstates), i.e., greater dispersal of the total energy of a system because of so many possibilities.
An increase in entropy means a greater number of microstates for the Final state than for the Initial. In turn, this means that there are more choices for the arrangement of a system's total energy at any one instant.
Delocalization vs. Dispersal
Some instructors may prefer “delocalization” to describe the status of the total energy of a system when there are a greater number of microstates rather than fewer, as an exact synonym for “dispersal” of energy as used here in this article for other situations in chemical thermodynamics. The advantage of uniform use of ‘dispersal' is its correct common-meaning applicability to examples ranging from motional energy becoming literally spread out in a larger volume to the cases of thermal energy transfer from hot surroundings to a cooler system, as well as to distributions of molecular energies on energy levels for either of those general cases. Students of lesser ability should be able to grasp what ‘dispersal' means in three dimensions, even though the next steps of abstraction to what it means in energy levels and numbers of microstates may result in more of a ‘feeling' than a preparation for physical chemistry that it can be for the more able.
Of course, dispersal of the energy of a system in terms of microstates does not mean that the energy is smeared or spread out over microstates like peanut butter on bread! All the energy of the macrostate is always in only one microstate at one instant. It is the possibility that the total energy of the macrostate can be in any one of so many more different arrangements of that energy at the next instant — an increased probability that it could not be localized by returning to the same microstate — that amounts to a greater dispersal or spreading out of energy when there are a larger number of microstates.
(The numbers of microstates for chemical systems above 0 K are astounding. For any substance at a temperature of about 1-4 K, there are $10^{26,000,000,000,000,000,000}$ microstates (5). For a mole of water at 273.15 K, there are $10^{2,000,000,000,000,000,000,000,000}$ microstates, and when it is heated to be just one degree warmer, the exponent increases by $10^{22}$, to $10^{2,010,000,000,000,000,000,000,000}$ microstates. For comparison, an estimate of the number of atoms in the entire universe is 'only' about $10^{70}$, while a googol, considered a large number in mathematics, is 'only' $10^{100}$.)
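The Boltzmann relationship above can be applied directly to these exponents. A short numerical sketch (using the exponent values quoted in the parenthetical note) shows that even these astronomically large microstate counts correspond to an entropy change of everyday size, because working with $\log_{10} W$ keeps the arithmetic tractable:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def delta_S(log10_W_final, log10_W_initial):
    """Boltzmann entropy change from the base-10 exponents of the microstate counts.

    ln(W_f / W_i) = ln(10) * (log10 W_f - log10 W_i), which avoids ever
    forming the (astronomically large) numbers W_f and W_i themselves.
    """
    return k_B * math.log(10) * (log10_W_final - log10_W_initial)

# Exponents quoted above for one mole of water heated from 273.15 K to 274.15 K:
# the exponent grows from 2.00e24 to 2.01e24, i.e. by 1e22.
dS = delta_S(2.01e24, 2.00e24)
print(f"Delta S ~ {dS:.2f} J/K")  # a few tenths of a J/K
```

The result, a few tenths of a joule per kelvin, is of the same order as the calorimetric value for heating a mole of water by one degree, which is the point of the exercise.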
Summarizing, when a substance is heated, its entropy increases because the energy acquired and that previously within it can be far more dispersed on the previous higher energy levels and on those additional high energy levels that now can be occupied. This in turn means that there are many, many more possible arrangements of the molecular energies on their energy levels than before and thus, there is a great increase in accessible microstates for the system at higher temperatures. A concise statement would be that when a system is heated, there are many more microstates accessible and this amounts to greater delocalization or dispersal of its total energy. (The common comment "heating causes or favors molecular disorder" is an anthropomorphic labeling of molecular behavior that has more flaws than utility. There is virtual chaos, so far as the distribution of energy for a system (its number of microstates) is concerned, before as well as after heating at any temperature above 0 K, and energy distribution is at the heart of the meaning of entropy and entropy change.) (5)
Learning Objectives
• To know the relationship between energy, work, and heat.
One definition of energy is the capacity to do work. There are many kinds of work, including mechanical work, electrical work, and work against a gravitational or a magnetic field. Here we will consider only mechanical work and focus on the work done during changes in the pressure or the volume of a gas.
Mechanical Work
The easiest form of work to visualize is mechanical work (Figure $1$), which is the energy required to move an object a distance d when opposed by a force F, such as gravity:
$w=F\,d \label{7.4.1}$
where
• $w$ is the work,
• $F$ is the opposing force, and
• $d$ is the distance
Because the force ($F$) that opposes the action is equal to the mass ($m$) of the object times its acceleration ($a$), Equation \ref{7.4.1} can be rewritten as:
$w = m\,a\,d \label{7.4.2}$
where
• $w$ is the work,
• $m$ is the mass,
• $a$ is the acceleration, and
• $d$ is the distance
Recall that weight is a force caused by the gravitational attraction between two masses, such as you and Earth. Hence, for work against gravity (on Earth), $a$ can be set equal to $g=9.8\; m/s^2$. Consider the mechanical work required for you to travel from the first floor of a building to the second. Whether you take an elevator or an escalator, trudge upstairs, or leap up the stairs two at a time, energy is expended to overcome the opposing force of gravity. The amount of work done (w) and thus the energy required depends on three things:
1. the height of the second floor (the distance $d$);
2. your mass ($m$), which must be raised that distance; and
3. the downward acceleration due to gravity ($g$).
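The stair-climbing example above can be put in numbers. The sketch below assumes a hypothetical 70 kg person and a 3 m floor height; both values are illustrative, not from the text:

```python
g = 9.8    # m/s^2, acceleration due to gravity
m = 70.0   # kg, assumed mass of the climber
d = 3.0    # m, assumed height of the second floor

# w = m * a * d with a = g (work against gravity)
w = m * g * d
print(f"w = {w:.0f} J")  # 2058 J
```

About 2 kJ, roughly the energy in half a gram of sugar, which is why climbing one flight of stairs is not very taxing.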
Pressure-Volume (PV) Work
To describe this pressure–volume work ($PV$ work), we will use such imaginary oddities as frictionless pistons, which involve no component of resistance, and ideal gases, which have no attractive or repulsive interactions. Imagine, for example, an ideal gas, confined by a frictionless piston, with internal pressure ($P_{int}$) and initial volume $V_i$ (Figure 7.4.2). If $P_{ext} = P_{int}$, the system is at equilibrium; the piston does not move, and no work is done. If the external pressure on the piston ($P_{ext}$) is less than $P_{int}$, however, then the ideal gas inside the piston will expand, forcing the piston to perform work on its surroundings; that is, the final volume ($V_f$) will be greater than $V_i$. If $P_{ext} > P_{int}$, then the gas will be compressed, and the surroundings will perform work on the system.
If the piston has cross-sectional area $A$, the external pressure exerted on the piston is, by definition, the force per unit area:
$P_{ext} = \dfrac{F}{A}$
The volume of any three-dimensional object with parallel sides (such as a cylinder) is the cross-sectional area times the height ($V = Ah$). Rearranging to give $F = P_{ext}A$ and defining the distance the piston moves ($d$) as $\Delta h$, we can calculate the magnitude of the work performed by the piston by substituting into Equation 7.4.1:
$w = F d = P_{ext}A\Delta h \label{7.4.3}$
The change in the volume of the cylinder ($\Delta V$) as the piston moves a distance d is $\Delta V = A\Delta h$, as shown in Figure 7.4.3. The work performed is thus
$w = P_{ext}\Delta V \label{7.4.4}$
The units of work obtained using this definition are correct for energy: pressure is force per unit area (newton/m$^2$) and volume has units of cubic meters, so
$w=\left(\dfrac{F}{A}\right)_{\textrm{ext}}(\Delta V)=\dfrac{\textrm{newton}}{\textrm m^2}\times \textrm m^3=\mathrm{newton\cdot m}=\textrm{joule}$
If we use atmospheres for P and liters for V, we obtain units of L·atm for work. These units correspond to units of energy, as shown in the different values of the ideal gas constant R:
$R=\dfrac{0.08206\;\mathrm{L\cdot atm}}{\mathrm{mol\cdot K}}=\dfrac{8.314\textrm{ J}}{\mathrm{mol\cdot K}}$
Thus 0.08206 L·atm = 8.314 J and 1 L·atm = 101.3 J.
Whether work is defined as having a positive sign or a negative sign is a matter of convention. Heat flow is defined from a system to its surroundings as negative; using that same sign convention, we define work done by a system on its surroundings as having a negative sign because it results in a transfer of energy from a system to its surroundings. This is an arbitrary convention and one that is not universally used. Some engineering disciplines are more interested in the work done on the surroundings than in the work done by the system and therefore use the opposite convention. Because $\Delta V$ > 0 for an expansion, Equation 7.4.4 must be written with a negative sign to describe PV work done by the system as negative:
$w = −P_{ext}ΔV \label{7.4.5}$
The work done by a gas expanding against an external pressure is therefore negative, corresponding to work done by a system on its surroundings. Conversely, when a gas is compressed by an external pressure, ΔV < 0 and the work is positive because work is being done on a system by its surroundings.
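Equation \ref{7.4.5} and the L·atm-to-joule conversion above can be combined in a short sketch. The pressure and volumes below are made-up illustrative values, not from the text:

```python
L_ATM_TO_J = 101.3  # conversion factor from the two values of R above

def pv_work(p_ext_atm, v_i_L, v_f_L):
    """PV work at constant external pressure: w = -P_ext * dV, in joules."""
    return -p_ext_atm * (v_f_L - v_i_L) * L_ATM_TO_J

# Expansion from 10 L to 15 L against a constant 1 atm external pressure:
w = pv_work(1.0, 10.0, 15.0)
print(f"w = {w:.1f} J")  # negative: the system does work on the surroundings
```

A compression (final volume smaller than initial) flips the sign of $\Delta V$ and gives $w > 0$, matching the convention stated above.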
Note: A Matter of Convention
• Heat flow is defined from the system to its surroundings as negative
• Work is defined as by the system on its surroundings as negative
Outside Links
• Gasparro, Frances P. "Remembering the sign conventions for q and w in deltaU = q - w." J. Chem. Educ. 1976: 53, 389.
• Koubek, E. "PV work demonstration (TD)." J. Chem. Educ. 1980: 57, 374.
8.02: Gas Expansion
In Gas Expansion, we assume ideal behavior for the two types of expansion:
Isothermal Expansion
This shows the expansion of a gas at constant temperature against the weight of an object's mass ($m$) on the piston. Because the temperature is held constant, the change in internal energy is zero ($\Delta U = 0$). So, the heat absorbed by the gas equals the work done by the ideal gas on its surroundings. The enthalpy change is also zero because, for an ideal gas at constant temperature, $\Delta H = \Delta U + \Delta(PV) = \Delta U + \Delta(nRT) = 0$.
Isothermal Irreversible/Reversible process
The graphs clearly show work done (area under the curve) is greater in a reversible process.
Adiabatic Expansions
An adiabatic expansion is one in which no heat is exchanged between the system and the surroundings; the temperature is no longer held constant.
Reversible Adiabatic Expansion
For an ideal gas undergoing a reversible adiabatic change, pressure and volume are related by

$P_1V_1^{\gamma} = P_2V_2^{\gamma}$

This equation is useful only when it applies to an ideal gas and a reversible adiabatic change. It is very similar to Boyle's law, except that it carries the exponent $\gamma$ (gamma) because the temperature changes during the expansion. The work done by an adiabatic reversible process is given by the following equation:

$w = nC_V(T_2 - T_1)$

with $C_V$ the molar heat capacity at constant volume, where $T_2$ is less than $T_1$. The internal energy of the system decreases as the gas expands. The work can be calculated in two ways because the internal energy ($U$) does not depend on path. The graph shows that less work is done in an adiabatic reversible process than in an isothermal reversible process.
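The claim that a reversible adiabatic expansion does less work than a reversible isothermal one can be checked numerically. The sketch below assumes one mole of a monatomic ideal gas ($\gamma = 5/3$, $C_V = \tfrac{3}{2}R$) doubling its volume starting from 273 K; the gas and starting conditions are illustrative choices, not from the text:

```python
import math

R = 8.314            # J/(mol K)
n, T1 = 1.0, 273.0   # mol, K
V1, V2 = 22.4, 44.8  # L; only the ratio matters here
gamma = 5.0 / 3.0    # assumed monatomic ideal gas
Cv = 1.5 * R         # molar heat capacity at constant volume

# Isothermal reversible: w = -nRT ln(V2/V1)
w_iso = -n * R * T1 * math.log(V2 / V1)

# Adiabatic reversible: T2 = T1 (V1/V2)^(gamma-1), then w = n Cv (T2 - T1)
T2 = T1 * (V1 / V2) ** (gamma - 1)
w_ad = n * Cv * (T2 - T1)

print(f"isothermal: {w_iso:.0f} J, adiabatic: {w_ad:.0f} J")
# |w_ad| < |w_iso|: the adiabatic gas cools as it expands, so it pushes less.
```

Both works are negative (the gas does work on the surroundings), but the adiabatic magnitude is smaller because the temperature, and hence the pressure, falls during the expansion.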
• 9.1: Partial Differentiation
The development of thermodynamics would have been unthinkable without calculus in more than one dimension (multivariate calculus) and partial differentiation is essential to the theory.
• 9.2: Functions of Two Independent Variables
A function of two independent variables, z=f(x,y) , defines a surface in three-dimensional space. For a function of two or more variables, there are as many independent first derivatives as there are independent variables.
• 9.3: The Total Differential
09: Fundamental 7 - Variable Changes
The development of thermodynamics would have been unthinkable without calculus in more than one dimension (multivariate calculus) and partial differentiation is essential to the theory.
'Active' Variables
When applying partial differentiation it is very important to keep in mind which symbol is the variable and which are the constants. Mathematicians usually write the variable as x or y and the constants as a, b or c, but in Physical Chemistry the symbols are different. It sometimes helps to replace the symbols in your mind.
For example the van der Waals equation can be written as:
$P= \dfrac{RT}{\overline{V} -b} - \dfrac{a}{\overline{V}^2} \label{eq1}$
Suppose we must compute the partial differential
$\left( \dfrac{\partial P}{\partial \overline{V}} \right)_T$
In this case the molar volume is the variable 'x' and the pressure is the function $f(x)$; the rest are constants (here $c = RT$), so Equation \ref{eq1} can be rewritten in the form
$f(x)= \dfrac{c}{x-b} - \dfrac{a}{x^2} \label{eq4}$
When calculating
$\left( \dfrac{\partial P}{\partial T} \right)_{\overline{V}}$
we should look at Equation \ref{eq1} as:
$f(x) = cx -d$
The active variable 'x' is now the temperature T and all the rest is just constants. It is useful to train your eye to pick out the one active one from all the inactive ones. Use highlighters, underline, rewrite, do whatever helps you best.
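One way to train your eye is to check the analytic partials of Equation \ref{eq1} against numerical finite differences, holding the inactive variable fixed, which is exactly what "constant $T$" or "constant $\overline{V}$" means. The van der Waals constants below are illustrative (roughly those of argon):

```python
R = 0.08206          # L atm / (mol K)
a, b = 1.355, 0.0320 # van der Waals constants (illustrative, roughly argon)

def P(T, Vm):
    """van der Waals pressure as a function of T and molar volume Vm."""
    return R * T / (Vm - b) - a / Vm**2

T, Vm, h = 300.0, 1.0, 1e-6

# (dP/dVm)_T: only Vm varies, T is treated as a constant
dP_dV_numeric = (P(T, Vm + h) - P(T, Vm - h)) / (2 * h)
dP_dV_analytic = -R * T / (Vm - b)**2 + 2 * a / Vm**3

# (dP/dT)_Vm: only T varies; the equation reduces to f(x) = c x - d
dP_dT_numeric = (P(T + h, Vm) - P(T - h, Vm)) / (2 * h)
dP_dT_analytic = R / (Vm - b)

print(abs(dP_dV_numeric - dP_dV_analytic) < 1e-4)  # True
print(abs(dP_dT_numeric - dP_dT_analytic) < 1e-6)  # True
```

If your hand-derived partial disagrees with the finite difference, you almost certainly differentiated with respect to the wrong "active" symbol.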
9.02: Functions of Two Independent Variables
A (real) function of one variable, $y = f(x)$, defines a curve in the plane. The first derivative of a function of one variable can be interpreted graphically as the slope of a tangent line, and dynamically as the rate of change of the function with respect to the variable Figure $1$.
A function of two independent variables, $z=f (x,y)$, defines a surface in three-dimensional space. For a function of two or more variables, there are as many independent first derivatives as there are independent variables. For example, we can differentiate the function $z=f (x,y)$ with respect to $x$ keeping $y$ constant. This derivative represents the slope of the tangent line shown in Figure $2 \text{A}$. We can also take the derivative with respect to $y$ keeping $x$ constant, as shown in Figure $2 \text{B}$.
For example, let’s consider the function $z=3x^2-y^2+2xy$. We can take the derivative of this function with respect to $x$ treating $y$ as a constant. The result is $6x+2y$. This is the partial derivative of the function with respect to $x$, and it is written:
$\left (\frac{\partial z}{\partial x} \right )_y=6x+2y \nonumber$
where the small subscripts indicate which variables are held constant. Analogously, the partial derivate of $z$ with respect to $y$ is:
$\left (\frac{\partial z}{\partial y} \right )_x=2x-2y \nonumber$
We can extend these ideas to functions of more than two variables. For example, consider the function $f(x,y,z)=x^2y/z$. We can differentiate the function with respect to $x$ keeping $y$ and $z$ constant to obtain:
$\left (\frac{\partial f}{\partial x} \right )_{y,z}=2x\frac{y}{z} \nonumber$
We can also differentiate the function with respect to $z$ keeping $x$ and $y$ constant:
$\left (\frac{\partial f}{\partial z} \right )_{x,y}=-x^2y/z^2 \nonumber$
and differentiate the function with respect to $y$ keeping $x$ and $z$ constant:
$\left (\frac{\partial f}{\partial y} \right )_{x,z}=\frac{x^2}{z} \nonumber$
Functions of two or more variables can be differentiated partially more than once with respect to either variable while holding the other constant to yield second and higher derivatives. For example, the function $z=3x^2-y^2+2xy$ can be differentiated with respect to $x$ two times to obtain:
$\left ( \frac{\partial }{\partial x}\left ( \frac{\partial z}{\partial x} \right )_{y} \right )_{y}=\left ( \frac{\partial ^2z}{\partial x^2} \right )_{y}=6 \nonumber$
We can also differentiate with respect to $x$ first and $y$ second:
$\left ( \frac{\partial }{\partial y}\left ( \frac{\partial z}{\partial x} \right )_{y} \right )_{x}=\left ( \frac{\partial ^2z}{\partial y \partial x} \right )=2 \nonumber$
If a function of two or more variables and its derivatives are single-valued and continuous, a property normally attributed to physical variables, then the mixed partial second derivatives are equal (Euler reciprocity):
$\label{c2v:euler reciprocity} \left ( \frac{\partial ^2f}{\partial x \partial y} \right )=\left ( \frac{\partial ^2f}{\partial y \partial x} \right )$
For example, for $z=3x^2-y^2+2xy$:
$\left ( \frac{\partial ^2z}{\partial y \partial x} \right )=\left ( \frac{\partial }{\partial y}\left ( \frac{\partial z}{\partial x} \right )_{y} \right )_{x}=\left ( \frac{\partial }{\partial y}\left ( 6x+2y\right ) \right )_{x}=2 \nonumber$
$\left ( \frac{\partial ^2z}{\partial x \partial y} \right )=\left ( \frac{\partial }{\partial x}\left ( \frac{\partial z}{\partial y} \right )_{x} \right )_{y}=\left ( \frac{\partial }{\partial x}\left ( -2y+2x\right ) \right )_{y}=2 \nonumber$
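Euler reciprocity can also be verified numerically. The sketch below nests central finite differences for $z = 3x^2 - y^2 + 2xy$ at an arbitrary point and recovers the mixed partial, value 2, in both orders of differentiation:

```python
def z(x, y):
    return 3 * x**2 - y**2 + 2 * x * y

h = 1e-5
x0, y0 = 1.3, -0.7  # arbitrary evaluation point

def dz_dx(x, y):  # partial with respect to x, y held constant
    return (z(x + h, y) - z(x - h, y)) / (2 * h)

def dz_dy(x, y):  # partial with respect to y, x held constant
    return (z(x, y + h) - z(x, y - h)) / (2 * h)

# Mixed second partials, taken in the two possible orders
d2_yx = (dz_dx(x0, y0 + h) - dz_dx(x0, y0 - h)) / (2 * h)
d2_xy = (dz_dy(x0 + h, y0) - dz_dy(x0 - h, y0)) / (2 * h)

print(round(d2_yx, 4), round(d2_xy, 4))  # both 2.0
```

The two orders agree because $z$ and its derivatives are single-valued and continuous, the condition stated for Equation \ref{c2v:euler reciprocity}.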
Another useful property of the partial derivatives is the so-called reciprocal identity, which holds when the same variables are held constant in the two derivatives:
$\label{c2v:inverse} \left ( \frac{\partial y}{\partial x} \right )=\frac{1}{\left ( \frac{\partial x}{\partial y} \right )}$
For example, for $z=x^2y$:
$\left ( \frac{\partial z}{\partial x} \right )_y=\left ( \frac{\partial }{\partial x} x^2y\right )_y=2xy \nonumber$
$\left ( \frac{\partial x}{\partial z} \right )_y=\left ( \frac{\partial }{\partial z} \sqrt{z/y} \right )_y=\frac{1}{2y} (z/y)^{-1/2}=\frac{1}{2xy}=\frac{1}{\left ( \frac{\partial z}{\partial x} \right )}_y \nonumber$
Finally, let’s mention the cycle rule. For a function $z(x,y)$:
$\label{c2v:cycle} \left ( \frac{\partial y}{\partial x} \right )_z\left ( \frac{\partial x}{\partial z} \right )_y\left ( \frac{\partial z}{\partial y} \right )_x=-1$
We can construct other versions of the cycle rule by permuting $x$, $y$, and $z$ cyclically.
For example, for $z=x^2y$:
$\left ( \frac{\partial y}{\partial x} \right )_z=\left ( \frac{\partial }{\partial x} (z/x^2)\right )_z=-2z/x^3 \nonumber$
$\left ( \frac{\partial x}{\partial z} \right )_y=\left ( \frac{\partial }{\partial z} \sqrt{z/y}\right )_y=\frac{1}{2y} (z/y)^{-1/2} \nonumber$
$\left ( \frac{\partial z}{\partial y} \right )_x=\left ( \frac{\partial }{\partial y} x^2y\right )_x=x^2 \nonumber$
$\left ( \frac{\partial y}{\partial x} \right )_z\left ( \frac{\partial x}{\partial z} \right )_y\left ( \frac{\partial z}{\partial y} \right )_x=-\frac{2z}{x^3}\frac{1}{2y} \left(\frac{y}{z}\right)^{1/2}x^2=-\left(\frac{z}{y}\right)^{1/2}\frac{1}{x}=-\left(\frac{x^2y}{y}\right)^{1/2}\frac{1}{x}=-1 \nonumber$
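The cycle-rule product worked out above for $z = x^2y$ can likewise be checked numerically, evaluating each partial by a central difference with the correct variable held constant:

```python
import math

x0, y0 = 1.7, 0.9       # arbitrary point
z0 = x0**2 * y0         # corresponding z value
h = 1e-6

# (dy/dx)_z: solve for y = z / x^2 with z fixed
dy_dx = ((z0 / (x0 + h)**2) - (z0 / (x0 - h)**2)) / (2 * h)

# (dx/dz)_y: solve for x = sqrt(z / y) with y fixed
dx_dz = (math.sqrt((z0 + h) / y0) - math.sqrt((z0 - h) / y0)) / (2 * h)

# (dz/dy)_x: z = x^2 y with x fixed
dz_dy = ((x0**2 * (y0 + h)) - (x0**2 * (y0 - h))) / (2 * h)

prod = dy_dx * dx_dz * dz_dy
print(round(prod, 6))  # -1.0
```

Note that each factor required re-expressing the relation $z = x^2y$ for the appropriate dependent variable before differentiating, just as in the analytic derivation.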
Before discussing partial derivatives any further, let’s introduce a few physicochemical concepts to put our discussion in context. | textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Chemical_Thermodynamics_(Supplement_to_Shepherd_et_al.)/09%3A_Fundamental_7_-_Variable_Changes/9.01%3A_Partial_Differentiation.txt |
In Chapter 8 we learned that partial derivatives indicate how the dependent variable changes with one particular independent variable keeping the others fixed. In the context of an equation of state $P=P(T,V,n)$, the partial derivative of $P$ with respect to $V$ at constant $T$ and $n$ is:
$\left (\dfrac{\partial P}{\partial V} \right )_{T,n} \nonumber$
and physically represents how the pressure varies as we change the volume at constant temperature and constant $n$.
The partial derivative of $P$ with respect to $T$ at constant $V$ and $n$ is:
$\left (\dfrac{\partial P}{\partial T} \right )_{V,n} \nonumber$
and physically represents how the pressure varies as we change the temperature at constant volume and constant $n$.
What happens with the dependent variable (in this case $P$) if we change two or more independent variables simultaneously? For an infinitesimal change in volume and temperature, we can write the change in pressure as:
$\label{eq:differentials1} dP=\left (\dfrac{\partial P}{\partial V} \right )_{T,n} dV+\left (\dfrac{\partial P}{\partial T} \right )_{V,n} dT$
Equation \ref{eq:differentials1} is called the total differential of P, and it simply states that the change in $P$ is the sum of the individual contributions due to the change in $V$ at constant $T$ and the change in $T$ at constant $V$. This equation is true for infinitesimal changes. If the changes are not infinitesimal we will integrate this expression to calculate the change in $P$.
Let’s now consider the volume of a fluid, which is a function of pressure, temperature and the number of moles: $V=V(n,T,P)$. The total differential of $V$, by definition, is:
$\label{eq:differentials3} dV=\left (\frac{\partial V}{\partial T} \right )_{P,n} dT+\left (\frac{\partial V}{\partial P} \right )_{T,n} dP+\left (\frac{\partial V}{\partial n} \right )_{T,P} dn$
If we want to calculate the change in volume in a fluid upon small changes in $P, T$ and $n$, we could use:
$\label{eq:differentials3b} \Delta V\approx \left (\frac{\partial V}{\partial T} \right )_{P,n} \Delta T+\left (\frac{\partial V}{\partial P} \right )_{T,n} \Delta P+\left (\frac{\partial V}{\partial n} \right )_{T,P} \Delta n$
Of course, if we know the function $V=V(n,T,P)$, we could also calculate $\Delta V$ as $V_f-V_i$, where the final and initial volumes are calculated using the final and initial values of $P, T$ and $n$. This seems easy, so why do we need to bother with Equation \ref{eq:differentials3b}? The reason is that sometimes we can measure the partial derivatives experimentally, but we do not have an equation of the type $V=V(n,T,P)$ to use. For example, the following quantities are accessible experimentally and tabulated for different fluids and materials:
• $\alpha=\frac{1}{V}\left(\frac{\partial V}{\partial T} \right )_{P,n}$ (coefficient of thermal expansion)
• $\kappa=-\frac{1}{V}\left(\frac{\partial V}{\partial P} \right )_{T,n}$ (isothermal compressibility)
• $V_m=\left(\frac{\partial V}{\partial n} \right )_{P,T}$ (molar volume)
Using these definitions, Equation \ref{eq:differentials3} becomes:
$\label{eq:differentials4} dV=\alpha V dT-\kappa VdP+V_m dn$
You can find tables with experimentally determined values of $\alpha$ and $\kappa$ under different conditions, which you can use to calculate the changes in $V$. Again, as we will see later in this chapter, this equation will need to be integrated if the changes are not small. In any case, the point is that you may have access to information about the derivatives of the function, but not to the function itself (in this case $V$ as a function of $T, P, n$).
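As a sketch of Equation \ref{eq:differentials4} at constant composition ($dn = 0$): the coefficients below are illustrative, approximate values of $\alpha$ and $\kappa$ for liquid water near room temperature, not tabulated data from this text:

```python
# Illustrative coefficients for liquid water near room temperature (approximate):
alpha = 2.1e-4   # 1/K, coefficient of thermal expansion
kappa = 4.6e-5   # 1/atm, isothermal compressibility
V = 1.000        # L, current volume; composition fixed, so dn = 0

dT = 5.0    # K, small temperature increase
dP = 10.0   # atm, small pressure increase

# dV = alpha * V * dT - kappa * V * dP  (the dn term vanishes)
dV = alpha * V * dT - kappa * V * dP
print(f"dV ~ {dV * 1000:.2f} mL")  # ~ +0.59 mL
```

The expansion from heating slightly outweighs the contraction from compression for these inputs; since the changes are small, treating $\alpha$, $\kappa$, and $V$ as constants (rather than integrating) is a reasonable approximation.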
In general, for a function $u=u(x_1, x_2...x_n)$, we define the total differential of $u$ as:
$\label{eq:total_differential} du=\left (\frac{\partial u}{\partial x_1} \right )_{x_2...x_n} dx_1+\left (\frac{\partial u}{\partial x_2} \right )_{x_1, x_3...x_n} dx_2+...+\left (\frac{\partial u}{\partial x_n} \right )_{x_1...x_{n-1}} dx_n$
Example $1$
Calculate the total differential of the function $z=3x^3+3yx^2+xy^2$.
Solution
By definition, the total differential is:
$dz=\left (\frac{\partial z}{\partial x} \right )_{y} dx+\left (\frac{\partial z}{\partial y} \right )_{x} dy \nonumber$
For the function given in the problem,
$\left (\frac{\partial z}{\partial x} \right )_{y}=9x^2+6xy+y^2 \nonumber$
and
$\left (\frac{\partial z}{\partial y} \right )_{x}=3x^2+2xy \nonumber$
and therefore,
$\displaystyle{\color{Maroon}dz=(9x^2+6xy+y^2)dx+(3x^2+2xy)dy} \nonumber$
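For small $dx$ and $dy$, the total differential found in this example closely approximates the exact change in $z$. A quick numerical check, using an arbitrary point and arbitrary small increments:

```python
def z(x, y):
    return 3 * x**3 + 3 * y * x**2 + x * y**2

def dz(x, y, dx, dy):
    """Total differential using the partials derived in the example."""
    return (9 * x**2 + 6 * x * y + y**2) * dx + (3 * x**2 + 2 * x * y) * dy

x0, y0 = 2.0, 1.0        # arbitrary starting point
dx, dy = 0.01, -0.02     # small changes in x and y

exact = z(x0 + dx, y0 + dy) - z(x0, y0)  # true change in z
approx = dz(x0, y0, dx, dy)              # total-differential estimate
print(f"exact = {exact:.5f}, total differential = {approx:.5f}")
```

The two numbers agree to about one part in a thousand here; the agreement improves as the increments shrink, and becomes exact only in the infinitesimal limit.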
• 10.1: Exact Differentials
Total differentials are used identify how a change in a property depends on the changes of the natural variables of that property.
10: Extension 7 - Path Dependence
In general, if a differential can be expressed as
$df(x,y) = X\,dx + Y\,dy$
the differential will be an exact differential if it follows the Euler relation
$\left( \dfrac{\partial X}{\partial y} \right)_x = \left( \dfrac{\partial Y}{\partial x} \right)_y \label{euler}$
In order to illustrate this concept, consider $P(\overline{V}, T)$ using the ideal gas law.
$P= \dfrac{RT}{\overline{V}}$
The total differential of $P$ can be written
$dP = \left( - \dfrac{RT}{\overline{V}^2} \right) d\overline{V} + \left( \dfrac{R}{\overline{V}} \right) dT \label{Eq10}$
Example $1$: Euler Relation
Does Equation \ref{Eq10} follow the Euler relation (Equation \ref{euler})?
Solution
Let’s confirm!
\begin{align*} \left[ \dfrac{\partial}{\partial T} \left( - \dfrac{RT}{\overline{V}^2} \right) \right]_{\overline{V}} &\stackrel{?}{=} \left[ \dfrac{\partial}{\partial \overline{V}} \left( \dfrac{R}{\overline{V}} \right) \right]_T \\[4pt] \left( - \dfrac{R}{\overline{V}^2} \right) &\stackrel{\checkmark}{=} \left( - \dfrac{R}{\overline{V}^2} \right) \end{align*}
$dP$ is, in fact, an exact differential.
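The Euler-relation test can also be automated with finite differences. The sketch below checks that the two coefficients of the ideal-gas $dP$ satisfy the cross-derivative condition at an arbitrary state point:

```python
R = 0.08206  # L atm / (mol K)

X = lambda V, T: -R * T / V**2   # coefficient of the dV term
Y = lambda V, T: R / V           # coefficient of the dT term

V0, T0, h = 2.0, 300.0, 1e-5     # arbitrary state point and step size

# Cross derivatives: (dX/dT)_V and (dY/dV)_T by central differences
dX_dT = (X(V0, T0 + h) - X(V0, T0 - h)) / (2 * h)
dY_dV = (Y(V0 + h, T0) - Y(V0 - h, T0)) / (2 * h)

print(abs(dX_dT - dY_dV) < 1e-6)  # True: dP passes the exactness test
```

A differential built from path-dependent quantities would fail this test at generic state points, which is the practical content of the exact/inexact distinction.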
The differentials of all of the thermodynamic functions that are state functions will be exact. Heat and work, which are path functions, are not exact differentials; $dw$ and $dq$ are called inexact differentials instead.
Contributors and Attributions
• Patrick E. Fleming (Department of Chemistry and Biochemistry; California State University, East Bay)
11.01: Internal Energy
• 11.1: Internal Energy
The internal energy of a system is identified with the random, disordered motion of molecules; the total (internal) energy in a system includes potential and kinetic energy. This is in contrast to external energy, which is a function of the sample with respect to the outside environment (e.g., kinetic energy if the sample is moving or potential energy if the sample is at a height from the ground, etc.).
• 11.2: Total Differential of the Internal Energy
11: Fundamental 8 - Energy Transformations
The internal energy of a system is identified with the random, disordered motion of molecules; the total (internal) energy in a system includes potential and kinetic energy. This is in contrast to external energy, which is a function of the sample with respect to the outside environment (e.g., kinetic energy if the sample is moving or potential energy if the sample is at a height from the ground, etc.). The symbol for the internal energy change is $ΔU$.
Energy on a smaller scale
• Internal energy includes energy on a microscopic scale
• It is the sum of all the microscopic energies such as:
1. translational kinetic energy
2. vibrational and rotational kinetic energy
3. potential energy from intermolecular forces
Example
One gram of water at zero °Celsius and one gram of copper at zero °Celsius do NOT have the same internal energy: even though their kinetic energies are equal, water has a much higher potential energy, so its internal energy is much greater than the copper's.
Internal Energy Change Equations
The first law of thermodynamics states:
$dU=dq+dw$
where $dq$ is heat and $dw$ is work.
An isolated system cannot exchange heat or work with its surroundings making the change in internal energy equal to zero:
$dU_{\text {isolated system}} = 0$
Therefore, in an isolated system:
$dq=-dw$
Energy is Conserved
$dU_{\text {isolated system}} = dU_{\text {system}} + dU_{\text {surroundings}}$
$dU_{\text {system}}= -dU_{\text {surroundings}}$
The signs of internal energy
• Energy entering the system is POSITIVE (+), meaning heat is absorbed, q>0. Work is thus done on the system, w>0
• Energy leaving the system is NEGATIVE (-), meaning heat is given off by the system, q<0 and work is done by the system, w<0
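The sign conventions above amount to simple bookkeeping with the first law, $\Delta U = q + w$. A minimal sketch with made-up numbers:

```python
def delta_U(q, w):
    """First law for a closed system: dU = q + w, both signed from the system's viewpoint."""
    return q + w

# A gas absorbs 300 J of heat (q > 0) while doing 120 J of work
# on its surroundings (w < 0, energy leaving the system):
dU = delta_U(q=300.0, w=-120.0)
print(dU)  # 180.0
```

The net internal-energy change is positive here because more energy entered as heat than left as work.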
Quick Notes
• A system contains ONLY Internal Energy
• A system does NOT contain energy in the form of heat or work
• Heat and work only exist during a change in the system; they are path functions
• Internal energy is a state function
Outside Links
• Levine, Ira N. "Thermodynamic internal energy of an ideal gas of rigid rotors." J. Chem. Educ. 1985: 62, 53.
Contributors
• Lorraine Alborzfar (UCD) | textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Chemical_Thermodynamics_(Supplement_to_Shepherd_et_al.)/10%3A_Extension_7_-_Path_Dependence/10.01%3A_Exact_Differentials.txt |
One Component, Closed Systems
Consider a closed system of one chemical component (e.g., a pure substance) in a single homogeneous phase. The only kind of work is expansion work, with $V$ as the work variable. This kind of system has two independent variables. During a reversible process in this system, the heat is $\mathrm{d} q=T \mathrm{d} S$, the work is $\mathrm{d} w=-P \mathrm{d} V$, and an infinitesimal internal energy change is given by
$\mathrm{d} U=T \mathrm{d} S-P \mathrm{d} V \label{1}$
The appearance of the intensive variables $T$ and $P$ in $\ref{1}$ implies, of course, that the temperature and pressure are uniform throughout the system during the process. If they were not uniform, the phase would not be homogeneous and there would be more than two independent variables. The temperature and pressure are strictly uniform only if the process is reversible; it is not necessary to include “reversible” as one of the conditions of validity.
A real process approaches a reversible process in the limit of infinite slowness. For all practical purposes, therefore, we may apply $\ref{1}$ to a process obeying the conditions of validity and taking place so slowly that the temperature and pressure remain essentially uniform—that is, for a process in which the system stays very close to thermal and mechanical equilibrium.
Because the system under consideration has two independent variables, $\ref{1}$ is an expression for the total differential of $U$ with $S$ and $V$ as the independent variables. In general, an expression for the differential $\mathrm{d} X$ of a state function $X$ is a total differential if
1. it is a valid expression for $\mathrm{d} X$ consistent with the physical nature of the system and any conditions and constraints;
2. it is a sum with the same number of terms as the number of independent variables;
3. each term of the sum is a function of state functions multiplied by the differential of one of the independent variables.
Note that the work coordinate of any kind of dissipative work—work without a reversible limit—cannot appear in the expression for a total differential, because it is not a state function.
We may identify the coefficient of each term in an expression for the total differential of a state function as a partial derivative of the function. We identify the coefficients on the right side of $\ref{1}$ as follows:
$T=\left(\frac{\partial U}{\partial S}\right)_{V}$
$-P=\left(\frac{\partial U}{\partial V}\right)_{S}$
One Component, Open Systems
Now let us consider some of the ways a system might have more than two independent variables. Suppose the system has one phase and one substance, with expansion work only, and is open so that the amount $n$ of the substance can vary. Such a system has three independent variables. Let us write the formal expression for the total differential of $U$ with $S$, $V$, and $n$ as the three independent variables:
$\mathrm{d} U=\left(\frac{\partial U}{\partial S}\right)_{V, n} \mathrm{d} S+\left(\frac{\partial U}{\partial V}\right)_{S, n} \mathrm{d} V+\left(\frac{\partial U}{\partial n}\right)_{S, V} \mathrm{d} n \label{2}$
We have seen above that if the system is closed, the partial derivatives are $(\partial U / \partial S)_{V}=T$ and $(\partial U / \partial V)_{S}=-P$. Since both of these partial derivatives are for a closed system in which $n$ is constant, they are the same as the first two partial derivatives on the right side of $\ref{2}$.
The quantity given by the third partial derivative, $(\partial U / \partial n)_{S, V}$, is represented by the symbol $\mu$ (mu). This quantity is an intensive state function called the chemical potential.
With these substitutions, $\ref{2}$ becomes
$\mathrm{d} U=T \mathrm{d} S-P \mathrm{d} V+\mu \mathrm{d} n$
and this is a valid expression for the total differential of $U$ under the given conditions.
Multiple Component, Open Systems
If a system contains a mixture of $M$ different substances in a single phase, and the system is open so that the amount of each substance can vary independently, there are $2+M$ independent variables and the total differential of $U$ can be written
$\mathrm{d} U=T \mathrm{d} S-P \mathrm{d} V+\sum_{i=1}^{M} \mu_{i} \mathrm{d} n_{i}$
The coefficient $\mu_i$ is the chemical potential of substance $i$. We identify it as the partial derivative $\left(\partial U / \partial n_{i}\right)_{S, V, n_{j \neq i}}$.
• 12.1: Reversible and Irreversible Pathways
It is convenient to use the work of expansion to exemplify the difference between work that is done reversibly and that which is done irreversibly. The example of expansion against a constant external pressure is an example of an irreversible pathway. It does not mean that the gas cannot be re-compressed. It does, however, mean that there is a definite direction of spontaneous change at all points along the expansion.
12: Fundamental 10 - Processes
The most common example of work in the systems discussed in this book is the work of expansion. It is also convenient to use the work of expansion to exemplify the difference between work that is done reversibly and that which is done irreversibly. The example of expansion against a constant external pressure is an example of an irreversible pathway. It does not mean that the gas cannot be re-compressed. It does, however, mean that there is a definite direction of spontaneous change at all points along the expansion.
Imagine instead a case where the expansion has no spontaneous direction of change because there is no net force pushing the gas toward a larger or smaller volume. The only way this is possible is if the pressure of the expanding gas is the same as the external pressure resisting the expansion at all points along the expansion. With no net force pushing the change in one direction or the other, the change is said to be reversible or to occur reversibly. The work of a reversible expansion of an ideal gas is fairly easy to calculate.
If the gas expands reversibly, the external pressure ($P_{ex}$) can be replaced by a single value ($P$) which represents both the internal pressure of the gas and the external pressure.
$dw = -PdV$
or
$w = - \int P dV$
But now that the external pressure is not constant, $P$ cannot be extracted from the integral. Fortunately, however, there is a simple relationship that tells us how $P$ changes with changing $V$ – the equation of state! If the gas is assumed to be an ideal gas
$w = - \int P dV = -\int \left( \dfrac{nRT}{V}\right) dV$
Constant Temperature (Isothermal) Pathways
If the temperature is held constant (so that the expansion follows an isothermal pathway) the nRT term can be extracted from the integral.
$w = -nRT \int_{V_1}^{V_2} \dfrac{dV}{V} = -nRT \ln \left( \dfrac{V_2}{V_1} \right) \label{isothermal}$
Equation \ref{isothermal} is derived for ideal gases only; a van der Waal gas would result in a different version.
Example $1$: Gas Expansion
What is the work done by 1.00 mol an ideal gas expanding reversibly from a volume of 22.4 L to a volume of 44.8 L at a constant temperature of 273 K?
Solution:
Using Equation \ref{isothermal} to calculate this
\begin{align*} w & = -(1.00 \, \cancel{mol}) \left(8.314\, \dfrac{J}{\cancel{mol}\,\cancel{ K}}\right) (273\,\cancel{K}) \ln \left( \dfrac{44.8\,L}{22.4 \,L} \right) \nonumber \[4pt] & = -1570 \,J = -1.57 \;kJ \end{align*}
Note: A reversible expansion will always require more work than an irreversible expansion (such as an expansion against a constant external pressure) when the final states of the two expansions are the same!
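As a quick numerical check of Equation \ref{isothermal}, a few lines of Python reproduce the result of Example 1 (the values below are those given in the example):

```python
import math

n = 1.00              # mol
R = 8.314             # J/(mol K)
T = 273.0             # K
V1, V2 = 22.4, 44.8   # L; only the ratio enters the logarithm

w = -n * R * T * math.log(V2 / V1)
print(f"w = {w:.0f} J")   # w = -1573 J, about -1.57 kJ
```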
The work of expansion can be depicted graphically as the area under the P-V curve depicting the expansion. Comparing examples $1$ and $3.1.2$, for which the initial and final volumes were the same, and the constant external pressure of the irreversible expansion was the same as the final pressure of the reversible expansion, such a graph looks as follows.
The work is depicted as the shaded portion of the graph. It is clear to see that the reversible expansion (the work for which is shaded in both light and dark gray) exceeds that of the irreversible expansion (shaded in dark gray only) due to the changing pressure of the reversible expansion. In general, it will always be the case that the work generated by a reversible pathway connecting initial and final states will be the maximum work possible for the expansion.
It should be noted (although it will be proven in a later chapter) that $\Delta U$ for an isothermal reversible process involving only P-V work is 0 for an ideal gas. This is true because the internal energy, U, is a measure of a system’s capacity to convert energy into work. In order to do this, the system must somehow store that energy. The only mode in which an ideal gas can store this energy is in the translational kinetic energy of the molecules (otherwise, molecular collisions would not need to be elastic, which as you recall, was a postulate of the kinetic molecular theory!) And since the average kinetic energy is a function only of the temperature, it (and therefore $U$) can only change if there is a change in temperature. Hence, for any isothermal process for an ideal gas, $\Delta U=0$. And, perhaps just as usefully, for an isothermal process involving an ideal gas, $q = -w$, as any energy that is expended by doing work must be replaced with heat, lest the system temperature drop.
Constant Volume (Isochoric) Pathways
One common pathway which processes can follow is that of constant volume. This will happen if the volume of a sample is constrained by a great enough force that it simply cannot change. It is not uncommon to encounter such conditions with gases (since they are highly compressible anyhow) and also in geological formations, where the tremendous weight of a large mountain may force any processes occurring under it to happen at constant volume.
If reversible changes in which the only work that can be done is that of expansion (so-called P-V work) are considered, the following important result is obtained:
$dU = dq + dw = dq - PdV$
However, $dV = 0$ since the volume is constant! As such, $dU$ can be expressed only in terms of the heat that flows into or out of the system at constant volume
$dU = dq_v$
Recall that $dq$ can be found by
$dq = \left( \dfrac{\partial q}{\partial T} \right) dT = C\, dT \label{eq1}$
This suggests an important definition for the constant volume heat capacity ($C_V$) which is
$C_V \equiv \left( \dfrac{\partial U}{\partial T}\right)_V$
When Equation \ref{eq1} is integrated, the result is
$q = \int _{T_1}^{T_2} nC_V dT \label{isochoric}$
Example $2$: Isochoric Pathway
Consider 1.00 mol of an ideal gas with $C_V = 3/2 R$ that undergoes a temperature change from 125 K to 255 K at a constant volume of 10.0 L. Calculate $\Delta U$, $q$, and $w$ for this change.
Solution:
Since this is a constant volume process
$w = 0 \nonumber$
Equation \ref{isochoric} is applicable for an isochoric process,
$q = \int _{T_1}^{T_2} nC_V dT \nonumber$
Assuming $C_V$ is independent of temperature:
\begin{align*} q & = nC_V \int _{T_1}^{T_2} dT \[4pt] &= nC_V ( T_2-T_1) \[4pt] & = (1.00 \, mol) \left( \dfrac{3}{2} 8.314\, \dfrac{J}{mol \, K}\right) (255\, K - 125 \,K) \[4pt] & = 1620 \,J = 1.62\, kJ \end{align*}
Since this is a constant volume pathway,
\begin{align*} \Delta U & = q + \cancel{w} \ & = 1.62 \,kJ \end{align*}
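Treating $C_V$ as temperature-independent, the integral in Equation \ref{isochoric} reduces to $q = nC_V\Delta T$; a short Python check of Example 2:

```python
n = 1.00               # mol
R = 8.314              # J/(mol K)
Cv = 1.5 * R           # J/(mol K), molar C_V for the gas in Example 2
T1, T2 = 125.0, 255.0  # K

q = n * Cv * (T2 - T1)         # isochoric: w = 0, so dU = q
print(f"q = dU = {q:.0f} J")   # q = dU = 1621 J, about 1.62 kJ
```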
Constant Pressure (Isobaric) Pathways
Most laboratory-based chemistry occurs at constant pressure. Specifically, it is exposed to the constant air pressure of the laboratory, glove box, or other container in which reactions are taking place. For constant pressure changes, it is convenient to define a new thermodynamic quantity called enthalpy.
$H \equiv U+ pV \nonumber$
or
\begin{align*} dH &\equiv dU + d(pV) \[4pt] &= dU + pdV + Vdp \end{align*}
For reversible changes at constant pressure ($dp = 0$) for which only P-V work is done
\begin{align} dH & = dq + dw + pdV + Vdp \[4pt] & = dq - \cancel{pdV} + \cancel{pdV} + \cancelto{0}{Vdp} \ & = dq \label{heat} \end{align}
And just as in the case of constant volume changes, this implies an important definition for the constant pressure heat capacity
$C_p \equiv \left( \dfrac{\partial H}{\partial T} \right)_p$
Example $3$: Isobaric Gas Expansion
Consider 1.00 mol of an ideal gas with $C_p = 5/2 R$ that undergoes a temperature change from 125 K to 255 K at a constant pressure of 10.0 atm. Calculate $\Delta U$, $\Delta H$, $q$, and $w$ for this change.
Solution:
$q = \int_{T_1}^{T_2} nC_p dT \nonumber$
assuming $C_p$ is independent of temperature:
\begin{align*} q & = nC_p \int _{T_1}^{T_2} dT \ & = nC_p (T_2-T_1) \ & = (1.00 \, mol) \left( \dfrac{5}{2} 8.314 \dfrac{J}{mol \, K}\right) (255\, K - 125\, K) = 2700\, J = 2.70\, kJ \end{align*}
So via Equation \ref{heat} (specifically the integrated version of it using differences instead of differentials)
$\Delta H = q = 2.70 \,kJ \nonumber$
\begin{align*} \Delta U & = \Delta H - \Delta (pV) \ & = \Delta H -nR\Delta T \ & = 2700\, J - (1.00 \, mol) \left( 8.314\, \dfrac{J}{mol \, K}\right) (255\, K - 125 \,K) \ & = 1620 \,J = 1.62\, kJ \end{align*}
Now that $\Delta U$ and $q$ are determined, then work can be calculated
\begin{align*} w & =\Delta U -q \ & = 1.62\,kJ - 2.70\,kJ = -1.08\;kJ \end{align*}
It makes sense that $w$ is negative since this process is a gas expansion.
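The bookkeeping in Example 3 ($q = nC_p\Delta T$, $\Delta U = \Delta H - nR\Delta T$, $w = \Delta U - q$) is easy to verify in Python:

```python
n = 1.00               # mol
R = 8.314              # J/(mol K)
Cp = 2.5 * R           # J/(mol K), molar C_p for the gas in Example 3
T1, T2 = 125.0, 255.0  # K

q = n * Cp * (T2 - T1)          # isobaric heat = enthalpy change
dH = q
dU = dH - n * R * (T2 - T1)     # Delta(pV) = nR DeltaT for an ideal gas
w = dU - q                      # first law rearranged
print(f"q = dH = {q:.0f} J, dU = {dU:.0f} J, w = {w:.0f} J")
# q = dH = 2702 J, dU = 1621 J, w = -1081 J
```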
Example $4$: Isothermal Gas Expansion
Calculate $q$, $w$, $\Delta U$, and $\Delta H$ for 1.00 mol of an ideal gas expanding reversibly and isothermally at 273 K from a volume of 22.4 L and a pressure of 1.00 atm to a volume of 44.8 L and a pressure of 0.500 atm.
Solution
Since this is an isothermal expansion, Equation \ref{isothermal} is applicable
\begin{align*} w & = -nRT \ln \dfrac{V_2}{V_1} \ & = -(1.00 \, mol) \left( 8.314\, \dfrac{J}{mol \, K}\right) (273\, K) \ln \left(\dfrac{44.8\,L}{22.4\,L} \right) \ & = -1570\,J = -1.57\,kJ \[4pt] \Delta U & = q + w \ & = q - 1.57\,kJ \ & = 0 \[4pt] q &= +1.57\,kJ \end{align*}
Since this is an isothermal expansion
$\Delta H = \Delta U + \Delta (pV) = 0 + 0 \nonumber$
where $\Delta (pV) = 0$ due to Boyle’s Law!
Adiabatic Pathways
An adiabatic pathway is defined as one in which no heat is transferred ($q = 0$). Under these circumstances, if an ideal gas expands, it is doing work ($w < 0$) against the surroundings (provided the external pressure is not zero!) and as such the internal energy must drop ($\Delta U <0$). And since $\Delta U$ is negative, there must also be a decrease in the temperature ($\Delta T < 0$). How big will the decrease in temperature be and on what will it depend? The key to answering these questions comes in the solution to how we calculate the work done.
If the adiabatic expansion is reversible and done on an ideal gas,
$dw = -PdV$
and
$dw = dU = nC_vdT \label{Adiabate2}$
Equating these two terms yields
$- PdV = nC_v dT$
Using the ideal gas law to obtain an expression for $P$ ($P = nRT/V$)
$- \dfrac{nRT}{V} dV = nC_vdT$
And rearranging to gather the temperature terms on the right and volume terms on the left yields
$\dfrac{dV}{V} = -\dfrac{C_V}{R} \dfrac{dT}{T}$
This expression can be integrated on the left between $V_1$ and $V_2$ and on the right between $T_1$ and $T_2$. Assuming that $C_V/R$ is independent of temperature over the range of integration, it can be pulled out of the integral on the right.
$\int_{V_1}^{V_2} \dfrac{dV}{V} = -\dfrac{C_V}{R} \int_{T_1}^{T_2} \dfrac{dT}{T}$
The result is
$\ln \left(\dfrac{V_2}{V_1} \right) = - \dfrac{C_V}{R} \ln \left( \dfrac{T_2}{T_1} \right)$
or
$\left(\dfrac{V_2}{V_1} \right) = \left(\dfrac{T_2}{T_1} \right)^{- \frac{C_V}{R}}$
or
$V_1T_1^{\frac{C_V}{R}} = V_2T_2^{\frac{C_V}{R}}$
or
$T_1 \left(\dfrac{V_1}{V_2} \right)^{\frac{R} {C_V}} = T_2 \label{Eq4Alternative}$
Once $\Delta T$ is known, it is easy to calculate $w$, $\Delta U$ and $\Delta H$.
Example $5$:
1.00 mol of an ideal gas ($C_V = 3/2\, R$) initially occupies 22.4 L at 273 K. The gas expands adiabatically and reversibly to a final volume of 44.8 L. Calculate $\Delta T$, $q$, $w$, $\Delta U$, and $\Delta H$ for the expansion.
Solution
Since the pathway is adiabatic:
$q =0 \nonumber$
Using Equation \ref{Eq4Alternative}
\begin{align*} T_2 & = T_1 \left(\dfrac{V_1}{V_2} \right)^{\frac{R} {C_V}} \ & =(273\,K) \left( \dfrac{22.4\,L}{44.8\,L} \right)^{2/3} \ & = 172\,K \end{align*}
So
$\Delta T = 172\,K - 273\,K = -101\,K \nonumber$
For calculating work, we integrate Equation \ref{Adiabate2} to get
\begin{align*} w & = \Delta U = nC_v \Delta T \ & = (1.00 \, mol) \left(\dfrac{3}{2} 8.314\, \dfrac{J}{mol \, K} \right) (-101\,K ) \ & = -1260 \,J = -1.26\,kJ \end{align*}
\begin{align*} \Delta H & = \Delta U + nR\Delta T \ & = -1260\,J + (1.00 \, mol) \left(8.314\, \dfrac{J}{mol \, K} \right) (-101\,K ) \ & = -2100\,J \end{align*}
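The full chain of Example 5 (the adiabatic temperature drop followed by $w = \Delta U = nC_V\Delta T$ and $\Delta H = nC_p\Delta T$) can be sketched in Python:

```python
n = 1.00               # mol
R = 8.314              # J/(mol K)
Cv = 1.5 * R           # molar C_V; then C_p = C_V + R for an ideal gas
T1 = 273.0             # K
V1, V2 = 22.4, 44.8    # L

T2 = T1 * (V1 / V2) ** (R / Cv)   # reversible adiabatic relation
dT = T2 - T1
w = n * Cv * dT                   # = Delta U, since q = 0
dH = n * (Cv + R) * dT
print(f"T2 = {T2:.0f} K, w = dU = {w:.0f} J, dH = {dH:.0f} J")
# T2 = 172 K, w = dU = -1260 J, dH = -2100 J
```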
The following table shows recipes for calculating $q$, $w$, $\Delta U$, and $\Delta H$ for an ideal gas undergoing a reversible change along the specified pathway.
Table 3.2.1: Thermodynamics Properties for a Reversible Expansion or Compression
| Pathway | $q$ | $w$ | $\Delta U$ | $\Delta H$ |
|---|---|---|---|---|
| Isothermal | $nRT \ln (V_2/V_1)$ | $-nRT \ln (V_2/V_1)$ | 0 | 0 |
| Isochoric | $C_V \Delta T$ | 0 | $C_V \Delta T$ | $C_V \Delta T + V\Delta p$ |
| Isobaric | $C_p \Delta T$ | $- p\Delta V$ | $C_p \Delta T - p\Delta V$ | $C_p \Delta T$ |
| Adiabatic | 0 | $C_V \Delta T$ | $C_V \Delta T$ | $C_p \Delta T$ |
Contributors and Attributions
• Patrick E. Fleming (Department of Chemistry and Biochemistry; California State University, East Bay)
• 13.1: Carnot Cycle
The Carnot cycle has the greatest efficiency possible of an engine (although other cycles have the same efficiency) based on the assumption of the absence of incidental wasteful processes such as friction, and the assumption of no conduction of heat between different parts of the engine at different temperatures.
• 13.2: Entropy
In addition to learning that the efficiency of a Carnot engine depends only on the high and low temperatures, more interesting things can be derived through the exploration of this system.
13: Extension 10 - Cycles
In the early 19th century, steam engines came to play an increasingly important role in industry and transportation. However, a systematic set of theories of the conversion of thermal energy to motive power by steam engines had not yet been developed. Nicolas Léonard Sadi Carnot (1796-1832), a French military engineer, published Reflections on the Motive Power of Fire in 1824. The book proposed a generalized theory of heat engines, as well as an idealized model of a thermodynamic system for a heat engine that is now known as the Carnot cycle. Carnot developed the foundation of the second law of thermodynamics, and is often described as the "Father of thermodynamics."
The Carnot Cycle
The Carnot cycle consists of the following four processes:
1. A reversible isothermal gas expansion process. In this process, the ideal gas in the system absorbs an amount of heat $q_{in}$ from a heat source at a high temperature $T_{high}$, expands, and does work on the surroundings.
2. A reversible adiabatic gas expansion process. In this process, the system is thermally insulated. The gas continues to expand and do work on the surroundings, which causes the system to cool to a lower temperature, $T_{low}$.
3. A reversible isothermal gas compression process. In this process, the surroundings do work on the gas at $T_{low}$, causing a loss of heat, $q_{out}$.
4. A reversible adiabatic gas compression process. In this process, the system is thermally insulated. The surroundings continue to do work on the gas, which causes the temperature to rise back to $T_{high}$.
P-V Diagram
The P-V diagram of the Carnot cycle is shown in Figure $2$. In isothermal processes I and III, ∆U=0 because ∆T=0. In adiabatic processes II and IV, q=0. Work, heat, ∆U, and ∆H of each process in the Carnot cycle are summarized in Table $1$.
Table $1$: Work, heat, ∆U, and ∆H in the P-V diagram of the Carnot Cycle.
| Process | w | q | ΔU | ΔH |
|---|---|---|---|---|
| I | $-nRT_{high}\ln\left(\dfrac{V_{2}}{V_{1}}\right)$ | $nRT_{high}\ln\left(\dfrac{V_{2}}{V_{1}}\right)$ | 0 | 0 |
| II | $n\bar{C_{v}}(T_{low}-T_{high})$ | 0 | $n\bar{C_{v}}(T_{low}-T_{high})$ | $n\bar{C_{p}}(T_{low}-T_{high})$ |
| III | $-nRT_{low}\ln\left(\dfrac{V_{4}}{V_{3}}\right)$ | $nRT_{low}\ln\left(\dfrac{V_{4}}{V_{3}}\right)$ | 0 | 0 |
| IV | $n\bar{C_{v}}(T_{high}-T_{low})$ | 0 | $n\bar{C_{v}}(T_{high}-T_{low})$ | $n\bar{C_{p}}(T_{high}-T_{low})$ |
| Full Cycle | $-nRT_{high}\ln\left(\dfrac{V_{2}}{V_{1}}\right)-nRT_{low}\ln\left(\dfrac{V_{4}}{V_{3}}\right)$ | $nRT_{high}\ln\left(\dfrac{V_{2}}{V_{1}}\right)+nRT_{low}\ln\left(\dfrac{V_{4}}{V_{3}}\right)$ | 0 | 0 |
T-S Diagram
The T-S diagram of the Carnot cycle is shown in Figure $3$. In isothermal processes I and III, ∆T=0. In adiabatic processes II and IV, ∆S=0 because dq=0. ∆T and ∆S of each process in the Carnot cycle are shown in Table $2$.
Table $2$: ∆T and ∆S of each process in the Carnot cycle.

| Process | ΔT | ΔS |
|---|---|---|
| I | 0 | $nR\ln\left(\dfrac{V_{2}}{V_{1}}\right)$ |
| II | $T_{low}-T_{high}$ | 0 |
| III | 0 | $nR\ln\left(\dfrac{V_{4}}{V_{3}}\right)$ |
| IV | $T_{high}-T_{low}$ | 0 |
| Full Cycle | 0 | 0 |
Efficiency
The Carnot cycle is the most efficient engine possible based on the assumption of the absence of incidental wasteful processes such as friction, and the assumption of no conduction of heat between different parts of the engine at different temperatures. The efficiency of the Carnot engine is defined as the ratio of the energy output to the energy input.
\begin{align*} \text{efficiency} &=\dfrac{\text{net work done by heat engine}}{\text{heat absorbed by heat engine}} =\dfrac{-w_{sys}}{q_{high}} \[4pt] &=\dfrac{nRT_{high}\ln\left(\dfrac{V_{2}}{V_{1}}\right)+nRT_{low}\ln \left(\dfrac{V_{4}}{V_{3}}\right)}{nRT_{high}\ln\left(\dfrac{V_{2}}{V_{1}}\right)} \end{align*}
Since processes II (2-3) and IV (4-1) are adiabatic,
$\left(\dfrac{T_{2}}{T_{3}}\right)^{C_{V}/R}=\dfrac{V_{3}}{V_{2}}$
and
$\left(\dfrac{T_{1}}{T_{4}}\right)^{C_{V}/R}=\dfrac{V_{4}}{V_{1}}$
And since T1 = T2 and T3 = T4,
$\dfrac{V_{3}}{V_{4}}=\dfrac{V_{2}}{V_{1}}$
Therefore,
$\text{efficiency}=\dfrac{nRT_{high}\ln\left(\dfrac{V_{2}}{V_{1}}\right)-nRT_{low}\ln\left(\dfrac{V_{2}}{V_{1}}\right)}{nRT_{high}\ln\left(\dfrac{V_{2}}{V_{1}}\right)}$
$\boxed{\text{efficiency}=\dfrac{T_{high}-T_{low}}{T_{high}}}$
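The boxed efficiency depends only on the two reservoir temperatures, which makes it simple to explore numerically. A small Python sketch (the reservoir temperatures chosen are arbitrary examples):

```python
def carnot_efficiency(T_high, T_low):
    """Maximum (Carnot) efficiency for reservoirs at T_high and T_low (in K)."""
    return (T_high - T_low) / T_high

def required_T_high(efficiency, T_low):
    """Hot-reservoir temperature needed to reach a target efficiency."""
    return T_low / (1.0 - efficiency)

print(carnot_efficiency(500.0, 300.0))          # 0.4
print(round(required_T_high(0.40, 298.0), 1))   # 496.7
```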
Summary
The Carnot cycle has the greatest efficiency possible of an engine (although other cycles have the same efficiency) based on the assumption of the absence of incidental wasteful processes such as friction, and the assumption of no conduction of heat between different parts of the engine at different temperatures.
Problems
1. You are now operating a Carnot engine at 40% efficiency, which exhausts heat into a heat sink at 298 K. If you want to increase the efficiency of the engine to 65%, to what temperature would you have to raise the heat reservoir?
2. A Carnot engine absorbed 1.0 kJ of heat at 300 K, and exhausted 400 J of heat at the end of the cycle. What is the temperature at the end of the cycle?
3. An indoor heater operating on the Carnot cycle is warming the house up at a rate of 30 kJ/s to maintain the indoor temperature at 72 ºF. What is the power operating the heater if the outdoor temperature is 30 ºF?
13.02: Entropy
In addition to learning that the efficiency of a Carnot engine depends only on the high and low temperatures, more interesting things can be derived through the exploration of this system. For example, consider the total heat transferred in the cycle:
$q_{tot} = nRT_h \ln \left( \dfrac{V_2}{V_1} \right) + nRT_l \ln \left( \dfrac{V_4}{V_3} \right) \nonumber$
Making the substitution
$\dfrac{V_2}{V_1} = \dfrac{V_3}{V_4} \nonumber$
the total heat flow can be seen to be given by
$q_{tot} = nRT_h \ln \left( \dfrac{V_2}{V_1} \right) - nRT_l \ln \left( \dfrac{V_2}{V_1} \right)$
It is clear that the two terms do not have the same magnitude, unless $T_h = T_l$. This is sufficient to show that $q$ is not a state function, since its net change around a closed cycle is not zero (as the net change of any state function must be). However, consider what happens when the sum of $q/T$ is considered:
\begin{align*} \sum \dfrac{q}{T} &= \dfrac{nR \cancel{T_h} \ln \left( \dfrac{V_2}{V_1} \right)}{\cancel{T_h}} - \dfrac{nR \cancel{T_l} \ln \left( \dfrac{V_2}{V_1} \right)}{ \cancel{T_l}} \[4pt] &= nR \ln \left( \dfrac{V_2}{V_1} \right) - nR \ln \left( \dfrac{V_2}{V_1} \right) \[4pt] & = 0 \end{align*}
This is the behavior expected for a state function! It leads to the definition of entropy in differential form,
$dS \equiv \dfrac{dq_{rev}}{T}$
In general, $dq_{rev}$ will be larger than $dq$ (since the reversible pathway defines the maximum heat flow.) So, it is easy to calculate entropy changes, as one needs only to define a reversible pathway that connects the initial and final states, and then integrate $dq/T$ over that pathway. And since $\Delta S$ is defined using $q$ for a reversible pathway, $\Delta S$ is independent of the actual path a system follows to undergo a change.
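Because $\Delta S$ is computed along a reversible path, the entropy change for the isothermal expansion used throughout this chapter is just $q_{rev}/T = nR\ln(V_2/V_1)$; in Python:

```python
import math

n = 1.00              # mol
R = 8.314             # J/(mol K)
T = 273.0             # K
V1, V2 = 22.4, 44.8   # L

q_rev = n * R * T * math.log(V2 / V1)   # heat absorbed along the reversible path
dS = q_rev / T                          # the same for ANY path between these states
print(f"dS = {dS:.2f} J/K")             # dS = 5.76 J/K
```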
Contributors
• Patrick E. Fleming (Department of Chemistry and Biochemistry; California State University, East Bay)
14.01: Helmholtz Energy
• 14.1: Helmholtz Energy
We have answered the question of what entropy is, but we still do not have a general criterion for spontaneity, just one that works in an isolated system. Let's fix that now. It leads to two new state functions that prove to be among the most useful in thermodynamics.
14: Fundamental 11 - Boundary Changes
We have answered the question of what entropy is, but we still do not have a general criterion for spontaneity, just one that works in an isolated system. We will consider what happens when we hold volume and temperature constant. As discussed previously, the expression for the change in internal energy:
$dU = TdS -PdV \nonumber$
is only valid for reversible changes. Let us consider a spontaneous change. If we assume constant volume, the $-PdV$ work term drops out. From the Clausius inequality $dS \ge \dfrac{δq}{T}$ we get:
$\underset{ \text{constant V} }{dU \le TdS } \nonumber$
$\underset{ \text{constant V} }{ dU-TdS \le 0 } \nonumber$
Consider a new state function, Helmholtz energy, A:
$A ≡ U -TS \nonumber$
$dA = dU -TdS - SdT \label{diff1}$
If we also set $T$ constant, we see that Equation $\ref{diff1}$ becomes
$\underset{ \text{constant V and T} }{ dA=dU-TdS \le 0 } \nonumber$
This means that the Helmholtz energy, $A$, is a decreasing quantity for spontaneous processes (regardless of isolation!) when $T$ and $V$ are held constant. $A$ becomes constant once a reversible equilibrium is reached.
Example 22.1.1 : What A stands for
A good example is the case of the mixing of two gases. Let's assume isothermal conditions and keep the total volume constant. For this process, $\Delta U$ is zero (isothermal, ideal gas), but the molar entropy of mixing is
$\Delta S_{molar} = -y_1R\ln y_1-y_2 R \ln y_2 \nonumber$
This means that
$\Delta A_{molar} = RT (y_1\ln y_1+y_2\ln y_2). \nonumber$
This is a negative quantity because the mole fractions are smaller than unity. So, indeed, this spontaneous process has a negative $\Delta A$. If we look at $\Delta A = \Delta U - T\Delta S$, we should see that the latter term is the same thing as $-q_{rev}$. So we have:
$\Delta A = \Delta U - q_{rev} = w_{rev} \nonumber$
This, however, is the maximum work that a system is able to produce, and so the Helmholtz energy is a direct measure of how much work one can get out of a system. $A$ is therefore often called the Helmholtz free energy. Interestingly, this work cannot be volume work, as the volume is constant; it stands for the maximum other work (e.g. electrical work) that can be obtained under the rather restrictive condition that volume is constant.
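The mixing example can be made concrete with a couple of lines of Python; the equimolar composition and the temperature of 298 K below are chosen purely for illustration:

```python
import math

R = 8.314      # J/(mol K)
T = 298.0      # K
y1 = 0.5
y2 = 1.0 - y1

# Delta A = Delta U - T Delta S = 0 + RT(y1 ln y1 + y2 ln y2) per mole of mixture
dA_molar = R * T * (y1 * math.log(y1) + y2 * math.log(y2))   # J/mol
print(f"dA = {dA_molar:.0f} J/mol")   # dA = -1717 J/mol: negative, so spontaneous
```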
Natural variables of A
Because $A≡ U-TS$ we can write
$dA = dU -TdS -SdT \nonumber$
$dA = TdS -PdV -TdS -SdT = -PdV - SdT \nonumber$
The natural variables of $A$ are volume $V$ and temperature $T$.
• 15.1: Differential Forms of Fundamental Equations
The fundamental thermodynamic equations follow from five primary thermodynamic definitions and describe internal energy, enthalpy, Helmholtz energy, and Gibbs energy in terms of their natural variables. Here they will be presented in their differential forms.
15: Extension 11 - Legendre Transforms
The fundamental thermodynamic equations follow from five primary thermodynamic definitions and describe internal energy, enthalpy, Helmholtz energy, and Gibbs energy in terms of their natural variables. Here they will be presented in their differential forms.
Introduction
The fundamental thermodynamic equations describe the thermodynamic quantities U, H, G, and A in terms of their natural variables. The term "natural variable" simply denotes a variable that is one of the convenient variables to describe U, H, G, or A. When considered as a whole, the four fundamental equations demonstrate how four important thermodynamic quantities depend on variables that can be controlled and measured experimentally. Thus, they are essentially equations of state, and using the fundamental equations, experimental data can be used to determine sought-after quantities like $G$ or $H$.
First Law of Thermodynamics
The first law of thermodynamics is represented below in its differential form
$dU = dq+dw$
where
• $U$ is the internal energy of the system,
• $q$ is heat flow of the system, and
• $w$ is the work of the system.
Recall that $U$ is a state function, while $q$ and $w$ are path functions. The first law states that internal energy changes occur only as a result of heat flow and work done.
It is assumed that w refers only to PV work, where
$w = -\int{PdV}$
The Principle of Clausius
The Principle of Clausius states that the entropy change of a system is equal to the ratio of heat flow in a reversible process to the temperature at which the process occurs. Mathematically this is written as
$dS = \dfrac{dq_{rev}}{T}$
where
• $S$ is the entropy of the system,
• $q_{rev}$ is the heat flow of a reversible process, and
• $T$ is the temperature in Kelvin.
Internal Energy
The fundamental thermodynamic equation for internal energy follows directly from the first law and the principle of Clausius:
$dU = dq + dw$
$dS = \dfrac{dq_{rev}}{T}$
we have
$dU = TdS + dw$
Since only $PV$ work is performed,
$dU = TdS - PdV \label{DefU}$
The above equation is the fundamental equation for $U$ with natural variables of entropy $S$ and volume $V$.
Enthalpy
Mathematically, enthalpy is defined as
$H = U + PV \label{DefEnth}$
where $H$ is enthalpy of the system, $P$ is pressure, and $V$ is volume. The fundamental thermodynamic equation for enthalpy follows directly from its definition (Equation $\ref{DefEnth}$) and the fundamental equation for internal energy (Equation $\ref{DefU}$):
$dH = dU + d(PV)$
$dH = dU + PdV + VdP$
Because $dU = TdS - PdV$, the enthalpy equation becomes:
$dH = TdS - PdV + PdV + VdP$
$dH = TdS + VdP$
The above equation is the fundamental equation for H. The natural variables of enthalpy are S and P, entropy and pressure.
Gibbs Energy
The mathematical description of Gibbs energy is as follows
$G = U + PV - TS = H - TS \label{Defgibbs}$
where $G$ is the Gibbs energy of the system. The fundamental thermodynamic equation for Gibbs energy follows directly from its definition (Equation $\ref{Defgibbs}$) and the fundamental equation for enthalpy:
$dG = dH - d(TS)$
$dG = dH - TdS - SdT$
Since $dH = TdS + VdP$,
$dG = TdS + VdP - TdS - SdT$
$dG = VdP - SdT$
The above equation is the fundamental equation for G. The natural variables of Gibbs energy are P and T, pressure and temperature.
Helmholtz Energy
Mathematically, Helmholtz energy is defined as
$A = U - TS \label{DefHelm}$
where $A$ is the Helmholtz energy of the system, which sometimes also written as the symbol $F$. The fundamental thermodynamic equation for Helmholtz energy follows directly from its definition (Equation $\ref{DefHelm}$) and the fundamental equation for internal energy (Equation $\ref{DefU}$):
$dA = dU - d(TS)$
$dA = dU - TdS - SdT$
Since $dU = TdS - PdV$,
$dA = TdS - PdV -TdS - SdT$
$dA = -PdV - SdT$
The above equation is the fundamental equation for A with natural variables of $V$ and $T$.
Importance/Relevance of Fundamental Equations
The differential fundamental equations describe U, H, G, and A in terms of their natural variables. The natural variables become useful in understanding not only how thermodynamic quantities are related to each other, but also in analyzing relationships between measurable quantities (i.e. P, V, T) in order to learn about the thermodynamics of a system. Below is a table summarizing the natural variables for U, H, G, and A:
| Thermodynamic Quantity | Natural Variables |
|---|---|
| U (internal energy) | S, V |
| H (enthalpy) | S, P |
| G (Gibbs energy) | T, P |
| A (Helmholtz energy) | T, V |
For these definitions to hold, it is assumed that only PV work is done and that only reversible processes are used. These assumptions are required for the first law and the principle of Clausius to remain valid. Also, these equations do not include $n$, the number of moles, as a variable. When $n$ is included, the equations appear different, but the essence of their meaning is captured without including the $n$-dependence.
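The natural-variable structure can be checked numerically. The sketch below verifies, by central finite differences, that $(\partial G/\partial P)_T = V$ and $(\partial G/\partial T)_P = -S$ for an ideal gas, as required by $dG = VdP - SdT$. The entropy expression used is an ideal-gas form with additive constants dropped (an assumption made purely for illustration; only the derivatives matter here):

```python
import math

n, R = 1.0, 8.314
Cp = 2.5 * R   # monatomic ideal gas assumed

def V(T, P):  return n * R * T / P
def S(T, P):  return n * (Cp * math.log(T) - R * math.log(P))   # constants dropped
def H(T):     return n * Cp * T
def G(T, P):  return H(T) - T * S(T, P)

T0, P0, h = 300.0, 1.0e5, 1e-3

dG_dP = (G(T0, P0 + h) - G(T0, P0 - h)) / (2 * h)   # should equal V(T0, P0)
dG_dT = (G(T0 + h, P0) - G(T0 - h, P0)) / (2 * h)   # should equal -S(T0, P0)

print(abs(dG_dP - V(T0, P0)) < 1e-6)    # True
print(abs(dG_dT + S(T0, P0)) < 1e-6)    # True
```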
Problems
1. If the assumptions made in the derivations above were not made, what would effect would that have? Try to think of examples were these assumptions would be violated. Could the definitions, principles, and laws used to derive the fundamental equations still be used? Why or why not?
2. For what kind of system does the number of moles not change? This said, do the fundamental equations without n-dependence apply to a wide range of processes and systems?
3. Derive the Maxwell Relations.
4. Derive the expression
$\left (\dfrac{\partial H}{\partial P} \right)_{T,n} = -T \left(\dfrac{\partial V}{\partial T} \right)_{P,n} +V$
Then apply this equation to an ideal gas. Does the result seem reasonable?
5. Using the definition of Gibbs energy and the conditions observed at phase equilibria, derive the Clapeyron equation.
Answers
1. If it was not assumed that PV-work was the only work done, then the work term in the second law of thermodynamics equation would include other terms (e.g. for electrical work, mechanical work). If reversible processes were not assumed, the Principle of Clausius could not be used. Examples of such situations could be the movement of charged particles towards a region of like charge (electrical work) or an irreversible process like the combustion of hydrocarbons or friction.
2. In general, a closed system of non-reacting components would fit this description. For example, the number of moles would not change for a closed system in which a gas is sealed (to prevent leaks) in a container and allowed to expand/is contracted.
3. See the Maxwell Relations section.
4. $(\dfrac{\partial H}{\partial P})_{T,n} = 0$ for an ideal gas. Since there are no interactions between ideal gas molecules, changing the pressure will not involve the formation or breaking of any intermolecular interactions or bonds.
5. See the third outside link.
Contributors
• Andreana Rosnik, Hope College
• 16.1: Expressions for Heat Capacity
• 16.2: The Third Law of Thermodynamics
A pure, perfectly crystalline substance at absolute zero may be described by a single microstate, as its purity, perfect crystallinity and complete lack of motion (at least classically; quantum mechanics argues for constant motion) means there is but one possible location for each identical atom or molecule comprising the crystal (Ω=1). According to the Boltzmann equation, the entropy of this system is zero.
16: Fundamental 12 - Laboratory Conditions
The heat capacity of a closed system is defined as the ratio of an infinitesimal quantity of heat transferred across the boundary under specified conditions and the resulting infinitesimal temperature change: ${\text {heat capacity}} \stackrel{\text { def }}{=} \mathrm{d} q / \mathrm{d} T$. The heat capacities of isochoric (constant volume) and isobaric (constant pressure) processes are of particular interest.
The heat capacity at constant volume, $C_V$, is the ratio $\mathrm{d} q/\mathrm{d} T$ for a process in a closed constant-volume system with no nonexpansion work—that is, no work at all. The first law shows that under these conditions the internal energy change equals the heat: $\mathrm{d} U=\mathrm{d} q$. We can replace $\mathrm{d} q$ by $\mathrm{d} U$ and write $C_V$ as a partial derivative:
$C_{V}=\left(\frac{\partial U}{\partial T}\right)_{V}\label{1}$
If the closed system has more than two independent variables, additional conditions are needed to define $C_V$ unambiguously. For instance, if the system is a gas mixture in which reaction can occur, we might specify that the system remains in reaction equilibrium as $T$ changes at constant $V$.
Equation $\ref{1}$ does not require the condition $\mathrm{d} w'=0$ (no nonexpansion work), because all quantities appearing in the equation are state functions whose relations to one another are fixed by the nature of the system and not by the path. Thus, if heat transfer into the system at constant $V$ causes $U$ to increase at a certain rate with respect to $T$, and this rate is defined as $C_V$, the performance of electrical work on the system at constant $V$ will cause the same rate of increase of $U$ with respect to $T$ and can equally well be used to evaluate $C_V$.
Note that $C_V$ is a state function whose value depends on the state of the system—that is, on $T$, $V$, and any additional independent variables. $C_V$ is an extensive property: the combination of two identical phases has twice the value of $C_V$ that one of the phases has by itself.
For a phase containing a pure substance, the molar heat capacity at constant volume is defined by $\overline{C_{V}} \stackrel{\text { def }}{=} C_{V} / n$. $\overline{C_{V}}$ is an intensive property.
If the system is an ideal gas, its internal energy depends only on $T$, regardless of whether $V$ is constant, and $\ref{1}$ can be simplified to
$C_{V}=\frac{\mathrm{d} U}{\mathrm{d} T}\label{2}$
Thus the internal energy change of an ideal gas is given by $\mathrm{d} U=C_{V} \mathrm{d} T$, as mentioned earlier.
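Because $C_V$ of an ideal gas is independent of volume, $\mathrm{d}U = C_V\,\mathrm{d}T$ integrates directly whenever $C_V$ is roughly constant over the temperature range. A minimal numeric sketch in Python (the monatomic value $C_{V,m} = \tfrac{3}{2}R$ is an assumption for illustration; the text does not fix a particular $C_V$):

```python
R = 8.314  # molar gas constant, J/(mol K)

def delta_U_ideal(n, Cv_molar, T1, T2):
    """Integrate dU = Cv dT for an ideal gas with a temperature-independent Cv."""
    return n * Cv_molar * (T2 - T1)

# Monatomic ideal gas: Cv,m = (3/2) R -- an assumed value for illustration
dU = delta_U_ideal(n=1.0, Cv_molar=1.5 * R, T1=300.0, T2=350.0)
print(f"dU = {dU} J")
```

For this constant-$C_V$ case, warming one mole of a monatomic gas from 300 K to 350 K gives $\Delta U = nC_{V,m}\Delta T \approx 624$ J.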
The heat capacity at constant pressure, $C_P$, is the ratio $\mathrm{d} q/\mathrm{d} T$ for a process in a closed system with a constant, uniform pressure and with expansion work only. Under these conditions, the heat $\mathrm{d} q$ is equal to the enthalpy change $\mathrm{d} H$, and we obtain a relation analogous to $\ref{1}$:
$C_{P}=\left(\frac{\partial H}{\partial T}\right)_{p}\label{3}$
$C_P$ is an extensive state function. For a phase containing a pure substance, the molar heat capacity at constant pressure is $\overline{C_{P}} \stackrel{\text { def }}{=} C_{P} / n$, an intensive property.
Since the enthalpy of a fixed amount of an ideal gas depends only on $T$, we can write a relation analogous to $\ref{2}$:
$C_{P}=\frac{\mathrm{d} H}{\mathrm{d} T}\label{4}$
Learning Objectives
• The absolute entropy of a pure substance at a given temperature is the sum of all the entropy it would acquire on warming from absolute zero (where $S=0$) to the particular temperature.
• Calculate entropy changes for phase transitions and chemical reactions under standard conditions
The atoms, molecules, or ions that compose a chemical system can undergo several types of molecular motion, including translation, rotation, and vibration (Figure $1$). The greater the molecular motion of a system, the greater the number of possible microstates and the higher the entropy. A perfectly ordered system with only a single microstate available to it would have an entropy of zero. The only system that meets this criterion is a perfect crystal at a temperature of absolute zero (0 K), in which each component atom, molecule, or ion is fixed in place within a crystal lattice and exhibits no motion (ignoring quantum zero point motion).
This system may be described by a single microstate, as its purity, perfect crystallinity and complete lack of motion (at least classically, quantum mechanics argues for constant motion) means there is but one possible location for each identical atom or molecule comprising the crystal ($W = 1$). According to the Boltzmann equation, the entropy of this system is zero.
\begin{align*} S &= k\ln W \\[4pt] &= k\ln(1) \\[4pt] &= 0 \end{align*}
In practice, absolute zero is an ideal temperature that is unobtainable, and a perfect single crystal is also an ideal that cannot be achieved. Nonetheless, the combination of these two ideals constitutes the basis for the third law of thermodynamics: the entropy of any perfectly ordered, crystalline substance at absolute zero is zero.
Definition: Third Law of Thermodynamics
The entropy of a pure, perfect crystalline substance at 0 K is zero.
The third law of thermodynamics has two important consequences: it defines the sign of the entropy of any substance at temperatures above absolute zero as positive, and it provides a fixed reference point that allows us to measure the absolute entropy of any substance at any temperature. In this section, we examine two different ways to calculate ΔS for a reaction or a physical change. The first, based on the definition of absolute entropy provided by the third law of thermodynamics, uses tabulated values of absolute entropies of substances. The second, based on the fact that entropy is a state function, uses a thermodynamic cycle similar to those discussed previously.
Standard-State Entropies
One way of calculating $ΔS$ for a reaction is to use tabulated values of the standard molar entropy ($\overline{S}^o$), which is the entropy of 1 mol of a substance under standard pressure (1 bar). The standard molar entropy is often given at 298 K and is then denoted $\overline{S}^o_{298}$. The units of $\overline{S}^o$ are J/(mol•K). Unlike enthalpy or internal energy, it is possible to obtain absolute entropy values by measuring the entropy change that occurs between the reference point of 0 K (corresponding to $\overline{S} = 0$) and 298 K (Tables T1 and T2).
As shown in Table $1$, for substances with approximately the same molar mass and number of atoms, $\overline{S}^o$ values fall in the order
$\overline{S}^o(\text{gas}) \gg \overline{S}^o(\text{liquid}) > \overline{S}^o(\text{solid}).$
For instance, $\overline{S}^o$ for liquid water is 70.0 J/(mol•K), whereas $\overline{S}^o$ for water vapor is 188.8 J/(mol•K). Likewise, $\overline{S}^o$ is 260.7 J/(mol•K) for gaseous $\ce{I2}$ and 116.1 J/(mol•K) for solid $\ce{I2}$. This order makes qualitative sense based on the kinds and extents of motion available to atoms and molecules in the three phases (Figure $1$). The correlation between physical state and absolute entropy is illustrated in Figure $2$, which is a generalized plot of the entropy of a substance versus temperature.
The Third Law Lets us Calculate Absolute Entropies
The absolute entropy of a substance at any temperature above 0 K must be determined by calculating the increments of heat $q$ required to bring the substance from 0 K to the temperature of interest, and then summing the ratios $q/T$. Two kinds of experimental measurements are needed:
1. The enthalpies associated with any phase changes the substance may undergo within the temperature range of interest. Melting of a solid and vaporization of a liquid correspond to sizeable increases in the number of microstates available to accept thermal energy, so as these processes occur, energy will flow into a system, filling these new microstates to the extent required to maintain a constant temperature (the freezing or boiling point); these inflows of thermal energy correspond to the heats of fusion and vaporization. The entropy increase associated with a phase transition at its transition temperature $T_{trans}$ is $\dfrac{ΔH_{trans}}{T_{trans}}$; for melting, for example, it is $\dfrac{ΔH_{fusion}}{T_f}$.
2. The heat capacity $C$ of a phase expresses the quantity of heat required to change the temperature by a small amount $ΔT$, or more precisely, by an infinitesimal amount $dT$. Thus the entropy increase brought about by warming a substance over a range of temperatures that does not encompass a phase transition is given by the sum of the quantities $C \frac{dT}{T}$ for each increment of temperature $dT$. This is of course just the integral
$S_{0 \rightarrow T} = \int _{0}^{T} \dfrac{C_p}{T} dT \label{eq20}$
Because the heat capacity is itself slightly temperature dependent, the most precise determinations of absolute entropies require that the functional dependence of $C$ on $T$ be used in the integral in Equation \ref{eq20}, i.e.,:
$S_{0 \rightarrow T} = \int _{0}^{T} \dfrac{C_p(T)}{T} dT. \label{eq21}$
When this is not known, one can take a series of heat capacity measurements over narrow temperature increments $ΔT$ and measure the area under each section of the curve. The area under each section of the plot represents the entropy change associated with heating the substance through an interval $ΔT$. To this must be added the enthalpies of melting, vaporization, and of any solid-solid phase changes.
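These steps can be sketched numerically. In the hedged Python illustration below, the heat-capacity data are invented (a constant $C_p$ of 25 J/(mol·K) from 15 K upward, a Debye $T^3$ extrapolation below 15 K, and no phase transitions in the range are all assumptions); real determinations use measured, temperature-dependent $C_p$ values:

```python
import math

def debye_extrapolation(Cp_T0):
    """Debye T^3 law: below ~15 K, Cp ≈ a*T^3, so S(T0) = ∫0..T0 a*T^2 dT = Cp(T0)/3."""
    return Cp_T0 / 3.0

def entropy_from_Cp(Ts, Cps):
    """Trapezoidal estimate of ∫ Cp/T dT over tabulated (T, Cp) points."""
    total = 0.0
    for i in range(len(Ts) - 1):
        f1, f2 = Cps[i] / Ts[i], Cps[i + 1] / Ts[i + 1]
        total += 0.5 * (f1 + f2) * (Ts[i + 1] - Ts[i])
    return total

# Invented data: a solid with constant Cp = 25.0 J/(mol K) from 15 K to 298.15 K
# and no phase transitions in this range (both assumptions, for illustration).
N = 500
Ts = [15.0 + (298.15 - 15.0) * i / N for i in range(N + 1)]
Cps = [25.0] * len(Ts)

S_abs = debye_extrapolation(Cps[0]) + entropy_from_Cp(Ts, Cps)
print(f"S(298.15 K) ≈ {S_abs:.1f} J/(mol K)")
```

For a constant $C_p$, the integral has the closed form $C_p \ln(T_2/T_1)$, which the trapezoidal sum reproduces closely; any heats of fusion, vaporization, or solid–solid transitions would be added as separate $ΔH/T$ terms.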
Values of $C_p$ for temperatures near zero are not measured directly, but can be estimated from quantum theory. The cumulative areas from 0 K to any given temperature (Figure $3$) are then plotted as a function of $T$, and any phase-change entropies such as
$\Delta S_{vap} = \dfrac{\Delta H_{vap}}{T_b}$
are added to obtain the absolute entropy at temperature $T$. As shown in Figure $2$ above, the entropy of a substance increases with temperature, and it does so for two reasons:
• As the temperature rises, more microstates become accessible, allowing thermal energy to be more widely dispersed. This is reflected in the gradual increase of entropy with temperature.
• The molecules of solids, liquids, and gases have increasingly greater freedom to move around, facilitating the spreading and sharing of thermal energy. Phase changes are therefore accompanied by a massive and discontinuous increase in entropy.
Calculating $\Delta S_{sys}$
We can make careful calorimetric measurements to determine the temperature dependence of a substance’s entropy and to derive absolute entropy values under specific conditions. Standard molar entropies are given the label $\overline{S}^o_{298}$ for values determined for one mole of substance at a pressure of 1 bar and a temperature of 298 K. The standard entropy change ($ΔS^o$) for any process may be computed from the standard molar entropies of its reactant and product species as follows:
$ΔS^o=\sum ν\overline{S}^o_{298}(\ce{products})−\sum ν\overline{S}^o_{298}(\ce{reactants})$
Here, $ν$ represents stoichiometric coefficients in the balanced equation representing the process. For example, $ΔS^o$ for the following reaction at room temperature
$m\ce{A}+n\ce{B}⟶x\ce{C}+y\ce{D}$
is computed as the following:
$ΔS^o=[x\overline{S}^o_{298}(\ce{C})+y\overline{S}^o_{298}(\ce{D})]−[m\overline{S}^o_{298}(\ce{A})+n\overline{S}^o_{298}(\ce{B})]$
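The “products minus reactants” sum is easy to encode as a small helper function. The sketch below uses the $\overline{S}^o$ values from Table $1$ and the isooctane combustion treated later in Example $1$:

```python
# Standard molar entropies at 298 K in J/(mol K), taken from Table 1
S298 = {"CO2(g)": 213.8, "H2O(g)": 188.8, "C8H18(l)": 329.3, "O2(g)": 205.2}

def delta_S(products, reactants):
    """ΔS° = Σ ν S°(products) − Σ ν S°(reactants); ν are stoichiometric coefficients."""
    side = lambda d: sum(nu * S298[sp] for sp, nu in d.items())
    return side(products) - side(reactants)

# C8H18(l) + 25/2 O2(g) -> 8 CO2(g) + 9 H2O(g)
dS = delta_S({"CO2(g)": 8, "H2O(g)": 9}, {"C8H18(l)": 1, "O2(g)": 12.5})
print(f"ΔS° = {dS:.1f} J/K")
```

The printed value reproduces the 515.3 J/K obtained by hand in Example $1$.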
Table $1$ lists some standard molar entropies at 298.15 K. You can find additional standard molar entropies in Tables T1 and T2
Table $1$: Standard Molar Entropy Values of Selected Substances at 25°C
| Gases | $\overline{S}^o$ [J/(mol•K)] | Liquids | $\overline{S}^o$ [J/(mol•K)] | Solids | $\overline{S}^o$ [J/(mol•K)] |
|---|---|---|---|---|---|
| He | 126.2 | H2O | 70.0 | C (diamond) | 2.4 |
| H2 | 130.7 | CH3OH | 126.8 | C (graphite) | 5.7 |
| Ne | 146.3 | Br2 | 152.2 | LiF | 35.7 |
| Ar | 154.8 | CH3CH2OH | 160.7 | SiO2 (quartz) | 41.5 |
| Kr | 164.1 | C6H6 | 173.4 | Ca | 41.6 |
| Xe | 169.7 | CH3COCl | 200.8 | Na | 51.3 |
| H2O | 188.8 | C6H12 (cyclohexane) | 204.4 | MgF2 | 57.2 |
| N2 | 191.6 | C8H18 (isooctane) | 329.3 | K | 64.7 |
| O2 | 205.2 | | | NaCl | 72.1 |
| CO2 | 213.8 | | | KCl | 82.6 |
| I2 | 260.7 | | | I2 | 116.1 |
A closer examination of Table $1$ also reveals that substances with similar molecular structures tend to have similar $\overline{S}^o$ values. Among crystalline materials, those with the lowest entropies tend to be rigid crystals composed of small atoms linked by strong, highly directional bonds, such as diamond ($\overline{S}^o = 2.4 \,J/(mol•K)$). In contrast, graphite, the softer, less rigid allotrope of carbon, has a higher $\overline{S}^o$ (5.7 J/(mol•K)) due to more disorder (microstates) in the crystal. Soft crystalline substances and those with larger atoms tend to have higher entropies because of increased molecular motion and disorder. Similarly, the absolute entropy of a substance tends to increase with increasing molecular complexity because the number of available microstates increases with molecular complexity. For example, compare the $\overline{S}^o$ values for CH3OH(l) and CH3CH2OH(l). Finally, substances with strong hydrogen bonds have lower values of $\overline{S}^o$, which reflects a more ordered structure.
Entropy increases with softer, less rigid solids, solids that contain larger atoms, and solids with complex molecular structures.
To calculate $ΔS^o$ for a chemical reaction from standard molar entropies, we use the familiar “products minus reactants” rule, in which the absolute molar entropy of each reactant and product is multiplied by its stoichiometric coefficient in the balanced chemical equation. Example $1$ illustrates this procedure for the combustion of the liquid hydrocarbon isooctane ($\ce{C8H18}$; 2,2,4-trimethylpentane).
Example $1$
Use the data in Table $1$ to calculate $ΔS^o$ for the reaction of liquid isooctane with $\ce{O2(g)}$ to give $\ce{CO2(g)}$ and $\ce{H2O(g)}$ at 298 K.
Given: standard molar entropies, reactants, and products
Asked for: ΔS°
Strategy:
Write the balanced chemical equation for the reaction and identify the appropriate quantities in Table $1$. Subtract the sum of the absolute entropies of the reactants from the sum of the absolute entropies of the products, each multiplied by their appropriate stoichiometric coefficients, to obtain $ΔS^o$ for the reaction.
Solution:
The balanced chemical equation for the complete combustion of isooctane (C8H18) is as follows:
$\mathrm{C_8H_{18}(l)}+\dfrac{25}{2}\mathrm{O_2(g)}\rightarrow\mathrm{8CO_2(g)}+\mathrm{9H_2O(g)} \nonumber$
We calculate $ΔS^o$ for the reaction using the “products minus reactants” rule, where m and n are the stoichiometric coefficients of each product and each reactant:
\begin{align*}\Delta S^o_{\textrm{rxn}} &= \sum m\overline{S}^o(\textrm{products})-\sum n\overline{S}^o(\textrm{reactants}) \\ &= [8\overline{S}^o(\mathrm{CO_2})+9\overline{S}^o(\mathrm{H_2O})]-\left[\overline{S}^o(\mathrm{C_8H_{18}})+\dfrac{25}{2}\overline{S}^o(\mathrm{O_2})\right] \\ &= \left \{ [8\textrm{ mol }\mathrm{CO_2}\times213.8\;\mathrm{J/(mol\cdot K)}]+[9\textrm{ mol }\mathrm{H_2O}\times188.8\;\mathrm{J/(mol\cdot K)}] \right \} \\ &\quad -\left \{[1\textrm{ mol }\mathrm{C_8H_{18}}\times329.3\;\mathrm{J/(mol\cdot K)}]+\left [\dfrac{25}{2}\textrm{ mol }\mathrm{O_2}\times205.2\textrm{ J}/(\mathrm{mol\cdot K})\right ] \right \} \\ &= 515.3\;\mathrm{J/K}\end{align*}
$ΔS^o$ is positive, as expected for a combustion reaction in which one large hydrocarbon molecule is converted to many molecules of gaseous products.
Exercise $1$
Use the data in Table $1$ to calculate $ΔS^o$ for the reaction of $\ce{H2(g)}$ with liquid benzene (C6H6) to give cyclohexane (C6H12) at 298 K.
Answer:
361.1 J/K
Example $2$: Determination of ΔS°
Calculate the standard entropy change for the following process at 298 K:
$\ce{H2O}(g)⟶\ce{H2O}(l)\nonumber$
Solution
The value of the standard entropy change at room temperature, $ΔS^o_{298}$, is the difference between the standard entropy of the product, H2O(l), and the standard entropy of the reactant, H2O(g).
\begin{align*} ΔS^o_{298} &=\overline{S}^o_{298}(\ce{H2O (l)})−\overline{S}^o_{298}(\ce{H2O(g)}) \\[4pt] &= (70.0\: \mathrm{J\:mol^{−1}K^{−1}})−(188.8\: \mathrm{J\:mol^{−1}K^{−1}}) \\[4pt] &=−118.8\:\mathrm{J\:mol^{−1}K^{−1}} \end{align*}
The value for $ΔS^o_{298}$ is negative, as expected for this phase transition (condensation), which the previous section discussed.
Exercise $2$
Calculate the standard entropy change for the following process at 298 K:
$\ce{H2}(g)+\ce{C2H4}(g)⟶\ce{C2H6}(g)\nonumber$
Answer
−120.6 J mol−1 K−1
Example $3$: Determination of ΔS°
Calculate the standard entropy change for the combustion of methanol, CH3OH at 298 K:
$\ce{2CH3OH}(l)+\ce{3O2}(g)⟶\ce{2CO2}(g)+\ce{4H2O}(l)\nonumber$
Solution
The value of the standard entropy change is equal to the difference between the standard entropies of the products and the entropies of the reactants scaled by their stoichiometric coefficients. The required standard molar entropies are found in Table $1$.
\begin{align*} ΔS^o &=ΔS^o_{298} \\[4pt] &= ∑ν\overline{S}^o_{298}(\ce{products})−∑ν\overline{S}^o_{298} (\ce{reactants}) \\[4pt] & = [2\overline{S}^o_{298}(\ce{CO2}(g))+4\overline{S}^o_{298}(\ce{H2O}(l))]−[2\overline{S}^o_{298}(\ce{CH3OH}(l))+3\overline{S}^o_{298}(\ce{O2}(g))] \\[4pt] &= [(2 \times 213.8) + (4×70.0)]−[ (2 \times 126.8) + (3 \times 205.2) ] \\[4pt] &= −161.6 \:J/(mol⋅K) \end{align*}
Exercise $3$
Calculate the standard entropy change for the following reaction at 298 K:
$\ce{Ca(OH)2}(s)⟶\ce{CaO}(s)+\ce{H2O}(l)\nonumber$
Answer
24.7 J/mol•K
Summary
Energy values, as you know, are all relative, and must be defined on a scale that is completely arbitrary; there is no such thing as the absolute energy of a substance, so we can arbitrarily define the enthalpy or internal energy of an element in its most stable form at 298 K and 1 atm pressure as zero. The same is not true of the entropy; since entropy is a measure of the “dilution” of thermal energy, it follows that the less thermal energy available to spread through a system (that is, the lower the temperature), the smaller will be its entropy. In other words, as the absolute temperature of a substance approaches zero, so does its entropy. This principle is the basis of the Third law of thermodynamics, which states that the entropy of a perfectly-ordered solid at 0 K is zero.
In practice, chemists determine the absolute entropy of a substance by measuring the molar heat capacity ($C_p$) as a function of temperature and then plotting the quantity $C_p/T$ versus $T$. The area under the curve between 0 K and any temperature T is the absolute entropy of the substance at $T$. In contrast, other thermodynamic properties, such as internal energy and enthalpy, can be evaluated in only relative terms, not absolute terms.
The second law of thermodynamics states that a spontaneous process increases the entropy of the universe, ΔSuniv > 0. If ΔSuniv < 0, the process is nonspontaneous, and if ΔSuniv = 0, the system is at equilibrium. The third law of thermodynamics establishes the zero for entropy as that of a perfect, pure crystalline solid at 0 K. With only one possible microstate, the entropy is zero. We may compute the standard entropy change for a process by using standard entropy values for the reactants and products involved in the process.
• 17.1: The Maxwell Relations
To fully exploit the power of the state functions we need to develop some mathematical machinery by considering a number of partial derivatives.
17: Extension 12 - Working Equations
Modeling how the Gibbs and Helmholtz functions vary with temperature, pressure, and volume is fundamentally useful, but a little more mathematical development is necessary first. To see the power and utility of these functions, it is useful to combine the First and Second Laws into a single mathematical statement. In order to do that, one notes that since
$dS = \dfrac{dq}{T}$
for a reversible change, it follows that
$dq= TdS$
And since
$dw = - PdV$
for a reversible expansion in which only P-V work is done, it also follows that (since $dU=dq+dw$):
$dU = TdS - PdV$
This is an extraordinarily powerful result. This differential for $dU$ can be used to simplify the differentials for $H$, $A$, and $G$. But even more useful are the constraints it places on the variables T, S, P, and V due to the mathematics of exact differentials!
Maxwell Relations
The above result suggests that the natural variables of internal energy are $S$ and $V$ (or the function can be considered as $U(S, V)$). So the total differential ($dU$) can be expressed:
$dU = \left( \dfrac{\partial U}{\partial S} \right)_V dS + \left( \dfrac{\partial U}{\partial V} \right)_S dV$
Also, by inspection (comparing the two expressions for $dU$) it is apparent that:
$\left( \dfrac{\partial U}{\partial S} \right)_V = T \label{eq5A}$
and
$\left( \dfrac{\partial U}{\partial V} \right)_S = -P \label{eq5B}$
But the value doesn’t stop there! Since $dU$ is an exact differential, the Euler relation must hold that
$\left[ \dfrac{\partial}{\partial V} \left( \dfrac{\partial U}{\partial S} \right)_V \right]_S= \left[ \dfrac{\partial}{\partial S} \left( \dfrac{\partial U}{\partial V} \right)_S \right]_V$
By substituting Equations \ref{eq5A} and \ref{eq5B}, we see that
$\left[ \dfrac{\partial}{\partial V} \left( T \right)_V \right]_S= \left[ \dfrac{\partial}{\partial S} \left( -P \right)_S \right]_V$
or
$\left( \dfrac{\partial T}{\partial V} \right)_S = - \left( \dfrac{\partial P}{\partial S} \right)_V$
This is an example of a Maxwell Relation. These are very powerful relationships that allow one to substitute one partial derivative for another when it is more convenient (perhaps one that can be expressed entirely in terms of $\alpha$ and/or $\kappa_T$, for example).
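A Maxwell relation can be checked numerically for any model fundamental relation $U(S,V)$. The sketch below assumes the monatomic-ideal-gas form $U = c\,e^{2S/3nR}\,V^{-2/3}$ (obtained by inverting the Sackur–Tetrode entropy; this specific model is an illustration, not part of the derivation above) and verifies $(\partial T/\partial V)_S = -(\partial P/\partial S)_V$ by central finite differences:

```python
import math

def U(S, V, nR=8.314, c=1.0):
    """Assumed fundamental relation U(S, V) for a monatomic ideal gas.

    The form c*exp(2S/(3nR))*V**(-2/3) bundles all constants into c and is
    used purely for illustration.
    """
    return c * math.exp(2.0 * S / (3.0 * nR)) * V ** (-2.0 / 3.0)

def d(f, x, h=1e-4):
    """Central finite-difference derivative of f at x."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

S0, V0 = 10.0, 2.0
T = lambda S, V: d(lambda s: U(s, V), S)    # T = (dU/dS)_V
P = lambda S, V: -d(lambda v: U(S, v), V)   # P = -(dU/dV)_S

lhs = d(lambda v: T(S0, v), V0)     # (dT/dV)_S
rhs = -d(lambda s: P(s, V0), S0)    # -(dP/dS)_V
print(abs(lhs - rhs) < 1e-6)        # True: the Maxwell relation on U holds
```

Because $dU$ is exact, the same equality would hold for any other smooth $U(S,V)$; the ideal-gas form is just a convenient test case.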
A similar result can be derived based on the definition of $H$.
$H \equiv U +PV$
Differentiating (and using the chain rule on $d(PV)$) yields
$dH = dU +PdV + VdP$
Making the substitution using the combined first and second laws ($dU = TdS – PdV$) for a reversible change involving only expansion (P-V) work gives
$dH = TdS – \cancel{PdV} + \cancel{PdV} + VdP$
This expression can be simplified by canceling the $PdV$ terms.
$dH = TdS + VdP \label{eq2A}$
And much as in the case of internal energy, this suggests that the natural variables of $H$ are $S$ and $P$. Or
$dH = \left( \dfrac{\partial H}{\partial S} \right)_P dS + \left( \dfrac{\partial H}{\partial P} \right)_S dP \label{eq2B}$
Comparing Equations \ref{eq2A} and \ref{eq2B} show that
$\left( \dfrac{\partial H}{\partial S} \right)_P= T \label{eq6A}$
and
$\left( \dfrac{\partial H}{\partial P} \right)_S = V \label{eq6B}$
It is worth noting at this point that both (Equation \ref{eq5A})
$\left( \dfrac{\partial U}{\partial S} \right)_V$
and (Equation \ref{eq6A})
$\left( \dfrac{\partial H}{\partial S} \right)_P$
are equal to $T$. So they are equal to each other
$\left( \dfrac{\partial U}{\partial S} \right)_V = \left( \dfrac{\partial H}{\partial S} \right)_P$
Moreover, the Euler Relation must also hold
$\left[ \dfrac{\partial}{\partial P} \left( \dfrac{\partial H}{\partial S} \right)_P \right]_S= \left[ \dfrac{\partial}{\partial S} \left( \dfrac{\partial H}{\partial P} \right)_S \right]_P$
so
$\left( \dfrac{\partial T}{\partial P} \right)_S = \left( \dfrac{\partial V}{\partial S} \right)_P$
This is the Maxwell relation on $H$. Maxwell relations can also be developed based on $A$ and $G$. The results of those derivations are summarized in Table 6.2.1.
Table 6.2.1: Maxwell Relations
| Function | Differential | Natural Variables | Maxwell Relation |
|---|---|---|---|
| $U$ | $dU = TdS - PdV$ | $S, \,V$ | $\left( \dfrac{\partial T}{\partial V} \right)_S = - \left( \dfrac{\partial P}{\partial S} \right)_V$ |
| $H$ | $dH = TdS + VdP$ | $S, \,P$ | $\left( \dfrac{\partial T}{\partial P} \right)_S = \left( \dfrac{\partial V}{\partial S} \right)_P$ |
| $A$ | $dA = -PdV - SdT$ | $V, \,T$ | $\left( \dfrac{\partial P}{\partial T} \right)_V = \left( \dfrac{\partial S}{\partial V} \right)_T$ |
| $G$ | $dG = VdP - SdT$ | $P, \,T$ | $\left( \dfrac{\partial V}{\partial T} \right)_P = - \left( \dfrac{\partial S}{\partial P} \right)_T$ |
The Maxwell relations are extraordinarily useful in deriving the dependence of thermodynamic variables on the state variables of P, T, and V.
Example $1$
Show that
$\left( \dfrac{\partial U}{\partial V} \right)_T = T\dfrac{\alpha}{\kappa_T} - P \nonumber$
Solution:
Start with the combined first and second laws:
$dU = TdS - PdV \nonumber$
Divide both sides by $dV$ and constraint to constant $T$:
$\left.\dfrac{dU}{dV}\right|_{T} = \left.\dfrac{TdS}{dV}\right|_{T} - P \left.\dfrac{dV}{dV} \right|_{T} \nonumber$
Noting that
$\left.\dfrac{dU}{dV}\right|_{T} =\left( \dfrac{\partial U}{\partial V} \right)_T$
$\left.\dfrac{TdS}{dV}\right|_{T} = \left( \dfrac{\partial S}{\partial V} \right)_T$
$\left.\dfrac{dV}{dV} \right|_{T} = 1$
The result is
$\left( \dfrac{\partial U}{\partial V} \right)_T = T \left( \dfrac{\partial S}{\partial V} \right)_T -P \nonumber$
Now, employ the Maxwell relation on $A$ (Table 6.2.1)
$\left( \dfrac{\partial P}{\partial T} \right)_V = \left( \dfrac{\partial S}{\partial V} \right)_T \nonumber$
to get
$\left( \dfrac{\partial U}{\partial V} \right)_T = T \left( \dfrac{\partial P}{\partial T} \right)_V -P \nonumber$
and since
$\left( \dfrac{\partial P}{\partial T} \right)_V = \dfrac{\alpha}{\kappa_T} \nonumber$
It is apparent that
$\left( \dfrac{\partial U}{\partial V} \right)_T = T\dfrac{\alpha}{\kappa_T} - P \nonumber$
Note: How cool is that? This result was given without proof in Chapter 4, but can now be proven analytically using the Maxwell Relations!
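The relationship $(\partial U/\partial V)_T = T(\partial P/\partial T)_V - P$ derived in the example (the "internal pressure") can also be evaluated numerically for specific equations of state: it vanishes for an ideal gas and equals $an^2/V^2$ for a van der Waals gas. A sketch (the van der Waals constants are approximate literature values for N₂, used purely for illustration):

```python
R = 8.314  # molar gas constant, J/(mol K)

def P_ideal(T, V, n=1.0):
    return n * R * T / V

def P_vdw(T, V, n=1.0, a=0.137, b=3.87e-5):
    # a (Pa m^6/mol^2) and b (m^3/mol): approximate van der Waals constants for N2
    return n * R * T / (V - n * b) - a * n**2 / V**2

def internal_pressure(P, T, V, h=1e-3):
    """(dU/dV)_T = T*(dP/dT)_V - P, with a central difference for (dP/dT)_V."""
    dPdT = (P(T + h, V) - P(T - h, V)) / (2.0 * h)
    return T * dPdT - P(T, V)

T0, V0 = 298.15, 0.024  # 1 mol of gas in 24 L
ip_ideal = internal_pressure(P_ideal, T0, V0)
ip_vdw = internal_pressure(P_vdw, T0, V0)
print(ip_ideal)  # ~0 Pa: an ideal gas has no internal pressure
print(ip_vdw)    # ~a*n^2/V^2, a small positive pressure from attractions
```

The zero internal pressure of the ideal gas reflects the absence of intermolecular attractions in that model; the positive van der Waals value is the work cost of pulling molecules apart.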
Contributors
• Patrick E. Fleming (Department of Chemistry and Biochemistry; California State University, East Bay)
• 18.1: Partial Molar Quantities
Because they are easy to control in typical laboratory experiments, pressure, temperature, and the number of moles of each component are the independent variables that we find useful most often. Partial derivatives of thermodynamic quantities, taken with respect to the number of moles of a component at constant pressure, temperature, and amounts of all other components, are given a special designation; they are called partial molar quantities.
• 18.2: Chemical Potential
The chemical potential tells how the Gibbs function will change as the composition of the mixture changes. And since systems tend to seek a minimum aggregate Gibbs function, the chemical potential will point to the direction the system can move in order to reduce the total Gibbs function.
• 18.3: ∆rG is the rate at which the Gibbs Free Energy Changes with The Extent of Reaction
• 18.4: Molar Reaction Enthalpy
18: Fundamental 13 - Composition Changes
Because they are easy to control in typical laboratory experiments, pressure, temperature, and the number of moles of each component are the independent variables that we find useful most often. Partial derivatives of thermodynamic quantities, taken with respect to the number of moles of a component, at constant pressure and temperature, are given a special designation; they are called partial molar quantities. That is,
${\left(\frac{\partial U}{\partial n_i}\right)}_{P,T,n_{j\neq i}}$
is the partial molar energy of component $i$,
${\left(\frac{\partial G}{\partial n_i}\right)}_{P,T,n_{j\neq i}}$
is the partial molar Gibbs free energy, etc. All partial molar quantities are intensive variables.
Because partial molar quantities are particularly useful, it is helpful to have a distinctive symbol to represent them. We use a horizontal bar over a thermodynamic variable to represent a partial molar quantity. (We have been using the horizontal over-bar to mean simply per mole. When we use it to designate a partial molar quantity, it means per mole of a specific component.) Thus, we write
${\left(\frac{\partial U}{\partial n_i}\right)}_{P,T,n_{j\neq i}}={\overline{U}}_i$ ${\left(\frac{\partial V}{\partial n_i}\right)}_{P,T,n_{j\neq i}}={\overline{V}}_i$ ${\left(\frac{\partial G}{\partial n_i}\right)}_{P,T,n_{j\neq i}}={\overline{G}}_i$ etc.
In Sections 14.1 and 14.2, we introduce the chemical potential for substance $i$, ${\mu }_i$, and find that the chemical potential of substance $i$ is equivalently expressed by several partial derivatives. In particular, we have
${\mu }_i={\left(\frac{\partial G}{\partial n_i}\right)}_{P,T,n_{j\neq i}}={\overline{G}}_i$
that is, the chemical potential is also the partial molar Gibbs free energy.
It is important to recognize that the other partial derivatives that we can use to calculate the chemical potential are not partial molar quantities. Thus,
${\mu }_i={\left(\frac{\partial U}{\partial n_i}\right)}_{S,V,n_{j\neq i}}\neq {\left(\frac{\partial U}{\partial n_i}\right)}_{P,T,n_{j\neq i}}$
That is, ${\mu }_i\neq {\overline{U}}_i$. Similarly, ${\mu }_i\neq {\overline{H}}_i$, ${\mu }_i\neq {\overline{A}}_i$, and ${\mu }_i\neq -T{\overline{S}}_i$.
We can think of a thermodynamic variable as a manifold—a “surface” in a multidimensional space. If there are two independent variables, the dependent thermodynamic variable is a surface in a three-dimensional space. Then we can visualize the partial derivative of the dependent thermodynamic variable with respect to an independent variable as the slope of a line tangent to the surface. This tangent lies in a plane in which the other independent variable is constant. If the independent variables are pressure, temperature, and compositions, the slope of the tangent line at $\left(P,T,n_1,n_2,\dots,n_{\omega }\right)$ is the value of a partial molar quantity at that point.
A more concrete way to think of a partial molar quantity for component $i$ is to view it as the change in that quantity when we add one mole of $i$ to a very large system having the specified pressure, temperature, and composition. When we add one mole of $i$ to this system, the relative change in any of the system’s properties is very small; for example, the ratio of the final volume to the initial volume is essentially unity. Nevertheless, the volume of the system changes by a finite amount. This amount approximates the partial molar volume of substance $i$. This approximation becomes better as the size of the system becomes larger. We expect the change in the volume of the system to be approximately equal to the volume of one mole of pure $i$, but we know that in general it will be somewhat different because of the effects of attractive and repulsive forces between the additional $i$ molecules and the molecules comprising the original system.
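This "add one mole to a large system" picture is easy to mimic numerically: differentiate a model total-volume function with respect to one amount while holding the other fixed. The function below is invented purely for illustration; note that it is first-order homogeneous in the amounts, as any extensive property must be:

```python
def V_total(n_w, n_e):
    """Hypothetical total volume (cm^3) of a two-component mixture.

    The coefficients are invented for illustration; the cross term mimics the
    nonideal mixing contraction seen in real solutions.
    """
    return 18.0 * n_w + 58.0 * n_e - 1.5 * n_w * n_e / (n_w + n_e)

def partial_molar_volume(f, n_w, n_e, which, h=1e-6):
    """(dV/dn_i) at constant T, P, and the other amount, by central difference."""
    if which == "e":
        return (f(n_w, n_e + h) - f(n_w, n_e - h)) / (2.0 * h)
    return (f(n_w + h, n_e) - f(n_w - h, n_e)) / (2.0 * h)

Vbar_w = partial_molar_volume(V_total, 9.0, 1.0, "w")
Vbar_e = partial_molar_volume(V_total, 9.0, 1.0, "e")

# Euler's theorem for a first-order homogeneous function: V = n_w*Vbar_w + n_e*Vbar_e
print(9.0 * Vbar_w + 1.0 * Vbar_e, V_total(9.0, 1.0))  # the two values agree
```

Both partial molar volumes differ from the pure-component slopes (18.0 and 58.0) because of the cross term, which plays the role of the attractive and repulsive interactions discussed above.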
Partial molar quantities can be expressed as functions of other thermodynamic variables. Because pressure and temperature are conveniently controlled variables, functions involving partial molar quantities are particularly useful for describing chemical change in systems that conform to the assumptions that we introduce in §1. Because the chemical potential is the same thing as the partial molar Gibbs free energy, it plays a prominent role in these equations.
To use these equations to describe a real system, we must develop empirical models that relate the partial molar quantities to the composition of the system. In general, these empirical models are non-linear functions of the system composition. However, simple approximations are sometimes adequate. The simplest approximation is a case we have already considered. If we can ignore the attractive and repulsive interactions among the molecules comprising the system, the effect of increasing $n_i$ by a small amount, ${dn}_i$, is simply the effect of adding $dn_i$ moles of pure component $i$ to the system. If we let ${\overline{U}}^*_i$ be the energy per mole of pure component $i$, the contribution to the energy of the system, at constant temperature and pressure, is
${\left(\frac{\partial U}{\partial n_i}\right)}_{P,T,n_{j\neq i}}dn_i={\overline{U}}^*_i\mathrm{\ }dn_i$
In Chapter 12, we apply the thermodynamic criteria for change to the equilibria between phases of a pure substance. To do so, we use the Gibbs free energies of the pure phases. In Chapter 13, we apply these criteria to chemical reactions of ideal gases, using the Gibbs free energies of the pure gases. In these cases, the properties of a phase of a pure substance are independent of the amounts of any other substances that are present. That is, we use the approximation
${\left(\frac{\partial G}{\partial n_i}\right)}_{P,T,n_{j\neq i}}dn_i={\overline{G}}^*_i\mathrm{\ }dn_i$
albeit without using the over-bar or the asterisk superscript to indicate that we are using the partial molar Gibbs free energy of the pure substance. In Section 14.1, we develop the principle that $\sum^N_{i=1}{{\mu }_i{dn}_i}\le 0$ is a general criterion for change that is applicable not only to closed systems but also to open systems composed of homogeneous phases.
Thus far in this chapter, we have written each partial derivative with a complete list of the variables that are held constant. This is typographically awkward. Clarity seldom requires that we include the work-related variables and composition variables, $n_i$, in this list. From here on, we usually omit them. | textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Chemical_Thermodynamics_(Supplement_to_Shepherd_et_al.)/18%3A_Fundamental_13_-_Composition_Changes/18.01%3A_Partial_Molar_Quantities.txt |
In much the same fashion as the partial molar volume is defined, the partial molar Gibbs function is defined for compound $i$ in a mixture:
$\mu_i = \left( \dfrac{\partial G}{\partial n_i} \right) _{P,T,n_{j\neq i}} \label{eq1}$
This partial molar function is of such importance that it has its own name: the chemical potential. The chemical potential tells how the Gibbs function will change as the composition of the mixture changes. And since systems tend to seek a minimum aggregate Gibbs function, the chemical potential points in the direction the system can move in order to reduce the total Gibbs function. In general, the total change in the Gibbs function ($dG$) can be calculated from
$dG = \left( \dfrac{\partial G}{\partial P} \right) _{T,n_i} dP + \left( \dfrac{\partial G}{\partial T} \right) _{P, n_i }dT + \sum_i \left( \dfrac{\partial G}{\partial n_i} \right) _{P,T,n_{j\neq i}} dn_i$
Or, by substituting the definition for the chemical potential, and evaluating the pressure and temperature derivatives as was done in Chapter 6:
$dG = VdP - SdT + \sum_i \mu_i dn_i$
But as it turns out, the chemical potential can be defined as the partial molar derivative of any of the four major thermodynamic functions $U$, $H$, $A$, or $G$:
Table $1$: Chemical potential can be defined as the partial molar derivative of any of the four major thermodynamic functions
$dU = TdS - PdV + \sum_i \mu_i dn_i$ $\mu_i = \left( \dfrac{\partial U}{\partial n_i} \right) _{S,V,n_{j\neq i}}$
$dH = TdS + VdP + \sum_i \mu_i dn_i$ $\mu_i = \left( \dfrac{\partial H}{\partial n_i} \right) _{S,P,n_{j\neq i}}$
$dA = -SdT - PdV + \sum_i \mu_i dn_i$ $\mu_i = \left( \dfrac{\partial A}{\partial n_i} \right) _{V,T,n_{j\neq i}}$
$dG = -SdT + VdP + \sum_i \mu_i dn_i$ $\mu_i = \left( \dfrac{\partial G}{\partial n_i} \right) _{P,T,n_{j\neq i}}$
The last definition, in which the chemical potential is defined as the partial molar Gibbs function, is the most commonly used, and perhaps the most useful (Equation \ref{eq1}). As the partial molar Gibbs function, it is easy to show that
$d\mu = -\overline{S}dT + \overline{V}dP$
where $\overline{V}$ is the molar volume, and $\overline{S}$ is the molar entropy. Using this expression, it is easy to show that
$\left( \dfrac{\partial \mu}{\partial P} \right) _{T} = \overline{V}$
and so at constant temperature
$\int_{\mu^o}^{\mu} d\mu = \int_{P^o}^{P} \overline{V}\,dP \label{eq5}$
For a substance whose molar volume is nearly independent of pressure at constant temperature (i.e., $\kappa_T$ is very small, as for a solid or liquid), Equation \ref{eq5} becomes
$\int_{\mu^o}^{\mu} d\mu = \overline{V} \int_{P^o}^{P} dP$
$\mu - \mu^o = \overline{V}(P-P^o)$
or
$\mu = \mu^o + \overline{V}(P-P^o)$
where $P^o$ is a reference pressure (generally the standard pressure of 1 bar) and $\mu^o$ is the chemical potential at the standard pressure. If the substance is highly compressible (such as a gas), the pressure dependence of the molar volume is needed to complete the integral. If the substance is an ideal gas,
$\overline{V}=\dfrac{RT}{P}$
So at constant temperature, Equation \ref{eq5} then becomes
$\int_{\mu^o}^{\mu} d\mu = RT \int_{P^o}^{P} \dfrac{dP}{P} \label{eq5b}$
or
$\mu = \mu^o + RT \ln \left(\dfrac{P}{P^o} \right)$
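These two pressure dependences can be compared numerically. The sketch below (illustrative conditions; the function names are mine) evaluates $\mu - \mu^o$ at 298.15 K for an ideal gas taken from 1 bar to 10 bar, and for a nearly incompressible liquid with $\overline{V} \approx 18\ \mathrm{cm^3\ mol^{-1}}$ (roughly liquid water) over the same pressure change.

```python
import math

R = 8.314  # J mol^-1 K^-1

def dmu_ideal_gas(T, P, P0):
    """mu - mu° for an ideal gas: RT ln(P/P°); any consistent pressure units."""
    return R * T * math.log(P / P0)

def dmu_incompressible(Vbar, P, P0):
    """mu - mu° for a phase of constant molar volume: Vbar*(P - P°).
    Vbar in m^3 mol^-1, pressures in Pa."""
    return Vbar * (P - P0)

bar = 1.0e5  # Pa
gas = dmu_ideal_gas(298.15, 10 * bar, 1 * bar)      # ≈ 5708 J/mol
liq = dmu_incompressible(18e-6, 10 * bar, 1 * bar)  # ≈ 16 J/mol
print(round(gas), round(liq))  # → 5708 16
```

The gas term is two orders of magnitude larger, which is why the pressure dependence of $\mu$ is usually negligible for condensed phases.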
Contributors
• Patrick E. Fleming (Department of Chemistry and Biochemistry; California State University, East Bay)
For the reaction $a\ A+b\ B\ \rightleftharpoons \ c\ C+d\ D$, let us call the consumption of $a$ moles of $A$ one “unit of reaction.” ${\Delta }_rG$ corresponds to the actual Gibbs free energy change for one unit of reaction only in the limiting case where the reaction occurs in an arbitrarily large system. For a closed system of specified initial composition, $n^o_A$, $n^o_B$, $n^o_C$, and $n^o_D$, whose composition at any time is specified by $n_A$, $n_B$, $n_C$, and $n_D$, the extent of reaction, $\mathrm{\xi }$, is
$\xi =-\left(\frac{n_A-n^o_A}{a}\right)=-\left(\frac{n_B-n^o_B}{b}\right)=\frac{n_C-n^o_C}{c}=\frac{n_D-n^o_D}{d} \nonumber$
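For a concrete system, the four expressions above must all give the same value of $\xi$. A minimal sketch (the species and amounts are illustrative) checks this for $N_2 + 3H_2 \rightleftharpoons 2NH_3$:

```python
def extent(n, n0, nu):
    """Extent of reaction from one species: xi = (n - n0)/nu,
    with nu < 0 for reactants and nu > 0 for products."""
    return (n - n0) / nu

# Illustrative: N2 + 3 H2 -> 2 NH3, starting from 1.0, 3.0, 0.0 mol,
# after 0.4 mol of N2 has been consumed:
n0 = {"N2": 1.0, "H2": 3.0, "NH3": 0.0}
nu = {"N2": -1, "H2": -3, "NH3": 2}
n  = {"N2": 0.6, "H2": 1.8, "NH3": 0.8}

xis = [extent(n[s], n0[s], nu[s]) for s in n0]
print(xis)  # all three values ≈ 0.4
```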
At constant pressure and temperature, every possible state of this system is represented by a point on a plot of ${\Delta }_rG$ versus $\mathrm{\xi }$. Every such state is also represented by a point on a plot of $G_{system}$ versus $\mathrm{\xi }$.
From the general result that ${\left({dG}_{system}\right)}_{PT}\mathrm{=0}$ if and only if the system is at equilibrium, it follows that ${\mathrm{\Delta }}_rG\left({\xi }_{eq}\right)\mathrm{=0}$ if and only if ${\xi }_{eq}$ specifies the equilibrium state. (We can arrive at the same conclusion by considering the heat exchanged for one unit of reaction in an infinitely large system at equilibrium. This process is reversible, and it occurs at constant pressure and temperature, so we have ${\mathrm{\Delta }}_rH = q^{rev}_P$, ${\mathrm{\Delta }}_rS = {q^{rev}_P}/{T}$, and ${\mathrm{\Delta }}_rG = {\mathrm{\Delta }}_rH - T{\mathrm{\Delta }}_rS = q^{rev}_P - T\left({q^{rev}_P}/{T}\right)\mathrm{=0}$.)
Below, we show that
${\mathrm{\Delta }}_rG\left(\xi \right) = {\left(\frac{\partial G_{system}}{\partial \xi }\right)}_{PT} \nonumber$
for any value of $\mathrm{\xi }$. (In Section 15.9, we use essentially the same argument to show that this conclusion is valid for any reaction among any substances.) Given this result, we see that the equilibrium composition corresponds to the extent of reaction, ${\xi }_{eq}$, for which the Gibbs free energy change for one unit of the reaction is zero
${\mathrm{\Delta }}_rG\left({\xi }_{eq}\right)\mathrm{=0} \nonumber$
and
${\left(\frac{\partial G_{system}}{\partial \xi }\right)}_{PT}=0 \nonumber$
so that the Gibbs free energy of the system is a minimum.
In the next section, we show that the condition ${\mathrm{\Delta }}_rG\left({\xi }_{eq}\right)\mathrm{=0}$ makes it easy to calculate the equilibrium extent of reaction, ${\xi }_{eq}$. Given the stoichiometry and initial composition, the equation for ${\mathrm{\Delta }}_rG\left({\xi }_{eq}\right)$ specifies the equilibrium composition and the partial pressures $P_A$, $P_B$, $P_C$, and $P_D$. This is the usual application of these results. Setting ${\Delta }_rG=0$ enables us to answer the question: If we initiate reaction at a given composition, what will be the equilibrium composition of the system? Usually this is what we want to know. The amount by which the Gibbs free energy changes as the reaction goes to equilibrium is seldom of interest.
To show that ${\mathrm{\Delta }}_rG={\left({\partial G}/{\partial \xi }\right)}_{PT}$ for any reaction, it is helpful to introduce modified stoichiometric coefficients, ${\nu }_j$, defined such that ${\nu }_j>0$ if the j-th species is a product and ${\nu }_j<0$ if the j-th species is a reactant. That is, for the reaction $a\ A+b\ B\ \rightleftharpoons \ c\ C+d\ D$, we define ${\nu }_A=-a$, ${\nu }_B=-b$, ${\nu }_C=c$, and ${\nu }_D=d$. Associating successive integers with the reactants and products, we represent the j-th chemical species as $X_j$ and an arbitrary reaction as
$\left|{\nu }_1\right|X_1+\left|{\nu }_2\right|X_2+\dots +\left|{\nu }_i\right|X_i\rightleftharpoons {\nu }_jX_j+\dots +{\nu }_{\omega }X_{\omega } \nonumber$
Let the initial number of moles of ideal gas $X_j$ be $n^o_j$; then $n_j=n^o_j+{\nu }_j\xi$. (For species that are present but do not participate in the reaction, we have ${\nu }_j=0$.)
We have shown that the Gibbs free energy of a mixture of ideal gases is equal to the sum of the Gibbs free energies of the components. In calculating ${\Delta }_rG$, we assume that this is as true for a mixture undergoing a spontaneous reaction as it is for a mixture at equilibrium. In doing so, we assume that the reacting system is homogeneous and that its temperature and pressure are well defined. In short, we assume that the Gibbs free energy of the system is the same continuous function of temperature, pressure, and composition, $G=G\left(T,P,n_1,n_2,\dots \right)$, whether the system is at equilibrium or undergoing a spontaneous reaction. For the equilibrium system, we have ${\left({\partial G}/{\partial T}\right)}_{P{,n}_j}=-S$ and ${\left({\partial G}/{\partial P}\right)}_{T{,n}_j}=V$. When we assume that these functions are the same for a spontaneously changing system as they are for a reversible system, it follows that
$dG={\left(\frac{\partial G}{\partial T}\right)}_{P,n_j}dT+{\left(\frac{\partial G}{\partial P}\right)}_{T,n_j}dP+\sum_j{{\left(\frac{\partial G}{\partial n_j}\right)}_{P,T,n_{i\neq j}}dn_j}=-SdT+VdP+\sum_j{{\left(\frac{\partial G}{\partial n_j}\right)}_{P,T,n_{i\neq j}}dn_j} \nonumber$
whether the system is at equilibrium or undergoing spontaneous change. At constant temperature and pressure, when pressure–volume work is the only work, the thermodynamic criteria for change, ${dG}_{TP}\le 0$ become
$\sum_j{{\left(\frac{\partial G}{\partial n_j}\right)}_{P,T,n_{i\neq j}}dn_j}\le 0 \nonumber$
When a reaction occurs in the system, the composition is a continuous function of the extent of reaction. We have $G=G\left(T,P,n^o_1+{\nu }_1\xi ,n^o_2+{\nu }_2\xi ,\dots \right)$. At constant temperature and pressure, the dependence of the Gibbs free energy on the extent of reaction is
$\left(\frac{\partial G}{\partial \xi }\right)_{P,T,n_m} =\sum_j \left(\frac{\partial G}{\partial \left(n^o_j+{\nu }_j\xi \right)}\right)_{P,T,n_{m\neq j}} \left(\frac{\partial \left(n^o_j+{\nu }_j\xi \right)}{ \partial \xi} \right)_{P,T,n_{m\neq j}} \nonumber$
Since
${\left(\frac{\partial G}{\partial \left(n^o_j+{\nu }_j\xi \right)}\right)}_{P,T,n_{m\neq j}}={\left(\frac{\partial G}{\partial n_j}\right)}_{P,T,n_{m\neq j}}={\overline{G}}_j \nonumber$
and
${\left(\frac{\partial \left(n^o_j+{\nu }_j\xi \right)}{\partial \xi }\right)}_{P,T,n_{m\neq j}}={\nu }_j \nonumber$
it follows that
${\left(\frac{\partial G}{\partial \xi }\right)}_{P,T,n_m}=\sum_j{{\nu }_j{\overline{G}}_j}={\Delta }_rG \nonumber$
Moreover, we have
${\left(dG\right)}_{PT}={\left(\frac{\partial G}{\partial \xi }\right)}_{P,T,n_m}d\xi \nonumber$
The criteria for change, ${\left(dG\right)}_{PT}\le 0$, become
${\left(\frac{\partial G}{\partial \xi }\right)}_{P,T,n_m}d\xi \le 0 \nonumber$
From our definition of $\xi$, we have $d\xi >0$ for a process that proceeds spontaneously from left to right, so the criteria become
${\left(\frac{\partial G}{\partial \xi }\right)}_{P,T,n_m}\le 0 \nonumber$
or, equivalently,
$\sum_j{{\nu }_j{\overline{G}}_j}={\Delta }_rG\le 0 \nonumber$
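Since each ${\overline{G}}_j$ for an ideal gas is ${\mu }_j={\mu }^o_j+RT{ \ln \left({P_j}/{P^o}\right)}$ (Section 18.2), the criterion ${\Delta }_rG\le 0$ can be checked numerically. The sketch below (the $\Delta G^o$ value is illustrative; variable names are mine) evaluates ${\Delta }_rG$ for the isomerization $A\rightleftharpoons B$ at $P=P^o$, starting from pure $A$, and confirms that ${\Delta }_rG$ is negative while the reaction proceeds spontaneously and zero at the equilibrium extent.

```python
import math

R, T = 8.314, 298.15  # J mol^-1 K^-1, K

def delta_rG(xi, dG0):
    """Delta_r G for A <=> B at P_tot = P°, starting from 1 mol of pure A.
    Mole fractions: x_A = 1 - xi, x_B = xi, so P_j/P° = x_j."""
    return dG0 + R * T * math.log(xi / (1.0 - xi))

dG0 = -2000.0  # illustrative standard Gibbs energy change, J/mol

early = delta_rG(0.5, dG0)     # negative: reaction still spontaneous
K = math.exp(-dG0 / (R * T))   # equilibrium ratio x_B/x_A
xi_eq = K / (1.0 + K)          # equilibrium extent of reaction
at_eq = delta_rG(xi_eq, dG0)   # ~0: Gibbs energy is at its minimum
print(round(early), abs(at_eq) < 1e-6)  # → -2000 True
```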
Recall that $\Delta \overline{H}_\text{rxn}$ is a molar integral reaction enthalpy equal to $\Delta H_\text{rxn}/\Delta \xi$, and that $\Delta _rH$ is a molar differential reaction enthalpy defined by $\sum_{i} \nu_{i} \ \overline{H}_{i}$ and equal to $(\partial H / \partial \xi)_{T, P}$.
Molar reaction enthalpy and heat
During a process in a closed system at constant pressure with expansion work only, the enthalpy change equals the energy transferred across the boundary in the form of heat: $\mathrm{d} H=\mathrm{d} q$. Thus for the molar reaction enthalpy $\Delta_{\mathrm{r}} H=(\partial H / \partial \xi)_{T, P}$, which refers to a process not just at constant pressure but also at constant temperature, we can write
$\Delta_{\mathrm{r}} H=\left(\frac{\mathrm{d} q}{\mathrm{d} \xi}\right)_{T, P,w'}\label{1}$
Note that when there is nonexpansion work ($w'$), such as electrical work, the enthalpy change is not equal to the heat. For example, if we compare a reaction taking place in a galvanic cell with the same reaction in a reaction vessel, the heats at constant $T$ and $P$ for a given change of $\xi$ are different, and may even have opposite signs. The value of $\Delta_{\mathrm{r}} H$ is the same in both systems, but the ratio of heat to extent of reaction, $\mathrm{d} q / \mathrm{d} \xi$, is different.
An exothermic reaction is one for which $\Delta_{\mathrm{r}} H$ is negative, and an endothermic reaction is one for which $\Delta_{\mathrm{r}} H$ is positive. Thus in a reaction at constant temperature and pressure with expansion work only, heat is transferred out of the system during an exothermic process and into the system during an endothermic process. If the process takes place at constant pressure in a system with thermally-insulated walls, the temperature increases during an exothermic process and decreases during an endothermic process.
These comments apply not just to chemical reactions, but to the other chemical processes at constant temperature and pressure discussed in this chapter.
Standard molar enthalpies of reaction and formation
A standard molar reaction enthalpy, $\Delta_{\mathrm{r}} H^{\circ}$, is the same as the molar integral reaction enthalpy $\Delta \overline{H}_{\mathrm{rxn}}$ for the reaction taking place under standard state conditions (each reactant and product at unit activity) at constant temperature.
At constant temperature, partial molar enthalpies depend only mildly on pressure. It is therefore usually safe to assume that unless the experimental pressure is much greater than $P^{\circ}$, the reaction is exothermic if $\Delta_{\mathrm{r}} H^{\circ}$ is negative and endothermic if $\Delta_{\mathrm{r}} H^{\circ}$ is positive.
The formation reaction of a substance is the reaction in which the substance, at a given temperature and in a given physical state, is formed from the constituent elements in their reference states at the same temperature. The reference state of an element is usually chosen to be the standard state of the element in the allotropic form and physical state that is stable at the given temperature and the standard pressure. For instance, at 298.15 K and 1 bar the stable allotrope of carbon is crystalline graphite rather than diamond.
Phosphorus is an exception to the rule regarding reference states of elements. Although red phosphorus is the stable allotrope at 298.15 K, it is not well characterized. Instead, the reference state is white phosphorus (crystalline P$_4$) at 1 bar.
At 298.15 K, the reference states of the elements are the following:
• For H$_2$, N$_2$, O$_2$, F$_2$, Cl$_2$, and the noble gases, the reference state is the ideal gas at 1 bar.
• For Br$_2$ and Hg, the reference state is the liquid at 1 bar.
• For P, as mentioned above, the reference state is crystalline white phosphorus at 1 bar.
• For all other elements, the reference state is the stable crystalline allotrope at 1 bar.
The standard molar enthalpy of formation (or standard molar heat of formation), $\Delta_{f} H^{\circ}$, of a substance is the enthalpy change per amount of substance produced in the formation reaction of the substance in its standard state. Thus, the standard molar enthalpy of formation of gaseous methyl bromide at 298.15 K is the molar reaction enthalpy of the reaction
$\mathrm{C}\left(\mathrm{s}, \text { graphite }, P^{\circ}\right)+\frac{3}{2} \mathrm{H}_{2}\left(\text { ideal gas, } P^{\circ}\right)+\frac{1}{2} \mathrm{Br}_{2}\left(1, P^{\circ}\right) \rightarrow \mathrm{CH}_{3} \mathrm{Br}\left(\text { ideal gas, } P^{\circ}\right)\nonumber$
The value of $\Delta_{f} H^{\circ}$ for a given substance depends only on $T$. By definition, $\Delta_{f} H^{\circ}$ for the reference state of an element is zero.
A principle called Hess’s law can be used to calculate the standard molar enthalpy of formation of a substance at a given temperature from standard molar reaction enthalpies at the same temperature, and to calculate a standard molar reaction enthalpy from tabulated values of standard molar enthalpies of formation. The principle is an application of the fact that enthalpy is a state function. Therefore, $\Delta H$ for a given change of the state of the system is independent of the path and is equal to the sum of $\Delta H$ values for any sequence of changes whose net result is the given change. (We may apply the same principle to a change of any state function.)
For example, the following combustion reactions can be carried out experimentally in a bomb calorimeter, yielding the values shown below of standard molar reaction enthalpies (at $T = 298.15 \text{ K}$, $P = P^{\circ} = 1 \text{ bar}$):
\begin{aligned} \mathrm{C}(\mathrm{s}, \text { graphite })+\mathrm{O}_{2}(\mathrm{g}) & \rightarrow \mathrm{CO}_{2}(\mathrm{g}) & \Delta_{\mathrm{r}} H^{\circ} &=-393.51 \text{ kJ} \text{ mol}^{-1} \\ \mathrm{CO}(\mathrm{g})+\frac{1}{2} \mathrm{O}_{2}(\mathrm{g}) & \rightarrow \mathrm{CO}_{2}(\mathrm{g}) & \Delta_{\mathrm{r}} H^{\circ} &=-282.98 \text{ kJ} \text{ mol}^{-1} \end{aligned}\nonumber
(Note that the first reaction, in addition to being the combustion reaction of graphite, is also the formation reaction of carbon dioxide.) The change resulting from the first reaction followed by the reverse of the second reaction is the formation reaction of carbon monoxide:
$\mathrm{C}(\mathrm{s}, \text { graphite })+\frac{1}{2} \mathrm{O}_{2}(\mathrm{g}) \rightarrow \mathrm{CO}(\mathrm{g})\nonumber$
It would not be practical to measure the molar enthalpy of this last reaction by allowing graphite to react with oxygen in a calorimeter, because it would be difficult to prevent the formation of some CO$_2$. From Hess’s law, the standard molar enthalpy of formation of CO is the sum of the standard molar enthalpies of the reactions that have the formation reaction as the net result:
\begin{aligned} \Delta_{\mathrm{f}} H^{\circ}(\mathrm{CO}, \mathrm{g}, 298.15 \text{ K}) &=(-393.51+282.98) \text{ kJ} \text{ mol}^{-1} \\ &=-110.53 \text{ kJ} \text{ mol}^{-1} \end{aligned}
This value is one of the many standard molar enthalpies of formation to be found in compilations of thermodynamic properties of individual substances. We may use the tabulated values to evaluate the standard molar reaction enthalpy $\Delta_{\mathrm{r}} H^{\circ}$ of a reaction using a formula based on Hess’s law. Imagine the reaction to take place in two steps: First each reactant in its standard state changes to the constituent elements in their reference states (the reverse of a formation reaction), and then these elements form the products in their standard states. The resulting formula is:
$\Delta_{\mathrm{r}} H^{\circ}=\sum_{i} \nu_{i} \Delta_{\mathrm{f}} H^{\circ}_i\label{2}$
where $\Delta_{\mathrm{f}} H^{\circ}_i$ is the standard molar enthalpy of formation of substance $i$. Recall that the stoichiometric number $\nu_i$ of each reactant is negative and that of each product is positive, so according to Hess’s law the standard molar reaction enthalpy is the sum of the standard molar enthalpies of formation of the products minus the sum of the standard molar enthalpies of formation of the reactants. Each term is multiplied by the appropriate stoichiometric coefficient from the reaction equation.
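The Hess's-law sum above is easy to verify in a few lines. The sketch below (the function name is mine) recovers the standard molar enthalpy of formation of CO from the two combustion enthalpies quoted earlier:

```python
def hess_sum(steps):
    """Sum of (coefficient * Delta_r H°) over a sequence of steps
    whose net result is the target reaction (Hess's law)."""
    return sum(c * dH for c, dH in steps)

# Step 1: C(graphite) + O2   -> CO2,  Delta_r H° = -393.51 kJ/mol
# Step 2: CO + 1/2 O2        -> CO2,  Delta_r H° = -282.98 kJ/mol
# Target: C(graphite) + 1/2 O2 -> CO  =  step 1 + (reverse of step 2)
dfH_CO = hess_sum([(+1, -393.51), (-1, -282.98)])
print(round(dfH_CO, 2))  # → -110.53 (kJ/mol)
```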
A standard molar enthalpy of formation can be defined for a solute in solution to use in $\ref{2}$. For instance, the formation reaction of aqueous sucrose is:
$12 \mathrm{C}(\mathrm{s}, \text { graphite })+11 \mathrm{H}_{2}(\mathrm{g})+\frac{11}{2} \mathrm{O}_{2}(\mathrm{g}) \rightarrow \mathrm{C}_{12} \mathrm{H}_{22} \mathrm{O}_{11}(\mathrm{aq})\nonumber$
and $\Delta_{\mathrm{f}} H^{\circ}_i$ for C$_{12}$H$_{22}$O$_{11}$(aq) is the enthalpy change per amount of sucrose formed when the reactants and product are in their standard states. Note that this formation reaction does not include the formation of the solvent H$_2$O from H$_2$ and O$_2$. Instead, the solute once formed combines with the amount of pure liquid water needed to form the solution. If the aqueous solute is formed in its standard state, the amount of water needed is very large so as to have the solute exhibit infinite-dilution behavior.
There is no ordinary reaction that would produce an individual ion in solution from its element or elements without producing other species as well. We can, however, prepare a consistent set of standard molar enthalpies of formation of ions by assigning a value to a single reference ion. We can use these values for ions in $\ref{2}$ just like values of $\Delta_{\mathrm{f}} H^{\circ}_i$ for substances and nonionic solutes. Aqueous hydrogen ion is the usual reference ion, to which is assigned the arbitrary value
$\Delta_{\mathrm{f}} H^{\circ}\left(\mathrm{H}^{+}, \mathrm{aq}\right)=0 \quad \text { (at all temperatures) }\nonumber$
To see how we can use this reference value, consider the reaction for the formation of aqueous HCl (hydrochloric acid):
$\frac{1}{2} \mathrm{H}_{2}(\mathrm{g})+\frac{1}{2} \mathrm{Cl}_{2}(\mathrm{g}) \rightarrow \mathrm{H}^{+}(\mathrm{aq})+\mathrm{Cl}^{-}(\mathrm{aq})\nonumber$
The standard molar reaction enthalpy at 298.15 K for this reaction is known, from reaction calorimetry, to have the value $\Delta_{\mathrm{r}} H^{\circ}=-167.08 \text{ kJ} \text{ mol}^{-1}$. The standard states of the gaseous H$_2$ and Cl$_2$ are, of course, the pure gases acting ideally at pressure $P^{\circ}$, and the standard state of each of the aqueous ions is the ion at the standard molality and standard pressure, acting as if its activity coefficient on a molality basis were $1$. From $\ref{2}$, we equate the value of $\Delta_{\mathrm{r}} H^{\circ}$ to the sum
$-\frac{1}{2} \Delta_{\mathrm{f}} H^{\circ}\left(\mathrm{H}_{2}, \mathrm{g}\right)-\frac{1}{2} \Delta_{\mathrm{f}} H^{\circ}\left(\mathrm{Cl}_{2}, \mathrm{g}\right)+\Delta_{\mathrm{f}} H^{\circ}\left(\mathrm{H}^{+}, \mathrm{aq}\right)+\Delta_{\mathrm{f}} H^{\circ}\left(\mathrm{Cl}^{-}, \mathrm{aq}\right)$
But the first three terms of this sum are zero. Therefore, the value of $\Delta_{\mathrm{f}} H^{\circ}$(Cl$^-$, aq) is $-167.08 \text{ kJ mol}^{-1}$.
Next we can combine this value of $\Delta_{\mathrm{f}} H^{\circ}$(Cl$^-$, aq) with the measured standard molar enthalpy of formation of aqueous sodium chloride
$\mathrm{Na}(\mathrm{s})+\frac{1}{2} \mathrm{Cl}_{2}(\mathrm{g}) \rightarrow \mathrm{Na}^{+}(\mathrm{aq})+\mathrm{Cl}^{-}(\mathrm{aq})\nonumber$
to evaluate the standard molar enthalpy of formation of aqueous sodium ion. By continuing this procedure with other reactions, we can build up a consistent set of $\Delta_{\mathrm{f}} H^{\circ}$ values of various ions in aqueous solution.
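The bootstrapping procedure just described can be sketched in code. The $-167.08\ \text{kJ mol}^{-1}$ value is the one quoted above; the reaction enthalpy used for aqueous NaCl is an illustrative placeholder, since the text does not quote its value.

```python
def solve_unknown(dRH, known):
    """Given Delta_r H° = sum(nu_i * Delta_f H°_i) with exactly one
    unknown formation enthalpy (whose nu is +1), return that unknown."""
    return dRH - sum(nu * dfH for nu, dfH in known)

dfH = {"H+": 0.0}  # reference-ion convention; elements in reference states are 0

# 1/2 H2 + 1/2 Cl2 -> H+ + Cl-,  Delta_r H° = -167.08 kJ/mol
dfH["Cl-"] = solve_unknown(-167.08, [(-0.5, 0.0), (-0.5, 0.0), (1, dfH["H+"])])

# Na(s) + 1/2 Cl2 -> Na+ + Cl-; the -407.3 kJ/mol here is illustrative
dfH["Na+"] = solve_unknown(-407.3, [(-1, 0.0), (-0.5, 0.0), (1, dfH["Cl-"])])
print(dfH["Cl-"], round(dfH["Na+"], 2))  # → -167.08 -240.22
```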
Molar reaction heat capacity
The molar reaction enthalpy $\Delta_\mathrm{r}H$ is in general a function of $T$, $P$, and $\xi$. Using the relations $\Delta_\mathrm{r}H=\sum_i\!\nu_i \overline{H}_i$ and $C_{p, i}=\left(\partial \overline{H}_{i} / \partial T\right)_{P, \xi}$, we can write
$\left(\frac{\partial \Delta_{\mathrm{r}} H}{\partial T}\right)_{p, \xi}=\left(\frac{\partial \sum_{i} \nu_{i} \overline{H}_{i}}{\partial T}\right)_{p, \xi}=\sum_{i} \nu_{i} C_{p, i}=\Delta_{\mathrm{r}} C_{p}\label{3}$
where $\Delta_\mathrm{r}C_P$ is the molar reaction heat capacity at constant pressure, equal to the rate at which the heat capacity $C_P$ changes with $\xi$ at constant $T$ and $P$.
Under standard state conditions, $\ref{3}$ becomes
$\mathrm{d} \Delta_{\mathrm{r}} H^{\circ} / \mathrm{d} T=\Delta_{\mathrm{r}} C_{p}^{\circ}$
Effect of temperature on reaction enthalpy
Consider a reaction occurring with a certain finite change of the extent of reaction in a closed system at temperature $T'$ and at constant pressure. The reaction is characterized by a change of the extent of reaction from $\xi_1$ to $\xi_2$, and the integral reaction enthalpy at this temperature is denoted $\Delta H_{\mathrm{rxn}}\left(T^{\prime}\right)$. We wish to find an expression for the reaction enthalpy $\Delta H_{\mathrm{rxn}}\left(T^{\prime \prime}\right)$ for the same values of $\xi_1$ and $\xi_2$ at the same pressure but at a different temperature, $T''$.
The heat capacity of the system at constant pressure is related to the enthalpy: $C_{P}=(\partial H / \partial T)_{P, \xi}$. We integrate $\mathrm{d} H=C_{p} \mathrm{d} T$ from $T'$ to $T''$ at constant $P$ and $\xi$, for both the final and initial values of the extent of reaction:
$H\left(\xi_{2}, T^{\prime \prime}\right)=H\left(\xi_{2}, T^{\prime}\right)+\int_{T^{\prime}}^{T^{\prime \prime}} C_{p}\left(\xi_{2}\right) \mathrm{d} T\label{4}$
$H\left(\xi_{1}, T^{\prime \prime}\right)=H\left(\xi_{1}, T^{\prime}\right)+\int_{T^{\prime}}^{T^{\prime \prime}} C_{p}\left(\xi_{1}\right) \mathrm{d} T\label{5}$
Subtracting $\ref{5}$ from $\ref{4}$, we obtain
$\Delta H_{\mathrm{rxn}}\left(T^{\prime \prime}\right)=\Delta H_{\mathrm{rxn}}\left(T^{\prime}\right)+\int_{T^{\prime}}^{T^{\prime \prime}} \Delta C_{p} \mathrm{d} T\label{6}$
where $\Delta C_P$ is the difference between the heat capacities of the system at the final and initial values of $\xi$, a function of $T$: $\Delta C_{p}=C_{p}\left(\xi_{2}\right)-C_{p}\left(\xi_{1}\right)$. $\ref{6}$ is the Kirchhoff equation.
When $\Delta C_P$ is essentially constant in the temperature range from $T'$ to $T''$, the Kirchhoff equation becomes
$\Delta H_{\mathrm{rxn}}\left(T^{\prime \prime}\right)=\Delta H_{\mathrm{rxn}}\left(T^{\prime}\right)+\Delta C_{p}\left(T^{\prime \prime}-T^{\prime}\right)\label{7}$
Figure $1$: Dependence of reaction enthalpy on temperature at constant pressure.
Figure $1$ illustrates the principle of the Kirchhoff equation as expressed by $\ref{7}$. $\Delta C_{P}$ equals the difference in the slopes of the two dashed lines in the figure, and the product of $\Delta C_{P}$ and the temperature difference $T''-T'$ equals the change in the value of $\Delta H_\mathrm{rxn}$. The figure illustrates an exothermic reaction with negative $\Delta C_{P}$, resulting in a more negative value of $\Delta H_\mathrm{rxn}$ at the higher temperature.
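A minimal numerical sketch of $\ref{7}$ (the numbers are illustrative, not from the text): a reaction with $\Delta H_{\mathrm{rxn}}=-90\ \text{kJ}$ at 298.15 K and a constant $\Delta C_{P}=-25\ \text{J K}^{-1}$ becomes more exothermic at 373.15 K, as the figure suggests.

```python
def kirchhoff_const(dH1, T1, T2, dCp):
    """Delta H_rxn(T2) from Delta H_rxn(T1), assuming Delta C_P is
    constant over [T1, T2] (units: J and J/K)."""
    return dH1 + dCp * (T2 - T1)

# Illustrative exothermic reaction with negative Delta C_P:
dH_373 = kirchhoff_const(-90_000.0, 298.15, 373.15, -25.0)
print(round(dH_373, 6))  # ≈ -91875 J: more negative at the higher T
```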
We can also find the effect of temperature on the molar differential reaction enthalpy $\Delta_\mathrm{r}H$. From $\ref{3}$, we have $\left(\partial \Delta_{\mathrm{r}} H / \partial T\right)_{P, \xi}=\Delta_{\mathrm{r}} C_{p}$. Integration from temperature $T'$ to temperature $T''$ yields the relation
$\Delta_{\mathrm{r}} H\left(T^{\prime \prime}, \xi\right)=\Delta_{\mathrm{r}} H\left(T^{\prime}, \xi\right)+\int_{T^{\prime}}^{T^{\prime \prime}} \Delta_{\mathrm{r}} C_{p}(T, \xi) \mathrm{d} T$
This relation is analogous to $\ref{6}$, using molar differential reaction quantities in place of integral reaction quantities.
• 19.1: How The Enthalpy Change for a Reaction Depends on Temperature
We use tabulated enthalpies of formation to calculate the enthalpy change for a particular chemical reaction. Such tables typically give enthalpies of formation at a number of different temperatures, so that the enthalpy change for a given reaction can also be calculated at these different temperatures; it is just a matter of repeating the same calculation at each temperature.
19: Extension 13 - More Cycles
Previously, we saw how to use tabulated enthalpies of formation to calculate the enthalpy change for a particular chemical reaction. Such tables typically give enthalpies of formation at a number of different temperatures, so that the enthalpy change for a given reaction can also be calculated at these different temperatures; it is just a matter of repeating the same calculation at each temperature.
We often need to find the enthalpy change associated with increasing the temperature of a substance at constant pressure. This enthalpy change is readily calculated by integrating the heat capacity over the temperature change. We may want to know, for example, the enthalpy change for increasing the temperature of one mole of methane from 300 K to 400 K, with the pressure held constant at one bar. From the table, we find
$\Delta_fH^o\left(CH_4\mathrm{,g,300\ K}\right)=-74.656\ \mathrm{kJ}\ \mathrm{mol}^{-1}$
$\Delta_fH^o\left(CH_4\mathrm{,g,400\ K}\right)=-77.703\ \mathrm{kJ}\ \mathrm{mol}^{-1}$
We might be tempted to think that the difference represents the enthalpy change associated with heating the methane. This is not so! The reason becomes immediately apparent if we consider a cycle in which we go from the elements to a compound at two different temperatures. For methane, this cycle is shown in Figure $1$.
Figure $1$: A thermochemical cycle relating $\Delta_fH^o (CH_4)$ at two temperatures.
The difference between the standard enthalpies of formation of methane at 300 K and 400 K reflects the enthalpy change for increasing the temperatures of all of the reactants and products from 300 K to 400 K. That is,
$\Delta_fH^o\left(CH_4\mathrm{,g,400\ K}\right)-\Delta_fH^o\left(CH_4\mathrm{,g,300\ K}\right) =\int^{400}_{300}{C_P\left(CH_4\mathrm{,g}\right)dT}-\int^{400}_{300}{C_P\left(C\mathrm{,s}\right)dT}-2\int^{400}_{300}{C_P\left(H_2\mathrm{,g}\right)dT}$
Over the temperature range from 300 K to 400 K, the heat capacities of carbon, hydrogen, and methane are approximated by $C_P=a+bT$, with values of $a$ and $b$ given in Table 1. From this information, we calculate the enthalpy change for increasing the temperature of one mole of each substance from 300 K to 400 K at 1 bar: $\Delta H\left(C\right)=1,029\ \mathrm{J}\ {\mathrm{mol}}^{-1}$, $\Delta H\left(H_2\right)=2,902\ \mathrm{J}\ {\mathrm{mol}}^{-1}$, and $\Delta H\left(CH_4\right)=3,819\ \mathrm{J}\ {\mathrm{mol}}^{-1}$. Thus, from the cycle, we calculate:
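Each warming enthalpy above is $\int^{400}_{300}{\left(a+bT\right)dT}=a\left(T_2-T_1\right)+\tfrac{b}{2}\left(T^2_2-T^2_1\right)$. Since Table 1 is not reproduced here, the sketch below uses illustrative $a$ and $b$ values purely to check the closed form against a numerical quadrature:

```python
def dH_heating(a, b, T1, T2):
    """Integral of C_P = a + b*T from T1 to T2 at constant P, closed form."""
    return a * (T2 - T1) + 0.5 * b * (T2**2 - T1**2)

def dH_numeric(a, b, T1, T2, n=10_000):
    """Same integral by the trapezoidal rule, as a sanity check."""
    h = (T2 - T1) / n
    Cp = [a + b * (T1 + i * h) for i in range(n + 1)]
    return h * (0.5 * Cp[0] + sum(Cp[1:-1]) + 0.5 * Cp[-1])

a, b = 14.1, 0.076  # illustrative coefficients, J mol^-1 K^-1 and J mol^-1 K^-2
exact = dH_heating(a, b, 300.0, 400.0)
approx = dH_numeric(a, b, 300.0, 400.0)
print(round(exact, 3), round(approx, 3))  # both ≈ 4070.0 J/mol
```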
$\Delta_fH^o\left(CH_4\mathrm{,g,400\ K}\right)=-74,656+3,819-1,029-2\left(2,902\right)\ \mathrm{J}\ {\mathrm{mol}}^{-1}=\ -77,670\ \mathrm{J}\ {\mathrm{mol}}^{-1}$
The tabulated value is $-77,703\ \mathrm{J}\ {\mathrm{mol}}^{-1}$. The two values differ by $33\ \mathrm{J}\ {\mathrm{mol}}^{-1}$, or about 0.04%. This difference arises from the limitations of the two-parameter heat-capacity equations.
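The cycle in Figure 1 reduces to four additions. Using the warming enthalpies just computed, a short sketch reproduces the $-77{,}670\ \mathrm{J}\ {\mathrm{mol}}^{-1}$ estimate:

```python
# All values in J/mol, from the worked example (heating from 300 K to 400 K):
dfH_CH4_300 = -74_656
dH_C, dH_H2, dH_CH4 = 1_029, 2_902, 3_819

# Around the cycle: form CH4 at 300 K, warm the product,
# and subtract the warming of the elements (C + 2 H2):
dfH_CH4_400 = dfH_CH4_300 + dH_CH4 - dH_C - 2 * dH_H2
print(dfH_CH4_400)  # → -77670, vs. the tabulated -77703 (≈0.04% apart)
```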
As another example of a thermochemical cycle, let us consider the selective oxidation of methane to methanol at 300 K and 400 K. From the enthalpies of formation in Table 1, we calculate the enthalpies for the reaction to be $\Delta_rH^o\left(\mathrm{300\ K}\right)=-126.412\ \mathrm{kJ}\ {\mathrm{mol}}^{-1}$ and $\Delta_rH^o\left(\mathrm{400\ K}\right)=-126.919\ \mathrm{kJ}\ {\mathrm{mol}}^{-1}$. As in the previous example, we use the tabulated heat-capacity parameters to calculate the enthalpy change for increasing the temperature of one mole of each of these gases from 300 K to 400 K at 1 bar. We find: $\Delta H\left(CH_3OH\right)=4,797\ \mathrm{J}\ {\mathrm{mol}}^{-1}$, $\Delta H\left(CH_4\right)=3,819\ \mathrm{J}\ {\mathrm{mol}}^{-1}$, and $\Delta H\left(O_2\right)=2,975\ \mathrm{J}\ {\mathrm{mol}}^{-1}$.
Figure $2$: A thermochemical cycle relating $\Delta_rH^o$ at two temperatures.
The cycle is shown in Figure $2$. Inspecting this cycle, we see that we can calculate the enthalpy change for warming one mole of methanol from 300 K to 400 K by summing the enthalpy changes around the bottom, left side, and top of the cycle; that is,
$\Delta H\left(CH_3OH\right)=126,412+3,819+\left(\frac{1}{2}\right)2,975-126,919\ \mathrm{J}\ {\mathrm{mol}}^{-1}=4,800\ \mathrm{J}\ {\mathrm{mol}}^{-1}$
This is 3 J or about 0.06 % larger than the value obtained $\left(4,797\ \mathrm{J}\right)$ by integrating the heat capacity for methanol.
• 20.1: Prelude to Chemical Equilibria
The small is great, the great is small; all is in equilibrium in necessity... - Victor Hugo in “Les Miserables”
• 20.2: Chemical Potential
Equilibrium can be understood as occurring at the composition of a reaction mixture at which the aggregate chemical potential of the products is equal to that of the reactants.
20: Fundamental 14 - Reaction Equilibrium
The small is great, the great is small; all is in equilibrium in necessity... - Victor Hugo in “Les Miserables”
As was discussed in Chapter 6, the natural tendency of chemical systems is to seek a state of minimum Gibbs function. Once the minimum is achieved, movement in any chemical direction will not be spontaneous. It is at this point that the system achieves a state of equilibrium.
From the diagram above, it should be clear that the direction of spontaneous change is determined by minimizing
$\left(\frac{\partial G}{\partial \xi}\right)_{P,T}.$
If the slope of the curve is negative, the reaction will favor a shift toward products. And if it is positive, the reaction will favor a shift toward reactants. This is a non-trivial point, as it underscores the importance of the composition of the reaction mixture in the determination of the direction of the reaction.
Contributors
• Patrick E. Fleming (Department of Chemistry and Biochemistry; California State University, East Bay)
20.02: Chemical Potential
Equilibrium can be understood as occurring at the composition of a reaction mixture at which the aggregate chemical potential of the products is equal to that of the reactants. Consider the simple reaction
$A(g) \rightleftharpoons B(g)$
The criterion for equilibrium will be
$\mu_A=\mu_B$
If the gases behave ideally, the chemical potentials can be described in terms of the mole fractions of $A$ and $B$
$\mu_A^o + RT \ln\left( \dfrac{P_A}{P_{tot}} \right) = \mu_B^o + RT \ln\left( \dfrac{P_B}{P_{tot}} \right) \label{eq2}$
where Dalton’s Law has been used to express the mole fractions.
$\chi_i = \dfrac{P_i}{P_{tot}}$
Equation \ref{eq2} can be simplified by collecting all chemical potential terms on the left
$\mu_A^o - \mu_B^o = RT \ln \left( \dfrac{P_B}{P_{tot}} \right) - RT \ln\left( \dfrac{P_A}{P_{tot}} \right) \label{eq3}$
Combining the logarithm terms and recognizing that
$\mu_A^o - \mu_B^o = -\Delta G^o$
for the reaction, one obtains
$-\Delta G^o = RT \ln \left( \dfrac{P_B}{P_{A}} \right)$
And since the equilibrium constant for this reaction is $K_P = P_B/P_A$ (assuming perfectly ideal behavior), one can write
$\Delta G^o = -RT \ln K_P$
Another way to achieve this result is to consider the Gibbs function change for a reaction mixture in terms of the reaction quotient. The reaction quotient can be expressed as
$Q_P = \dfrac{\prod_i P_i^{\nu_i}}{\prod_j P_j^{\nu_j}}$
where $\nu_i$ are the stoichiometric coefficients for the products, and $\nu_j$ are those for the reactants. Or if the stoichiometric coefficients are defined by expressing the reaction as a sum
$0 =\sum_i \nu_i X_i$
where $X_i$ refers to one of the species in the reaction, and $\nu_i$ is then the stoichiometric coefficient for that species, it is clear that $\nu_i$ will be negative for a reactant (since its concentration or partial pressure will decrease as the reaction moves forward) and positive for a product (since its concentration or partial pressure will be increasing). If the stoichiometric coefficients are expressed in this way, the expression for the reaction quotient becomes
$Q_P = \prod_i P_i^{\nu_i}$
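As an illustration of this signed-coefficient convention, the sketch below evaluates $Q_P$ for $2\,NO + O_2 \rightarrow 2\,NO_2$; the partial pressures are made-up values chosen only to show the bookkeeping:

```python
# Reaction quotient Q_P = prod_i P_i^nu_i, with nu_i negative for reactants
# and positive for products. Partial pressures (bar) are illustrative only,
# for the reaction 2 NO + O2 -> 2 NO2.
species = {"NO": (-2, 0.50), "O2": (-1, 0.25), "NO2": (+2, 1.00)}

Q_P = 1.0
for name, (nu, P) in species.items():
    Q_P *= P ** nu

# Equivalent to the ratio form P_NO2^2 / (P_NO^2 * P_O2)
print(Q_P)  # 16.0
```

Note that the single product over all species reproduces the familiar products-over-reactants ratio automatically.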
Using this expression, the Gibbs function change for the system can be calculated from
$\Delta G =\Delta G^o + RT \ln Q_P$
And since at equilibrium
$\Delta G = 0$
and
$Q_P=K_P$
It is evident that
$\Delta G_{rxn}^o = -RT \ln K_P \label{triangle}$
It is in this simple way that $K_P$ and $\Delta G^o$ are related.
It is also of value to note that the criterion for a spontaneous chemical process is that $\Delta G_{rxn} < 0$, rather than $\Delta G_{rxn}^o < 0$, as is stated in many texts! Recall that $\Delta G_{rxn}^o$ is a function of all of the reactants and products being in their standard states of unit fugacity or activity. However, the direction of spontaneous change for a chemical reaction is dependent on the composition of the reaction mixture. Similarly, the magnitude of the equilibrium constant is insufficient to determine whether a reaction will spontaneously form reactants or products, as the direction the reaction will shift is a function not just of the equilibrium constant, but also of the composition of the reaction mixture!
Example $1$:
Based on the data below at 298 K, calculate the value of the equilibrium constant ($K_P$) for the reaction
$2 NO(g) + O_2(g) \rightleftharpoons 2 NO_2(g)$
$NO(g)$ $NO_2(g)$
$\Delta G_f^o$ (kJ/mol) 86.55 51.53
Solution:
First calculate the value of $\Delta G_{rxn}^o$ from the $\Delta G_{f}^o$ data.
$\Delta G_{rxn}^o = 2 \times (51.53 \,kJ/mol) - 2 \times (86.55 \,kJ/mol) = -70.04 \,kJ/mol$
And now use the value to calculate $K_p$ using Equation \ref{triangle}.
$-70040\, J/mol = -(8.314 \,J/(mol\, K)) (298 \, K) \ln K_p$
$K_p = 1.89 \times 10^{12}$
Note: as expected for a reaction with a very large negative $\Delta G_{rxn}^o$, the equilibrium constant is very large, favoring the formation of the products.
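The same arithmetic, sketched in code using the values from the table above (and $\Delta G_f^o = 0$ for elemental $O_2$):

```python
import math

R = 8.314   # gas constant, J/(mol K)
T = 298.0   # temperature, K

# Delta G_rxn from the Delta G_f values (kJ/mol); O2 is an element, so 0.
dG_rxn = 2 * 51.53 - (2 * 86.55 + 0)   # kJ/mol
dG_rxn_J = dG_rxn * 1000               # -70040 J/mol

# Delta G° = -RT ln K_P  =>  K_P = exp(-Delta G° / RT)
K_p = math.exp(-dG_rxn_J / (R * T))
print(f"{K_p:.3g}")  # ~1.9e12
```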
Contributors
• Patrick E. Fleming (Department of Chemistry and Biochemistry; California State University, East Bay)
• 21.1: Temperature Dependence of Equilibrium Constants - the van’t Hoff Equation
The value of Kp is independent of pressure, although the composition of a system at equilibrium may be very much dependent on pressure. Temperature dependence is another matter. Because the value of ΔG° is dependent on temperature, the value of Kp is as well. The form of the temperature dependence can be taken from the definition of the Gibbs function.
21: Extension 14 - Temperature Dependence of Equilibrium
The value of $K_p$ is independent of pressure, although the composition of a system at equilibrium may be very much dependent on pressure. Temperature dependence is another matter. Because the value of $\Delta G_{rxn}^o$ is dependent on temperature, the value of $K_p$ is as well. The form of the temperature dependence can be taken from the definition of the Gibbs function. At constant pressure
$\dfrac{\Delta G^o_{T_2}}{T_2} - \dfrac{\Delta G^o_{T_1}}{T_1} = \Delta H^o \left(\dfrac{1}{T_2} - \dfrac{1}{T_1} \right) \nonumber$
Substituting
$\Delta G^o = -RT \ln K \nonumber$
for the two values of $\Delta G^o$ and using the appropriate temperatures yields
$\dfrac{-R{T_2} \ln K_2}{T_2} - \dfrac{-R{T_1} \ln K_1}{T_1} = \Delta H^o \left(\dfrac{1}{T_2} - \dfrac{1}{T_1} \right) \nonumber$
And simplifying the expression so that only terms involving $K$ are on the left and all other terms are on the right results in the van ’t Hoff equation, which describes the temperature dependence of the equilibrium constant.
$\ln \left(\dfrac{\ K_2}{\ K_1}\right) = - \dfrac{\Delta H^o}{R} \left(\dfrac{1}{T_2} - \dfrac{1}{T_1} \right) \label{vH}$
Because of the assumptions made in the derivation of the Gibbs-Helmholtz equation, this relationship only holds if $\Delta H^o$ is independent of temperature over the range being considered. This expression also suggests that a plot of $\ln(K)$ as a function of $1/T$ should produce a straight line with a slope equal to $–\Delta H^o/R$. Such a plot is known as a van ’t Hoff plot, and can be used to determine the reaction enthalpy.
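As a sketch of how a van 't Hoff plot is used, the code below generates $\ln K$ values from an assumed, purely illustrative $\Delta H^o$ and $\Delta S^o$ (these are not data for any real reaction), fits $\ln K$ versus $1/T$ by least squares, and recovers $\Delta H^o$ from the slope:

```python
import math

R = 8.314        # gas constant, J/(mol K)
dH = 32_400.0    # assumed reaction enthalpy, J/mol (illustrative)
dS = 55.0        # assumed reaction entropy, J/(mol K) (illustrative)

# Synthetic "measurements": ln K = -dH/(R T) + dS/R at several temperatures
temps = [280.0, 300.0, 320.0, 340.0]
x = [1.0 / T for T in temps]                   # 1/T values
y = [-dH / (R * T) + dS / R for T in temps]    # ln K values

# Least-squares slope of the van 't Hoff plot (ln K vs 1/T)
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
        sum((xi - xbar) ** 2 for xi in x)

# The slope equals -dH/R, so the enthalpy is recovered as:
dH_recovered = -slope * R
print(round(dH_recovered))  # 32400
```

With real data the points scatter about the line, and the quality of the fit indicates how well the constant-$\Delta H^o$ assumption holds over the temperature range.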
Example $1$
A certain reaction has a value of $K_p = 0.0260$ at 25 °C and $\Delta H_{rxn}^o = 32.4 \,kJ/mol$. Calculate the value of $K_p$ at 37 °C.
Solution
This is a job for the van ’t Hoff equation!
• T1 = 298 K
• T2 = 310 K
• $\Delta H_{rxn}^o = 32.4 \,kJ/mol$
• K1 = 0.0260
• K2 = ?
So Equation \ref{vH} becomes
\begin{align*} \ln \left( \dfrac{\ K_2}{0.0260} \right) &= - \dfrac{32400 \,J/mol}{8.314 \,J/(mol \,K)} \left(\dfrac{1}{310\, K} - \dfrac{1}{298 \,K} \right) \[4pt] K_2 &= 0.0431 \end{align*}
Note: the value of $K_2$ increased with increasing temperature, which is what is expected for an endothermic reaction. An increase in temperature should result in an increase of product formation in the equilibrium mixture. But unlike a change in pressure, a change in temperature actually leads to a change in the value of the equilibrium constant!
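The two-point van 't Hoff calculation from Example 1 can be reproduced directly:

```python
import math

R = 8.314        # gas constant, J/(mol K)
T1, T2 = 298.0, 310.0   # 25 C and 37 C in kelvin
K1 = 0.0260
dH = 32_400.0    # reaction enthalpy, J/mol

# ln(K2/K1) = -(dH/R) (1/T2 - 1/T1), solved for K2
K2 = K1 * math.exp(-(dH / R) * (1.0 / T2 - 1.0 / T1))
print(f"{K2:.4f}")  # 0.0431
```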
Example $2$
Given the following average bond enthalpies for $\ce{P-Cl}$ and $\ce{Cl-Cl}$ bonds, predict whether or not an increase in temperature will lead to a larger or smaller degree of dissociation for the reaction
$\ce{PCl_5 \rightleftharpoons PCl_3 + Cl_2} \nonumber$
X-Y D(X-Y) (kJ/mol)
P-Cl 326
Cl-Cl 240
Solution
The estimated reaction enthalpy is given by the total energy expended breaking bonds minus the energy recovered by the formation of bonds. Since this reaction involves breaking two P-Cl bonds (costing 652 kJ/mol) and the formation of one Cl-Cl bond (recovering 240 kJ/mol), it is clear that the reaction is endothermic (by approximately 412 kJ/mol). As such, an increase in temperature should increase the value of the equilibrium constant, causing the degree of dissociation to be increased at the higher temperature.
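The bond-enthalpy bookkeeping in this example is a short sketch, using the average bond enthalpies from the table:

```python
# Estimate the reaction enthalpy for PCl5 -> PCl3 + Cl2 from average bond
# enthalpies: energy in (bonds broken) minus energy out (bonds formed).
D = {"P-Cl": 326, "Cl-Cl": 240}   # average bond enthalpies, kJ/mol

bonds_broken = 2 * D["P-Cl"]   # two P-Cl bonds lost going from PCl5 to PCl3
bonds_formed = 1 * D["Cl-Cl"]  # one Cl-Cl bond formed

dH_est = bonds_broken - bonds_formed
print(dH_est)  # 412 kJ/mol: endothermic, so K increases with temperature
```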
• 22.1: Fundamentals of Phase Transitions
Phase transition is when a substance changes from a solid, liquid, or gas state to a different state. Every element and substance can transition from one phase to another at a specific combination of temperature and pressure.
• 22.2: Phase Diagrams
Phase diagram is a graphical representation of the physical states of a substance under different conditions of temperature and pressure. A typical phase diagram has pressure on the y-axis and temperature on the x-axis. As we cross the lines or curves on the phase diagram, a phase change occurs. In addition, two states of the substance coexist in equilibrium on the lines or curves.
22: Fundamental 15 - Phase Equilibrium
Phase transition is when a substance changes from a solid, liquid, or gas state to a different state. Every element and substance can transition from one phase to another at a specific combination of temperature and pressure.
Phase Changes
Each substance has three phases it can change into: solid, liquid, or gas(1). Every substance is in one of these three phases at certain temperatures. The temperature and pressure at which the substance will change phase is very dependent on the intermolecular forces that are acting on the molecules and atoms of the substance(2). There can be two phases coexisting in a single container at the same time. This typically happens when the substance is transitioning from one phase to another. This is called a two-phase state(4). In the example of ice melting, while the ice is melting, there is both solid water and liquid water in the cup.
There are six ways a substance can change between these three phases: melting, freezing, evaporating, condensing, sublimation, and deposition(2). These processes are reversible and each transfers between phases differently:
• Melting: The transition from the solid to the liquid phase
• Freezing: The transition from the liquid phase to the solid phase
• Evaporating: The transition from the liquid phase to the gas phase
• Condensing: The transition from the gas phase to the liquid phase
• Sublimation: The transition from the solid phase to the gas phase
• Deposition: The transition from the gas phase to the solid phase
How Phase Transition works
There are two variables to consider when looking at phase transition, pressure (P) and temperature (T). For the gas state, the relationship between temperature and pressure is defined by the equations below:
Ideal Gas Law:
$PV=nRT$
van der Waals Equation of State:
$\left(P+a\,\frac{n^2}{V^2}\right)\left(V-nb\right)=nRT$
where $V$ is volume, $R$ is the gas constant, and $n$ is the number of moles of gas.
The ideal gas law assumes that no intermolecular forces are affecting the gas in any way, while the van der Waals equation includes two constants, a and b, that account for any intermolecular forces acting on the molecules of the gas.
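To see how the two equations of state differ numerically, the sketch below compares the ideal-gas and van der Waals pressures for one mole of gas in 1 L at 300 K. The $a$ and $b$ values used are commonly tabulated constants for CO2; treat them as illustrative inputs:

```python
R = 0.083145  # gas constant in L bar / (mol K)

def p_ideal(n, V, T):
    """Ideal gas pressure in bar (V in L, T in K)."""
    return n * R * T / V

def p_vdw(n, V, T, a, b):
    """van der Waals pressure in bar; a in L^2 bar mol^-2, b in L mol^-1."""
    return n * R * T / (V - n * b) - a * n**2 / V**2

# CO2 constants (commonly tabulated values): a = 3.640, b = 0.04267
n, V, T = 1.0, 1.0, 300.0
print(p_ideal(n, V, T))                 # ~24.9 bar
print(p_vdw(n, V, T, 3.640, 0.04267))   # ~22.4 bar
```

The van der Waals pressure comes out lower here because the attraction term $a n^2/V^2$ outweighs the excluded-volume correction at this density.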
Temperature
Temperature can change the phase of a substance. One common example is putting water in a freezer to change it into ice. In the picture above, we have a solid substance in a container. When we put it on a heat source, like a burner, heat is transferred to the substance increasing the kinetic energy of the molecules in the substance. The temperature increases until the substance reaches its melting point(2). As more and more heat is transferred beyond the melting point, the substance begins to melt and become a liquid(3). This type of phase change is called an isobaric process because the pressure of the system stays at a constant level.
Melting point (Tf)
Each substance has a melting point. The melting point is the temperature that a solid will become a liquid. At different pressures, different temperatures are required to melt a substance. Each pure element on the periodic table has a normal melting point, the temperature that the element will become liquid when the pressure is 1 atmosphere(2).
Boiling Point (Tb)
Each substance also has a boiling point. The boiling point is the temperature that a liquid will evaporate into a gas. The boiling point will change based on the temperature and pressure. Just like the melting point, each pure element has a normal boiling point at 1 atmosphere(2).
Pressure
Pressure can also be used to change the phase of the substance. In the picture above, we have a container fitted with a piston that seals in a gas. As the piston compresses the gas, the pressure increases. Once the boiling point has been reached, the gas will condense into a liquid. As the piston continues to compress the liquid, the pressure will increase until the melting point has been reached. The liquid will then freeze into a solid. This example is for an isothermal process where the temperature is constant and only the pressure is changing.
A Brief Explanation of a Phase Diagram
Phase transition can be represented with a phase diagram. A phase diagram is a visual representation of how a substance changes phases.
This is an example of a phase diagram. Often, when you are asked about a phase transition, you will need to refer to a phase diagram to answer it. These diagrams usually have the normal boiling point and normal melting point marked on them, and have the pressures on the y-axis and temperatures on the x-axis. The bottom curve marks the temperature and pressure combinations at which the substance will sublimate (1). The left line marks the temperature and pressure combinations at which the substance will melt (1). Finally, the right line marks the conditions under which the substance will evaporate (1).
Problems
1. Using the phase diagram for carbon dioxide below, explain what phase carbon dioxide is normally in at standard temperature and pressure, 1 atm and 273.15 K.
Phase diagram for CO2, from Wikipedia.
2: Looking at the same diagram, we see that carbon dioxide does not have a normal melting point or a normal boiling point. Explain what kind of a change carbon dioxide makes at 1 atm and estimate the temperature of this point.
Solutions
1: Before we can completely answer the question, we need to convert the given information to match the units in the diagram. First we convert 273.15 K into Celsius: $C = K - 273.15$, so $273.15\ \mathrm{K}$ corresponds to 0 °C. Now we can look at the diagram and determine the phase. At 0 °C and 1 atm carbon dioxide is in the gas phase.
2: Carbon dioxide sublimes at 1 atm because it transitions from the solid phase directly to the gas phase. The temperature of sublimation at 1 atm is about -80 degrees Celsius.
Contributors and Attributions
• Kirsten Amdahl (UC Davis)
Phase diagram is a graphical representation of the physical states of a substance under different conditions of temperature and pressure. A typical phase diagram has pressure on the y-axis and temperature on the x-axis. As we cross the lines or curves on the phase diagram, a phase change occurs. In addition, two states of the substance coexist in equilibrium on the lines or curves.
Introduction
A phase transition is the transition from one state of matter to another. There are three states of matter: liquid, solid, and gas.
• Liquid: A state of matter that consists of loose, free moving particles which take the shape set by the boundaries of the container in which the liquid is held. This happens because the motion of the individual particles within a liquid is much less restricted than in a solid. One may notice that some liquids flow readily whereas some liquids flow slowly. A liquid's relative resistance to flow is viscosity.
• Solid: A state of matter with tightly packed particles which do not change the shape or volume of the container that it is in. However, this does not mean that the volume of a solid is a constant. Solids can expand and contract when temperatures change. This is why when you look up the density of a solid, it will indicate the temperature at which the value for density is listed. Solids have strong intermolecular forces that keep particles in close proximity to one another. Another interesting thing to think about is that all true solids have crystalline structures. This means that their particles are arranged in a three-dimensional, orderly pattern. Solids will undergo phase changes when they come across energy changes.
• Gas: A state of matter where particles are spread out with no definite shape or volume. The particles of a gas will take the shape and fill the volume of the container that it is placed in. In a gas, there are no intermolecular forces holding the particles of a gas together since each particle travels at its own speed in its own direction. The particles of a gas are often separated by great distances.
Phase diagrams illustrate the variations between the states of matter of elements or compounds as they relate to pressure and temperatures. The following is an example of a phase diagram for a generic single-component system:
• Triple point – the point on a phase diagram at which the three states of matter: gas, liquid, and solid coexist
• Critical point – the point on a phase diagram at which the substance is indistinguishable between liquid and gaseous states
• Fusion (melting) (or freezing) curve – the curve on a phase diagram which represents the transition between liquid and solid states
• Vaporization (or condensation) curve – the curve on a phase diagram which represents the transition between gaseous and liquid states
• Sublimation (or deposition) curve – the curve on a phase diagram which represents the transition between gaseous and solid states
Phase diagrams plot pressure (typically in atmospheres) versus temperature (typically in degrees Celsius or Kelvin). The labels on the graph represent the stable states of a system in equilibrium. The lines represent the combinations of pressures and temperatures at which two phases can exist in equilibrium. In other words, these lines define phase change points. The red line divides the solid and gas phases and represents sublimation (solid to gas) and deposition (gas to solid). The green line divides the solid and liquid phases and represents melting (solid to liquid) and freezing (liquid to solid). The blue line divides the liquid and gas phases and represents vaporization (liquid to gas) and condensation (gas to liquid). There are also two important points on the diagram, the triple point and the critical point. The triple point represents the combination of pressure and temperature that facilitates all phases of matter at equilibrium. The critical point terminates the liquid/gas phase line and relates to the critical pressure, the pressure above which a supercritical fluid forms.
With most substances, the temperature and pressure related to the triple point lie below standard temperature and pressure and the pressure for the critical point lies above standard pressure. Therefore at standard pressure as temperature increases, most substances change from solid to liquid to gas, and at standard temperature as pressure increases, most substances change from gas to liquid to solid.
Exception: Water
Normally the solid/liquid phase line slopes positively to the right (as in the diagram for carbon dioxide below). However for other substances, notably water, the line slopes to the left as the diagram for water shows. This indicates that the liquid phase is more dense than the solid phase. This phenomenon is caused by the crystal structure of the solid phase. In the solid forms of water and some other substances, the molecules crystallize in a lattice with greater average space between molecules, thus resulting in a solid with a lower density than the liquid. Because of this phenomenon, one is able to melt ice simply by applying pressure and not by adding heat.
Moving About the Diagram
Moving about the phase diagram reveals information about the phases of matter. Moving along a constant temperature line reveals relative densities of the phases. When moving from the bottom of the diagram to the top, the relative density increases. Moving along a constant pressure line reveals relative energies of the phases. When moving from the left of the diagram to the right, the relative energies increase.
Important Definitions
• Sublimation is when the substance goes directly from solid to the gas state.
• Deposition occurs when a substance goes from a gas state to a solid state; it is the reverse process of sublimation.
• Melting occurs when a substance goes from a solid to a liquid state.
• Freezing (solidification) is when a substance goes from a liquid to a solid state, the reverse of melting (fusion).
• Vaporization (or evaporation) is when a substance goes from a liquid to a gaseous state.
• Condensation occurs when a substance goes from a gaseous to a liquid state, the reverse of vaporization.
• Critical Point – the point in temperature and pressure on a phase diagram where the liquid and gaseous phases of a substance merge together into a single phase. Beyond the temperature of the critical point, the merged single phase is known as a supercritical fluid.
• Triple Point – the combination of temperature and pressure at which the three phases of the substance coexist in equilibrium.
Problems
Imagine a substance with the following points on the phase diagram: a triple point at .5 atm and -5°C; a normal melting point at 20°C; a normal boiling point at 150°C; and a critical point at 5 atm and 1000°C. The solid liquid line is "normal" (meaning positive sloping). For this, complete the following:
1. Roughly sketch the phase diagram, using units of atmosphere and Kelvin.
2. Rank the states with respect to increasing density and increasing energy.
3. Describe what one would see at pressures and temperatures above 5 atm and 1000°C.
Answer
One would see a super-critical fluid; when approaching the point, one would see the meniscus between the liquid and gas disappear.
4. Describe what will happen to the substance when it begins in a vacuum at -15°C and is slowly pressurized.
Answer
The substance would begin as a gas and as the pressure increases, it would compress and eventually solidify without liquefying, as the temperature is below the triple point temperature.
5. Describe the phase changes from -80°C to 500°C at 2 atm.
Answer
The substance would melt at somewhere around, but above, 20°C and then boil at somewhere around, but above, 150°C. It would not form a super-critical fluid, as neither the pressure nor the temperature reaches the critical pressure or temperature.
6. What exists in a system that is at 1 atm and 150°C?
Answer
Depending on how much energy is in the system, there will be different amounts of liquid and gas at equilibrium. If just enough energy was added to raise the temperature of the liquid to 150°C, there will just be liquid. If more was added, there will be some liquid and some gas. If just enough energy was added to change the state of all of the liquid without raising the temperature of the gas, there will just be gas.
7. Label the areas 1, 2, 3, and 4 and points O and C on the diagram.
Answer
1 - solid, 2 - liquid, 3 - gas, 4 - supercritical fluid; point O - triple point, point C - critical point.
8. A sample of dry ice (solid CO2) is cooled to -100 °C, and is set on a table at room temperature (25 °C). At what temperature is the rate of sublimation and deposition the same? (Assume pressure is held constant at 1 atm.)
Answer
-78.5 °C (the phase of dry ice changes from solid to gas at -78.5 °C).
Contributors and Attributions
• Matthew McKinnell (UCD), Jessie Verhein (UCD), Pei Yu (UCD), Lok Ka Chan (UCD), Jessica Dhaliwal (UCD), Shyall Bhela (UCD), Candace Wong-Sing (UCD)
• 23.1: Criterion for Phase Equilibrium
The thermodynamic criterion for phase equilibrium is simple. It is based upon the chemical potentials of the components in a system. For simplicity, consider a system with only one component. For the overall system to be in equilibrium, the chemical potential of the compound in each phase present must be the same.
23: Extension 15 - Phase Rule
The thermodynamic criterion for phase equilibrium is simple. It is based upon the chemical potentials of the components in a system. For simplicity, consider a system with only one component. For the overall system to be in equilibrium, the chemical potential of the compound in each phase present must be the same. Otherwise, there will be some mass migration from one phase to another, decreasing the total chemical potential of the phase from which material is being removed, and increasing the total chemical potential of the phase into which the material is being deposited. So for each pair of phases present ($\alpha$ and $\beta$) the following must be true:
$\mu_\alpha = \mu_\beta$
Gibbs Phase Rule
The Gibbs phase rule describes the number of compositional and phase variables that can be varied freely for a system at equilibrium. For each phase present in a system, the mole fraction of all but one component can be varied independently. However, the relationship
$\sum_i \chi_i =1$
places a constraint on the last mole fraction. As such, there are $c – 1$ compositional degrees of freedom for each phase present, where $c$ is the number of components in the mixture. Similarly, for each component the chemical potential must be the same in every phase present, which places $p – 1$ thermodynamic constraints on each component, where $p$ is the number of phases. Finally, there are two state variables that can be varied (such as pressure and temperature), adding two additional degrees of freedom to the system. The net number of degrees of freedom is determined by adding all of the degrees of freedom and subtracting the number of thermodynamic constraints.
\begin{align} F &= 2+ p(c-1) - c(p-1) \nonumber \[4pt] &= 2 + pc - p -pc +c \nonumber \[4pt] &= 2+c-p \label{Phase} \end{align}
Equation \ref{Phase} is the Gibbs phase rule.
Example $1$:
Show that the maximum number of phases that can co-exist at equilibrium for a single component system is $p = 3$.
Solution:
The maximum number of components will occur when the number of degrees of freedom is zero.
\begin{align*} 0 &= 2+1 -p \[4pt] p&=3 \end{align*}
Note: This shows that there can never be a “quadruple point” for a single component system!
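The phase rule itself is trivial to encode, which makes it easy to tabulate degrees of freedom for the common single-component cases:

```python
def degrees_of_freedom(c, p):
    """Gibbs phase rule: F = 2 + c - p for c components and p phases."""
    return 2 + c - p

# Single-component system (c = 1):
print(degrees_of_freedom(1, 1))  # 2: T and P both free within one phase
print(degrees_of_freedom(1, 2))  # 1: along a coexistence line
print(degrees_of_freedom(1, 3))  # 0: the triple point is fully fixed
# F is already zero at p = 3, so a fourth coexisting phase is impossible.
```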
Because a system at its triple point has no degrees of freedom, the triple point makes a very convenient physical condition at which to define a temperature. For example, the International Temperature Scale of 1990 (ITS-90) uses the triple points of hydrogen, neon, oxygen, argon, mercury, and water to define several low temperatures. (The calibration of a platinum resistance thermometer at the triple point of argon, for example, is described by Strouse (Strouse, 2008)). The advantage to using a triple point is that the compound sets both the temperature and pressure, rather than forcing the researcher to set a pressure and then measure the temperature of a phase change, introducing an extra parameter that can introduce uncertainty into the measurement.
Contributors
• Patrick E. Fleming (Department of Chemistry and Biochemistry; California State University, East Bay)
• 24.1: Ideal Solutions - Raoult's Law
How does mixing two volatile liquids affect vapor pressure? Read on to find out...
• 24.2: Thermodynamics of Mixing
When solids, liquids or gases are combined, the thermodynamic quantities of the system experience a change as a result of the mixing. This module will discuss the effect that mixing has on a solution’s Gibbs energy, enthalpy, and entropy, with a specific focus on the mixing of two gases.
24: Fundamental 16 - Solution Equilibrium
When two substances whose molecules are very similar form a liquid solution, the vapor pressure of the mixture is very simply related to the vapor pressures of the pure substances. Suppose, for example, we mix 1 mol benzene with 1 mol toluene as shown in the figure below.
The mole fraction of benzene, $X_b$, and the mole fraction of toluene, $X_t$, are both equal to 0.5. At 79.6°C the measured vapor pressure of this mixture is 516 mmHg, slightly less than 517 mmHg, the average of the vapor pressures of pure benzene (744 mmHg) and of pure toluene (290 mmHg) at the same temperature.
It is easy to explain this behavior if we assume that because benzene and toluene molecules are so nearly alike, they behave the same way in solution as they do in the pure liquids. Since there are only half as many benzene molecules in the mixture as in pure benzene, the rate at which benzene molecules escape from the surface of the solution will be half the rate at which they would escape from the pure liquid. In consequence the partial vapor pressure of benzene above the mixture will be one-half the vapor pressure of pure benzene. By a similar argument the partial vapor pressure of the toluene above the solution is also one-half that of pure toluene. Accordingly, we can write
$P_b =\frac{1}{2} P_b^*$
and
$P_t=\frac{1}{2} P_t^*$
where $P_b$ and $P_t$ are the partial pressures of benzene and toluene vapors, respectively, and $P_b^*$ and $P_t^*$ are the vapor pressures of the pure liquids. The total vapor pressure of the solution is
$P=P_b+P_t=\frac{1}{2}P_b^{*}+\frac{1}{2}P_t^{*}=\frac{P_b^{*}+P_t^{*}}2$
The vapor pressure of the mixture is equal to the mean of the vapor pressures of the two pure liquids.
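Raoult's law for the benzene/toluene example can be sketched as follows, using the pure-liquid vapor pressures quoted above (744 and 290 mmHg at 79.6°C):

```python
def raoult_total_pressure(x_A, P_A_star, P_B_star):
    """Total vapor pressure of an ideal two-component solution (Raoult's law):
    P = x_A P_A* + x_B P_B*, with x_B = 1 - x_A."""
    x_B = 1.0 - x_A
    return x_A * P_A_star + x_B * P_B_star

# Equimolar benzene/toluene at 79.6 C, pure vapor pressures in mmHg:
P_total = raoult_total_pressure(0.5, 744.0, 290.0)
print(P_total)  # 517.0 mmHg, vs. 516 mmHg measured
```

The 1 mmHg gap between the ideal prediction and the measurement is the small deviation from ideality discussed at the end of this section.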
We can generalize the above argument to apply to a liquid solution of any composition involving any two substances $A$ and $B$ whose molecules are very similar. The partial vapor pressure of $A$ above the liquid mixture, $P_A$, will then be the vapor pressure of pure $A$, $P_A^*$, multiplied by the fraction of the molecules in the liquid which are of type $A$, that is, the mole fraction of $A$, $X_A$. In equation form
$P_A=X_AP_A^* \label{3}$
Similarly for component $B$
$P_B=X_BP_B^* \label{4}$
Adding these two partial pressures, we obtain the total vapor pressure
$P=P_A + P_B = X_AP_A^* + X_BP_B^* \label{5}$
Liquid solutions which conform to Eqs. $\ref{3}$ and $\ref{5}$ are said to obey Raoult’s law and to be ideal mixtures or ideal solutions.
In addition to its use in predicting the vapor pressure of a solution, Raoult’s law may be applied to the solubility of a gas in a liquid. Dividing both sides of Equation $\ref{3}$ by $P_A^*$ gives
$X_A=\frac{1}{P_A^{*}}\times P_A=k_A\times P_A\label{6}$
Since the vapor pressure of any substance has a specific value at a given temperature, Equation $\ref{6}$ tells us that the mole fraction $X_A$ of a gaseous solute is proportional to the partial pressure $P_A$ of that gas above the solution.
For an ideal solution the proportionality constant $k_A$ is the reciprocal of the vapor pressure of the pure solute at the temperature in question. Since vapor pressure increases as temperature increases, $k_A$, which is $1/P_A^*$, must decrease. Thus we expect the solubility of a gas in a liquid to increase as the partial pressure of gas above the solution increases, but to decrease as temperature increases. Equation $\ref{6}$ is known as Henry’s law. It also applies to gaseous solutes which do not form ideal solutions, but in such cases the Henry’s-law constant $k_A$ does not equal the reciprocal of the vapor pressure.
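Equation $\ref{6}$ can be sketched the same way. The pressures here are arbitrary illustrative numbers in consistent units, not data for any particular gas:

```python
def henry_constant(p_pure):
    """Ideal-solution Henry's-law constant, k_A = 1/P_A* (Eq. 6)."""
    return 1.0 / p_pure

def dissolved_mole_fraction(p_partial, p_pure):
    """Mole fraction of dissolved gas: X_A = k_A * P_A."""
    return henry_constant(p_pure) * p_partial

# Doubling the partial pressure of the gas doubles its solubility:
x1 = dissolved_mole_fraction(1.0, 50.0)
x2 = dissolved_mole_fraction(2.0, 50.0)
print(x1, x2)  # 0.02 0.04
```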
The video below shows the effect of varied pressure on the amount of CO2 dissolved in soda. The amount of dissolved CO2 is monitored by a pH indicator. The more dissolved CO2, the lower the pH (the more red the solution). Watch the video to find out how the solubility of CO2 is related to the pressure, paying particular attention to the color of the solution.
In actual fact very few liquid mixtures obey Raoult’s law exactly. Even for molecules as similar as benzene and toluene, we noted a deviation of 517 mmHg – 516 mmHg, or 1 mmHg at 79.6°C. Much larger deviations occur if the molecules are not very similar. These deviations are of two kinds. As can be seen from Figure $2$ , a plot of the vapor pressure against the mole fraction of one component yields a straight line for an ideal solution. For non-ideal mixtures the actual vapor pressure can be larger than the ideal value (positive deviation from Raoult’s law) or smaller (negative deviation). Negative deviations correspond to cases where attractions between unlike molecules are greater than those between like molecules.
In the case illustrated below, acetone (CH3COCH3) and chloroform (CHCl3) can form a weak hydrogen bond:
Because of this extra intermolecular attraction, molecules have more difficulty escaping the solution and the vapor pressure is lower. The opposite is true of a mixture of benzene and methanol. When C6H6 molecules are randomly distributed among CH3OH molecules, the latter cannot hydrogen bond effectively. Molecules can escape more readily from the solution, and the vapor pressure is higher than Raoult's law would predict.
When solids, liquids or gases are combined, the thermodynamic quantities of the system experience a change as a result of the mixing. This module will discuss the effect that mixing has on a solution’s Gibbs energy, enthalpy, and entropy, with a specific focus on the mixing of two gases.
Introduction
A solution is created when two or more components mix homogeneously to form a single phase. Studying solutions is important because most chemical and biological life processes occur in systems with multiple components. Understanding the thermodynamic behavior of mixtures is integral to the study of any system involving either ideal or non-ideal solutions because it provides valuable information on the molecular properties of the system.
Most real gases behave like ideal gases at standard temperature and pressure. This allows us to combine our knowledge of ideal systems and solutions with standard state thermodynamics in order to derive a set of equations that quantitatively describe the effect that mixing has on a given gas-phase solution’s thermodynamic quantities.
Gibbs Free Energy of Mixing
Unlike a one-component system, whose properties at a given temperature and pressure are fixed by the amount of material present, the extensive properties of a solution depend on its temperature, pressure and composition. This means that a mixture must be described in terms of the partial molar quantities of its components. The total Gibbs free energy of a two-component solution is given by the expression
$G=n_A\overline{G}_A+n_B\overline{G} _B \label{1}$
where
• $G$ is the total Gibbs energy of the system,
• $n_i$ is the number of moles of component $i$, and
• $\overline{G}_i$ is the partial molar Gibbs energy of component $i$.
The molar Gibbs energy of an ideal gas can be found using the equation
$\overline{G}=\overline{G}^\circ+RT\ln \frac{P}{1\text{ bar}} \label{2}$
where $\overline{G}^\circ$ is the standard molar Gibbs energy of the gas at 1 bar, and P is the pressure of the system. In a mixture of ideal gases, we find that the system’s partial molar Gibbs energy is equivalent to its chemical potential, or that
$\overline{G}_i=\mu_i \label{3}$
This means that for a solution of ideal gases, Equation $\ref{2}$ can become
$\overline{G}_i=\mu_i=\mu^\circ_i+RT \ln \frac{P_i}{1\text{ bar}} \label{4}$
where
• $\mu_i$ is the chemical potential of the $i$th component,
• $\mu_i^\circ$ is the standard chemical potential of component $i$ at 1 bar, and
• $P_i$ is the partial pressure of component $i$.
Now pretend we have two gases at the same temperature and pressure, gas A and gas B. The Gibbs energy of the system before the gases are mixed is given by Equation $\ref{1}$, which can be combined with Equation $\ref{4}$ to give the expression
$G_{initial}=n_A(\mu^\circ_A+RT \ln P)+n_B(\mu^\circ_B+RT \ln P) \label{5}$
If gas A and gas B are then mixed together, they will each exert a partial pressure on the total system, $P_A$ and $P_B$, so that $P_A+ P_B= P$. This means that the final Gibbs energy of the final solution can be found using the equation
$G_{final}=n_A(\mu^\circ_A+RT \ln P_A)+n_B(\mu^\circ_B+RT \ln P_B) \label{6}$
The Gibbs energy of mixing, $Δ_{mix}G$, can then be found by subtracting $G_{initial}$ from $G_{final}$.
\begin{align} \Delta_{mix}G &= G_{final} - G_{initial}\\[4pt] &=n_A RT \ln \frac{P_A}{P}+n_B RT \ln \frac{P_B}{P} \\[4pt] &=n_A RT \ln X_A+n_B RT \ln X_B \label{7} \end{align}
where
$P_i = X_iP$
and $X_i$ is the mole fraction of gas $i$. This equation can be simplified further by knowing that the mole fraction of a component is equal to the number of moles of that component over the total moles of the system, or
$X_i = \dfrac{n_i}{n}.$
Equation \ref{7} then becomes
$\Delta_{mix} G=nRT(X_A \ln X_A + X_B \ln X_B) \label{8}$
This expression gives us the effect that mixing has on the Gibbs free energy of a solution. Since $X_A$ and $X_B$ are mole fractions that range from 0 to 1, we can conclude that $Δ_{mix}G$ will be a negative number. This is consistent with the idea that gases mix spontaneously at constant pressure and temperature.
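A quick numerical sketch of Equation $\ref{8}$ (the amounts and temperature below are illustrative choices):

```python
import math

R = 8.314  # gas constant, J/(K mol)

def gibbs_of_mixing(n, x_a, temperature):
    """Delta_mix G of an ideal binary gas mixture (Eq. 8), in joules."""
    x_b = 1.0 - x_a
    return n * R * temperature * (x_a * math.log(x_a) + x_b * math.log(x_b))

# Equimolar mixture, 1 mol total, 298 K: the result is negative,
# consistent with spontaneous mixing at constant T and P.
print(gibbs_of_mixing(1.0, 0.5, 298.0))  # about -1717 J
```

Since $\ln X_A$ and $\ln X_B$ are negative for any composition between the pure limits, the function returns a negative number for every mixture.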
Entropy of mixing
Figure $1$ shows that when two gases mix, it can really be seen as two gases expanding into twice their original volume. This greatly increases the number of available microstates, and so we would therefore expect the entropy of the system to increase as well.
Thermodynamic studies of the temperature dependence of an ideal gas's Gibbs free energy have shown that
$\left( \dfrac {d G} {d T} \right )_P=-S \label{9}$
This means that differentiating Equation $\ref{8}$ at constant pressure with respect to temperature will give an expression for the effect that mixing has on the entropy of a solution. We see that
\begin{align} \left( \dfrac {d \Delta_{mix} G} {d T} \right)_P &=nR(X_A \ln X_A+X_B \ln X_B) \\[4pt] &=-\Delta_{mix} S \end{align}
$\Delta_{mix} S=-nR(X_A \ln X_A+X_B \ln X_B) \label{10}$
Since the mole fractions again lead to negative values for $\ln X_A$ and $\ln X_B$, the negative sign in front of the equation makes $\Delta_{mix} S$ positive, as expected. This agrees with the idea that mixing is a spontaneous process.
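The entropy expression can be checked numerically against Problem 2 below (2 mol of nitrogen mixed with 3 mol of oxygen, treated as ideal gases):

```python
import math

R = 8.314  # gas constant, J/(K mol)

def entropy_of_mixing(n_a, n_b):
    """Delta_mix S of an ideal binary gas mixture (Eq. 10), in J/K."""
    n = n_a + n_b
    x_a, x_b = n_a / n, n_b / n
    return -n * R * (x_a * math.log(x_a) + x_b * math.log(x_b))

# 2 mol N2 + 3 mol O2: a positive entropy change, as expected
# for a spontaneous mixing process.
print(entropy_of_mixing(2.0, 3.0))  # about 27.98 J/K
```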
Enthalpy of mixing
We know that in an ideal system $\Delta G= \Delta H-T \Delta S$, but this equation can also be applied to the thermodynamics of mixing and solved for the enthalpy of mixing so that it reads
$\Delta_{mix} H=\Delta_{mix} G+T\Delta_{mix} S \label{11}$
Plugging in our expressions for $Δ_{mix}G$ (Equation $\ref{8}$) and $Δ_{mix}S$ (Equation $\ref{10}$) , we get
$\Delta_{mix} H=nRT(X_A \ln X_A+X_B \ln X_B)+T \left[-nR(X_A \ln X_A+X_B \ln X_B) \right] = 0$
This result makes sense when considering the system. The molecules of ideal gas are spread out enough that they do not interact with one another when mixed, which implies that no heat is absorbed or produced and results in a $Δ_{mix}H$ of zero. Figure $2$ illustrates how $TΔ_{mix}S$ and $Δ_{mix}G$ change as a function of the mole fraction so that $Δ_{mix}H$ of a solution will always be equal to zero (this is for the mixing of two ideal gases).
Outside Links
• Satter, S. (2000). Thermodynamics of Mixing Real Gases. J. Chem. Educ. 77, 1361-1365.
• Brandani, V., Evangelista, F. (1987). Correlation and prediction of enthalpies of mixing for systems containing alcohols with UNIQUAC associated-solution theory. Ind. Eng. Chem. Res. 26 (12), 2423–2430.
Problems
1. Use Figure 2 to find the x1 that has the largest impact on the thermodynamic quantities of the final solution. Explain why this is true.
2. Calculate the effect that mixing 2 moles of nitrogen and 3 moles of oxygen has on the entropy of the final solution.
3. Another way to find the entropy of a system is using the equation ΔS = nRln(V2/V1). Use this equation and the fact that volume is directly proportional to the number of moles of gas at constant temperature and pressure to derive the final expression for $T\Delta_{mix}S$ . (Hint: Use the derivation of $T\Delta_{mix}G$ as a guide).
Answers
1. x1= 0.5
2. Increases the entropy of the system by 27.98 J/K
Contributors
• Elizabeth Billquist (Hope College)
• 25.1: Raoult's Law and Ideal Mixtures of Liquids
This page deals with Raoult's Law and how it applies to mixtures of two volatile liquids. It covers cases where the two liquids are entirely miscible in all proportions to give a single liquid - NOT those where one liquid floats on top of the other (immiscible liquids). The page explains what is meant by an ideal mixture and looks at how the phase diagram for such a mixture is built up and used.
• 25.2: Phase Diagrams for Binary Mixtures
As suggested by the Gibbs Phase Rule, the most important variables describing a mixture are pressure, temperature and composition. In the case of single component systems, composition is not important so only pressure and temperature are typically depicted on a phase diagram. However, for mixtures with two components, the composition is of vital importance, so there is generally a choice that must be made as to whether the other variable to be depicted is temperature or pressure.
• 25.3: Liquid-Vapor Systems - Raoult’s Law
Liquids tend to be volatile, and as such will enter the vapor phase when the temperature is increased to a high enough value (provided they do not decompose first!) A volatile liquid is one that has an appreciable vapor pressure at the specified temperature. An ideal mixture containing at least one volatile liquid can be described using Raoult’s Law.
25: Extension 16 - Vapor-Solution Phase Diagrams
This page deals with Raoult's Law and how it applies to mixtures of two volatile liquids. It covers cases where the two liquids are entirely miscible in all proportions to give a single liquid - NOT those where one liquid floats on top of the other (immiscible liquids). The page explains what is meant by an ideal mixture and looks at how the phase diagram for such a mixture is built up and used.
Ideal Mixtures
An ideal mixture is one which obeys Raoult's Law, but I want to look at the characteristics of an ideal mixture before actually stating Raoult's Law. The page will flow better if I do it this way around. There is actually no such thing as an ideal mixture! However, some liquid mixtures get fairly close to being ideal. These are mixtures of two very closely similar substances. Commonly quoted examples include:
• hexane and heptane
• benzene and methylbenzene
• propan-1-ol and propan-2-ol
In a pure liquid, some of the more energetic molecules have enough energy to overcome the intermolecular attractions and escape from the surface to form a vapor. The smaller the intermolecular forces, the more molecules will be able to escape at any particular temperature.
If you have a second liquid, the same thing is true. At any particular temperature a certain proportion of the molecules will have enough energy to leave the surface.
In an ideal mixture of these two liquids, the tendency of the two different sorts of molecules to escape is unchanged.
You might think that the diagram shows only half as many of each molecule escaping - but the proportion of each escaping is still the same. The diagram is for a 50/50 mixture of the two liquids. That means that there are only half as many of each sort of molecule on the surface as in the pure liquids. If the proportion of each escaping stays the same, obviously only half as many will escape in any given time. If the red molecules still have the same tendency to escape as before, that must mean that the intermolecular forces between two red molecules must be exactly the same as the intermolecular forces between a red and a blue molecule.
If the forces were any different, the tendency to escape would change. Exactly the same thing is true of the forces between two blue molecules and the forces between a blue and a red. They must also be the same otherwise the blue ones would have a different tendency to escape than before. If you follow the logic of this through, the intermolecular attractions between two red molecules, two blue molecules or a red and a blue molecule must all be exactly the same if the mixture is to be ideal.
This is why mixtures like hexane and heptane get close to ideal behavior. They are similarly sized molecules and so have similarly sized van der Waals attractions between them. However, they obviously are not identical - and so although they get close to being ideal, they are not actually ideal. For the purposes of this topic, getting close to ideal is good enough!
Ideal Mixtures and the Enthalpy of Mixing
When you make any mixture of liquids, you have to break the existing intermolecular attractions (which needs energy), and then remake new ones (which releases energy). If all these attractions are the same, there won't be any heat either evolved or absorbed. That means that an ideal mixture of two liquids will have zero enthalpy change of mixing. If the temperature rises or falls when you mix the two liquids, then the mixture is not ideal. You may have come across a slightly simplified version of Raoult's Law if you have studied the effect of a non-volatile solute like salt on the vapor pressure of solvents like water. The definition below is the one to use if you are talking about mixtures of two volatile liquids.
Definition: Raoult's Law
The partial vapor pressure of a component in a mixture is equal to the vapor pressure of the pure component at that temperature multiplied by its mole fraction in the mixture.
Raoult's Law only works for ideal mixtures. In equation form, for a mixture of liquids $A$ and $B$, this reads:
$P_A = X_A P^*_A \label{1}$
$P_B = X_B P^*_B \label{2}$
In this equation, $P_A$ and $P_B$ are the partial vapor pressures of the components $A$ and $B$. In any mixture of gases, each gas exerts its own pressure. This is called its partial pressure and is independent of the other gases present. Even if you took all the other gases away, the remaining gas would still be exerting its own partial pressure. The total vapor pressure of the mixture is equal to the sum of the individual partial pressures:
$P_{\text{total}} = P_A + P_B \label{3}$
The $P^*$ values are the vapor pressures of $A$ and $B$ if they were on their own as pure liquids. $X_A$ and $X_B$ are the mole fractions of $A$ and $B$. That is exactly what it says it is - the fraction of the total number of moles present which is $A$ or $B$. You calculate mole fraction using, for example:
$X_A = \dfrac{\text{moles of A}}{\text{total number of moles}} \label{4}$
Example $1$
Suppose you had a mixture of 2 moles of methanol and 1 mole of ethanol at a particular temperature. The vapor pressure of pure methanol at this temperature is 81 kPa, and the vapor pressure of pure ethanol is 45 kPa. What is total vapor pressure of this solution?
Solution
There are 3 moles in the mixture in total.
• 2 of these are methanol. The mole fraction of methanol is 2/3.
• Similarly, the mole fraction of ethanol is 1/3.
You can easily find the partial vapor pressures using Raoult's Law - assuming that a mixture of methanol and ethanol is ideal.
First for methanol:
$P_{methanol} = \dfrac{2}{3} \times 81\; kPa$
$= 54\; kPa$
Then for ethanol:
$P_{ethanol} = \dfrac{1}{3} \times 45\; kPa$
$= 15\; kPa$
You get the total vapor pressure of the liquid mixture by adding these together.
$P_{total} = 54\; kPa + 15 \; kPa = 69 kPa$
In practice, this is all a lot easier than it looks when you first meet the definition of Raoult's Law and the equations!
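The arithmetic of Example $1$ can be verified with a short script using the vapor pressures given in the example (81 kPa for methanol, 45 kPa for ethanol), assuming the mixture is ideal:

```python
n_methanol, n_ethanol = 2.0, 1.0
n_total = n_methanol + n_ethanol

x_methanol = n_methanol / n_total  # mole fraction 2/3
x_ethanol = n_ethanol / n_total    # mole fraction 1/3

p_methanol = x_methanol * 81.0  # Raoult's law, kPa
p_ethanol = x_ethanol * 45.0    # kPa
p_total = p_methanol + p_ethanol

print(p_methanol, p_ethanol, p_total)  # approximately 54, 15 and 69 kPa
```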
Vapor Pressure and Composition Diagrams
Suppose you have an ideal mixture of two liquids A and B. Each of A and B is making its own contribution to the overall vapor pressure of the mixture - as we've seen above. Let's focus on one of these liquids - A, for example. Suppose you double the mole fraction of A in the mixture (keeping the temperature constant). According to Raoult's Law, you will double its partial vapor pressure. If you triple the mole fraction, its partial vapor pressure will triple - and so on. In other words, the partial vapor pressure of A at a particular temperature is proportional to its mole fraction. If you plot a graph of the partial vapor pressure of A against its mole fraction, you will get a straight line.
Now we'll do the same thing for B - except that we will plot it on the same set of axes. The mole fraction of B falls as A increases so the line will slope down rather than up. As the mole fraction of B falls, its vapor pressure will fall at the same rate.
Notice that the vapor pressure of pure B is higher than that of pure A. That means that molecules must break away more easily from the surface of B than of A. B is the more volatile liquid. To get the total vapor pressure of the mixture, you need to add the values for A and B together at each composition. The net effect of that is to give you a straight line as shown in the next diagram.
Boiling point and Composition Diagrams
The relationship between boiling point and vapor pressure
• If a liquid has a high vapor pressure at a particular temperature, it means that its molecules are escaping easily from the surface.
• If, at the same temperature, a second liquid has a low vapor pressure, it means that its molecules are not escaping so easily.
What do these two aspects imply about the boiling points of the two liquids? There are two ways of looking at the above question:
Either:
• If the molecules are escaping easily from the surface, it must mean that the intermolecular forces are relatively weak. That means that you won't have to supply so much heat to break them completely and boil the liquid. Therefore, the liquid with the higher vapor pressure at a particular temperature is the one with the lower boiling point.
Or:
• Liquids boil when their vapor pressure becomes equal to the external pressure. If a liquid has a high vapor pressure at some temperature, you won't have to increase the temperature very much until the vapor pressure reaches the external pressure. On the other hand if the vapor pressure is low, you will have to heat it up a lot more to reach the external pressure. Therefore, the liquid with the higher vapor pressure at a particular temperature is the one with the lower boiling point.
For two liquids at the same temperature, the liquid with the higher vapor pressure is the one with the lower boiling point.
Constructing a boiling point / composition diagram
To remind you - we've just ended up with this vapor pressure / composition diagram:
We're going to convert this into a boiling point / composition diagram. We'll start with the boiling points of pure A and B. Since B has the higher vapor pressure, it will have the lower boiling point. If that is not obvious to you, go back and read the last section again!
For mixtures of A and B, you might perhaps have expected that their boiling points would form a straight line joining the two points we've already got. Not so! In fact, it turns out to be a curve.
To make this diagram really useful (and finally get to the phase diagram we've been heading towards), we are going to add another line. This second line will show the composition of the vapor over the top of any particular boiling liquid.
If you boil a liquid mixture, you would expect to find that the more volatile substance escapes to form a vapor more easily than the less volatile one. That means that in the case we've been talking about, you would expect to find a higher proportion of B (the more volatile component) in the vapor than in the liquid. You can discover this composition by condensing the vapor and analyzing it. That would give you a point on the diagram.
The diagram just shows what happens if you boil a particular mixture of A and B. Notice that the vapor over the top of the boiling liquid has a composition which is much richer in B - the more volatile component. If you repeat this exercise with liquid mixtures of lots of different compositions, you can plot a second curve - a vapor composition line.
This is now our final phase diagram.
Using the phase diagram
The diagram is used in exactly the same way as it was built up. If you boil a liquid mixture, you can find out the temperature it boils at, and the composition of the vapor over the boiling liquid. For example, in the next diagram, if you boil a liquid mixture C1, it will boil at a temperature T1 and the vapor over the top of the boiling liquid will have the composition C2.
All you have to do is to use the liquid composition curve to find the boiling point of the liquid, and then look at what the vapor composition would be at that temperature. Notice again that the vapor is much richer in the more volatile component B than the original liquid mixture was.
The beginnings of fractional distillation
Suppose that you collected and condensed the vapor over the top of the boiling liquid and reboiled it. You would now be boiling a new liquid which had a composition C2. That would boil at a new temperature T2, and the vapor over the top of it would have a composition C3.
You can see that we now have a vapor which is getting quite close to being pure B. If you keep on doing this (condensing the vapor, and then reboiling the liquid produced) you will eventually get pure B. This is obviously the basis for fractional distillation. However, doing it like this would be incredibly tedious, and unless you could arrange to produce and condense huge amounts of vapor over the top of the boiling liquid, the amount of B which you would get at the end would be very small. Real fractionating columns (whether in the lab or in industry) automate this condensing and reboiling process. How these work will be explored on another page.
As suggested by the Gibbs Phase Rule, the most important variables describing a mixture are pressure, temperature and composition. In the case of single component systems, composition is not important so only pressure and temperature are typically depicted on a phase diagram. However, for mixtures with two components, the composition is of vital importance, so there is generally a choice that must be made as to whether the other variable to be depicted is temperature or pressure.
Temperature-composition diagrams are very useful in the description of binary systems, many of which will form two-phase systems at a variety of temperatures and compositions. In this section, we will consider several types of cases where the composition of binary mixtures is conveniently depicted using this kind of phase diagram.
Partially Miscible Liquids
A pair of liquids is considered partially miscible if there is a set of compositions over which the liquids will form a two-phase liquid system. This is a common situation and is the general case for a pair of liquids where one is polar and the other non-polar (such as water and vegetable oil.) Another case that is commonly used in the organic chemistry laboratory is the combination of diethyl ether and water. In this case, the differential solubility in the immiscible solvents allows the two-phase liquid system to be used to separate solutes using a separatory funnel method.
As is the case for most solutes, their solubility is dependent on temperature. For many binary mixtures of immiscible liquids, miscibility increases with increasing temperature. Then, at some temperature (known as the upper critical temperature), the liquids become miscible in all compositions. An example of a phase diagram that demonstrates this behavior is shown in Figure \(1\). An example of a binary combination that shows this kind of behavior is that of methyl acetate and carbon disulfide, for which the critical temperature is approximately 230 K at one atmosphere (Ferloni & Spinolo, 1974). Similar behavior is seen for hexane/nitrobenzene mixtures, for which the critical temperature is 293 K.
Another condition that can occur is for the two immiscible liquids to become completely miscible below a certain temperature, or to have a lower critical temperature. An example of a pair of compounds that show this behavior is water and trimethylamine. A typical phase diagram for such a mixture is shown in Figure \(2\). Some combinations of substances show both an upper and lower critical temperature, forming two-phase liquid systems at temperatures between these two temperatures. An example of a combination of substances that demonstrate the behavior is nicotine and water.
The Lever Rule
The composition and amount of material in each phase of a two phase liquid can be determined using the lever rule. This rule can be explained using the following diagram.
Suppose that the temperature and composition of the mixture is given by point b in the above diagram. The horizontal line segment that passes through point b is terminated at points a and c, which indicate the compositions of the two liquid phases. Point a indicates the mole fraction of compound B (\(X_B^A\)) in the layer that is predominantly A, whereas point c indicates the composition (\(X_B^B\)) of the layer that is predominantly compound B. The relative amounts of material in the two layers are then inversely proportional to the lengths of the tie-line segments a-b and b-c, which are given by \(l_A\) and \(l_B\) respectively. In terms of mole fractions,
\[ l_A = X_B - X_B^A\]
and
\[ l_B = X_B^B - X_B \]
The number of moles of material in the A layer (\(n_A\)) and the number of moles in the B layer (\(n_B\)) are inversely proportional to the lengths of the two lines \(l_A\) and \(l_B\).
\[ n_A l_A = n_B l_B\]
Or, substituting the above definitions of the lengths \(l_A\) and \(l_B\), the ratio of these two lengths gives the ratio of moles in the two phases.
\[ \dfrac{n_A}{n_B} = \dfrac{l_B}{l_A} = \dfrac{ X_B^B - X_B}{X_B - X_B^A}\]
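The lever rule can be sketched as a small function. The tie-line compositions in the example call are hypothetical numbers, not data for any particular pair of liquids:

```python
def lever_rule_ratio(x_b, x_b_in_a_layer, x_b_in_b_layer):
    """Return n_A/n_B for a two-phase liquid system via the lever rule.

    x_b            -- overall mole fraction of B (point b)
    x_b_in_a_layer -- mole fraction of B in the A-rich layer (point a)
    x_b_in_b_layer -- mole fraction of B in the B-rich layer (point c)
    """
    l_a = x_b - x_b_in_a_layer  # length of tie-line segment a-b
    l_b = x_b_in_b_layer - x_b  # length of tie-line segment b-c
    return l_b / l_a            # n_A / n_B = l_B / l_A

# Hypothetical tie-line: overall X_B = 0.5, layers at X_B = 0.2 and 0.9.
ratio = lever_rule_ratio(0.5, 0.2, 0.9)
print(ratio)  # about 1.33, so the A-rich layer holds the larger amount
```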
Contributors
• Patrick E. Fleming (Department of Chemistry and Biochemistry; California State University, East Bay)
25.03: Liquid-Vapor Systems - Raoults Law
Liquids tend to be volatile, and as such will enter the vapor phase when the temperature is increased to a high enough value (provided they do not decompose first!) A volatile liquid is one that has an appreciable vapor pressure at the specified temperature. An ideal mixture containing at least one volatile liquid can be described using Raoult’s Law.
Raoult’s Law
Raoult’s law can be used to predict the total vapor pressure above a mixture of two volatile liquids. As it turns out, the composition of the vapor will be different than that of the two liquids, with the more volatile compound having a larger mole fraction in the vapor phase than in the liquid phase. This is summarized in the following theoretical diagram for an ideal mixture of two compounds, one having a pure vapor pressure of \(P_A^* = 450\, Torr\) and the other having a pure vapor pressure of \(P_B^* = 350\, Torr\). In Figure \(1\), the liquid phase is represented at the top of the graph where the pressure is higher.
Oftentimes, it is desirable to depict the phase diagram at a single pressure so that temperature and composition are the variables included in the graphical representation. In such a diagram, the vapor (which exists at higher temperatures) is indicated at the top of the diagram, while the liquid is at the bottom. A typical temperature vs. composition diagram is depicted in Figure \(2\) for an ideal mixture of two volatile liquids.
In this diagram, \(T_A^*\) and \(T_B^*\) represent the boiling points of pure compounds \(A\) and \(B\). If a system having the composition indicated by \(X_B^c\) has its temperature increased to that indicated by point c, the system will consist of two phases: a liquid phase with a composition indicated by \(X_B^d\) and a vapor phase with a composition indicated by \(X_B^b\). The relative amounts of material in each phase can be described by the lever rule, as described previously.
Further, if the vapor with composition \(X_B^b\) is condensed (the temperature is lowered to that indicated by point b') and revaporized, the new vapor will have the composition consistent with \(X_B^{a}\). This demonstrates how the more volatile liquid (the one with the lower boiling temperature, which is A in the case of the above diagram) can be purified from the mixture by collecting and re-evaporating fractions of the vapor. If the liquid was the desired product, one would collect fractions of the residual liquid to achieve the desired result. This process is known as distillation.
Contributors
• Patrick E. Fleming (Department of Chemistry and Biochemistry; California State University, East Bay)
• 26.1: Colligative Properties
Colligative properties are important properties of solutions as they describe how the properties of the solvent will change as solute (or solutes) is (are) added.
26: Fundamental 17 - Colligative Properties
Colligative properties are important properties of solutions as they describe how the properties of the solvent will change as solute (or solutes) is (are) added. Before discussing these important properties, let us first review some definitions.
• Solution – a homogeneous mixture.
• Solvent – The component of a solution with the largest mole fraction
• Solute – Any component of a solution that is not the solvent.
Solutions can exist in solid (alloys of metals are an example of solid-phase solutions), liquid, or gaseous (aerosols are examples of gas-phase solutions) forms. For the most part, this discussion will focus on liquid-phase solutions.
Freezing Point Depression
In general (and as will be discussed in Chapter 8 in more detail) a liquid will freeze when
$\mu_{solid} \le \mu_{liquid}$
As such, the freezing point of the solvent in a solution will be affected by anything that changes the chemical potential of the solvent. As it turns out, the chemical potential of the solvent is reduced by the presence of a solute.
In a mixture, the chemical potential of component $A$ can be calculated by
$\mu_A=\mu_A^o + RT \ln X_A \label{chemp}$
And because $X_A$ is always less than (or equal to) 1, the chemical potential is always reduced by the addition of another component.
The condition under which the solvent will freeze is
$\mu_{A,solid} = \mu_{A,liquid}$
where the chemical potential of the liquid is given by Equation \ref{chemp}, which rearranges to
$\dfrac{ \mu_A -\mu_A^o}{RT} = \ln X_A$
To evaluate the temperature dependence of the chemical potential, it is useful to consider the temperature derivative at constant pressure.
$\left[ \dfrac{\partial}{\partial T} \left( \dfrac{\mu_A-\mu_A^o}{RT} \right) \right]_{P} = \left( \dfrac{\partial \ln X_A}{\partial T} \right)_P$
$- \dfrac{\mu_A - \mu_A^o}{RT^2} + \dfrac{1}{RT} \left[ \left( \dfrac{\partial \mu_A}{\partial T} \right)_P -\left( \dfrac{\partial \mu_A^o}{\partial T} \right)_P \right] =\left( \dfrac{\partial \ln X_A}{\partial T} \right)_P \label{bigeq}$
Recalling that
$\mu = \overline{H} - T\overline{S}$
and
$\left( \dfrac{\partial \mu}{\partial T} \right)_P =-\overline{S}$
Equation \ref{bigeq} becomes
$- \dfrac{(\overline{H}_A -T\overline{S}_A - \overline{H}_A^o + T\overline{S}^o_A)}{RT^2} + \dfrac{1}{RT} \left[ -\overline{S}_A + \overline{S}_A^o\right] =\left( \dfrac{\partial \ln X_A}{\partial T} \right)_P \label{bigeq2}$
Note that, in the case of the solvent freezing, $\overline{H}_A^o$ is the enthalpy of the pure solvent in solid form, and $\overline{H}_A$ is the enthalpy of the solvent in the liquid solution, so
$\overline{H}_A^o - \overline{H}_A = \Delta \overline{H}_{fus}$
Equation \ref{bigeq2} then becomes
$\dfrac{\Delta \overline{H}_{fus}}{RT^2} - \cancel{ \dfrac{-\overline{S}_A + \overline{S}_A^o}{RT}} + \cancel{\dfrac{-\overline{S}_A + \overline{S}_A^o}{RT}}=\left( \dfrac{\partial \ln X_A}{\partial T} \right)_P$
or
$\dfrac{\Delta \overline{H}_{fus}}{RT^2} = \left( \dfrac{\partial \ln X_A}{\partial T} \right)_P$
Separating the variables puts the equation into an integrable form.
$\int_{T^o}^T \dfrac{\Delta \overline{H}_{fus}}{RT^2} dT = \int_0^{\ln X_A} d \ln X_A \label{int1}$
where $T^{o}$ is the freezing point of the pure solvent and $T$ is the temperature at which the solvent will begin to solidify in the solution. After integration of Equation \ref{int1}:
$- \dfrac{\Delta \overline{H}_{fus}}{R} \left( \dfrac{1}{T} - \dfrac{1}{T^{o}} \right) = \ln X_A \label{int3}$
This can be simplified further by noting that
$\dfrac{1}{T} - \dfrac{1}{T^o} = \dfrac{T^o - T}{TT^o} = \dfrac{\Delta T}{TT^o}$
where $\Delta T$ is the difference between the freezing temperature of the pure solvent and that of the solvent in the solution. Also, for small deviations from the pure freezing point, $TT^o$ can be replaced by the approximate value $(T^o)^2$. So Equation \ref{int3} becomes
$- \dfrac{\Delta \overline{H}_{fus}}{R(T^o)^2} \Delta T = \ln X_A \label{int4}$
Further, for dilute solutions, for which $X_A$, the mole fraction of the solvent, is very nearly 1,
$\ln X_A \approx -(1 -X_A) = -X_B$
where $X_B$ is the mole fraction of the solute. After a small bit of rearrangement, this results in an expression for freezing point depression of
$\Delta T = \left( \dfrac{R(T^o)^2}{\Delta \overline{H}_{fus}} \right) X_B$
The first factor can be replaced by $K_f$:
$\dfrac{R(T^o)^2}{\Delta \overline{H}_{fus}} = K_f$
which is the cryoscopic constant for the solvent.
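The quality of the approximations made along the way can be checked numerically. The sketch below (in Python, using an assumed literature value for the enthalpy of fusion of water, not a value given in the text) compares the exact result of Equation \ref{int3} with the linearized expression for a solute mole fraction of 0.05:

```python
import math

# Illustrative values for water (assumed literature values)
R = 8.314        # J K^-1 mol^-1
T0 = 273.15      # K, freezing point of the pure solvent
dH_fus = 6009.5  # J/mol, enthalpy of fusion

X_B = 0.05       # solute mole fraction
X_A = 1.0 - X_B  # solvent mole fraction

# Exact: -(dH_fus/R)(1/T - 1/T0) = ln X_A  =>  1/T = 1/T0 - (R/dH_fus) ln X_A
T = 1.0 / (1.0 / T0 - (R / dH_fus) * math.log(X_A))
dT_exact = T0 - T

# Linearized: dT = (R T0^2 / dH_fus) X_B
dT_approx = (R * T0**2 / dH_fus) * X_B

print(f"exact: {dT_exact:.2f} K   linearized: {dT_approx:.2f} K")
```

For this composition the two results differ by less than one percent; the linearization degrades as $X_B$ grows.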
$\Delta T$ gives the magnitude of the reduction of freezing point for the solution. Since $\Delta \overline{H}_{fus}$ and $T^o$ are properties of the solvent, the freezing point depression property is independent of the solute and is a property based solely on the nature of the solvent. Further, since $X_B$ was introduced as $(1 - X_A)$, it represents the sum of the mole fractions of all solutes present in the solution.
It is important to keep in mind that for a real solution, freezing of the solvent changes the composition of the solution by decreasing the mole fraction of the solvent and increasing that of the solute. As such, the magnitude of $\Delta T$ will change as the freezing process continually removes solvent from the liquid phase of the solution.
Boiling Point Elevation
The derivation of an expression describing boiling point elevation is similar to that for freezing point depression. In short, the introduction of a solute into a liquid solvent lowers the chemical potential of the solvent, causing it to favor the liquid phase over the vapor phase. As such, the temperature must be increased to increase the chemical potential of the solvent in the liquid solution until it is equal to that of the vapor-phase solvent. The increase in the boiling point can be expressed as
$\Delta T = K_b X_B$
where
$\dfrac{R(T^o)^2}{\Delta \overline{H}_{vap}} = K_b$
is called the ebullioscopic constant and, like the cryoscopic constant, is a property of the solvent that is independent of the solute or solutes. A very elegant derivation of the form of the models for freezing point depression and boiling point elevation has been shared by F. E. Schubert (Schubert, 1983).
Cryoscopic and ebullioscopic constants are generally tabulated using molality as the unit of solute concentration rather than mole fraction. In this form, the equation for calculating the magnitude of the freezing point decrease or the boiling point increase is
$\Delta T = K_f \,m$
or
$\Delta T = K_b \,m$
where $m$ is the concentration of the solute in moles per kg of solvent. Some values of $K_f$ and $K_b$ are shown in the table below.
Substance $K_f$ (°C kg mol$^{-1}$) $T^o_f$ (°C) $K_b$ (°C kg mol$^{-1}$) $T^o_b$ (°C)
Water 1.86 0.0 0.51 100.0
Benzene 5.12 5.5 2.53 80.1
Ethanol 1.99 -114.6 1.22 78.4
CCl4 29.8 -22.3 5.02 76.8
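The tabulated constants can be reproduced from the expressions derived above. Note that $R(T^o)^2/\Delta \overline{H}$ gives the constant on a mole-fraction basis; multiplying by the molar mass of the solvent (in kg/mol) converts it to the molal basis used in the table. A short numerical check for water, assuming literature values for the enthalpies of fusion and vaporization:

```python
R = 8.314           # J K^-1 mol^-1
M_water = 0.018015  # kg/mol, molar mass of the solvent

# Assumed literature values for water
dH_fus = 6010.0     # J/mol, enthalpy of fusion
dH_vap = 40660.0    # J/mol, enthalpy of vaporization
Tf = 273.15         # K, normal freezing point
Tb = 373.15         # K, normal boiling point

# Mole-fraction-basis constants, converted to the molal basis
Kf = R * Tf**2 / dH_fus * M_water
Kb = R * Tb**2 / dH_vap * M_water

print(f"K_f = {Kf:.2f} °C kg/mol   K_b = {Kb:.2f} °C kg/mol")
```

Both values agree with the table entries for water (1.86 and 0.51 °C kg mol$^{-1}$).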
Example $1$:
Dissolving 3.00 g of an unknown compound in 25.0 g of CCl4 raises the boiling point to 81.5 °C. What is the molar mass of the compound?
Solution:
The approach here is to find the number of moles of solute in the solution. First, find the concentration of the solution:
$(81.5\, °C- 76.8\, °C) = \left( 5.02\, °C\,kg/mol \right) m$
$m= 0.936\, mol/kg$
Using the number of kg of solvent, one finds the number of moles of solute:
$\left( 0.936 \,mol/\cancel{kg} \right) (0.0250\,\cancel{kg}) =0.0234 \, mol$
The ratio of mass to moles yields the final answer:
$\dfrac{3.00 \,g}{0.0234\,mol} = 128\, g/mol$
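The arithmetic of the example can also be laid out as a short script, with the numbers taken directly from the problem statement:

```python
Kb = 5.02              # °C kg/mol, ebullioscopic constant of CCl4
Tb_pure = 76.8         # °C, boiling point of pure CCl4
Tb_soln = 81.5         # °C, boiling point of the solution
mass_solute = 3.00     # g
mass_solvent = 0.0250  # kg

molality = (Tb_soln - Tb_pure) / Kb        # mol solute per kg solvent
moles_solute = molality * mass_solvent     # mol
molar_mass = mass_solute / moles_solute    # g/mol

print(f"{molar_mass:.0f} g/mol")
```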
Vapor Pressure Lowering
For much the same reason as the lowering of freezing points and the elevation of boiling points for solvents into which a solute has been introduced, the vapor pressure of a volatile solvent will be decreased due to the introduction of a solute. The magnitude of this decrease can be quantified by examining the effect the solute has on the chemical potential of the solvent.
In order to establish equilibrium between the solvent in the solution and the solvent in the vapor phase above the solution, the chemical potentials of the two phases must be equal.
$\mu_{vapor} = \mu_{solvent}$
If the solute is not volatile, the vapor will be pure, so (assuming ideal behavior)
$\mu_{vap}^o + RT \ln \dfrac{P'}{P^o} = \mu_A^o + RT \ln X_A \label{eq3}$
where $P’$ is the vapor pressure of the solvent over the solution. Similarly, for the pure solvent in equilibrium with its vapor
$\mu_A^o = \mu_{vap}^o + RT \ln \dfrac{P_A}{P^o} \label{eq4}$
where $P^o$ is the standard pressure of 1 bar, and $P_A$ is the vapor pressure of the pure solvent. Substituting Equation \ref{eq4} into Equation \ref{eq3} yields
$\cancel{\mu_{vap}^o} + RT \ln \dfrac{P'}{P^o}= \left ( \cancel{\mu_{vap}^o} + RT \ln \dfrac{P_A}{P^o} \right) + RT \ln X_A$
The terms for $\mu_{vap}^o$ cancel, leaving
$RT \ln \dfrac{P'}{P^o}= RT \ln \dfrac{P_A}{P^o} + RT \ln X_A$
Subtracting $RT \ln(P_A/P^o)$ from both sides produces
$RT \ln \dfrac{P'}{P^o} - RT \ln \dfrac{P_A}{P^o} = RT \ln X_A$
which rearranges to
$RT \ln \dfrac{P'}{P_A} = RT \ln X_A$
Dividing both sides by $RT$ and then exponentiating yields
$\dfrac{P'}{P_A} = X_A$
or
$P'=X_AP_A \label{RL}$
This last result is Raoult’s Law. A more formal derivation would use the fugacities of the vapor phases, but would look essentially the same. Also, as in the cases of freezing point depression and boiling point elevation, this derivation did not rely on the nature of the solute! However, unlike those cases, it did not rely on the solute being dilute, so the result should apply over the entire range of concentrations of the solution.
Example $2$:
Consider a mixture of two volatile liquids A and B. The vapor pressure of pure A is 150 Torr at some temperature, and that of pure B is 300 Torr at the same temperature. What is the total vapor pressure above a mixture of these compounds with the mole fraction of B of 0.600. What is the mole fraction of B in the vapor that is in equilibrium with the liquid mixture?
Solution:
Using Raoult’s Law (Equation \ref{RL})
$P_A = (0.400)(150\, Torr) =60.0 \,Torr$
$P_B = (0.600)(300\, Torr) =180.0 \,Torr$
$P_{\text{total}} = P_A + P_B = 240 \,Torr$
To get the mole fractions in the gas phase, one can use Dalton’s Law of partial pressures.
$X_A = \dfrac{ P_A}{P_{tot}} = \dfrac{60.0 \,Torr}{240\,Torr} = 0.250$
$X_B = \dfrac{ P_B}{P_{tot}} = \dfrac{180.0 \,Torr}{240\,Torr} = 0.750$
And, of course, it is also useful to note that the sum of the mole fractions is 1 (as it must be!)
$X_A+X_B =1$
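The same two-step calculation (Raoult’s Law for the partial pressures, Dalton’s Law for the vapor composition) works for any pair of pure-component vapor pressures; a minimal sketch using the numbers from the example:

```python
P_A_pure = 150.0   # Torr, vapor pressure of pure A
P_B_pure = 300.0   # Torr, vapor pressure of pure B
x_B = 0.600        # liquid-phase mole fraction of B
x_A = 1.0 - x_B

# Raoult's Law for each partial pressure
P_A = x_A * P_A_pure
P_B = x_B * P_B_pure
P_total = P_A + P_B

# Dalton's Law for the vapor-phase mole fractions
y_A = P_A / P_total
y_B = P_B / P_total

print(f"P_total = {P_total:.0f} Torr, y_A = {y_A:.3f}, y_B = {y_B:.3f}")
```

As expected, the vapor (with $y_B = 0.750$) is enriched in B, the more volatile component.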
Osmotic Pressure
Osmosis is a process by which solvent can pass through a semi-permeable membrane (a membrane through which solvent can pass, but not solute) from a region of low solute concentration to a region of high solute concentration. The osmotic pressure, $\Pi$, is the pressure that, when exerted on the region of high solute concentration, will halt the process of osmosis.
The nature of osmosis and the magnitude of the osmotic pressure can be understood by examining the chemical potential of a pure solvent and that of the solvent in a solution. The chemical potential of the solvent in the solution (before any extra pressure is applied) is given by
$\mu_A = \mu_A^o + RT \ln X_A$
And since $X_A$ < 1, the chemical potential of the solvent in a solution is always lower than that of the pure solvent. So, to prevent osmosis from occurring, something needs to be done to raise the chemical potential of the solvent in the solution. This can be accomplished by applying pressure to the solution. Specifically, the process of osmosis will stop when the chemical potential solvent in the solution is increased to the point of being equal to that of the pure solvent. The criterion, therefore, for osmosis to cease is
$\mu_A^o(P) = \mu_A(P +\Pi)$
To determine the magnitude of $\Pi$, the pressure dependence of the chemical potential is needed in addition to understanding the effect the solute has on lowering the chemical potential of the solvent in the solution. The magnitude of the increase in chemical potential due to the application of the excess pressure $\Pi$ must be equal to the magnitude of the reduction of the chemical potential caused by the reduced mole fraction of the solvent in the solution. We already know that the chemical potential of the solvent in the solution is reduced by an amount given by
$\mu^o_A - \mu_A = RT \ln X_A$
And the increase in chemical potential due to the application of excess pressure is given by
$\mu(P+\Pi) = \mu(P) + \int _{P}^{P+\Pi} \left( \dfrac{\partial \mu}{\partial P} \right)_T dP$
The integral on the right can be evaluated by recognizing
$\left( \dfrac{\partial \mu}{\partial P} \right)_T = \overline{V}$
where $\overline{V}$ is the molar volume of the substance. Combining these expressions results in
$-RT \ln X_A = \int_{P}^{P+\Pi} \overline{V}\,dP$
If the molar volume of the solvent is independent of pressure (has a very small value of $\kappa_T$ – which is the case for most liquids) the term on the right becomes.
$\int_{P}^{P+\Pi} \overline{V}\,dP = \left. \overline{V} P \right |_{P}^{P+\Pi} = \overline{V}\Pi$
Also, for values of $X_A$ very close to 1
$\ln X_A \approx -(1- X_A) = - X_B$
So, for dilute solutions
$X_B RT = \overline{V}\Pi$
Or after rearrangement
$\Pi =\dfrac{X_B RT}{\overline{V}}$
again, where $\overline{V}$ is the molar volume of the solvent. And finally, for cases where $n_B \ll n_A$, the ratio $X_B/\overline{V}$ is approximately the molar concentration of the solute $B$. This allows one to write a simplified version of the expression which can be used in the case of very dilute solutions
$\Pi = [B]RT$
When a pressure exceeding the osmotic pressure $\Pi$ is applied to the solution, the chemical potential of the solvent in the solution can be made to exceed that of the pure solvent on the other side of the membrane, causing reverse osmosis to occur. This is a very effective method, for example, for recovering pure water from a mixture such as a salt/water solution.
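For a sense of scale, the dilute-solution expression $\Pi = [B]RT$ predicts surprisingly large pressures even for modest concentrations. A minimal numerical sketch (the 0.100 mol/L concentration is hypothetical, chosen only for illustration):

```python
R = 8.314      # J K^-1 mol^-1 (equivalently Pa m^3 K^-1 mol^-1)
T = 298.15     # K
conc = 0.100   # mol/L, hypothetical dilute solute concentration

# Convert mol/L to mol/m^3, then Pi = [B] R T gives pascals
Pi_Pa = (conc * 1000.0) * R * T
Pi_bar = Pi_Pa / 1.0e5

print(f"osmotic pressure = {Pi_bar:.2f} bar")
```

About 2.5 bar, comparable to the pressure at the base of a water column roughly 25 m tall; this is why reverse osmosis requires substantial applied pressures.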
Contributors
• Patrick E. Fleming (Department of Chemistry and Biochemistry; California State University, East Bay)
• 27.1: Solid-Liquid Systems - Eutectic Points
Phase diagrams are often complex, with multiple phases that exhibit differing non-ideal behavior such as minimum boiling azeotropes, eutectic points (the composition for which the mixture of the two solids has the lowest melting point), and incongruent melting, in which the stable compound formed by two solids is stable only in the solid phase and decomposes upon melting.
• 27.2: Cooling Curves
The method that is used to map the phase boundaries on a phase diagram is to measure the rate of cooling for a sample of known composition. The rate of cooling will change as the sample (or some portion of it) begins to undergo a phase change. These “breaks” will appear as changes in slope in the temperature-time curve.
27: Extension 17 - Solid-Solution Phase Diagrams
A phase diagram for two immiscible solids and the liquid phase (which is miscible in all proportions) is shown in Figure $1$. The point labeled “e2” is the eutectic point, meaning the composition for which the mixture of the two solids has the lowest melting point. The four main regions can be described as follows:
1. Two-phase solid
2. Solid (mostly A) and liquid (A and B)
3. Solid (mostly B) and liquid (A and B)
4. Single phase liquid (A and B)
The unlabeled regions on the sides of the diagram indicate regions where one solid is so miscible in the other that only a single-phase solid forms. This is different from the “two-phase solid” region, where there are two distinct phases, meaning there are regions (crystals perhaps) that are distinctly A or B, even though they are intermixed with one another. Region I contains two phases: a solid phase that is mostly compound A, and a liquid phase which contains both A and B. A sample in region II (such as the temperature/composition combination depicted by point b) will consist of two phases: one is a liquid mixture of A and B with a composition given by that at point a, and the other is a single-phase solid that is mostly pure compound B, but with traces of A entrained within it. As always, the lever rule applies in determining the relative amounts of material in the two phases.
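The lever rule mentioned above amounts to a single ratio: for an overall composition lying on a tie line between two coexisting phases, the fraction of each phase is proportional to the length of the opposite lever arm along the composition axis. A minimal sketch (the tie-line compositions below are hypothetical, chosen only to illustrate the arithmetic):

```python
def lever_rule(x_overall, x_phase1, x_phase2):
    """Return the fraction of material in phase 1.

    x_overall, x_phase1, x_phase2 are compositions (e.g. X_B) of the
    overall sample and of the two coexisting phases on one tie line.
    The fraction of phase 1 is the opposite lever arm over the total
    tie-line length.
    """
    return (x_phase2 - x_overall) / (x_phase2 - x_phase1)

# Hypothetical tie line: liquid at X_B = 0.30, solid at X_B = 0.95,
# overall sample composition X_B = 0.50
f_liquid = lever_rule(0.50, x_phase1=0.30, x_phase2=0.95)
f_solid = 1.0 - f_liquid

print(f"liquid fraction = {f_liquid:.3f}, solid fraction = {f_solid:.3f}")
```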
In the case where the widths of the small regions on either side of the phase diagram are negligibly small, a simplified diagram with a form similar to that shown in Figure $2$ can be used. In this case, it is assumed that the solids never form a single phase! The tin-lead system exhibits such behavior.
Another important case is that for which the two compounds A and B can react to form a third chemical compound C. If the compound C is stable in the liquid phase (does not decompose upon melting), the phase diagram will look like Figure $3$.
In this diagram, the vertical boundary at $X_B = 0.33$ is indicative of the compound $C$ formed by $A$ and $B$. From the mole fraction of $B$, it is evident that the formula of compound $C$ is $A_2B$. The reaction that forms compound C is
$2 A + B \rightarrow C$
Thus, at overall compositions where $X_B < 0.33$, there is excess compound A (B is the limiting reagent), and for $X_B > 0.33$ there is an excess of compound $B$ ($A$ is now the limiting reagent). With this in mind, the makeup of the sample in each region can be summarized as
• Two phase solid (A and C)
• Two phase solid (C and B)
• Solid A and liquid (A and C)
• Solid C and liquid (A and C)
• Solid C and liquid (C and B)
• Solid B and liquid (C and B)
• Single phase liquid (A and C or C and B, depending on which is present in excess)
Zinc and Magnesium are an example of two compounds that demonstrate this kind of behavior, with the third compound having the formula $Zn_2Mg$ (Ghosh, Mezbahul-Islam, & Medraj, 2011).
Incongruent Melting
Oftentimes, the stable compound formed by two solids is only stable in the solid phase. In other words, it will decompose upon melting. As a result, the phase diagram will take a slightly different form, as is shown in Figure $4$.
In this diagram, the formula of the stable compound is $AB_3$ (consistent with the vertical boundary at $X_B = 0.75$). But you will notice that the boundary separating the two two-phase solid regions does not extend all of the way to the single phase liquid portion of the diagram. This is because the compound will decompose upon melting. The process of decomposition upon melting is also called incongruent melting. The makeup of each region can be summarized as
1. Two phase solid (A and C)
2. Two phase solid (C and B)
3. Solid A and liquid (A and B)
4. Solid C and liquid (A and B)
5. Solid B and liquid (A and B)
There are many examples of pairs of compounds that show this kind of behavior. One combination is sodium and potassium, which form a compound ($Na_2K$) that is unstable in the liquid phase and so it melts incongruently (Rossen & Bleiswijk, 1912).
Contributors
• Patrick E. Fleming (Department of Chemistry and Biochemistry; California State University, East Bay)
27.02: Cooling Curves
The method that is used to map the phase boundaries on a phase diagram is to measure the rate of cooling for a sample of known composition. The rate of cooling will change as the sample (or some portion of it) begins to undergo a phase change. These “breaks” will appear as changes in slope in the temperature-time curve. Consider a binary mixture for which the phase diagram is as shown in Figure \(\PageIndex{1A}\). A cooling curve for a sample that begins at the temperature and composition given by point a is shown in Figure \(\PageIndex{1B}\).
As the sample cools from point a, the temperature will decrease at a rate determined by the sample composition, the geometry of the experiment (for example, one expects more rapid cooling if the sample has more surface area exposed to the cooler surroundings), and the temperature difference between the sample and the surroundings.
When the temperature reaches that at point b, some solid compound B will begin to form. This will lead to a slowing of the cooling due to the exothermic nature of solid formation. But also, the composition of the liquid will change, becoming richer in compound A as B is removed from the liquid phase in the form of a solid. This will continue until the liquid attains the composition at the eutectic point (point c in the diagram.)
When the temperature reaches that at point c, both compounds A and B will solidify, and the composition of the liquid phase will remain constant. As such, the temperature will stop changing, creating what is called the eutectic halt. Once all of the material has solidified (at the time indicated by point c’), the cooling will continue at a rate determined by the heat capacities of the two solids A and B, the composition, and (of course) the geometry of the experimental set up. By measuring cooling curves for samples of varying composition, one can map the entire phase diagram.
$\newcommand{\tx}[1]{\text{#1}} % text in math mode$
$\newcommand{\subs}[1]{_{\text{#1}}} % subscript text$
$\newcommand{\sups}[1]{^{\text{#1}}} % superscript text$
$\newcommand{\st}{^\circ} % standard state symbol$
$\newcommand{\id}{^{\text{id}}} % ideal$
$\newcommand{\rf}{^{\text{ref}}} % reference state$
$\newcommand{\units}[1]{\mbox{\thinspace#1}}$
$\newcommand{\K}{\units{K}} % kelvins$
$\newcommand{\degC}{^\circ\text{C}} % degrees Celsius$
$\newcommand{\br}{\units{bar}} % bar (\bar is already defined)$
$\newcommand{\Pa}{\units{Pa}}$
$\newcommand{\mol}{\units{mol}} % mole$
$\newcommand{\V}{\units{V}} % volts$
$\newcommand{\timesten}[1]{\mbox{\,\times\,10^{#1}}}$
$\newcommand{\per}{^{-1}} % minus one power$
$\newcommand{\m}{_{\text{m}}} % subscript m for molar quantity$
$\newcommand{\CVm}{C_{V,\text{m}}} % molar heat capacity at const.V$
$\newcommand{\Cpm}{C_{p,\text{m}}} % molar heat capacity at const.p$
$\newcommand{\kT}{\kappa_T} % isothermal compressibility$
$\newcommand{\A}{_{\text{A}}} % subscript A for solvent or state A$
$\newcommand{\B}{_{\text{B}}} % subscript B for solute or state B$
$\newcommand{\bd}{_{\text{b}}} % subscript b for boundary or boiling point$
$\newcommand{\C}{_{\text{C}}} % subscript C$
$\newcommand{\f}{_{\text{f}}} % subscript f for freezing point$
$\newcommand{\mA}{_{\text{m},\text{A}}} % subscript m,A (m=molar)$
$\newcommand{\mB}{_{\text{m},\text{B}}} % subscript m,B (m=molar)$
$\newcommand{\mi}{_{\text{m},i}} % subscript m,i (m=molar)$
$\newcommand{\fA}{_{\text{f},\text{A}}} % subscript f,A (for fr. pt.)$
$\newcommand{\fB}{_{\text{f},\text{B}}} % subscript f,B (for fr. pt.)$
$\newcommand{\xbB}{_{x,\text{B}}} % x basis, B$
$\newcommand{\xbC}{_{x,\text{C}}} % x basis, C$
$\newcommand{\cbB}{_{c,\text{B}}} % c basis, B$
$\newcommand{\mbB}{_{m,\text{B}}} % m basis, B$
$\newcommand{\kHi}{k_{\text{H},i}} % Henry's law constant, x basis, i$
$\newcommand{\kHB}{k_{\text{H,B}}} % Henry's law constant, x basis, B$
$\newcommand{\arrow}{\,\rightarrow\,} % right arrow with extra spaces$
$\newcommand{\arrows}{\,\rightleftharpoons\,} % double arrows with extra spaces$
$\newcommand{\ra}{\rightarrow} % right arrow (can be used in text mode)$
$\newcommand{\eq}{\subs{eq}} % equilibrium state$
$\newcommand{\onehalf}{\textstyle\frac{1}{2}\D} % small 1/2 for display equation$
$\newcommand{\sys}{\subs{sys}} % system property$
$\newcommand{\sur}{\sups{sur}} % surroundings$
$\renewcommand{\in}{\sups{int}} % internal$
$\newcommand{\lab}{\subs{lab}} % lab frame$
$\newcommand{\cm}{\subs{cm}} % center of mass$
$\newcommand{\rev}{\subs{rev}} % reversible$
$\newcommand{\irr}{\subs{irr}} % irreversible$
$\newcommand{\fric}{\subs{fric}} % friction$
$\newcommand{\diss}{\subs{diss}} % dissipation$
$\newcommand{\el}{\subs{el}} % electrical$
$\newcommand{\cell}{\subs{cell}} % cell$
$\newcommand{\As}{A\subs{s}} % surface area$
$\newcommand{\E}{^\mathsf{E}} % excess quantity (superscript)$
$\newcommand{\allni}{\{n_i \}} % set of all n_i$
$\newcommand{\sol}{\hspace{-.1em}\tx{(sol)}}$
$\newcommand{\solmB}{\tx{(sol,\,m\B)}}$
$\newcommand{\dil}{\tx{(dil)}}$
$\newcommand{\sln}{\tx{(sln)}}$
$\newcommand{\mix}{\tx{(mix)}}$
$\newcommand{\rxn}{\tx{(rxn)}}$
$\newcommand{\expt}{\tx{(expt)}}$
$\newcommand{\solid}{\tx{(s)}}$
$\newcommand{\liquid}{\tx{(l)}}$
$\newcommand{\gas}{\tx{(g)}}$
$\newcommand{\pha}{\alpha} % phase alpha$
$\newcommand{\phb}{\beta} % phase beta$
$\newcommand{\phg}{\gamma} % phase gamma$
$\newcommand{\aph}{^{\alpha}} % alpha phase superscript$
$\newcommand{\bph}{^{\beta}} % beta phase superscript$
$\newcommand{\gph}{^{\gamma}} % gamma phase superscript$
$\newcommand{\aphp}{^{\alpha'}} % alpha prime phase superscript$
$\newcommand{\bphp}{^{\beta'}} % beta prime phase superscript$
$\newcommand{\gphp}{^{\gamma'}} % gamma prime phase superscript$
$\newcommand{\apht}{\small\aph} % alpha phase tiny superscript$
$\newcommand{\bpht}{\small\bph} % beta phase tiny superscript$
$\newcommand{\gpht}{\small\gph} % gamma phase tiny superscript$
$\newcommand{\upOmega}{\Omega}$
$\newcommand{\dif}{\mathop{}\!\mathrm{d}} % roman d in math mode, preceded by space$
$\newcommand{\Dif}{\mathop{}\!\mathrm{D}} % roman D in math mode, preceded by space$
$\newcommand{\df}{\dif\hspace{0.05em} f} % df$
$\newcommand{\dBar}{\mathop{}\!\mathrm{d}\hspace-.3em\raise1.05ex{\Rule{.8ex}{.125ex}{0ex}}} % inexact differential$
$\newcommand{\dq}{\dBar q} % heat differential$
$\newcommand{\dw}{\dBar w} % work differential$
$\newcommand{\dQ}{\dBar Q} % infinitesimal charge$
$\newcommand{\dx}{\dif\hspace{0.05em} x} % dx$
$\newcommand{\dt}{\dif\hspace{0.05em} t} % dt$
$\newcommand{\difp}{\dif\hspace{0.05em} p} % dp$
$\newcommand{\Del}{\Delta}$
$\newcommand{\Delsub}[1]{\Delta_{\text{#1}}}$
$\newcommand{\pd}[3]{(\partial #1 / \partial #2 )_{#3}} % \pd{}{}{} - partial derivative, one line$
$\newcommand{\Pd}[3]{\left( \dfrac {\partial #1} {\partial #2}\right)_{#3}} % Pd{}{}{} - Partial derivative, built-up$
$\newcommand{\bpd}[3]{[ \partial #1 / \partial #2 ]_{#3}}$
$\newcommand{\bPd}[3]{\left[ \dfrac {\partial #1} {\partial #2}\right]_{#3}}$
$\newcommand{\dotprod}{\small\bullet}$
$\newcommand{\fug}{f} % fugacity$
$\newcommand{\g}{\gamma} % solute activity coefficient, or gamma in general$
$\newcommand{\G}{\varGamma} % activity coefficient of a reference state (pressure factor)$
$\newcommand{\ecp}{\widetilde{\mu}} % electrochemical or total potential$
$\newcommand{\Eeq}{E\subs{cell, eq}} % equilibrium cell potential$
$\newcommand{\Ej}{E\subs{j}} % liquid junction potential$
$\newcommand{\mue}{\mu\subs{e}} % electron chemical potential$
$\newcommand{\defn}{\,\stackrel{\mathrm{def}}{=}\,} % "equal by definition" symbol$
$\newcommand{\D}{\displaystyle} % for a line in built-up$
$\newcommand{\s}{\smash[b]} % use in equations with conditions of validity$
$\newcommand{\cond}[1]{\[-2.5pt]{}\tag*{#1}}$
$\newcommand{\nextcond}[1]{\[-5pt]{}\tag*{#1}}$
$\newcommand{\R}{8.3145\units{J\,K\per\,mol\per}} % gas constant value$
$\newcommand{\Rsix}{8.31447\units{J\,K\per\,mol\per}} % gas constant value - 6 sig figs$
$\newcommand{\jn}{\hspace3pt\lower.3ex{\Rule{.6pt}{2ex}{0ex}}\hspace3pt}$
$\newcommand{\ljn}{\hspace3pt\lower.3ex{\Rule{.6pt}{.5ex}{0ex}}\hspace-.6pt\raise.45ex{\Rule{.6pt}{.5ex}{0ex}}\hspace-.6pt\raise1.2ex{\Rule{.6pt}{.5ex}{0ex}} \hspace3pt}$
$\newcommand{\lljn}{\hspace3pt\lower.3ex{\Rule{.6pt}{.5ex}{0ex}}\hspace-.6pt\raise.45ex{\Rule{.6pt}{.5ex}{0ex}}\hspace-.6pt\raise1.2ex{\Rule{.6pt}{.5ex}{0ex}}\hspace1.4pt\lower.3ex{\Rule{.6pt}{.5ex}{0ex}}\hspace-.6pt\raise.45ex{\Rule{.6pt}{.5ex}{0ex}}\hspace-.6pt\raise1.2ex{\Rule{.6pt}{.5ex}{0ex}}\hspace3pt}$
Thermodynamics is a quantitative subject. It allows us to derive relations between the values of numerous physical quantities. Some physical quantities, such as a mole fraction, are dimensionless; the value of one of these quantities is a pure number. Most quantities, however, are not dimensionless and their values must include one or more units. This chapter reviews the SI system of units, which are the preferred units in science applications. The chapter then discusses some useful mathematical manipulations of physical quantities using quantity calculus, and certain general aspects of dimensional analysis.
01: Introduction
There is international agreement that the units used for physical quantities in science and technology should be those of the International System of Units, or SI (standing for the French Système International d’Unités). The Physical Chemistry Division of the International Union of Pure and Applied Chemistry, or IUPAC, produces a manual of recommended symbols and terminology for physical quantities and units based on the SI. The manual has become known as the Green Book (from the color of its cover) and is referred to here as the IUPAC Green Book. This e-book will, with a few exceptions, use symbols recommended in the third edition (2007) of the IUPAC Green Book (E. Richard Cohen et al., Quantities, Units and Symbols in Physical Chemistry, 3rd edition, RSC Publishing, Cambridge, 2007). These symbols are listed for convenient reference in Appendices C and D.
Any of the symbols for units listed in Tables 1.1–1.3, except kg and $\degC$, may be preceded by one of the prefix symbols of Table 1.4 to construct a decimal fraction or multiple of the unit. (The symbol g may be preceded by a prefix symbol to construct a fraction or multiple of the gram.) The combination of prefix symbol and unit symbol is taken as a new symbol that can be raised to a power without using parentheses, as in the following examples: $1\units{cm$^{3}$} = (10^{-2}\units{m})^{3} = 10^{-6}\units{m$^{3}$}$ and $1\units{ms$^{-1}$} = (10^{-3}\units{s})^{-1} = 10^{3}\units{s$^{-1}$}$.
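The prefix rule can be checked numerically. The sketch below (an illustration, not part of the text) represents each prefixed unit by its value in the SI base unit, so that raising the unit to a power automatically raises the prefix factor to the same power:

```python
# Each prefixed unit symbol is a single new symbol: a power applied to it
# acts on the prefix factor as well. Values are in SI base units (m, s).
cm = 1e-2   # 1 cm expressed in metres
ms = 1e-3   # 1 ms expressed in seconds

volume = (1 * cm) ** 3    # 1 cm^3 = (10^-2 m)^3  ~ 1e-6 m^3, not 1e-2 m^3
rate = (1 * ms) ** -1     # 1 ms^-1 = (10^-3 s)^-1 ~ 1e3 s^-1

print(volume, rate)
```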
• The physical quantity formally called amount of substance is a counting quantity for particles, such as atoms or molecules, or for other chemical entities. The counting unit is invariably the mole, defined as the amount of substance containing as many particles as the number of atoms in exactly $12$ grams of pure carbon-12 nuclide, ${}^{12}$C. See Appendix A for the wording of the official IUPAC definition. This definition is such that one mole of H$_{2}$O molecules, for example, has a mass of $18.0153$ grams (where $18.0153$ is the relative molecular mass of H$_{2}$O) and contains $6.02214\timesten{23}$ molecules (where $6.02214\timesten{23}\units{mol$^{-1}$}$ is the Avogadro constant to six significant digits). The same statement can be made for any other substance if $18.0153$ is replaced by the appropriate atomic mass or molecular mass value.
The symbol for amount of substance is $n$. It is admittedly awkward to refer to $n$(H$_2$O) as “the amount of substance of water.” This e-book simply shortens “amount of substance” to amount. An alternative name suggested for $n$ is “chemical amount.” Thus, “the amount of water in the system” refers not to the mass or volume of water, but to the number of H$_2$O molecules in the system expressed in a counting unit such as the mole.
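The counting role of the mole is easy to verify with the numbers quoted above. The short calculation below (an illustration, not part of the text) converts a mass of water to an amount in moles and then to a number of molecules:

```python
# Amount of substance as a counting quantity, using values from the text.
M_H2O = 18.0153       # relative molecular mass of H2O (grams per mole)
N_A = 6.02214e23      # Avogadro constant / mol^-1

mass = 18.0153        # grams of water
n = mass / M_H2O      # amount of substance, in mol
N_molecules = n * N_A # number of H2O molecules in the sample

print(n, N_molecules)
```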
This section gives examples of how we may manipulate physical quantities by the rules of algebra. The method is called quantity calculus, although a better term might be “quantity algebra.”
Quantity calculus is based on the concept that a physical quantity, unless it is dimensionless, has a value equal to the product of a numerical value (a pure number) and one or more units: $\tx{physical quantity = numerical value $\times$ units} \tag{1.2.1}$ (If the quantity is dimensionless, it is equal to a pure number without units.) The physical quantity may be denoted by a symbol, but the symbol does not imply a particular choice of units. For instance, this e-book uses the symbol $\rho$ for density, but $\rho$ can be expressed in any units having the dimensions of mass divided by volume.
A simple example illustrates the use of quantity calculus. We may express the density of water at $25\degC$ to four significant digits in SI base units by the equation $\rho = 9.970 \timesten{2} \units{kg m$^{-3}$} \tag{1.2.2}$ and in different density units by the equation $\rho = 0.9970 \units{g cm$^{-3}$} \tag{1.2.3}$ We may divide both sides of the last equation by $1\units{g cm$^{-3}$}$ to obtain a new equation $\rho / \tx{g cm$^{-3}$}=0.9970 \tag{1.2.4}$ Now the pure number $0.9970$ appearing in this equation is the number of grams in one cubic centimeter of water, so we may call the ratio $\rho/$g cm$^{-3}$ “the number of grams per cubic centimeter.” By the same reasoning, $\rho/$kg m$^{-3}$ is the number of kilograms per cubic meter. In general, a physical quantity divided by particular units for the physical quantity is a pure number representing the number of those units.
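The statement that a quantity divided by particular units is a pure number can be mimicked numerically. In the sketch below (an illustration, not part of the text) a quantity is stored as its value in SI base units, and each unit is likewise represented by its value in SI base units:

```python
# Quantity divided by a unit gives the pure number of those units (Eq. 1.2.4).
g = 1e-3              # gram, expressed in kg
cm = 1e-2             # centimetre, expressed in m

rho = 997.0           # density of water at 25 degC, in SI base units kg m^-3

n_g_per_cm3 = rho / (g / cm**3)   # "the number of grams per cubic centimetre"
n_kg_per_m3 = rho / 1.0           # "the number of kilograms per cubic metre"

print(n_g_per_cm3, n_kg_per_m3)   # ~0.9970 and 997.0
```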
Just as it would be incorrect to call $\rho$ “the number of grams per cubic centimeter,” because that would refer to a particular choice of units for $\rho$, the common practice of calling $n$ “the number of moles” is also strictly speaking not correct. It is actually the ratio $n/\tx{mol}$ that is the number of moles.
In a table, the ratio $\rho/$g cm$^{-3}$ makes a convenient heading for a column of density values because the column can then show pure numbers. Likewise, it is convenient to use $\rho/$g cm$^{-3}$ as the label of a graph axis and to show pure numbers at the grid marks of the axis. You will see many examples of this usage in the tables and figures in this e-book.
A major advantage of using SI base units and SI derived units is that they are coherent. That is, values of a physical quantity expressed in different combinations of these units have the same numerical value.
For example, suppose we wish to evaluate the pressure of a gas according to the ideal gas equation \begin{gather} \s{ p=\frac{nRT}{V} } \tag{1.2.5} \cond{(ideal gas)} \end{gather} This is the first equation that, like many others to follow, shows conditions of validity in parentheses immediately below the equation number at the right. Thus, Eq. 1.2.5 is valid for an ideal gas. In this equation, $p$, $n$, $T$, and $V$ are the symbols for the physical quantities pressure, amount (amount of substance), thermodynamic temperature, and volume, respectively, and $R$ is the gas constant.
The calculation of $p$ for $5.000$ moles of an ideal gas at a temperature of $298.15$ kelvins, in a volume of $4.000$ cubic meters, is $p = \frac{(5.000\mol)(\R)(298.15\K)}{4.000\units{m$^{3}$}} = 3.099\timesten{3}\units{J m$^{-3}$} \tag{1.2.6}$ The mole and kelvin units cancel, and we are left with units of J m$^{-3}$, a combination of an SI derived unit (the joule) and an SI base unit (the meter). The units J m$^{-3}$ must have dimensions of pressure, but are not commonly used to express pressure.
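The arithmetic of Eq. 1.2.6 can be checked in a few lines. Because every input below is entered in SI units, the result comes out in SI units (J m$^{-3}$, i.e. Pa) with no conversion factors (this snippet is an illustration, not part of the text):

```python
# Eq. 1.2.6: p = nRT/V with all inputs in SI units.
n = 5.000      # amount of substance / mol
R = 8.3145     # gas constant / J K^-1 mol^-1
T = 298.15     # thermodynamic temperature / K
V = 4.000      # volume / m^3

p = n * R * T / V   # pressure in J m^-3 = Pa
print(p)            # ~3.099e3
```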
To convert J m$^{-3}$ to the SI derived unit of pressure, the pascal (Pa), we can use the following relations from Table 1.2: $1\units{J} = 1\units{N m} \qquad 1\Pa =1\units{N m$^{-2}$} \tag{1.2.7}$ When we divide both sides of the first relation by $1\units{J}$ and divide both sides of the second relation by $1\units{N m$^{-2}$}$, we obtain the two new relations $1=(1\units{N m}/\tx{J}) \qquad (1\units{Pa}/\tx{N m$^{-2}$})=1 \tag{1.2.8}$ The ratios in parentheses are conversion factors. When a physical quantity is multiplied by a conversion factor that, like these, is equal to the pure number $1$, the physical quantity changes its units but not its value. When we multiply Eq. 1.2.6 by both of these conversion factors, all units cancel except Pa: $\begin{split} p & = (3.099\timesten{3}\units{J m$^{-3}$}) \times (1\units{N m}/\tx{J}) \times (1\Pa/\tx{N m$^{-2}$}) \\ & = 3.099\timesten{3}\Pa \end{split} \tag{1.2.9}$
This example illustrates the fact that to calculate a physical quantity, we can simply enter into a calculator numerical values expressed in SI units, and the result is the numerical value of the calculated quantity expressed in SI units. In other words, as long as we use only SI base units and SI derived units (without prefixes), all conversion factors are unity.
Of course we do not have to limit the calculation to SI units. Suppose we wish to express the calculated pressure in torrs, a non-SI unit. In this case, using a conversion factor obtained from the definition of the torr in Table 1.3, the calculation becomes $\begin{split} p & = (3.099\timesten{3}\Pa) \times (760\units{Torr}/101325\Pa) \\ & = 23.24\units{Torr} \end{split} \tag{1.2.10}$
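Numerically, applying a conversion factor equal to $1$ is just a multiplication by the corresponding pure-number ratio (this snippet is an illustration, not part of the text):

```python
# Eq. 1.2.10: convert a pressure from Pa to Torr with the factor
# (760 Torr / 101325 Pa), which equals 1 by the definition of the torr.
p_Pa = 3.099e3                 # pressure in Pa
torr_per_Pa = 760 / 101325     # numerical part of the conversion factor

p_Torr = p_Pa * torr_per_Pa    # same pressure, now in Torr
print(p_Torr)                  # ~23.24
```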
Sometimes you can catch an error in the form of an equation or expression, or in the dimensions of a quantity used for a calculation, by checking for dimensional consistency. Here are some rules that must be satisfied:

• Both sides of an equation, and all terms of a sum or difference, must have the same dimensions.

• The argument of an exponential, logarithmic, or trigonometric function, and any quantity used as a power, must be dimensionless.

• In this e-book the differential of a function, such as $\df$, refers to an infinitesimal quantity. If one side of an equation is an infinitesimal quantity, the other side must also be. Thus, the equation $\df = a\dx + b\dif y$ (where $ax$ and $by$ have the same dimensions as $f$) makes mathematical sense, but $\df = ax+b\dif y$ does not.

Derivatives, partial derivatives, and integrals have dimensions that we must take into account when determining the overall dimensions of an expression that includes them. For instance:

• the derivative $\difp/\dif T$ and the partial derivative $\pd{p}{T}{V}$ have the same dimensions as $p/T$;

• the integral $\int T \dif T$ has the same dimensions as $T^{2}$.

Some examples of applying these principles are given here using symbols described in Sec. 1.2.

Example 1. Since the gas constant $R$ may be expressed in units of J K$^{-1}$ mol$^{-1}$, it has dimensions of energy divided by thermodynamic temperature and amount. Thus, $RT$ has dimensions of energy divided by amount, and $nRT$ has dimensions of energy. The products $RT$ and $nRT$ appear frequently in thermodynamic expressions.

Example 3.
Find the dimensions of the constants $a$ and $b$ in the van der Waals equation $p = \frac {nRT}{V-nb} - \frac {n^{2}a} {V^2}$ Dimensional analysis tells us that, because $nb$ is subtracted from $V$, $nb$ has dimensions of volume and therefore $b$ has dimensions of volume/amount. Furthermore, since the right side of the equation is a difference of two terms, these terms have the same dimensions as the left side, which is pressure. Therefore, the second term $n^{2}a/V^{2}$ has dimensions of pressure, and $a$ has dimensions of pressure $\times$ volume$^{2}$ $\times$ amount$^{-2}$.
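This kind of bookkeeping can be mechanized. The sketch below (an illustration, not from the text; the dict representation and helper names `mul` and `power` are my own choices) tracks dimensions as dicts mapping base dimensions to exponents, and verifies the conclusions of Example 3:

```python
# Dimensions as exponent dicts over base dimensions M (mass), L (length),
# T (time), N (amount). Multiplying quantities adds exponents.
def mul(a, b):
    out = dict(a)
    for k, v in b.items():
        out[k] = out.get(k, 0) + v
        if out[k] == 0:
            del out[k]          # drop cancelled dimensions
    return out

def power(a, n):
    return {k: v * n for k, v in a.items()}

pressure = {'M': 1, 'L': -1, 'T': -2}   # force/area = M L^-1 T^-2
volume = {'L': 3}
amount = {'N': 1}

# b has dimensions volume/amount, so nb has dimensions of volume:
b_dims = mul(volume, power(amount, -1))
assert mul(amount, b_dims) == volume

# a has dimensions pressure x volume^2 x amount^-2, so n^2 a / V^2
# has dimensions of pressure:
a_dims = mul(pressure, mul(power(volume, 2), power(amount, -2)))
term = mul(power(amount, 2), mul(a_dims, power(volume, -2)))
assert term == pressure
```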
Example 4. Consider an equation of the form $\Pd{\ln x}{T}{\!p} = \frac {y}{R}$ What are the SI units of $y$? $\ln x$ is dimensionless, so the left side of the equation has the dimensions of $1/T$, and its SI units are K$^{-1}$. The SI units of the right side are therefore also K$^{-1}$. Since $R$ has the units J K$^{-1}$ mol$^{-1}$, the SI units of $y$ are J K$^{-2}$ mol$^{-1}$.
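The same exponent-tracking idea works for units. The sketch below (an illustration, not from the text) records the unit symbols J, K, and mol with their exponents and reproduces the conclusion of Example 4:

```python
# Units of y from y = R x (left side), tracking exponents of J, K, mol.
lhs_units = {'K': -1}                       # units of (d ln x / dT)_p
R_units = {'J': 1, 'K': -1, 'mol': -1}      # units of the gas constant

# Multiplying quantities adds their unit exponents:
y_units = {u: lhs_units.get(u, 0) + R_units.get(u, 0)
           for u in set(lhs_units) | set(R_units)}

print(y_units)   # exponents corresponding to J K^-2 mol^-1
```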
$\newcommand{\tx}[1]{\text{#1}} % text in math mode$
$\newcommand{\subs}[1]{_{\text{#1}}} % subscript text$
$\newcommand{\sups}[1]{^{\text{#1}}} % superscript text$
$\newcommand{\st}{^\circ} % standard state symbol$
$\newcommand{\id}{^{\text{id}}} % ideal$
$\newcommand{\rf}{^{\text{ref}}} % reference state$
$\newcommand{\units}[1]{\mbox{\thinspace#1}}$
$\newcommand{\K}{\units{K}} % kelvins$
$\newcommand{\degC}{^\circ\text{C}} % degrees Celsius$
$\newcommand{\br}{\units{bar}} % bar (\bar is already defined)$
$\newcommand{\Pa}{\units{Pa}}$
$\newcommand{\mol}{\units{mol}} % mole$
$\newcommand{\V}{\units{V}} % volts$
$\newcommand{\timesten}[1]{\mbox{\,\times\,10^{#1}}}$
$\newcommand{\per}{^{-1}} % minus one power$
$\newcommand{\m}{_{\text{m}}} % subscript m for molar quantity$
$\newcommand{\CVm}{C_{V,\text{m}}} % molar heat capacity at const.V$
$\newcommand{\Cpm}{C_{p,\text{m}}} % molar heat capacity at const.p$
$\newcommand{\kT}{\kappa_T} % isothermal compressibility$
$\newcommand{\A}{_{\text{A}}} % subscript A for solvent or state A$
$\newcommand{\B}{_{\text{B}}} % subscript B for solute or state B$
$\newcommand{\bd}{_{\text{b}}} % subscript b for boundary or boiling point$
$\newcommand{\C}{_{\text{C}}} % subscript C$
$\newcommand{\f}{_{\text{f}}} % subscript f for freezing point$
$\newcommand{\mA}{_{\text{m},\text{A}}} % subscript m,A (m=molar)$
$\newcommand{\mB}{_{\text{m},\text{B}}} % subscript m,B (m=molar)$
$\newcommand{\mi}{_{\text{m},i}} % subscript m,i (m=molar)$
$\newcommand{\fA}{_{\text{f},\text{A}}} % subscript f,A (for fr. pt.)$
$\newcommand{\fB}{_{\text{f},\text{B}}} % subscript f,B (for fr. pt.)$
$\newcommand{\xbB}{_{x,\text{B}}} % x basis, B$
$\newcommand{\xbC}{_{x,\text{C}}} % x basis, C$
$\newcommand{\cbB}{_{c,\text{B}}} % c basis, B$
$\newcommand{\mbB}{_{m,\text{B}}} % m basis, B$
$\newcommand{\kHi}{k_{\text{H},i}} % Henry's law constant, x basis, i$
$\newcommand{\kHB}{k_{\text{H,B}}} % Henry's law constant, x basis, B$
$\newcommand{\arrow}{\,\rightarrow\,} % right arrow with extra spaces$
$\newcommand{\arrows}{\,\rightleftharpoons\,} % double arrows with extra spaces$
$\newcommand{\ra}{\rightarrow} % right arrow (can be used in text mode)$
$\newcommand{\eq}{\subs{eq}} % equilibrium state$
$\newcommand{\onehalf}{\textstyle\frac{1}{2}\D} % small 1/2 for display equation$
$\newcommand{\sys}{\subs{sys}} % system property$
$\newcommand{\sur}{\sups{sur}} % surroundings$
$\renewcommand{\in}{\sups{int}} % internal$
$\newcommand{\lab}{\subs{lab}} % lab frame$
$\newcommand{\cm}{\subs{cm}} % center of mass$
$\newcommand{\rev}{\subs{rev}} % reversible$
$\newcommand{\irr}{\subs{irr}} % irreversible$
$\newcommand{\fric}{\subs{fric}} % friction$
$\newcommand{\diss}{\subs{diss}} % dissipation$
$\newcommand{\el}{\subs{el}} % electrical$
$\newcommand{\cell}{\subs{cell}} % cell$
$\newcommand{\As}{A\subs{s}} % surface area$
$\newcommand{\E}{^\mathsf{E}} % excess quantity (superscript)$
$\newcommand{\allni}{\{n_i \}} % set of all n_i$
$\newcommand{\sol}{\hspace{-.1em}\tx{(sol)}}$
$\newcommand{\solmB}{\tx{(sol,\,m\B)}}$
$\newcommand{\dil}{\tx{(dil)}}$
$\newcommand{\sln}{\tx{(sln)}}$
$\newcommand{\mix}{\tx{(mix)}}$
$\newcommand{\rxn}{\tx{(rxn)}}$
$\newcommand{\expt}{\tx{(expt)}}$
$\newcommand{\solid}{\tx{(s)}}$
$\newcommand{\liquid}{\tx{(l)}}$
$\newcommand{\gas}{\tx{(g)}}$
$\newcommand{\pha}{\alpha} % phase alpha$
$\newcommand{\phb}{\beta} % phase beta$
$\newcommand{\phg}{\gamma} % phase gamma$
$\newcommand{\aph}{^{\alpha}} % alpha phase superscript$
$\newcommand{\bph}{^{\beta}} % beta phase superscript$
$\newcommand{\gph}{^{\gamma}} % gamma phase superscript$
$\newcommand{\aphp}{^{\alpha'}} % alpha prime phase superscript$
$\newcommand{\bphp}{^{\beta'}} % beta prime phase superscript$
$\newcommand{\gphp}{^{\gamma'}} % gamma prime phase superscript$
$\newcommand{\apht}{\small\aph} % alpha phase tiny superscript$
$\newcommand{\bpht}{\small\bph} % beta phase tiny superscript$
$\newcommand{\gpht}{\small\gph} % gamma phase tiny superscript$
$\newcommand{\upOmega}{\Omega}$
$\newcommand{\dif}{\mathop{}\!\mathrm{d}} % roman d in math mode, preceded by space$
$\newcommand{\Dif}{\mathop{}\!\mathrm{D}} % roman D in math mode, preceded by space$
$\newcommand{\df}{\dif\hspace{0.05em} f} % df$
$\newcommand{\dBar}{\mathop{}\!\mathrm{d}\hspace-.3em\raise1.05ex{\Rule{.8ex}{.125ex}{0ex}}} % inexact differential$
$\newcommand{\dq}{\dBar q} % heat differential$
$\newcommand{\dw}{\dBar w} % work differential$
$\newcommand{\dQ}{\dBar Q} % infinitesimal charge$
$\newcommand{\dx}{\dif\hspace{0.05em} x} % dx$
$\newcommand{\dt}{\dif\hspace{0.05em} t} % dt$
$\newcommand{\difp}{\dif\hspace{0.05em} p} % dp$
$\newcommand{\Del}{\Delta}$
$\newcommand{\Delsub}[1]{\Delta_{\text{#1}}}$
$\newcommand{\pd}[3]{(\partial #1 / \partial #2 )_{#3}} % \pd{}{}{} - partial derivative, one line$
$\newcommand{\Pd}[3]{\left( \dfrac {\partial #1} {\partial #2}\right)_{#3}} % Pd{}{}{} - Partial derivative, built-up$
$\newcommand{\bpd}[3]{[ \partial #1 / \partial #2 ]_{#3}}$
$\newcommand{\bPd}[3]{\left[ \dfrac {\partial #1} {\partial #2}\right]_{#3}}$
$\newcommand{\dotprod}{\small\bullet}$
$\newcommand{\fug}{f} % fugacity$
$\newcommand{\g}{\gamma} % solute activity coefficient, or gamma in general$
$\newcommand{\G}{\varGamma} % activity coefficient of a reference state (pressure factor)$
$\newcommand{\ecp}{\widetilde{\mu}} % electrochemical or total potential$
$\newcommand{\Eeq}{E\subs{cell, eq}} % equilibrium cell potential$
$\newcommand{\Ej}{E\subs{j}} % liquid junction potential$
$\newcommand{\mue}{\mu\subs{e}} % electron chemical potential$
$\newcommand{\defn}{\,\stackrel{\mathrm{def}}{=}\,} % "equal by definition" symbol$
$\newcommand{\D}{\displaystyle} % for a line in built-up$
$\newcommand{\s}{\smash[b]} % use in equations with conditions of validity$
1.1 Consider the following equations for the pressure of a real gas. For each equation, find the dimensions of the constants $a$ and $b$ and express these dimensions in SI units.

(a) The Dieterici equation: $p = \frac {RT e^{-(an/VRT)}}{(V/n) - b}$
(b) The Redlich–Kwong equation: $p = \frac{RT}{(V/n) - b} - \frac{an^2}{T^{1/2}V(V+nb)}$
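As a numerical check on these two equations of state, the sketch below evaluates both pressures for one mole of a CO$_2$-like gas and compares them with the ideal-gas value. The constants $a$ and $b$ are illustrative assumptions (not data from this problem), chosen with the SI dimensions the problem asks you to derive: for the Dieterici equation $a$ in J m$^3$ mol$^{-2}$ and $b$ in m$^3$ mol$^{-1}$; for the Redlich–Kwong equation $a$ in Pa K$^{1/2}$ m$^6$ mol$^{-2}$ and $b$ in m$^3$ mol$^{-1}$.

```python
# Sketch (illustrative constants, not from the text): Dieterici and
# Redlich-Kwong pressures vs. the ideal-gas value for 1 mol in 1 L at 25 C.
import math

R = 8.3145       # J K^-1 mol^-1, gas constant
T = 298.15       # K
n = 1.0          # mol
V = 1.0e-3       # m^3 (1 L)

# Assumed CO2-like constants (SI dimensions as derived in the problem):
a_d, b_d = 0.468, 4.63e-5    # Dieterici: J m^3 mol^-2, m^3 mol^-1
a_rk, b_rk = 6.46, 2.97e-5   # Redlich-Kwong: Pa K^(1/2) m^6 mol^-2, m^3 mol^-1

p_ideal = n * R * T / V
p_dieterici = R * T * math.exp(-a_d * n / (V * R * T)) / (V / n - b_d)
p_rk = R * T / (V / n - b_rk) - a_rk * n**2 / (T**0.5 * V * (V + n * b_rk))

print(p_ideal, p_dieterici, p_rk)
# All of order 10^6 Pa; both real-gas pressures fall below the ideal value,
# since attractive interactions dominate at this density and temperature.
```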
This chapter begins by explaining some basic terminology of thermodynamics. It discusses macroscopic properties of matter in general and properties distinguishing different physical states of matter in particular. Virial equations of state of a pure gas are introduced. The chapter goes on to discuss some basic macroscopic properties and their measurement. Finally, several important concepts needed in later chapters are described: thermodynamic states and state functions, independent and dependent variables, processes, and internal energy.
02: Systems and Their Properties
Chemists are interested in systems containing matter—that which has mass and occupies physical space. Classical thermodynamics looks at macroscopic aspects of matter. It deals with the properties of aggregates of vast numbers of microscopic particles (molecules, atoms, and ions). The macroscopic viewpoint, in fact, treats matter as a continuous material medium rather than as the collection of discrete microscopic particles we know are actually present. Although this e-book is an exposition of classical thermodynamics, at times it will point out connections between macroscopic properties and molecular structure and behavior.
A thermodynamic system is any three-dimensional region of physical space on which we wish to focus our attention. Usually we consider only one system at a time and call it simply “the system.” The rest of the physical universe constitutes the surroundings of the system.
The boundary is the closed three-dimensional surface that encloses the system and separates it from the surroundings. The boundary may (and usually does) coincide with real physical surfaces: the interface between two phases, the inner or outer surface of the wall of a flask or other vessel, and so on. Alternatively, part or all of the boundary may be an imagined intangible surface in space, unrelated to any physical structure. The size and shape of the system, as defined by its boundary, may change in time. In short, our choice of the three-dimensional region that constitutes the system is arbitrary—but it is essential that we know exactly what this choice is.
We usually think of the system as a part of the physical universe that we are able to influence only indirectly through its interaction with the surroundings, and the surroundings as the part of the universe that we are able to directly manipulate with various physical devices under our control. That is, we (the experimenters) are part of the surroundings, not the system.
For some purposes we may wish to treat the system as being divided into subsystems, or to treat the combination of two or more systems as a supersystem.
If over the course of time matter is transferred in either direction across the boundary, the system is open; otherwise it is closed. If the system is open, matter may pass through a stationary boundary, or the boundary may move through matter that is fixed in space.
If the boundary allows heat transfer between the system and surroundings, the boundary is diathermal. An adiabatic (Greek: impassable) boundary, on the other hand, is a boundary that does not allow heat transfer. We can, in principle, ensure that the boundary is adiabatic by surrounding the system with an adiabatic wall—one with perfect thermal insulation and a perfect radiation shield.
An isolated system is one that exchanges no matter, heat, or work with the surroundings, so that the mass and total energy of the system remain constant over time. (The energy in this definition of an isolated system is measured in a local reference frame, as will be explained in Sec. 2.6.2.) A closed system with an adiabatic boundary, constrained to do no work and to have no work done on it, is an isolated system.
The constraints required to prevent work usually involve forces between the system and surroundings. In that sense a system may interact with the surroundings even though it is isolated. For instance, a gas contained within rigid, thermally-insulated walls is an isolated system; the gas exerts a force on each wall, and the wall exerts an equal and opposite force on the gas. An isolated system may also experience a constant external field, such as a gravitational field.
The term body usually implies a system, or part of a system, whose mass and chemical composition are constant over time.
2.1.1 Extensive and intensive properties
A quantitative property of a system describes some macroscopic feature that, although it may vary with time, has a particular value at any given instant of time.
Table 2.1 lists the symbols of some of the properties discussed in this chapter and the SI units in which they may be expressed. A much more complete table is found in Appendix C.
Most of the properties studied by thermodynamics may be classified as either extensive or intensive. We can distinguish these two types of properties by the following considerations.
If we imagine the system to be divided by an imaginary surface into two parts, any property of the system that is the sum of the property for the two parts is an extensive property. That is, an additive property is extensive. Examples are mass, volume, amount, energy, and the surface area of a solid.
Sometimes a more restricted definition of an extensive property is used: The property must be not only additive, but also proportional to the mass or the amount when intensive properties remain constant. According to this definition, mass, volume, amount, and energy are extensive, but surface area is not.
If we imagine a homogeneous region of space to be divided into two or more parts of arbitrary size, any property that has the same value in each part and the whole is an intensive property; for example density, concentration, pressure (in a fluid), and temperature. The value of an intensive property is the same everywhere in a homogeneous region, but may vary from point to point in a heterogeneous region—it is a local property.
Since classical thermodynamics treats matter as a continuous medium, whereas matter actually contains discrete microscopic particles, the value of an intensive property at a point is a statistical average of the behavior of many particles. For instance, the density of a gas at one point in space is the average mass of a small volume element at that point, large enough to contain many molecules, divided by the volume of that element.
Some properties are defined as the ratio of two extensive quantities. If both extensive quantities refer to a homogeneous region of the system or to a small volume element, the ratio is an intensive property. For example concentration, defined as the ratio $\tx{amount}/\tx{volume}$, is intensive. A mathematical derivative of one such extensive quantity with respect to another is also intensive.
A special case is an extensive quantity divided by the mass, giving an intensive specific quantity; for example $\tx{Specific volume} = \frac{V}{m} = \frac{1}{\rho} \tag{2.1.1}$ If the symbol for the extensive quantity is a capital letter, it is customary to use the corresponding lower-case letter as the symbol for the specific quantity. Thus the symbol for specific volume is $v$.
Another special case encountered frequently in this e-book is an extensive property for a pure, homogeneous substance divided by the amount $n$. The resulting intensive property is called, in general, a molar quantity or molar property. To symbolize a molar quantity, this e-book follows the recommendation of the IUPAC: The symbol of the extensive quantity is followed by subscript m, and optionally the identity of the substance is indicated either by a subscript or a formula in parentheses. Examples are $\tx{Molar volume} = \frac{V}{n} = V\m \tag{2.1.2}$ $\tx{Molar volume of substance }i = \frac{V}{n_i} = V\mi \tag{2.1.3}$ $\tx{Molar volume of H$_2$O } = V\m\tx{(H$_2$O)} \tag{2.1.4}$
In the past, especially in the United States, molar quantities were commonly denoted with an overbar (e.g., $\overline{V}_i$).
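The relations among an extensive volume, the specific volume of Eq. 2.1.1, and the molar volume of Eq. 2.1.2 can be sketched numerically. The rounded density and molar mass of water below are assumptions for illustration:

```python
# Sketch (assumed rounded values): specific volume v = V/m = 1/rho and
# molar volume V_m = V/n for a 100 g sample of liquid water near 25 C.
M = 18.015e-3    # kg mol^-1, molar mass of H2O
rho = 997.0      # kg m^-3, density of water at ~25 C (assumed)

m = 0.100        # kg of water in the system (extensive)
V = m / rho      # volume, m^3 (extensive)
n = m / M        # amount, mol (extensive)

v = V / m        # specific volume, m^3 kg^-1 (intensive, equals 1/rho)
Vm = V / n       # molar volume, m^3 mol^-1 (intensive)

print(v)         # ~1.003e-3 m^3 kg^-1
print(Vm)        # ~1.81e-5 m^3 mol^-1, i.e. about 18 cm^3 per mole
```

Both ratios are intensive: doubling the sample doubles $V$, $m$, and $n$ but leaves $v$ and $V\m$ unchanged.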
A phase is a region of the system in which each intensive property (such as temperature and pressure) has at each instant either the same value throughout (a uniform or homogeneous phase), or else a value that varies continuously from one point to another. Whenever this e-book mentions a phase, it is a uniform phase unless otherwise stated. Two different phases meet at an interface surface, where intensive properties have a discontinuity or change over a small distance.
Some intensive properties (e.g., refractive index and polarizability) can have directional characteristics. A uniform phase may be either isotropic, exhibiting the same values of these properties in all directions, or anisotropic, as in the case of some solids and liquid crystals. A vacuum is a uniform phase of zero density.
Suppose we have to deal with a nonuniform region in which intensive properties vary continuously in space along one or more directions—for example, a tall column of gas in a gravitational field whose density decreases with increasing altitude. There are two ways we may treat such a nonuniform, continuous region: either as a single nonuniform phase, or else as an infinite number of uniform phases, each of infinitesimal size in one or more dimensions.
2.2.1 Physical states of matter
We are used to labeling phases by physical state, or state of aggregation. It is common to say that a phase is a solid if it is relatively rigid, a liquid if it is easily deformed and relatively incompressible, and a gas if it is easily deformed and easily compressed. Since these descriptions of responses to external forces differ only in degree, they are inadequate to classify intermediate cases.
The way in which $Z$ varies with $p$ at different temperatures is shown for the case of carbon dioxide in Fig. 2.3(a).
A temperature at which the initial slope is zero is called the Boyle temperature, which for CO$_2$ is $710\K$. Both $B$ and $B_p$ must be zero at the Boyle temperature. At lower temperatures $B$ and $B_p$ are negative, and at higher temperatures they are positive—see Fig. 2.3(b). This kind of temperature dependence is typical for other gases. Experimentally, and also according to statistical mechanical theory, $B$ and $B_p$ for a gas can be zero only at a single Boyle temperature.
The fact that at any temperature other than the Boyle temperature $B$ is nonzero is significant since it means that in the limit as $p$ approaches zero at constant $T$ and the gas approaches ideal-gas behavior, the difference between the actual molar volume $V\m$ and the ideal-gas molar volume $RT/p$ does not approach zero. Instead, $V\m - RT/p$ approaches the nonzero value $B$ (see Eq. 2.2.8). However, the ratio of the actual and ideal molar volumes, $V\m/(RT/p)$, approaches unity in this limit.
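This limiting behavior is easy to verify with the pressure-series virial equation truncated after the second term, so that $V\m = RT/p + B$. The value of $B$ below is an assumed CO$_2$-like illustration:

```python
# Sketch (assumed B): with the truncated virial equation V_m = RT/p + B,
# the difference V_m - RT/p equals B at every pressure, while the ratio
# V_m/(RT/p) approaches unity as p -> 0, as stated in the text.
R = 8.3145       # J K^-1 mol^-1
T = 298.15       # K
B = -1.26e-4     # m^3 mol^-1, assumed second virial coefficient (CO2-like)

for p in (1.0e5, 1.0e3, 1.0e1):   # Pa, decreasing toward zero
    Vm = R * T / p + B
    print(p, Vm - R * T / p, Vm / (R * T / p))
# The difference stays fixed at B (~-1.26e-4 m^3 mol^-1) while the
# ratio climbs toward 1 as the pressure decreases.
```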
Virial equations of gas mixtures will be discussed in Sec. 9.3.4.
2.2.6 Solids
A solid phase responds to a small applied stress by undergoing a small elastic deformation. When the stress is removed, the solid returns to its initial shape and the properties return to those of the unstressed solid. Under these conditions of small stress, the solid has an equation of state just as a fluid does, in which $p$ is the pressure of a fluid surrounding the solid (the hydrostatic pressure) as explained in Sec. 2.3.4. The stress is an additional independent variable. For example, the length of a metal spring that is elastically deformed is a unique function of the temperature, the pressure of the surrounding air, and the stretching force.
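The idea that an elastically deformed solid has an equation of state can be sketched with a hypothetical linear model of the spring example; the constants and the functional form below are invented for illustration, and the dependence on the surrounding pressure is neglected:

```python
# Sketch (hypothetical linear model, not from the text): the length L of an
# elastically stretched metal spring as a unique function of temperature T
# and stretching force F -- an "equation of state" for the solid.
def spring_length(T, F, L0=0.100, alpha=1.2e-5, k=500.0, T_ref=298.15):
    """Length in m: linear thermal expansion about T_ref (coefficient alpha,
    K^-1) plus a Hooke's-law stretch F/k (spring constant k in N m^-1)."""
    return L0 * (1.0 + alpha * (T - T_ref)) + F / k

print(spring_length(298.15, 0.0))   # unstressed reference length, 0.100 m
print(spring_length(320.0, 2.0))    # longer: both heated and stretched
```

Because the same $(T, F)$ always returns the same $L$, the state is history-independent; plastic deformation, discussed next, breaks exactly this property.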
If, however, the stress applied to the solid exceeds its elastic limit, the response is plastic deformation. This deformation persists when the stress is removed, and the unstressed solid no longer has its original properties. Plastic deformation is a kind of hysteresis, and is caused by such microscopic behavior as the slipping of crystal planes past one another in a crystal subjected to shear stress, and conformational rearrangements about single bonds in a stretched macromolecular fiber. Properties of a solid under plastic deformation depend on its past history and are not unique functions of a set of independent variables; an equation of state does not exist.
$\newcommand{\tx}[1]{\text{#1}} % text in math mode$
$\newcommand{\subs}[1]{_{\text{#1}}} % subscript text$
$\newcommand{\sups}[1]{^{\text{#1}}} % superscript text$
$\newcommand{\st}{^\circ} % standard state symbol$
$\newcommand{\id}{^{\text{id}}} % ideal$
$\newcommand{\rf}{^{\text{ref}}} % reference state$
$\newcommand{\units}[1]{\mbox{\thinspace#1}}$
$\newcommand{\K}{\units{K}} % kelvins$
$\newcommand{\degC}{^\circ\text{C}} % degrees Celsius$
$\newcommand{\br}{\units{bar}} % bar (\bar is already defined)$
$\newcommand{\Pa}{\units{Pa}}$
$\newcommand{\mol}{\units{mol}} % mole$
$\newcommand{\V}{\units{V}} % volts$
$\newcommand{\timesten}[1]{\mbox{\,\times\,10^{#1}}}$
$\newcommand{\per}{^{-1}} % minus one power$
$\newcommand{\m}{_{\text{m}}} % subscript m for molar quantity$
$\newcommand{\CVm}{C_{V,\text{m}}} % molar heat capacity at const.V$
$\newcommand{\Cpm}{C_{p,\text{m}}} % molar heat capacity at const.p$
$\newcommand{\kT}{\kappa_T} % isothermal compressibility$
$\newcommand{\A}{_{\text{A}}} % subscript A for solvent or state A$
$\newcommand{\B}{_{\text{B}}} % subscript B for solute or state B$
$\newcommand{\bd}{_{\text{b}}} % subscript b for boundary or boiling point$
$\newcommand{\C}{_{\text{C}}} % subscript C$
$\newcommand{\f}{_{\text{f}}} % subscript f for freezing point$
$\newcommand{\mA}{_{\text{m},\text{A}}} % subscript m,A (m=molar)$
$\newcommand{\mB}{_{\text{m},\text{B}}} % subscript m,B (m=molar)$
$\newcommand{\mi}{_{\text{m},i}} % subscript m,i (m=molar)$
$\newcommand{\fA}{_{\text{f},\text{A}}} % subscript f,A (for fr. pt.)$
$\newcommand{\fB}{_{\text{f},\text{B}}} % subscript f,B (for fr. pt.)$
$\newcommand{\xbB}{_{x,\text{B}}} % x basis, B$
$\newcommand{\xbC}{_{x,\text{C}}} % x basis, C$
$\newcommand{\cbB}{_{c,\text{B}}} % c basis, B$
$\newcommand{\mbB}{_{m,\text{B}}} % m basis, B$
$\newcommand{\kHi}{k_{\text{H},i}} % Henry's law constant, x basis, i$
$\newcommand{\kHB}{k_{\text{H,B}}} % Henry's law constant, x basis, B$
$\newcommand{\arrow}{\,\rightarrow\,} % right arrow with extra spaces$
$\newcommand{\arrows}{\,\rightleftharpoons\,} % double arrows with extra spaces$
$\newcommand{\ra}{\rightarrow} % right arrow (can be used in text mode)$
$\newcommand{\eq}{\subs{eq}} % equilibrium state$
$\newcommand{\onehalf}{\textstyle\frac{1}{2}\D} % small 1/2 for display equation$
$\newcommand{\sys}{\subs{sys}} % system property$
$\newcommand{\sur}{\sups{sur}} % surroundings$
$\renewcommand{\in}{\sups{int}} % internal$
$\newcommand{\lab}{\subs{lab}} % lab frame$
$\newcommand{\cm}{\subs{cm}} % center of mass$
$\newcommand{\rev}{\subs{rev}} % reversible$
$\newcommand{\irr}{\subs{irr}} % irreversible$
$\newcommand{\fric}{\subs{fric}} % friction$
$\newcommand{\diss}{\subs{diss}} % dissipation$
$\newcommand{\el}{\subs{el}} % electrical$
$\newcommand{\cell}{\subs{cell}} % cell$
$\newcommand{\As}{A\subs{s}} % surface area$
$\newcommand{\E}{^\mathsf{E}} % excess quantity (superscript)$
$\newcommand{\allni}{\{n_i \}} % set of all n_i$
$\newcommand{\sol}{\hspace{-.1em}\tx{(sol)}}$
$\newcommand{\solmB}{\tx{(sol,\,m\B)}}$
$\newcommand{\dil}{\tx{(dil)}}$
$\newcommand{\sln}{\tx{(sln)}}$
$\newcommand{\mix}{\tx{(mix)}}$
$\newcommand{\rxn}{\tx{(rxn)}}$
$\newcommand{\expt}{\tx{(expt)}}$
$\newcommand{\solid}{\tx{(s)}}$
$\newcommand{\liquid}{\tx{(l)}}$
$\newcommand{\gas}{\tx{(g)}}$
$\newcommand{\pha}{\alpha} % phase alpha$
$\newcommand{\phb}{\beta} % phase beta$
$\newcommand{\phg}{\gamma} % phase gamma$
$\newcommand{\aph}{^{\alpha}} % alpha phase superscript$
$\newcommand{\bph}{^{\beta}} % beta phase superscript$
$\newcommand{\gph}{^{\gamma}} % gamma phase superscript$
$\newcommand{\aphp}{^{\alpha'}} % alpha prime phase superscript$
$\newcommand{\bphp}{^{\beta'}} % beta prime phase superscript$
$\newcommand{\gphp}{^{\gamma'}} % gamma prime phase superscript$
$\newcommand{\apht}{\small\aph} % alpha phase tiny superscript$
$\newcommand{\bpht}{\small\bph} % beta phase tiny superscript$
$\newcommand{\gpht}{\small\gph} % gamma phase tiny superscript$
$\newcommand{\upOmega}{\Omega}$
$\newcommand{\dif}{\mathop{}\!\mathrm{d}} % roman d in math mode, preceded by space$
$\newcommand{\Dif}{\mathop{}\!\mathrm{D}} % roman D in math mode, preceded by space$
$\newcommand{\df}{\dif\hspace{0.05em} f} % df$
$\newcommand{\dBar}{\mathop{}\!\mathrm{d}\hspace-.3em\raise1.05ex{\Rule{.8ex}{.125ex}{0ex}}} % inexact differential$
$\newcommand{\dq}{\dBar q} % heat differential$
$\newcommand{\dw}{\dBar w} % work differential$
$\newcommand{\dQ}{\dBar Q} % infinitesimal charge$
$\newcommand{\dx}{\dif\hspace{0.05em} x} % dx$
$\newcommand{\dt}{\dif\hspace{0.05em} t} % dt$
$\newcommand{\difp}{\dif\hspace{0.05em} p} % dp$
$\newcommand{\Del}{\Delta}$
$\newcommand{\Delsub}[1]{\Delta_{\text{#1}}}$
$\newcommand{\pd}[3]{(\partial #1 / \partial #2 )_{#3}} % \pd{}{}{} - partial derivative, one line$
$\newcommand{\Pd}[3]{\left( \dfrac {\partial #1} {\partial #2}\right)_{#3}} % Pd{}{}{} - Partial derivative, built-up$
$\newcommand{\bpd}[3]{[ \partial #1 / \partial #2 ]_{#3}}$
$\newcommand{\bPd}[3]{\left[ \dfrac {\partial #1} {\partial #2}\right]_{#3}}$
$\newcommand{\dotprod}{\small\bullet}$
$\newcommand{\fug}{f} % fugacity$
$\newcommand{\g}{\gamma} % solute activity coefficient, or gamma in general$
$\newcommand{\G}{\varGamma} % activity coefficient of a reference state (pressure factor)$
$\newcommand{\ecp}{\widetilde{\mu}} % electrochemical or total potential$
$\newcommand{\Eeq}{E\subs{cell, eq}} % equilibrium cell potential$
$\newcommand{\Ej}{E\subs{j}} % liquid junction potential$
$\newcommand{\mue}{\mu\subs{e}} % electron chemical potential$
$\newcommand{\defn}{\,\stackrel{\mathrm{def}}{=}\,} % "equal by definition" symbol$
$\newcommand{\D}{\displaystyle} % for a line in built-up$
$\newcommand{\s}{\smash[b]} % use in equations with conditions of validity$
$\newcommand{\cond}[1]{\[-2.5pt]{}\tag*{#1}}$
$\newcommand{\nextcond}[1]{\[-5pt]{}\tag*{#1}}$
$\newcommand{\R}{8.3145\units{J\,K\per\,mol\per}} % gas constant value$
$\newcommand{\Rsix}{8.31447\units{J\,K\per\,mol\per}} % gas constant value - 6 sig figs$
$\newcommand{\jn}{\hspace3pt\lower.3ex{\Rule{.6pt}{2ex}{0ex}}\hspace3pt}$
$\newcommand{\ljn}{\hspace3pt\lower.3ex{\Rule{.6pt}{.5ex}{0ex}}\hspace-.6pt\raise.45ex{\Rule{.6pt}{.5ex}{0ex}}\hspace-.6pt\raise1.2ex{\Rule{.6pt}{.5ex}{0ex}} \hspace3pt}$
$\newcommand{\lljn}{\hspace3pt\lower.3ex{\Rule{.6pt}{.5ex}{0ex}}\hspace-.6pt\raise.45ex{\Rule{.6pt}{.5ex}{0ex}}\hspace-.6pt\raise1.2ex{\Rule{.6pt}{.5ex}{0ex}}\hspace1.4pt\lower.3ex{\Rule{.6pt}{.5ex}{0ex}}\hspace-.6pt\raise.45ex{\Rule{.6pt}{.5ex}{0ex}}\hspace-.6pt\raise1.2ex{\Rule{.6pt}{.5ex}{0ex}}\hspace3pt}$
The thermodynamic state of the system is an important and subtle concept. At each instant of time, the system is in some definite state that we may describe with values of the macroscopic properties we consider to be relevant for our purposes. The values of these properties at any given instant define the state at that instant. Whenever the value of any of these properties changes, the state has changed. If we subsequently find that each of the relevant properties has the value it had at a certain previous instant, then the system has returned to its previous state.
Do not confuse the state of the system with the kind of physical state or state of aggregation of a phase discussed in Sec. 2.2.1. A change of state refers to a change in the state of the system, not necessarily to a phase transition.
2.4.1 State functions and independent variables
The properties whose values at each instant depend only on the state of the system at that instant, and not on the past or future history of the system, are called state functions (or state variables or state parameters). There may be other system properties that we consider to be irrelevant to the state, such as the shape of the system, and these are not state functions.
Various conditions determine what states of a system are physically possible. If a uniform phase has an equation of state, property values must be consistent with this equation. The system may have certain built-in or externally-imposed conditions or constraints that keep some properties from changing with time. For instance, a closed system has constant mass; a system with a rigid boundary has constant volume. We may know about other conditions that affect the properties during the time the system is under observation.
We can define the state of the system with the values of a certain minimum number of state functions which we treat as the independent variables. Once we have selected a set of independent variables, consistent with the physical nature of the system and any conditions or constraints, we can treat all other state functions as dependent variables whose values depend on the independent variables.
Whenever we adjust the independent variables to particular values, every other state function is a dependent variable that can have only one definite, reproducible value. For example, in a single-phase system of a pure substance with $T$, $p$, and $n$ as the independent variables, the volume is determined by an equation of state in terms of $T$, $p$, and $n$; the mass is equal to $nM$; the molar volume is given by $V\m = V/n$; and the density is given by $\rho = nM/V$.
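As a numerical sketch of this dependence (the ideal-gas equation of state and the numbers are assumed for illustration), fixing the independent variables $T$, $p$, and $n$ fixes every dependent state function listed above:

```python
R = 8.3145                     # J K^-1 mol^-1

def dependent_state_functions(T, p, n, M):
    """Dependent state functions of a single-phase pure gas (treated as
    ideal) once the independent variables T, p, n and the substance's
    molar mass M are fixed."""
    V = n * R * T / p          # volume from the equation of state
    mass = n * M               # mass
    Vm = V / n                 # molar volume
    rho = n * M / V            # density
    return V, mass, Vm, rho

# Illustrative: 1 mol of a gas with M = 0.028 kg/mol at 298.15 K and 1 bar.
V, mass, Vm, rho = dependent_state_functions(298.15, 1.0e5, 1.0, 0.028)
```

Changing any one of $T$, $p$, or $n$ changes the dependent values; repeating the same inputs always reproduces the same outputs, which is exactly what "definite, reproducible value" means here.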
2.4.2 An example: state functions of a mixture
The heat-conducting metal rod shown in Fig. 2.8 is a system in such a steady state—a state whose properties remain constant over time even though the system is not in an equilibrium state. Each end of the rod is in thermal contact with a heat reservoir (or thermal reservoir), which is a body or external system whose temperature remains constant and uniform when there is heat transfer to or from it.
A heat reservoir can be a body that is so large that its temperature changes only imperceptibly during heat transfer; a thermostat bath whose temperature can be controlled; or an external system of coexisting phases of a pure substance (e.g., ice and water) at constant pressure.
The two heat reservoirs in the figure have different temperatures, causing a temperature gradient to form along the length of the rod and energy to be transferred by heat from the warmer reservoir to the rod and from the rod to the cooler reservoir. Although the properties of the steady state of the rod remain constant, the rod is clearly not in an equilibrium state because the temperature gradient will quickly disappear when we isolate the rod by removing it from contact with the heat reservoirs.
A process is a change in the state of the system over time, starting with a definite initial state and ending with a definite final state. The process is defined by a path, which is the continuous sequence of consecutive states through which the system passes, including the initial state, the intermediate states, and the final state. The process has a direction along the path. The path could be described by a curve in an $N$-dimensional space in which each coordinate axis represents one of the $N$ independent variables.
This e-book takes the view that a thermodynamic process is defined by what happens within the system, in the three-dimensional region up to and including the boundary, and by the forces exerted on the system by the surroundings and any external field. Conditions and changes in the surroundings are not part of the process except insofar as they affect these forces. For example, consider a process in which the system temperature decreases from $300\K$ to $273\K$. We could accomplish this temperature change by placing the system in thermal contact with either a refrigerated thermostat bath or a mixture of ice and water. The process is the same in both cases, but the surroundings are different.
Expansion is a process in which the system volume increases; in compression, the volume decreases.
Figure 2.9 Paths of three processes of a closed ideal-gas system with $p$ and $V$ as the independent variables. (a) Isothermal expansion. (b) Isobaric expansion. (c) Isochoric pressure reduction.
Paths for these processes of an ideal gas are shown in Fig. 2.9. An isothermal process is one in which the temperature of the system remains uniform and constant. An isobaric or isopiestic process refers to uniform constant pressure, and an isochoric process refers to constant volume.
An adiabatic process is one in which there is no heat transfer across any portion of the boundary. We may ensure that a process is adiabatic either by using an adiabatic boundary or, if the boundary is diathermal, by continuously adjusting the external temperature to eliminate a temperature gradient at the boundary.
Recall that a state function is a property whose value at each instant depends only on the state of the system at that instant. The finite change of a state function $X$ in a process is written $\Del X$. The notation $\Del X$ always has the meaning $X_2 - X_1$, where $X_1$ is the value in the initial state and $X_2$ is the value in the final state. Therefore, the value of $\Del X$ depends only on the values of $X_1$ and $X_2$. The change of a state function during a process depends only on the initial and final states of the system, not on the path of the process.
An infinitesimal change of the state function $X$ is written $\dif X$. The mathematical operation of summing an infinite number of infinitesimal changes is integration, and the sum is an integral (see the brief calculus review in Appendix E). The sum of the infinitesimal changes of $X$ along a path is a definite integral equal to $\Del X$: $\int_{X_1}^{X_2} \!\dif X = X_2-X_1 = \Delta X \tag{2.5.1}$ If $\dif X$ obeys this relation—that is, if its integral for given limits has the same value regardless of the path—it is called an exact differential. The differential of a state function is always an exact differential.
A cyclic process is a process in which the state of the system changes and then returns to the initial state. In this case the integral of $\dif X$ is written with a cyclic integral sign: $\oint \dif X$. Since a state function $X$ has the same initial and final values in a cyclic process, $X_2$ is equal to $X_1$ and the cyclic integral of $\dif X$ is zero: $\oint \dif X = 0 \tag{2.5.2}$
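Both properties of a state function can be verified numerically. In the sketch below (ideal gas assumed, with the volume $V$ as the state function and $T$, $p$ as independent variables), summing the changes of $V$ along two different paths between the same pair of states gives the same $\Del V$, and the sum around a cycle is zero:

```python
R, n = 8.3145, 1.0

def V(T, p):
    """State function V(T, p) of a fixed amount of ideal gas."""
    return n * R * T / p

def sum_dV(path):
    """Sum the changes of V between consecutive (T, p) states of a path."""
    return sum(V(*b) - V(*a) for a, b in zip(path, path[1:]))

T1, p1 = 300.0, 1.0e5                     # state 1
T2, p2 = 400.0, 2.0e5                     # state 2
steps = 500

# Path A: heat at constant p1, then compress at constant T2.
path_A = [(T1 + k * (T2 - T1) / steps, p1) for k in range(steps + 1)] + \
         [(T2, p1 + k * (p2 - p1) / steps) for k in range(steps + 1)]
# Path B: compress at constant T1 first, then heat at constant p2.
path_B = [(T1, p1 + k * (p2 - p1) / steps) for k in range(steps + 1)] + \
         [(T1 + k * (T2 - T1) / steps, p2) for k in range(steps + 1)]

dV_A, dV_B = sum_dV(path_A), sum_dV(path_B)        # equal: path-independent
cycle = sum_dV(path_A + list(reversed(path_B)))    # out along A, back along B
```

The sums telescope, so `dV_A` and `dV_B` both equal $V_2 - V_1$ and `cycle` is zero to rounding, which is just Eqs. 2.5.1 and 2.5.2 in discrete form.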
Heat ($q$) and work ($w$) are examples of quantities that are not state functions. They are not even properties of the system; instead they are quantities of energy transferred across the boundary over a period of time. It would therefore be incorrect to write “$\Del q$” or “$\Del w$.” Instead, the values of $q$ and $w$ depend in general on the path and are called path functions.
This e-book uses the symbol $\dBar$ (the letter “d” with a bar through the stem) for an infinitesimal quantity of a path function. Thus, $\dq$ and $\dw$ are infinitesimal quantities of heat and work. The sum of many infinitesimal quantities of a path function is not the difference of two values of the path function; instead, the sum is the net quantity: $\int \! \dq = q \qquad \int \! \dw = w \tag{2.5.3}$ The infinitesimal quantities $\dq$ and $\dw$, because the values of their integrals depend on the path, are inexact differentials.
Chemical thermodynamicists often write these quantities as $\dif q$ and $\dif w$. Mathematicians, however, frown on using the same notation for inexact and exact differentials. Other notations sometimes used to indicate that heat and work are path functions are $\tx{D}q$ and $\tx{D}w$, and also $\delta q$ and $\delta w$.
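The contrast with a state function can be made concrete. The sketch below (ideal gas; the quasi-static expansion work $w = -\int p \dif V$, developed later in the text, is assumed here) computes the work for two different paths between the same pair of states: the volume change is identical, but $w$ is not.

```python
import math

R, n, T = 8.3145, 1.0, 300.0   # fixed amount of ideal gas, same T at both ends
V1, V2 = 0.010, 0.020          # m^3; since T is the same, p1 V1 = p2 V2 = nRT
p2 = n * R * T / V2            # final pressure

# Path A: reversible isothermal expansion; w = -∫ p dV with p = nRT/V,
# evaluated by a midpoint-rule sum.
N = 100_000
dV = (V2 - V1) / N
w_A = -sum(n * R * T / (V1 + (k + 0.5) * dV) * dV for k in range(N))
w_A_exact = -n * R * T * math.log(V2 / V1)   # closed form for comparison

# Path B: drop the pressure at constant volume (no p-V work), then
# expand at the constant pressure p2 to the same final state.
w_B = -p2 * (V2 - V1)

# Same initial and final states and the same ΔV for both paths,
# yet w_A ≈ -1729 J while w_B ≈ -1247 J.
```

The state function $V$ changes by the same amount either way, but the path function $w$ does not, which is why $\dw$ is an inexact differential.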
There is a fundamental difference between a state function (such as temperature or volume) and a path function (such as heat or work): The value of a state function refers to one instant of time; the value of a path function refers to an interval of time.
The difference between a state function and a path function in thermodynamics is analogous to the difference between elevation and trail length in hiking up a mountain. Suppose a trailhead at the base of the mountain has several trails to the summit. The hiker at each instant is at a definite elevation above sea level. During a climb from the trailhead to the summit, the hiker’s change of elevation is independent of the trail used, but the trail length from base to summit depends on the trail.
A large part of classical thermodynamics is concerned with the energy of the system. The total energy of a system is an extensive property whose value at any one instant cannot be measured in any practical way, but whose change is the focus of the first law of thermodynamics (Chap. 3).
2.6.1 Energy and reference frames
Classical thermodynamics ignores microscopic properties such as the behavior of individual atoms and molecules. Nevertheless, a consideration of the classical mechanics of particles will help us to understand the sources of the potential and kinetic energy of a thermodynamic system.
In classical mechanics, the energy of a collection of interacting point particles is the sum of the kinetic energy $\onehalf mv^2$ of each particle (where $m$ is the particle’s mass and $v$ is its velocity), and of various kinds of potential energies. The potential energies are defined in such a way that if the particles are isolated from the rest of the universe, as the particles move and interact with one another the total energy (kinetic plus potential) is constant over time. This principle of the conservation of energy also holds for real atoms and molecules whose electronic, vibrational, and rotational energies, absent in point particles, are additional contributions to the total energy.
The positions and velocities of particles must be measured in a specified system of coordinates called a reference frame. This e-book will use reference frames with Cartesian axes. Since the kinetic energy of a particle is a function of velocity, the kinetic energy depends on the choice of the reference frame. A particularly important kind is an inertial frame, one in which Newton’s laws of motion are obeyed (see Sec. G.1 in Appendix G).
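A small numerical sketch of this frame dependence (the particle masses and velocities are made-up illustrative values): the same pair of particles has a different total kinetic energy in the lab frame and in the frame moving with their center of mass.

```python
# Two point particles moving along x; illustrative values only.
masses = [2.0, 3.0]            # kg
v_lab = [5.0, -1.0]            # m/s, measured in the lab frame

def total_ke(ms, vs):
    """Total kinetic energy, sum over particles of (1/2) m v^2."""
    return sum(0.5 * m * v * v for m, v in zip(ms, vs))

M = sum(masses)
v_cm = sum(m * v for m, v in zip(masses, v_lab)) / M   # center-of-mass velocity
v_rel = [v - v_cm for v in v_lab]                      # velocities in cm frame

ke_lab = total_ke(masses, v_lab)   # kinetic energy in the lab frame
ke_cm = total_ke(masses, v_rel)    # a different value in the cm frame
# The two are related by ke_lab = ke_cm + (1/2) M v_cm^2 (a standard result).
```

The positions and masses are the same in both frames; only the velocities, and hence the kinetic energy, differ with the choice of reference frame.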
A reference frame whose axes are fixed relative to the earth’s surface is what this e-book will call a lab frame. A lab frame for all practical purposes is inertial (Sec. G.10). It is in this kind of stationary frame that the laws of thermodynamics have been found by experiment to be valid.
The energy $E$ of a thermodynamic system is the sum of the energies of the particles contained in it and the potential energies of interaction between these particles. Just as for an individual particle, the energy of the system depends on the reference frame in which it is measured. The energy of the system may change during a process, but the principle of the conservation of energy ensures that the sum of the energy of the system, the energy of the surroundings, and any energy shared by both, all measured in the same reference frame, remains constant over time.
This e-book uses the symbol $E\sys$ for the energy of the system measured in a specified inertial frame. The system could be located in a weightless environment in outer space, and the inertial frame could be one that is either fixed or moving at constant velocity relative to local stars. Usually, however, the system is located in the earth’s gravitational field, and the appropriate inertial frame is then an earth-fixed lab frame.
If during a process the system as a whole undergoes motion or rotation relative to the inertial frame, then $E\sys$ depends in part on coordinates that are not properties of the system. In such situations $E\sys$ is not a state function, and we need the concept of internal energy.
2.6.2 Internal energy
The internal energy, $U\!$, is the energy of the system measured in a reference frame that allows $U$ to be a state function—that is, at each instant the value of $U$ depends only on the state of the system. This e-book will call a reference frame with this property a local frame. A local frame may also be, but is not necessarily, an earth-fixed lab frame.
Here is a simple illustration of the distinction between the energy $E\sys$ of a system measured in a lab frame and the internal energy $U\!$ measured in a local frame. Let the system be a fixed amount of water contained in a glass beaker. (The glass material of the beaker is part of the surroundings.) We can define the state of this system by two independent variables: the temperature, $T$, and pressure, $p$, of the water. The most convenient local frame in which to measure $U$ in this case is a frame fixed with respect to the beaker.
• Section 3.1.1 will show that the relation between changes of the system energy and the internal energy in this example is $\Del E\sys = \Del E\subs{k} + \Del E\subs{p} + \Del U$, where $E\subs{k}$ and $E\subs{p}$ are the kinetic and potential energies of the system as a whole measured in the lab frame.
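With assumed numbers, this decomposition can be sketched for the beaker of water being warmed while it is lifted onto a shelf. The mass, height change, and internal-energy change below are hypothetical values chosen only to show how the terms combine:

```python
# Sketch of Delta E_sys = Delta E_k + Delta E_p + Delta U for a beaker of
# water that is warmed while being raised. All quantities are measured in
# an earth-fixed lab frame; the numerical values are assumed.
m = 0.50           # mass of the water, kg (assumed)
g = 9.81           # gravitational acceleration, m/s^2
dh = 1.2           # increase in height of the beaker, m (assumed)

dE_k = 0.0         # beaker is at rest before and after, so Delta E_k = 0
dE_p = m * g * dh  # change in gravitational potential energy, J
dU = 2.09e3        # internal-energy change from heating, J
                   # (assumed: roughly a 1 K rise of 0.5 kg of water)

dE_sys = dE_k + dE_p + dU
print(dE_p)        # about 5.9 J
print(dE_sys)      # about 2.096e3 J
```

Note that the potential-energy term (a few joules) is tiny compared with the thermal term; in a process with no overall motion, $\Del E\sys$ and $\Del U$ are equal.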
Our choice of the local frame used to define the internal energy $U$ of any particular system during a given process is to some extent arbitrary. Three possible choices are as follows.
• Is it possible to determine a numerical value for the internal energy of a system? The total energy of a body of mass $m$ when it is at rest is given by the Einstein relation $E = mc_0^2$, where $c_0$ is the speed of light in vacuum. In principle, then, we could calculate the internal energy $U$ of a system at rest from its mass, and we could determine $\Del U$ for a process from the change in mass. In practice, however, an absolute value of $U$ calculated from a measured mass has too much uncertainty to be of any practical use. For example, the typical uncertainty of the mass of an object measured with a microbalance, about $0.1\units{\mu\tx{g}}$ (Table 2.2), would introduce an enormous uncertainty in energy of about $10^{7}$ joules. Only values of the change $\Del U$ are useful, and these values cannot be calculated from $\Del m$ because the change in mass during an ordinary chemical process is much too small to be detected.
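The energy equivalent of the microbalance uncertainty can be checked directly from $\Del E = \Del m\, c_0^2$; the 0.1 μg figure comes from the text, and the speed of light is a standard constant:

```python
# Energy equivalent of a 0.1 microgram mass uncertainty: Delta E = Delta m * c0^2
c0 = 2.998e8          # speed of light in vacuum, m/s
dm = 0.1e-9           # 0.1 micrograms expressed in kg

dE = dm * c0**2       # corresponding uncertainty in energy, joules
print(f"{dE:.2e} J")  # about 9e6 J, i.e. on the order of 10^7 joules
```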