The ideal gas law is valid at low pressures, where the finite volume of the particles and the intermolecular attractions do not have a large impact. At higher pressures, we must account for these factors. The van der Waals and Redlich-Kwong equations of state attempt to account for real gas behavior by modifying the ideal gas law with two additional parameters.
The Van der Waals Equation of State
The van der Waals Equation of State is an equation relating the density of gases and liquids to the pressure, volume, and temperature conditions (i.e., it is a thermodynamic equation of state). It can be viewed as an adjustment to the ideal gas law that takes into account the non-zero volume of gas molecules and inter-particle attraction using correction terms $a$ and $b$. It was derived in 1873 by Johannes Diderik van der Waals, who received the Nobel Prize in 1910 for this work. The van der Waals equation of state is:
$P = \dfrac{nRT}{V - nb} - \dfrac{an^2}{V^2} \label{Eq7}$
Equation $\ref{Eq7}$ can also be rewritten as
$\left(P+{\dfrac {n^{2}a}{V^{2}}}\right)\left(V-nb\right)=nRT \nonumber$
If the correction terms $a$ and $b$ go to zero, the equation reduces to the ideal gas equation of state:
$PV=nRT \nonumber$
Let's first look at the correction term $b$, which represents the volume of the particles and assumes a hard-wall potential, $u_o(r)$. This potential energy term describes a system of hard sphere “billiard balls” of diameter $\sigma$. Figure 16.2.1 shows two of these billiard ball type particles at the point of contact (i.e., the distance of closest approach). At this point, they undergo a collision and separate, so they cannot be closer than that distance.
The distance between their centers is also $\sigma$. Because of this distance of closest approach, the total volume available to the particles is not the volume of the container, $V$, but some volume less than $V$. This reduction in volume can be calculated. Figure 16.2.1 shows a shaded sphere that just contains the pair of billiard ball particles. The volume of this sphere is the volume excluded from any two particles. The radius of the sphere is $\sigma$ and the excluded volume for the two particles is $4 \pi \sigma^3/3$, which is the volume of the shaded sphere. From this, we see that the excluded volume for any one particle is just half of this or $\frac{2}{3} \pi \sigma^3$. The excluded volume for a mole of such particles is the parameter $b$:
$b = \dfrac{2}{3} \pi \sigma^3 N_0 \nonumber$
Given $n$ moles of gas, the total excluded volume is then $nb$, so that the total available volume is $V - nb$.
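As a quick numerical check of this formula (a minimal sketch; the hard-sphere diameter below is a hypothetical, roughly molecule-sized value, not one taken from the text):

```python
import math

N0 = 6.02214e23      # Avogadro's number, mol^-1
sigma = 3.0e-10      # hypothetical hard-sphere diameter, m (about 3 Angstroms)

# excluded volume per mole of particles: b = (2/3) * pi * sigma^3 * N0
b = (2.0 / 3.0) * math.pi * sigma**3 * N0        # m^3 per mole
print(f"b = {b * 1e3:.4f} L/mol")                # comparable in size to tabulated b values
```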
Let's turn our attention to $a$, which represents the intermolecular attractions of the particles. Molecular attractions tend to make the pressure $P$ exerted by the gas less than that predicted by the ideal gas law. $P$ is force over area:
$P=\frac{F}{A} \nonumber$
Pressure is proportional to the net force on the wall of the container:
The net force is the force of the molecules impacting the container wall minus the intermolecular attraction of the molecules:
${\bar{F}}_{Net}={\bar{F}}_{Impact}-{\bar{F}}_{Attract} \nonumber$
Therefore, the pressure of a real gas will be less than that of an ideal gas. We can add a correction term to the observed pressure to account for the intermolecular attractions:
$P_{ideal}=P_{real}+a\left(?\right) \nonumber$
How large should this correction be? The number of attractive interactions depends on how crowded the molecules are: it is proportional to the concentration of molecules doing the attracting and to the concentration of molecules being attracted, each of which scales as $n/V$:
number of attractive interactions $\propto \dfrac{n}{V}\times\dfrac{n}{V}=\dfrac{n^2}{V^2}$
The correction term is therefore written as $an^2/V^2$, which is exactly the pressure correction appearing in Equation $\ref{Eq7}$:
$P_{ideal}=P_{real}+\dfrac{an^2}{V^2} \nonumber$
The constants $a$ and $b$ depend on the substance. Some typical values are:1
| Molecule | $a\;\left(\frac{\text{L}^2\cdot \text{bar}}{\text{mol}^2}\right)$ | $b\;\left(\frac{\text{L}}{\text{mol}}\right)$ |
|---|---|---|
| $\sf H_2O$ | 5.536 | 0.03049 |
| $\sf N_2$ | 1.43 | 0.03913 |
| $\sf CH_4$ | 2.283 | 0.04278 |
| $\sf C_2H_6$ | 5.562 | 0.0638 |
1. R. C. Weast (1972). Handbook of Chemistry and Physics 53rd Edition. Chemical Rubber Pub.
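As an illustrative sketch (the state point below is chosen arbitrarily; only the $a$ and $b$ values come from the table above), the correction terms can be applied directly to compare the van der Waals pressure of $\mathrm{N_2}$ with the ideal-gas prediction:

```python
R = 0.083145            # L·bar/(mol·K)
a, b = 1.43, 0.03913    # N2 constants from the table above

n, V, T = 1.0, 0.5, 300.0     # 1 mol in 0.5 L at 300 K (illustrative conditions)

P_ideal = n * R * T / V
P_vdw = n * R * T / (V - n * b) - a * n**2 / V**2
print(f"ideal: {P_ideal:.1f} bar   van der Waals: {P_vdw:.1f} bar")
```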
Redlich-Kwong Equation of State
The van der Waals equation of state had to wait more than 70 years before a real, successful improvement was introduced to it. This progress occurred once researchers committed themselves to finding an empirical temperature dependence for the attraction parameter $a$ proposed by van der Waals. In contrast, very little attention has been paid to modifying the parameter $b$ for co-volume. It makes sense that $b$ would not be modified by temperature, because it represents the volume of the molecules, which should not be affected by their kinetic energy (measured in terms of temperature). The first noteworthy successful modification to the attraction parameter came with the publication of the Redlich-Kwong equation of state in 1949.
The Redlich–Kwong equation of state is an empirical, algebraic equation that relates temperature, pressure, and volume of gases. It is generally more accurate than the van der Waals and the ideal gas equations of state at temperatures above the critical temperature. It was formulated by Otto Redlich and Joseph Neng Shun Kwong in 1949, who showed that a simple two-parameter equation of state could well reflect reality in many situations. Redlich and Kwong revised the van der Waals Equation of State (Equation $\ref{Eq7}$) and proposed the following expressions:
$\left( P + \dfrac{a}{\sqrt{T} \bar{V} (\bar{V} + b)} \right) ( \bar{V}-b) = RT \label{10.1}$
The fundamental change they introduced was to the functional form of the attraction term $P_\text{attraction}$. Additionally, they introduced the co-volume $b$ into the denominator of this functional form. The important concept here is that the attraction parameter $a$ of van der Waals needed to be made a function of temperature in order to match experimental data quantitatively. This was a change that van der Waals himself had suggested, but no actual functional dependence had been introduced until the Redlich-Kwong equation.
We know what follows at this point. To come up with an expression for $a$ and $b$ of Equation $\ref{10.1}$, we apply the criticality conditions to this equation of state. As we recall, imposing the criticality conditions allows us to relate the coefficients $a$ and $b$ to the critical properties ($P_c$, $T_c$) of the substance. Once we have done that, we obtain the definition of $a$ and $b$ for the Redlich-Kwong equation of state:
$a =0.42748 \dfrac{R^2T_c^{2.5}}{P_c} \label{10.2a}$
$b =0.086640 \dfrac{RT_c}{P_c} \label{10.2b}$
The Redlich-Kwong equation of state radically improved, in a quantitative sense, the predictions of the van der Waals equation of state. We now recall that van der Waals-type equations are cubic because they are cubic polynomials in molar volume and compressibility factor. It comes as no surprise then, that we can transform Equation $\ref{10.1}$ into:
$\bar{v} ^3 - \left( \dfrac{RT}{P} \right)\bar{v}^2 + \dfrac{1}{P} \left( \dfrac{a}{T^{0.5}} - bRT - Pb^2 \right) \bar{v} - \dfrac{ab}{PT^{0.5}} = 0 \label{10.3}$
and, by defining the following parameters:
$A = \dfrac{aP}{R^2T^{2.5}} \label{10.3a}$
$B =\dfrac{bP}{RT} \label{10.3b}$
and introducing the compressibility factor definition:
$Z = \dfrac{P\bar{v}}{RT} \nonumber$
we get:
$Z^3 -Z^2 + (A -B -B^2) Z -AB =0 \label{10.4}$
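A minimal numerical sketch of Equation $\ref{10.4}$: build $a$ and $b$ from critical constants (Equations $\ref{10.2a}$ and $\ref{10.2b}$), form $A$ and $B$, and solve the cubic for $Z$ with NumPy. The critical constants below are approximate literature values for methane, and the state point is arbitrary:

```python
import numpy as np

R = 0.083145             # L·bar/(mol·K)
Tc, Pc = 190.6, 46.0     # approximate critical constants for methane (K, bar)

a = 0.42748 * R**2 * Tc**2.5 / Pc     # Equation 10.2a
b = 0.08664 * R * Tc / Pc             # Equation 10.2b

T, P = 300.0, 50.0                    # illustrative temperature (K) and pressure (bar)
A = a * P / (R**2 * T**2.5)           # Equation 10.3a
B = b * P / (R * T)                   # Equation 10.3b

# Z^3 - Z^2 + (A - B - B^2) Z - A B = 0   (Equation 10.4)
roots = np.roots([1.0, -1.0, A - B - B**2, -A * B])
Z = roots[np.isclose(roots.imag, 0.0)].real
print("real root(s) for Z:", Z)
```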
We may also verify the two-parameter corresponding-states principle by introducing Equations $\ref{10.2a}$ and $\ref{10.2b}$ into Equations $\ref{10.3a}$ and $\ref{10.3b}$ and substituting into Equation $\ref{10.4}$:
$Z^3−Z^2+\dfrac{P_r}{T_r} \left( \dfrac{0.42748}{T_r^{1.5}} −0.08664−0.007506 \dfrac{P_r}{T_r} \right)Z−0.03704 \dfrac{P^2_r}{T^{3.5}_r}=0 \label{10.5}$
Where:
$P_r=\frac{P}{P_c} \nonumber$
$T_r=\frac{T}{T_c} \nonumber$
$P_r$ and $T_r$ are the pressure and temperature at the reduced state. See Section 16.4 for more information on reduced states. In Equation $\ref{10.5}$, we can observe the same thing that we saw with the van der Waals equation of state: gases at corresponding states have the same properties. Equation $\ref{10.5}$ is particularly clear about it: any two different gases at the same $P_r$, $T_r$ condition have the same compressibility factor.
Just as any other cubic equation of state, Equations $\ref{10.1}$ through $\ref{10.5}$, as they stand, are to be applied to pure substances. For mixtures, however, we apply the same equation, but we impose certain mixing rules to obtain $a$ and $b$, which are functions of the properties of the pure components. Strictly speaking, we create a new “pseudo” pure substance that has the average properties of the mixture. Redlich-Kwong preserved the same mixing rules that van der Waals proposed for his equation of state:
$a_m = \sum_i \sum_j y_iy_j a_{ij} \nonumber$
with
$a_{ij} = \sqrt{a_ia_j} \nonumber$
and
$b_{m}=\sum_i y_ib_i \nonumber$
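A short sketch of these mixing rules for a binary mixture; the pure-component parameters and mole fractions below are placeholders, not values from the text:

```python
import math

a = [14.2, 98.8]       # hypothetical pure-component a_i values
b = [0.0221, 0.0452]   # hypothetical pure-component b_i values
y = [0.7, 0.3]         # mole fractions (must sum to 1)

# a_m = sum_i sum_j y_i y_j sqrt(a_i a_j)   and   b_m = sum_i y_i b_i
a_m = sum(y[i] * y[j] * math.sqrt(a[i] * a[j])
          for i in range(len(y)) for j in range(len(y)))
b_m = sum(yi * bi for yi, bi in zip(y, b))
print(f"a_m = {a_m:.3f}, b_m = {b_m:.4f}")
```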
Naturally, Redlich and Kwong did not have the last word on possible improvements to the van der Waals equation of state. The Redlich-Kwong equation of state, as shown here, is no longer used in practical applications. Research continued and brought with it new attempts to improve the Redlich-Kwong equation of state. After more than two decades, a modified Redlich-Kwong equation of state with very good potential was developed.
Contributors and Attributions
• Michael Adewumi (The Pennsylvania State University) Vice Provost for Global Program, Professor of Petroleum and Natural Gas Engineering
• Jerry LaRue (Chapman University)
16.03: A Cubic Equation of State
Isotherms
Isotherms are plots of the pressure of a gas as a function of volume at a fixed constant temperature. The isotherms for an ideal gas are hyperbolas:
$P=\dfrac{RT}{\bar{V}} \nonumber$
where $\bar{V}$ is the molar volume $V/n$. We know that at sufficiently low temperatures, any real gas, when compressed, must undergo a transition from gas to liquid. The signature of such a transition is a discontinuous change in the volume, signifying the condensation of the gas into a liquid that occupies a significantly lower volume.
Isotherms of $\mathrm{CO_2}$ calculated from the van der Waals equation of state are shown in Figure 16.3.2 . At sufficiently high temperatures, the isotherms approach those of an ideal gas. At lower temperatures, the fluid obeys the ideal gas law $PV=nRT$ approximately at large volumes, where the pressures are low. If we decrease the volume (go to the left in the figure along an isotherm), the pressure rises. Consider the (blue) isotherm at 10 °C, which is below the critical temperature. Decrease the volume until we reach the point $B$, where condensation (formation of liquid $\mathrm{CO_2}$) starts. Beyond this point the van der Waals curve is no longer physical (excluding the possibility of the occurrence of an oversaturated, metastable gas) because $P$ and $V$ increase together. It should be clear that many approximations and assumptions go into the derivation of the van der Waals equation, so some of the important physics is missing from the model. Hence, we should not be surprised that the van der Waals equation has some unphysical behavior buried in it. In reality, the pressure stays constant between points $A$ and $B$, and the real physical behavior is given by the dashed blue line, called the tie line. This line represents gas-liquid coexistence, and along it the pressure is equal to the vapor pressure of the liquid.
The tie line must be added in ad hoc by drawing a horizontal line through the isotherm (Figure 16.3.2 ). The vertical position of the line is chosen so that the area above the line (between the line and the isotherm) and below the line (again between the line and the isotherm) is exactly the same. This is known as Maxwell's construction. In this way, we entirely remove the artifact of the unphysical increase of $P$ with $V$ when we compute the compressional work on the gas from $\int P(V) \: dV$, to be discussed in our section on thermodynamics. Although the van der Waals curves have regions where they are not physical, the equation for these curves, derived by van der Waals in 1873, was a great scientific achievement.
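Maxwell's construction can also be carried out numerically: for a subcritical isotherm, pick a trial tie-line pressure, locate the liquid and vapor volumes where the isotherm crosses it, and adjust the pressure until $\int_{\bar{V}_l}^{\bar{V}_g} \left[ P(\bar{V}) - P_{tie} \right] d\bar{V} = 0$, which is the equal-area condition. The sketch below assumes approximate van der Waals constants for $\mathrm{CO_2}$ and a bracketing pressure interval chosen by inspecting the loop; it is an illustration, not the calculation behind the figure:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

R = 0.083145                   # L·bar/(mol·K)
a, b = 3.640, 0.04267          # approximate van der Waals constants for CO2
T = 283.0                      # 10 C, below the critical temperature

def P(V):
    """van der Waals pressure for molar volume V (L/mol)."""
    return R * T / (V - b) - a / V**2

def volume_roots(Ptie):
    # P V^3 - (P b + R T) V^2 + a V - a b = 0
    r = np.roots([Ptie, -(Ptie * b + R * T), a, -a * b])
    return np.sort(r[np.isclose(r.imag, 0.0)].real)

def area_mismatch(Ptie):
    V = volume_roots(Ptie)
    Vl, Vg = V[0], V[-1]                      # liquid and vapor volumes
    # Maxwell condition: equal areas above and below the tie line
    return quad(lambda v: P(v) - Ptie, Vl, Vg)[0]

# bracket chosen between the local minimum and maximum of this isotherm
P_vap = brentq(area_mismatch, 47.0, 58.0)
print(f"tie-line (vapor) pressure at {T} K: {P_vap:.1f} bar")
```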
Note
Even today it is not possible to give a single equation that describes correctly the gas-liquid phase transition.
Critical Point
Looking to the left of point $A$ on the 10 °C isotherm (blue curve), the system is in the liquid state. Increasing the volume to point $A$ leads to a rapid drop in the pressure of the system because the compressibility of a liquid is considerably smaller than that of a gas. The system is still in the liquid state at point $A$, but as we increase the volume further, we enter the gas-liquid coexistence line and the liquid begins to transition to gas. As we move along this line to the right, there is less liquid and more gas in the system until we reach point $B$, at which point the system is completely in the gas phase. The areas bounded by the 10 °C isotherm (blue curve) below and above the coexistence line are equal. Any further increase in the volume leads to an expansion of the gas. There is exactly one isotherm along which the van der Waals equation correctly predicts the gas-to-liquid phase transition. If one follows the 31 °C isotherm (green curve in Figure 16.3.2 ), which lies at the critical temperature, the volume discontinuity captured by the tie line is shrunk down to a single point (so that there is no possibility of an increase of $P$ with $V$!). This point is called the critical point, and it exists at only one temperature, called the critical temperature, denoted $T_c$. The critical isotherm corresponds to the highest possible temperature at which a gas-liquid transition can occur; isotherms at higher temperatures have no liquid-gas phase transition. Along those isotherms, the higher-pressure fluid, called a supercritical fluid, resembles a liquid, while at lower pressures the fluid is more gas-like.
The critical point is an inflection point of the isotherm, where the first and second derivatives of $P$ with respect to $V$ are both zero:
$\left(\frac{\partial P}{\partial V}\right)_{T_c} = 0 \nonumber$
$\left(\frac{\partial^2 P}{\partial V^2}\right)_{T_c} = 0 \nonumber$
Substituting the van der Waals equation into these two conditions, we find the following:
\begin{align*} -\dfrac{nRT_c}{\left( V_c - nb \right)^2} + \dfrac{2 an^2}{V_c^3} &= 0 \\[4pt] \dfrac{2nRT_c}{\left(V_c - nb \right)^3} - \dfrac{6an^2}{V_c^4} &= 0 \end{align*}
Hence, we have two equations in two unknowns $V_c$ and $T_c$ for the critical temperature and critical volume. Once these are determined, the van der Waals equation, itself, allows us to determine the critical pressure, $P_c$. To solve the equations, first divide one by the other. This gives us a simple condition for the volume:
\begin{align*} \dfrac{V_c - nb}{2} &= \dfrac{V_c}{3} \\[4pt] 3V_c - 3nb &= 2V_c \\[4pt] V_c &= 3nb \end{align*}
This is the critical volume. Now use either of the two conditions to obtain the critical temperature, $T_c$. If we use the first one, we find:
\begin{align*} \dfrac{nRT_c}{\left( V_c - nb \right)^2} &= \dfrac{2an^2}{V_c^3} \\[4pt] \dfrac{nRT_c}{\left( 3nb - nb \right)^2} &= \dfrac{2an^2}{\left( 3nb \right)^3} \\[4pt] \dfrac{nRT_c}{4n^2b^2} &= \dfrac{2an^2}{27n^3b^3} \\[4pt] RT_c &= \dfrac{8a}{27b} \end{align*}
Finally, plugging the critical temperature and volume into the van der Waals equation, we obtain the critical pressure:
\begin{align*} P_c &= \dfrac{nRT_c}{V_c - nb} - \dfrac{an^2}{V_c^2} \\[4pt] &= \dfrac{8an/27b}{3nb - nb} - \dfrac{an^2}{\left( 3nb \right)^2} \\[4pt] &= \dfrac{a}{27b^2} \end{align*}
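These relations are easy to check numerically. A small sketch using the $\mathrm{N_2}$ constants from the table in the previous section (per mole, so $n = 1$); the quoted experimental values are approximate:

```python
R = 0.083145            # L·bar/(mol·K)
a, b = 1.43, 0.03913    # van der Waals constants for N2 (per mole)

V_c = 3 * b                    # critical molar volume, L/mol
T_c = 8 * a / (27 * b * R)     # critical temperature, K
P_c = a / (27 * b**2)          # critical pressure, bar
print(f"Vc = {V_c:.4f} L/mol, Tc = {T_c:.1f} K, Pc = {P_c:.1f} bar")
# experimental values for N2 are roughly Tc ~ 126 K and Pc ~ 34 bar
```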
It comes as no surprise that cubic equations of state like the van der Waals (and Redlich-Kwong) equations of state yield three different roots for volume and compressibility factor. This is simply because they are algebraic equations, and any nth order algebraic equation will always yield “n” roots. However, those “n” roots are not required to be distinct, and that is not all: they are not required to be real numbers, either. A quadratic expression (n = 2) may have zero real roots (e.g., $x^2 + 1 = 0$); this is because those roots are complex numbers. In the case of cubic expressions (n = 3), we will have either one or three real roots; this is because complex roots always show up in pairs (i.e., once you have a complex root, its conjugate must also be a solution). In our case, and because we are dealing with physical quantities (densities, volumes, compressibility factors), only real roots are of interest. More specifically, we look for real, positive roots such that $\bar{V} > b$ in the case of molar volume and $Z > \frac{Pb}{RT}$ in the case of compressibility factor.
In a cubic equation of state, the possibility of three real roots is restricted to the case of sub-critical conditions ($T < T_c$), because the S-shaped behavior, which represents the vapor-liquid transition, takes place only at temperatures below critical. This restriction is mathematically imposed by the criticality conditions. Anywhere else, beyond the S-shaped curve, we will only get one real root of the type $\bar{V} > b$. Figure 16.3.3 illustrates this point.
The shape of the critical isotherm at the critical point allows us to determine the exact temperature, pressure, and volume at which the phase transition from gas to liquid will occur. If we draw a curve through the isotherms joining all points of these isotherms at which the tie lines begin, continue the curve up to the critical isotherm, and down the other side where the tie lines end, this curve reaches a maximum at the critical point. The region inside this curve is where the gas and liquid phases coexist.
Let us summarize the three cases presented in Figure 16.3.3 :
1. Supercritical isotherms ($T > T_c$): At temperatures beyond critical, the cubic equation will have only one real root (the other two are complex conjugates). In this case, there is no ambiguity in the assignment of the volume root since we have single-phase conditions. The occurrence of a unique real root remains valid at any pressure: any horizontal (isobaric) line cuts the supercritical isotherm just once in Figure 16.3.3 .
2. Critical isotherm ($T = T_c$): At the critical point ($P = P_c$), vapor and liquid properties are the same. Consequently, the cubic equation predicts three real and equal roots at this special and particular point. However, for any other pressure along the critical isotherm ($P < P_c$ or $P > P_c$), the cubic equation gives a unique real root with two complex conjugates.
3. Subcritical isotherm ($T < T_c$): Predictions for pressures within the pressure range for metastability ($P_A’ < P < P_B’$) or for the saturation condition ($P = P^{sat}$) will always yield three real, different roots. In fact, this is the only region where an isobar cuts the same isotherm more than once. The smallest root is taken as the specific volume of the liquid phase; the largest is the specific volume of the vapor phase; the intermediate root is not computed as it is physically meaningless. However, do not get carried away. Subcritical conditions will not always yield three real roots of the type $\bar{\nu} > b$. If the pressure is higher than the maximum of the S-shaped curve, $P_B$, we will only have one (liquid) real root that satisfies $\bar{\nu} > b$. By the same token, pressures between $0 < P < P_A’$ yield only one (vapor) root. In the case of $P_A’$ being a negative number, three real roots are to be found even for very low pressures when the ideal gas law applies. The largest root is always the correct choice for the gas phase molar volume of pure components.
Most of these considerations apply to the cubic equation of state in $Z$ (compressibility factor). The most common graphical representation of compressibility factor is the well-known chart of Standing and Katz, where compressibility, $Z$, is plotted against pressure (Figure 16.3.4 ). Standing and Katz presented their chart for the compressibility factor of sweet natural gases in 1942. This chart was based on experimental data. Graphical determination of properties was widespread until the advent of computers, and thus the Standing and Katz Z-chart became very popular in the natural gas industry. Typical Standing and Katz charts are given for high temperature conditions ($T > T_c$ or $T_r > 1$).
16.04: The Law of Corresponding States
van der Waals assumed that all real gases at corresponding states should behave similarly. The corresponding state that van der Waals chose to use is called the reduced state, which is based on the deviation of the conditions of a substance from its own critical conditions. We can define reduced quantities:
$P_r = \dfrac{P}{P_c} \nonumber$
$V_r= \dfrac{V}{V_c} \nonumber$
$T_r= \dfrac{T}{T_c} \nonumber$
By substitution into the van der Waals equation we find:
$\left( P_r + \dfrac{3}{\bar{V}_r^2} \right) \left( \bar{V}_r - \dfrac{1}{3} \right) = \dfrac{8}{3} T_r \label{vdw}$
This substitution uses the critical parameters of the gas expressed in terms of the $a$ and $b$ parameters:
$V_c=3b \nonumber$
$P_c=\dfrac{a}{27b^2} \nonumber$
and:
$T_c=\dfrac{8a}{27b R} \nonumber$
The formulation of the van der Waals equation of state in terms of the reduced variables in Equation $\ref{vdw}$ is a universal equation for all gases. Although the actual pressures and volumes may differ, two gases are said to be in corresponding states if their reduced pressure, volume, and temperature are the same. It says that the behavior of all gases (and liquids!) is pretty much the same, except for a scaling factor related to the critical point of the substance. The compressibility value ($Z$) of 3/8 at the critical point for the van der Waals equation is actually not in very good agreement with measurement. This is why the more complicated Redlich-Kwong and Peng-Robinson expressions are better, although the idea is the same. Hence, all one needs to know to describe any fluid's behavior is its critical point. For example, argon behaves much the same at 300 K as ethane does at 600 K, because these temperatures are approximately twice their respective critical temperatures (150.72 K and 305.4 K), so $T_r \approx 2.0$ for both.
We can rewrite the universal expression for compressibility $Z$ using reduced variables and plot measured values of $Z$ versus the reduced pressure, $P_r$ (see Figure 16.4.1 ). As you can see, very different gases/liquids like nitrogen and water can be made to coincide if their properties are plotted relative to their critical points rather than in absolute terms. The compressibility factor $Z$ can also be cast into the form of corresponding states, showing that $Z$ can be expressed as a universal function of $V_r$ and $T_r$ or any other two reduced quantities:
$Z=\dfrac{\bar{V}_r}{\bar{V}_r-\dfrac{1}{3}} - \dfrac{9}{8\bar{V}_r T_r} \nonumber$
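A minimal sketch of this expression makes the corresponding-states point concrete: $Z$ depends only on the reduced variables, so argon near 300 K and ethane near 600 K (both at $T_r \approx 2$) share the same curve. The reduced volumes below are arbitrary illustrative values:

```python
def Z_reduced(Vr, Tr):
    """Compressibility factor from the reduced van der Waals expression above."""
    return Vr / (Vr - 1.0 / 3.0) - 9.0 / (8.0 * Vr * Tr)

# any gas at Tr = 2 gives these same values, whatever its critical constants are
for Vr in (1.0, 2.0, 5.0, 10.0):
    print(f"Vr = {Vr:5.1f}   Z = {Z_reduced(Vr, 2.0):.4f}")
```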
Van der Waals did not stop here. He went on to describe the root cause of condensation of gases into liquids at lower temperatures: the attractive interactions between the molecules.
16.05: The Second Virial Coefficient
The second virial coefficient describes the contribution of the pair-wise potential to the pressure of the gas. The third virial coefficient depends on interactions between three molecules, and so on and so forth.
Introduction
As the density is increased the interactions between gas molecules become non-negligible. Deviations from the ideal gas law have been described in a large number of equations of state. The virial equation of state expresses the deviation from ideality in terms of a power series in the density:
$\dfrac{P}{kT} = \rho + B_2(T)\rho^2 + B_3(T)\rho^3 + ... \nonumber$
• $B_2(T)$ is the second virial coefficient,
• $B_3(T)$ is called the third virial coefficient, etc.
The jth virial coefficient can be calculated in terms of the interaction of j molecules in a volume $V$. The second and third virial coefficients give most of the deviation from ideality ($P/\rho kT$) up to 100 atm.
The second virial coefficient is usually written as $B$ or as $B_2$. The second virial coefficient represents the initial departure from ideal-gas behavior. The second virial coefficient, in three dimensions, is given by:
$B_{2}(T)= - \dfrac{1}{2} \int \left( \exp\left(-\dfrac{\Phi_{12}({\mathbf r})}{k_BT}\right) -1 \right) 4 \pi r^2 dr \nonumber$
where $\Phi_{12}({\mathbf r})$ is the intermolecular pair potential, $T$ is the temperature, and $k_B$ is the Boltzmann constant. Notice that the expression within the parenthesis of the integral is the Mayer f-function. In practice, the integral is often very hard to integrate analytically for anything other than, say, the hard sphere model, thus one numerically evaluates:
$B_{2}(T)= - \dfrac{1}{2} \int \left( \left\langle \exp\left(-\dfrac{\Phi_{12}({\mathbf r})}{k_BT}\right)\right\rangle -1 \right) 4 \pi r^2 dr \nonumber$
calculating:
$\left\langle \exp\left(-\dfrac{\Phi_{12}({\mathbf r})}{k_BT}\right)\right\rangle \nonumber$
for each $r$ using the numerical integration scheme proposed by Harold Conroy [1][2].
Calculation of virial coefficients
The configuration integrals for $Z_1$, $Z_2$, and $Z_3$ are:
• $Z_1 = \int dr_1 = V$
• $Z_2 = \int e^{-U_2/kT}dr_1\;dr_2$
• $Z_3 = \int e^{-U_3/kT}dr_1\;dr_2\;dr_3$
The series method allows the calculation of a number of virial coefficients. Recall that the second and third virial coefficients can account for the properties of gases up to hundreds of atmospheres. We will discuss the calculation of the second virial coefficient for a monatomic gas to illustrate the procedure. To calculate $B_2(T)$ we need $U_2$. For monatomic particles it is reasonable to assume that the potential depends only on the separation of the two particles, so $U_2 = u(r_{12})$, where $r_{12} = |r_2 - r_1|$. Using the change of variables $r_{12} = r_2 - r_1$, we can integrate over $r_1$ (which gives a factor of $V$) and then transform from $dr_{12}$ to $4\pi r^2\,dr$. The result is:
$B_2(T) = -2 \pi \int [ e^{-\beta u(r)} - 1 ] r^2 \; dr \nonumber$
This expression can be used to obtain parameters from experiment. The second virial coefficient is tabulated for a number of gases. For a hard-sphere potential there is an infinite repulsive wall at a particle radius $\sigma$. There is no attractive part.
\begin{align*} B_2(T) &= -2 \pi \int_0^{\sigma} [ - 1 ] r^2 \; dr \\[4pt] &= \dfrac{2 \pi \sigma^3}{3} \end{align*}
For the Lennard-Jones potential, the integral cannot be evaluated analytically, but it can be computed numerically. The second virial coefficient was a useful starting point for obtaining Lennard-Jones parameters that were later used in simulations.
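A minimal sketch of such a numerical evaluation using SciPy quadrature, working in the reduced separation $x = r/\sigma$ so that the integral is well scaled. The $\epsilon/k_B$ and $\sigma$ values are illustrative, roughly argon-like; they are assumptions, not parameters given in the text:

```python
import numpy as np
from scipy.integrate import quad

NA = 6.02214e23          # 1/mol
eps_over_k = 120.0       # Lennard-Jones well depth / kB, K (argon-like, illustrative)
sigma = 3.4e-10          # Lennard-Jones size parameter, m (argon-like, illustrative)

def B2(T):
    """Second virial coefficient, -2*pi*NA * int (e^{-u/kT} - 1) r^2 dr, in cm^3/mol."""
    Tstar = T / eps_over_k
    f = lambda x: (np.exp(-4.0 * (x**-12 - x**-6) / Tstar) - 1.0) * x**2
    integral, _ = quad(f, 1e-6, 20.0, limit=200)
    return -2.0 * np.pi * NA * sigma**3 * integral * 1e6   # m^3/mol -> cm^3/mol

for T in (100.0, 300.0, 1000.0):
    print(f"T = {T:6.1f} K   B2 ~ {B2(T):8.1f} cm^3/mol")
```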
Isihara-Hadwiger Formula
The Isihara-Hadwiger formula was discovered simultaneously and independently by Isihara and the Swiss mathematician Hadwiger in 1950. The second virial coefficient for any hard convex body is given by the exact relation:
$B_2=RS+V \nonumber$
or:
$\dfrac{B_2}{V}=1+3 \alpha \nonumber$
where:
$\alpha = \dfrac{RS}{3V} \nonumber$
where $V$ is the volume, $S$, the surface area, and $R$ the mean radius of curvature.
Hard spheres
For the hard sphere model one has:[9]
$B_{2}(T)= - \dfrac{1}{2} \int_0^\sigma \left(\langle 0\rangle -1 \right) 4 \pi r^2 dr \nonumber$ leading to $B_{2}= \dfrac{2\pi\sigma^3}{3} \nonumber$
Note that $B_{2}$ for the hard sphere is independent of temperature.
Van der Waals equation of state
For the Van der Waals equation of state one has:
$B_{2}(T)= b -\dfrac{a}{RT} \nonumber$
Excluded volume
The second virial coefficient can be computed from the expression:
$B_{2}= \dfrac{1}{2} \iint v_{\mathrm {excluded}} (\Omega,\Omega') f(\Omega) f(\Omega')~ {\mathrm d}\Omega {\mathrm d}\Omega' \nonumber$
where $v_{\mathrm {excluded}}$ is the excluded volume.
Related reading
• W. H. Stockmayer "Second Virial Coefficients of Polar Gases", Journal of Chemical Physics 9 pp. 398- (1941)
• G. A. Vliegenthart and H. N. W. Lekkerkerker "Predicting the gas–liquid critical point from the second virial coefficient", Journal of Chemical Physics 112 pp. 5364-5369 (2000)
16.06: The Repulsive Term in the Lennard-Jones Potential
Proposed by Sir John Edward Lennard-Jones, the Lennard-Jones potential describes the potential energy of interaction between two non-bonding atoms or molecules based on their distance of separation. The potential equation accounts for the difference between attractive forces (dipole-dipole, dipole-induced dipole, and London interactions) and repulsive forces.
Introduction
Imagine two rubber balls separated by a large distance. Both objects are far enough apart that they are not interacting. The two balls can be brought closer together with minimal energy, allowing interaction. The balls can continuously be brought closer together until they are touching. At this point, it becomes difficult to further decrease the distance between the two balls. To bring the balls any closer together, increasing amounts of energy must be added. This is because eventually, as the balls begin to invade each other’s space, they repel each other; the force of repulsion is far greater than the force of attraction.
This scenario is similar to that which takes place in neutral atoms and molecules and is often described by the Lennard-Jones potential.
The Lennard-Jones Potential
The Lennard-Jones model consists of two parts: a steep repulsive term and a smoother attractive term representing the London dispersion forces. Apart from being an important model in itself, the Lennard-Jones potential frequently forms one of the 'building blocks' of many force fields. It is worth mentioning that the 12-6 Lennard-Jones model is not the most faithful representation of the potential energy surface; rather, its use is widespread due to its computational expediency. The Lennard-Jones potential is given by the following equation:
$V(r)= 4 \epsilon \left [ {\left (\dfrac{\sigma}{r} \right )}^{12}-{\left (\dfrac{\sigma}{r} \right )}^{6} \right] \label{1}$
or is sometimes expressed as:
$V(r) = \frac{A}{r^{12}}- \dfrac{B}{r^6} \label{2}$
where
• $V$ is the intermolecular potential between the two atoms or molecules.
• $\epsilon$ is the well depth and a measure of how strongly the two particles attract each other.
• $\sigma$ is the distance at which the intermolecular potential between the two particles is zero (Figure $1$). $\sigma$ gives a measurement of how close two nonbonding particles can get and is thus referred to as the van der Waals radius. It is equal to one-half of the internuclear distance between nonbonding particles.
• $r$ is the distance of separation between both particles (measured from the center of one particle to the center of the other particle).
• $A= 4\epsilon \sigma^{12}$, $B= 4\epsilon \sigma^{6}$
• The minimum value of $V(r)$, equal to $-\epsilon$, occurs at the separation $r_{min} = 2^{1/6}\sigma$.
Example $1$
The $\epsilon$ and $\sigma$ values for Xenon (Xe) are found to be 1.77 kJ/mol and 4.10 Angstroms, respectively. Determine the van der Waals radius for the Xenon atom.
Solution
Recall that the van der Waals radius is equal to one-half of the internuclear distance between nonbonding particles. Because $\sigma$ gives a measure of how close two non-bonding particles can be, the van der Waals radius for Xenon (Xe) is given by:
r = $\sigma$/2 = 4.10 Angstroms/2 = 2.05 Angstroms
Bonding Potential
The Lennard-Jones potential is a function of the distance between the centers of two particles. When two non-bonding particles are an infinite distance apart, the possibility of them coming together and interacting is minimal. For simplicity's sake, their bonding potential energy is considered zero. However, as the distance of separation decreases, the probability of interaction increases. The particles come closer together until they reach a region of separation where the two particles become bound; their bonding potential energy decreases from zero to a negative quantity. While the particles are bound, the distance between their centers continues to decrease until the particles reach an equilibrium, specified by the separation distance at which the minimum potential energy is reached.
If the two bound particles are pressed further together, past their equilibrium distance, repulsion begins to occur: the particles are so close together that their electrons are forced to occupy each other’s orbitals. Repulsion occurs as each particle attempts to retain the space in its respective orbitals. Because of this repulsive force, the bonding potential energy increases rapidly as the distance of separation decreases.
Example $2$
Calculate the intermolecular potential between two Argon (Ar) atoms separated by a distance of 4.0 Angstroms (use $\epsilon$=0.997 kJ/mol and $\sigma$=3.40 Angstroms).
Solution
To solve for the intermolecular potential between the two Argon atoms, we use Equation $\ref{1}$, where V is the intermolecular potential between two non-bonding particles.
$V= 4 \epsilon \left [ {\left (\dfrac{\sigma}{r} \right )}^{12}-{\left (\dfrac{\sigma}{r} \right )}^{6} \right] \nonumber$
The data given are $\epsilon$=0.997 kJ/mol, $\sigma$=3.40 Angstroms, and the distance of separation, r=4.0 Angstroms. We plug these values into Equation $\ref{1}$ and solve as follows:
\begin{align*} V &= 4(0.997\;\text{kJ/mol}) \left[\left(\dfrac{3.40\;\text{Angstroms}}{4.0\;\text{Angstroms}}\right)^{12}-\left(\dfrac{3.40 \;\text{Angstroms}}{4.0 \;\text{Angstroms}}\right)^6\right] \\[4pt] &= 3.988\;\text{kJ/mol}\,(0.14-0.38) \\[4pt] &= 3.988\;\text{kJ/mol}\,(-0.24) \\[4pt] &= -0.96\;\text{kJ/mol} \end{align*}
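A quick numerical check of this arithmetic (a sketch using the values given in the example):

```python
eps, sigma, r = 0.997, 3.40, 4.0   # kJ/mol, Angstrom, Angstrom
V = 4 * eps * ((sigma / r)**12 - (sigma / r)**6)
print(f"V = {V:.3f} kJ/mol")   # about -0.94 kJ/mol (the worked solution rounds to -0.96)
```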
Stability and Force of Interactions
Like the bonding potential energy, the stability of an arrangement of atoms is a function of the Lennard-Jones separation distance. As the separation distance decreases below equilibrium, the potential energy becomes increasingly positive (indicating a repulsive force). Such a large potential energy is energetically unfavorable, as it indicates an overlapping of atomic orbitals. However, at long separation distances, the potential energy is negative and approaches zero as the separation distance increases to infinity (indicating an attractive force). This indicates that at long-range distances, the pair of atoms or molecules experiences a small stabilizing force. Lastly, as the separation between the two particles reaches a distance slightly greater than σ, the potential energy reaches a minimum value (indicating zero force). At this point, the pair of particles is most stable and will remain in that orientation until an external force is exerted upon it.
Example $3$
Two molecules, separated by a distance of 3.0 angstroms, are found to have a $\sigma$ value of 4.10 angstroms. By decreasing the separation distance between both molecules to 2.0 angstroms, the intermolecular potential between the molecules becomes more negative. Do these molecules follow the Lennard-Jones potential? Why or why not?
Solution
Recall that $\sigma$ is the distance at which the bonding potential between two particles is zero. On a graph of the Lennard-Jones potential, this value gives the x-intercept of the curve. According to the Lennard-Jones potential, any value of $r$ greater than $\sigma$ should yield a negative bonding potential and any value of $r$ smaller than $\sigma$ should yield a positive bonding potential. In this scenario, as the separation between the two molecules decreases from 3.0 angstroms to 2.0 angstroms, the bonding potential becomes more negative. However, because the starting separation (3.0 angstroms) is already less than $\sigma$ (4.10 angstroms), decreasing the separation further (to 2.0 angstroms) should result in a more positive bonding potential. Therefore, these molecules do not follow the Lennard-Jones potential.
Problems
1. The second term of the Lennard-Jones equation, $(\sigma/r)^{6}$, represents attraction. Name at least three types of intermolecular interactions that are attractive.
Answer
As noted in the Introduction, dipole-dipole, dipole-induced dipole, and London interactions are all attractive forces.
2. At what separation distance in the Lennard-Jones potential does a species have a repulsive force acting on it? An attractive force? No force?
Answer
See Figure C. A species will have a repulsive force acting on it when r is less than the equilibrium distance between the particles. A species will have an attractive force acting on it when r is greater than the equilibrium distance between the particles. Lastly, when $r$ is equal to the equilibrium distance between both particles, the species will have no force acting upon it.
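The force is the negative slope of the potential, $F(r) = -dV/dr$, and the separation at which it vanishes is the equilibrium distance, $r_{eq} = 2^{1/6}\sigma$. A small sketch, reusing the Xe parameters from Example 1 and a simple numerical derivative:

```python
from scipy.optimize import brentq

eps, sigma = 1.77, 4.10   # kJ/mol and Angstroms (Xe values from Example 1)

def V(r):
    return 4 * eps * ((sigma / r)**12 - (sigma / r)**6)

def F(r, h=1e-6):
    return -(V(r + h) - V(r - h)) / (2 * h)   # numerical -dV/dr

# the force changes sign (repulsive -> attractive) across the potential minimum
r_eq = brentq(F, 0.9 * sigma, 2.0 * sigma)
print(f"r_eq = {r_eq:.3f} Angstroms, 2^(1/6)*sigma = {2**(1/6) * sigma:.3f} Angstroms")
```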
16.07: Van der Waals Constants in Terms of Molecular Parameters
The van der Waals equation can be written as:
$P=\frac{RT}{\bar{V}-b}-\frac{a}{\bar{V}^2} \nonumber$
Using the binomial expansion:
$\frac{1}{1-x}=1+x+x^2+\cdots \nonumber$
We can write the van der Waals equation of state as:
$\begin{split}P&=\frac{RT}{\bar{V}}\left[1+\frac{b}{\bar{V}}+\frac{b^2}{\bar{V}^2}+\cdots\right]-\frac{a}{\bar{V}^2}\\&= \frac{RT}{\bar{V}}+(RTb-a)\frac{1}{\bar{V}^2}+\frac{RTb^2}{\bar{V}^3}+\cdots\end{split} \nonumber$
Rearranging in terms of compressibility, $Z$:
$Z = \frac{P\bar{V}}{RT} = 1+\left(b-\frac{a}{RT}\right)\frac{1}{\bar{V}}+\frac{b^2}{\bar{V}^2}+\cdots \nonumber$
Comparing to the virial equation of state, we can see that the second virial coefficient is related to the van der Waals coefficients:
$B_{2V}(T)=b-\frac{a}{RT} \nonumber$
Using a hard-sphere repulsion combined with an attractive $r^{-6}$ term of the Lennard-Jones type, we obtain:
$u(r) = \begin{cases} \infty & r \leq \sigma \\[4pt] -\dfrac{C_6}{r^6} & r > \sigma \end{cases} \label{1}$
We can write the second virial coefficient as
$B_2(T) = \dfrac{2}{3} \pi N_0 \sigma^3 \left[ 1 - \dfrac{C_6}{k_B T \sigma^6} \right] \label{2}$
Let us introduce two simplifying variables:
\begin{align} b &= \dfrac{2}{3} \pi N_0 \sigma^3 \\[4pt] a &= \dfrac{2 \pi N_0^2 C_6}{3 \sigma^3} \end{align} \label{3}
with which the second virial coefficient becomes:
$B_2(T) = b - \dfrac{a}{RT} \label{4}$
From these equations, we can see that $a$ is directly proportional to $C_6$, the strength of the intermolecular attraction between particles whose attractive force varies as $r^{-6}$. Likewise, $b$ is equal to four times the volume of the particles themselves (per mole). In other words, the van der Waals equation assumes a hard-sphere repulsion at small distances and weak attractive interactions at larger distances.
16.E: The Properties of Gases (Exercises)
These are homework exercises to accompany Chapter 16 of McQuarrie and Simon's "Physical Chemistry: A Molecular Approach" Textmap.
Q16.10
One liter of N2 (g) at 2.1 bar and two liters of Ar(g) at 3.4 bar are mixed in a 4.0-L flask to form an ideal-gas mixture. Calculate the value of the final pressure of the mixture if the initial and final temperature of the gases are the same. Then, repeat this calculation if the initial temperatures of the N2 (g) and Ar(g) are 304 K and 402 K, respectively, and the final temperature of the mixture is 377 K. (Assume ideal-gas behavior.)
S16.10
Using the ideal gas law, we can find the number of moles of each gas.
$n_{N_2} = \dfrac{P_{N_2} V_{N_2}}{RT} = \dfrac{(2.1x10^{5} Pa)*(1x10^{-3} m^3)}{RT} = \dfrac{210 Pa * m^3}{RT}$
$n_{Ar} = \dfrac{P_{Ar} V_{Ar}}{RT} = \dfrac{(3.4x10^{5} Pa)*(2x10^{-3} m^3)}{RT} = \dfrac{680 Pa * m^3}{RT}$
The total moles of gas in the final mixture is the sum of the moles of each gas in the mixture, which is
$\dfrac{210 Pa * m^3}{RT} + \dfrac{680 Pa * m^3}{RT} = \dfrac{890 Pa * m^3}{RT}$
Therefore,
$P = \dfrac{nRT}{V} = \dfrac{890 Pa * m^3}{0.0040 m^3} = 2.2 x 10^5 Pa = 2.2 bar$
Now, considering the initial temperatures of the gases are different from each other and from the final temperature of the mixture, we calculate the total number of moles to be
$n_{total} = n_{N_2} + n_{Ar} = \dfrac{210 Pa * m^3}{R*(304 K)} + \dfrac{680 Pa * m^3}{R*(402 K)}$
Substituting this into the ideal gas law, we get the final pressure to be
$P= [ \dfrac{210 Pa * m^3}{R*(304 K)} + \dfrac{680 Pa * m^3}{R*(402 K)} ] * \dfrac{R*(377 K)}{0.0040 m^3} = 2.2 x 10^5 Pa = 2.2 bar$
Q16.12
What is the molar gas constant in units of $\text{cm}^3\cdot \text{torr}\cdot \text{K}^{-1}\cdot \text{mol}^{-1}$?
S16.12
We know $R=0.082058 L\cdot atm\cdot mol^{-1}K^{-1}$
using dimensional analysis:
$R=0.082058\; \text{L}\cdot \text{atm}\cdot \text{mol}^{-1}\text{K}^{-1}\times \frac{760\; \text{torr}}{1\; \text{atm}} \times \frac{1000\; \text{cm}^3}{1\; \text{L}}$
$R=62{,}364\; \text{cm}^3\cdot \text{torr}\cdot \text{mol}^{-1}\text{K}^{-1}$
Q16.13
Use the van der Waals equation to plot the compressibility factor, Z, against P for methane for T = 180 K, 189 K, 190 K, 200 K, and 250 K.
S16.13
First, calculate Z as a function of $\bar{V}$ and $P$ as a function of $\bar{V}$ and plot $Z$ versus $P$
For methane, a = 2.3026 $dm^{6}\cdot bar\cdot mol^{-2}$ and b = 0.043067 $dm^{3}\cdot mol^{-1}$
$Z = \dfrac{P\bar{V}}{RT}$
and the van der Waals equation of state is:
$P = \dfrac{RT}{\bar{V} - b} - \dfrac{a}{\bar{V}^2}$
Then create a parametric plot of Z versus P for the suggested temperatures as shown below.
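A sketch of such a parametric plot (same constants as above; NumPy and matplotlib assumed available, and the volume grid and axis limits are arbitrary choices):

```python
import numpy as np
import matplotlib.pyplot as plt

R = 0.083145                  # dm^3·bar/(mol·K)
a, b = 2.3026, 0.043067       # methane constants given above

Vbar = np.linspace(0.07, 5.0, 2000)        # molar volume grid, dm^3/mol
for T in (180, 189, 190, 200, 250):
    P = R * T / (Vbar - b) - a / Vbar**2   # van der Waals pressure
    Z = P * Vbar / (R * T)                 # compressibility factor
    plt.plot(P, Z, label=f"{T} K")

plt.xlim(0, 300)
plt.ylim(0, 1.1)
plt.xlabel("P / bar")
plt.ylabel("Z")
plt.legend()
plt.show()
```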
Q16.15
Define the equation that could be solved using the Newton-Raphson method to find the molar volume of O$_2$ at 300 K and 200 atm. Use the van der Waals and Redlich-Kwong equations.
Constants:
• van der Waals: $a = 1.3820\; \text{dm}^{6}\cdot \text{bar}\cdot \text{mol}^{-2},\; b=0.031860\; \text{dm}^{3}\cdot \text{mol}^{-1}$
• Redlich-Kwong: $A = 17.411\; \text{dm}^{6}\cdot \text{bar}\cdot \text{mol}^{-2}\cdot \text{K}^{1/2},\; B=0.022082\; \text{dm}^{3}\cdot \text{mol}^{-1}$
S16.15
Solving van der Waals equation for volume produces:
$\bar{V}^{3} - \left(b + \dfrac{RT}{P}\right)\bar{V}^{2} + \dfrac{a}{P}\bar{V} - \dfrac{ab}{P}=0$
Plugging in the constants gives:
$\bar{V}^{3} - 0.15486\bar{V}^{2} + 0.00691\bar{V} - 0.00022015=0$
which can be solved numerically to find molar volume.
The Redlich-Kwong equation can be written as a cubic in volume:
$\bar{V}^{3} - \dfrac{RT}{P}\bar{V}^{2} - \left(B^{2} + \dfrac{BRT}{P} - \dfrac{A}{T^{1/2}P}\right)\bar{V} - \dfrac{AB}{T^{1/2}P} = 0$
Plugging in the constants gives:
$\bar{V}^{3} - 0.123\bar{V}^{2} + 0.001822\bar{V} - 0.00011099 = 0$
which can be solved numerically to find molar volume.
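A minimal Newton-Raphson sketch for the van der Waals cubic above (the Redlich-Kwong cubic is handled the same way by swapping in its coefficients); the starting guess is the ideal-gas estimate $RT/P \approx 0.12\ \text{dm}^3/\text{mol}$:

```python
def f(V):       # van der Waals cubic from above
    return V**3 - 0.15486 * V**2 + 0.00691 * V - 0.00022015

def fprime(V):  # its derivative
    return 3 * V**2 - 2 * 0.15486 * V + 0.00691

V = 0.12        # initial guess (ideal-gas molar volume, dm^3/mol)
for _ in range(100):
    step = f(V) / fprime(V)
    V -= step
    if abs(step) < 1e-12:
        break
print(f"molar volume ~ {V:.5f} dm^3/mol")
```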
Q16.16
Compare the pressures of ethane predicted by the Redlich-Kwong and the van der Waals equations at 400 K. Take the molar volume to be 0.09416 dm$^3\cdot$mol$^{-1}$.
S16.16
van der Waals:
$P = \dfrac{RT}{\bar{V}-b}-\dfrac{a}{\bar{V}^2}$
Referencing Table 16.3 for $a$ and $b$:
$P = \dfrac{(0.083145)(400)}{0.09416-0.065144}-\dfrac{5.5818}{(0.09416)^2} \approx 517\ \text{bar}$
Redlich-Kwong equation of state improved the accuracy of the van der Waals equation by adding a temperature dependence for the attractive term:
$P = \dfrac{RT}{\bar{V}-B} - \dfrac{A}{T^{1/2}\,\bar{V}(\bar{V}+B)}$
Referencing Table 16.4 for $A$ and $B$:
$P = \dfrac{(0.083145)(400)}{0.09416-0.045153} - \dfrac{98.831}{(400)^{1/2}(0.09416)(0.09416+0.045153)} \approx 302\ \text{bar}$
Q16.17
The Redlich-Kwong equation:
$p=\dfrac{RT}{\dfrac{V}{n}-b}-\dfrac{a}{\sqrt{T}\dfrac{V}{n}\big( \dfrac{V}{n}+b\big) }$
The van der Waals equation:
$\big (p+\dfrac{n^{2}a}{V^{2}} \big) (V-nb)=nRT$
Use the equations above to calculate the pressure of a system containing $1.5 \ mol$ of methane at $300 \ K$ confined to a volume of $100 \ cm^{3}$. Compare the two calculated pressures.
S16.17
$R = 82.057338 \dfrac{cm^{3} atm }{K \ mol}$
Redlich-Kwong constants: $a = 31.59\times 10^{6} \ (atm)(K^{\frac{1}{2}})\big( \dfrac{cm^{3}}{mol} \big) ^{2}$ and $b = 29.6 \ \dfrac{cm^{3}}{mol}$
Van der Waals constants: $a = 2.25 \times 10^{6} \ (atm) \big( \dfrac{cm^{3}}{mol} \big) ^{2}$ and $b = 42.8 \ \dfrac{cm^{3}}{mol}$
Redlich-Kwong equation: $p=\dfrac{82.057338 \dfrac{cm^{3} atm }{K \ mol} \ 300\ K}{\dfrac{100 \ cm^{3}}{1.5 \ mol}-29.6 \ \dfrac{cm^{3}}{mol}}-\dfrac{31.59 \times 10^{6} \ (atm)(K^{\frac{1}{2}})\big( \dfrac{cm^{3}}{mol} \big) ^{2}}{\sqrt{300 \ K}\dfrac{100 \ cm^{3}}{1.5 \ mol}\big( \dfrac{100 \ cm^{3}}{1.5 \ mol}+29.6 \ \dfrac{cm^{3}}{mol}\big) } = 379.95\ atm$
Van der Waals equation: $p =\dfrac{(1.5 \ mol)(82.057338 \dfrac{cm^{3} atm }{K \ mol})(300\ K)}{(100 \ cm^{3})-(1.5\ mol)(42.8 \ \dfrac{cm^{3}}{mol})} -\dfrac{(1.5\ mol)^{2}(2.25 \times 10^{6} \ (atm) \big( \dfrac{cm^{3}}{mol} \big) ^{2} )}{(100 \ cm^{3})^{2}} = 525.2\ atm$
There is a $145.25\ atm$ difference in the calculated pressures.
Q16.20
The following equation can be used to relate the pressure of propane to its density at temperatures below 400 K:
$P = 33.258\rho - 7.5884\rho^{2} + 1.0306\rho^{3} - 0.058757\rho^{4} - 0.0033566\rho^{5} + 0.00060696\rho^{6}$
where the pressure (P) is in bar and the density ( $\rho$) is in $\frac{mol}{L}$ and ranges from $0 \frac{mol}{L} \leq \rho \leq 12.3 \frac{mol}{L}$.
Calculate and plot the pressure for the given density range using:
1. the equation given above
2. the Van der Waals equation of state
3. the Redlich-Kwong equation of state
Compare your results from the above parts.
S16.20
a.) The pressure is calculated directly from the given polynomial over the stated density range and plotted (the plot shown here was generated in MATLAB).
b.) The following constants were used in the equation below:
$a = 9.3919 \dfrac{L^{2} bar}{mol^{2}}$ and $b = 0.09049 \dfrac{L}{mol}$
$P_{VDW} = \dfrac{RT}{\overline{V}-b} - \dfrac{a}{\overline{V}^{2}}$
c.) The following constants were used in the equation below:
$a = 183.01 \dfrac{L^{2}\, bar\, K^{1/2}}{mol^{2}}$ and $b = 0.06272 \dfrac{L}{mol}$
$P_{RK} = \dfrac{RT}{\overline{V}-b} - \dfrac{a}{\sqrt{T}\overline{V} (\overline{V}+b)}$
As can be seen from the plot, the van der Waals equation of state deviates greatly from the empirical fit at higher densities, while the Redlich-Kwong equation provides a reasonable approximation to the empirical data. (All calculations and plots were done in MATLAB.)
Q16.21
Use the data below to evaluate the van der waals and Redlich Kwong constants for benzene.
| Species | $T_c$/K | $P_c$/bar | $P_c$/atm | $\bar{V}_c$/L·mol$^{-1}$ | $P_c\bar{V}_c/RT_c$ |
|---|---|---|---|---|---|
| Benzene | 561.75 | 48.758 | 48.120 | 0.256 | 0.26724 |
Van Der Waals
$a = \dfrac{27(RT_c)^2}{64P_c} \,and \,b = \dfrac{RT_c}{8P_c}$
Redlich-Kwong
$A = 0.42748\dfrac{R^2T_c^{\dfrac{5}{2}}}{P_c} \,and \,B = 0.08664\dfrac{RT_c}{P_c}$
S16.21
Van Der Waals
a = $\dfrac{27(0.083145\ dm^3 \cdot bar \cdot mol^{-1}K^{-1})^2(561.75\ K)^2}{64(48.758\ bar)} = {\bf 18.8754 \, dm^6 \cdot bar \cdot mol^{-2}}$
b = $\dfrac{(0.083145\ dm^3 \cdot bar \cdot mol^{-1}K^{-1})(561.75\ K)}{8(48.758\ bar)} = {\bf 0.1197 \, dm^3 \cdot mol^{-1}}$
Redlich-Kwong
$A = 0.42748\times\dfrac{(0.083145\ dm^3 \cdot bar \cdot mol^{-1}K^{-1})^2(561.75\ K)^{5/2}}{48.758\ bar} = {\bf 453.315 \, dm^6 \cdot bar \cdot mol^{-2}\cdot K^{1/2}}$
$B = 0.08664\times\dfrac{(0.083145\ dm^3 \cdot bar \cdot mol^{-1}K^{-1})(561.75\ K)}{48.758\ bar} = {\bf 0.08300\, dm^3 \cdot mol^{-1}}$
Q16.22
Show that the van der Waals equation for argon at T = 142.69 K and P = 35.00 atm can be written as
$\overline {V}^3 - 0.3664\overline {V}^2+0.03802\overline {V} - 0.001210=0$
where, for convenience, we have suppressed the units in the coefficients. Use the Newton-Raphson method (MathChapter G) to find the three roots to this equation, and calculate the values of the density of liquid and vapor in equilibrium with each other under these conditions.
S16.22
Using Table 16.3, we can get the values of a and b for argon to use in Van der Waals equation of state.
By doing this, you can isolate $\overline {V}$ in the van der Waals equation of state. Next, by applying the Newton-Raphson method to the function given by van der Waals equation (which depends only on $\overline {V}$ ), and to the derivative of the function, we find the roots of the equation. In this case, the smallest root represents the molar volume of liquid argon, and the largest root represents the molar volume of vapor argon.
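A sketch of the numerical step (NumPy's polynomial root finder is used here in place of hand-iterated Newton-Raphson; the coefficients are those of the cubic in the problem statement). The smallest and largest real roots are the liquid and vapor molar volumes, and their reciprocals are the densities:

```python
import numpy as np

# van der Waals cubic for argon at 142.69 K and 35.00 atm
roots = np.roots([1.0, -0.3664, 0.03802, -0.001210])
real = np.sort(roots[np.isclose(roots.imag, 0.0)].real)

V_liq, V_gas = real[0], real[-1]     # dm^3/mol; the middle root is unphysical
print(f"liquid: V = {V_liq:.4f} dm^3/mol, density = {1 / V_liq:.2f} mol/dm^3")
print(f"vapor:  V = {V_gas:.4f} dm^3/mol, density = {1 / V_gas:.2f} mol/dm^3")
```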
Q16.23
Calculate the volume occupied by $50\ kg$ of propane at $50^{\circ}C$ and $35\ bar$, using the Redlich-Kwong equation of state and the Peng-Robinson equation of state.
S16.23
Redlich-Kwong equation of state:
$P = \frac{RT}{v-b} - \frac{a}{v(v+b)}$
$v$ is volume of gas per mole.
we have $T = 50^{\circ} C = 323\ K$ , $P = 35\ bar = 35 \times (10^5) Pa$
For propane, we have $P_{c} =42.4924\ bar = 42.4924 \times 10^5 Pa$ and $T_{c} = 369.522\ K$
so we have $a = \frac{0.42748 R^2 T_{c}^{2.5}}{P_{c}T^{0.5}}$ ;
$b = \frac{0.08664 * R*T_{c}}{P_{c}}$
so, $a = \frac{0.42748 (8.314^2) 369.522^{2.5}}{42.4924 \times 10^5(323^{0.5})} = 1.015603$
$b = \frac{(0.08664) (8.314)(369.522)}{42.4924 \times 10^5} = 6.2640829 \times 10^{-5}$
so, $P = \frac{RT}{v-b} - \frac{a}{v(v+b)}$
$(35*(10^5)) = \frac{(8.314)(323)}{ v-(6.2640829 \times 10^{-5})} - \frac{1.015603}{v(v+6.2640829 \times 10^{-5})}$
The only real solution is
$v = 0.00010933 \frac{m^3}{mol}$
so In $50\ Kg$ propane, number of moles = $\frac{50*1000}{44.10 } = 1133.78684$
total volume occupied by $50\ Kg$ propane at the specified conditions = $(1133.78684)(0.00010933) = 0.1239569 m^3 = 123.9569$ Liters is the answer.
Peng - Robinson's equation of state:
$P = \frac{RT} {v-b} -\frac{ a}{v(v+b) + b(v-b)}$
$a = 0.45724R^2 * T_{c}^2 * \frac{\alpha}{P_c}$
$b = 0.0778 R \frac{T_c}{P_c}$
$\alpha = 1 + S(1- \sqrt{T_r})$
$S = 0.37464 + 1.54226w -0.26992w^2$
$T_r = \frac{T}{T_c}$
we have $P = 35 \times 10^5 Pa$ and $T = 323\ K$
$P_c =42.4924\ bar = 42.4924 \times 10^5 Pa$ and $T_c = 369.522\ K$
we have accentric factor = $w = 0.152$.
$T_r = \frac{T}{T_c} = \frac{323}{369.522} = 0.87410$
$S = 0.37464 + (1.54226)(0.152) -(0.26992)(0.152^2) =0.602827$
$\alpha = 1 + 0.602827*(1- \sqrt{0.87410}) = 1.0392240523$
$a = (0.45724)(8.314^2 )( 369.522^2 )\frac{ 1.0392240523}{42.4924 \times 10^5} = 1.055462444$
$b = (0.0778 ) (8.314)\frac{369.522} {42.4924 \times 10^5} = 0.0000562494$
so, by the equation of state:
$(35 \times 10^5) = \frac{(8.314)(323)}{v-0.0000562494} - \frac{1.055462444}{v(v+ 0.0000562494)+ 0.0000562494(v- 0.0000562494)}$
$v = 0.0000988504 \frac{m^3}{mol}$
so In $50\ Kg$ propane, number of moles = $\frac{(50)(1000)}{44.10} = 1133.78684$
total volume occupied by $50\ Kg$ propane at the specified conditions = $(1133.78684)(0.0000988504) = 0.1120752 m^3 = 112.0752$ Liters is the answer.
Q16.25
A way to obtain the expressions for the van der Waals constants is to set $(\frac{\partial P}{\partial\bar{V}})_T$ and $(\frac{\partial^2 P}{\partial\bar{V}^{2}})_T$ to zero at the critical point, but why are they equal to zero at the critical point? Obtain the equation $\bar{V_c} = 3b$ from this procedure.
S16.25
At the critical point, the isotherm has a horizontal inflection point: the liquid and vapor volumes merge into a single value, so the $P$-$\bar{V}$ curve is locally flat and changes curvature there. Hence both derivatives vanish:
$(\frac{\partial P}{\partial\bar{V}})_T$ = 0
$(\frac{\partial^2 P}{\partial\bar{V}^{2}})_T$ = 0
$(\frac{\partial P}{\partial\bar{V}})_T$ = $\frac{\partial}{\partial\bar{V}}$ $(\frac{RT}{\bar{V}-b} - \frac{a}{\bar{V}^{2}})_T$
= $RT$ $(-\frac{1}{(\bar{V}-b)^{2}})$ - $a$ $(\frac{-2}{\bar{V}^{3}})$
= $-\frac{RT}{(\bar{V}-b)^{2}}$ +$\frac{2a}{\bar{V}^{3}}$
At critical point,
$\frac{RT_c}{(\bar{V_c}-b)^2}$ = $\frac{2a}{\bar{V_c}^{3}}$ by equating $(\frac{\partial P}{\partial\bar{V}})_T$ = 0.
By differentiating the equation $RT$ $(-\frac{1}{(\bar{V}-b)^{2}})$ - $a$ $(\frac{-2}{\bar{V}^{3}})$ with respect to $\bar{V}$,
$(\frac{\partial^{2} P}{\partial\bar{V}^{2}})_T$ = $\frac{\partial}{\partial \bar{V}}$ $(-\frac{RT}{(\bar{V}-b)^2}$ +$\frac{2a}{\bar{V}^{3}})_T$
= $-RT$ $(\frac{-2}{(\bar{V}-b)^{3}})$ + $2a$ $(\frac{-3}{\bar{V}^{4}})$
= $\frac{2RT}{(\bar{V}-b)^{3}}$ - $\frac{6a}{\bar{V}^{4}}$
At critical point,
$\frac{2RT_c}{(\bar{V_c}-b)^{3}}$ = $\frac{6a}{\bar{V_c}^{4}}$ by equating $(\frac{\partial^2 P}{\partial\bar{V}^{2}})_T$ = 0.
Now,
$\frac{\frac{2a}{\bar{V_c}^{3}}}{\frac{6a}{\bar{V_c}^{4}}}$ = $\frac{\frac{RT_c}{(\bar{V_c}-b)^2}}{\frac{2RT_c}{(\bar{V_c}-b)^{3}}}$
$\frac{\bar{V_c}}{3}$ = $\frac{(\bar{V_c}-b)}{2}$
$\bar{V_c}$ = $3b$
Q16.26
Show that the Redlich-Kwong equation can be rewritten in the following form:
$\bar{V}^3 - \frac{RT}{P}\bar{V}^2 - (B^2 + \frac{BRT}{P} - \frac{A}{PT^{1/2}})\bar{V} - \frac{AB}{PT^{1/2}} = 0$
S16.26
Starting with the Redlich-Kwong equation and manipulating it we get the following:
$P = \frac{RT}{\bar{V}-B} - \frac{A}{T^{1/2}\bar{V}(\bar{V} + B)}$
$P(\bar{V}-B)T^{1/2}\bar{V}(\bar{V} + B) = RT^{3/2}\bar{V}(\bar{V} + B) - A(\bar{V}-B)$
$PT^{1/2}\bar{V}(\bar{V}^2 - B^2) = RT^{3/2}\bar{V}^2 + RT^{3/2}\bar{V}B - A\bar{V} + AB$
$\bar{V}^3 - \frac{RT}{P}\bar{V}^2 - (B^2 + \frac{BRT}{P} - \frac{A}{PT^{1/2}})\bar{V} - \frac{AB}{PT^{1/2}} = 0$
Q16.27
Use the results of the previous question, Q16.26, to derive Equations 16.14.
$(a) \bar{V}_{c} = 3.8473B$, $(b) P_{c} = .029894\dfrac{A^{2/3}R^{1/3}}{B^{5/3}}$, and $(c) T_{c} = .34504\left(\frac{A}{BR}\right) ^{2/3}$
The following are solved from Q16.26 and are necessary to solve for Equation 16.14:
$1) B = .25992\bar{V}_{c}$ $2) A = 0.42748\dfrac{R^{2}T^{5/2}_{c}}{P_{c}}$ $3) \dfrac{P_{c}\bar{V_{C}}}{RT_{c}} = \frac{1}{3}$
S16.27
From equation (1) we see that solving for $\bar{V}_{c}$ yields the first equation $\bar{V}_{c} = 3.8473B$.
From equation (2) we can rewrite the expression with some manipulation as $A = .42748RT^{3/2}_{c}\left(\frac{RT_{c}}{P_{c}\bar{V}_{c}}\right)\bar{V}_{c}$
Substituting equation (3) and 16.14a into the previous equation gives the expression $A = 3(.42748)RT^{3/2}_{c}(3.8473B)$
Lastly, solving for $T_{c}$ gives the equation for 16.14c $T_{c} = .34504\left(\frac{A}{BR}\right) ^{2/3}$
Finally, solving for equation 16.14b, we must substitute the previously solved 16.14c into equation (2), solving for $P_{c}$
$P_{c} = 0.42748\dfrac{R^{2}T^{5/2}_{c}}{A} = \left(\frac{.42748}{A}\right)R^{2}\left[.34504\left(\frac{A}{BR}\right)^{2/3}\right]^{5/2} = .029894\dfrac{A^{2/3}R^{1/3}}{B^{5/3}}$
which is equation 16.14b.
Q16.28
Derive the Van der Waals cubic equation of state from
$\big( P+\frac{a}{\bar{V}^2}\big)\big(\bar{V}-b\big)=RT$
S16.28
The van der Waals equation can be rewritten in terms of $Z$:
$Z=\frac{P\bar{V}}{RT}=\frac{\bar{V}}{\bar{V}-b}-\frac{a}{RT\bar{V}}$
Multiplying through and collecting terms gives a cubic in $\bar{V}$:
$\bar{V}^3-\big(b+\frac{RT}{P}\big)\bar{V}^2 + \frac{a}{P}\bar{V}-\frac{ab}{P}=0$
17: Boltzmann Factor and Partition Functions
Statistical Mechanics provides the connection between microscopic motion of individual atoms of matter and macroscopically observable properties such as temperature, pressure, entropy, free energy, heat capacity, chemical potential, viscosity, spectra, and reaction rates. Statistical Mechanics provides the microscopic basis for thermodynamics, which, otherwise, is just a phenomenological theory. This microscopic basis allows calculation of a wide variety of properties not dealt with in thermodynamics, such as structural properties (using distribution functions) and dynamical properties such as spectra and rate constants (using time correlation functions). Because a statistical mechanical formulation of a problem begins with a detailed microscopic description, microscopic trajectories can, in principle and in practice, be generated, providing a window into the microscopic world. This window often provides a means of connecting certain macroscopic properties with particular modes of motion in the complex dance of the individual atoms that compose a system, and this, in turn, allows for interpretation of experimental data and an elucidation of the mechanisms of energy and mass transfer in a system.
The Microscopic Laws of Motion
Consider a system of $N$ classical particles. The particles are confined to a particular region of space by a container of volume $V$. In classical mechanics, the state of each particle is specified by giving its position and its velocity, i.e., by telling where it is and where it is going. The position of particle $i$ is simply a vector of three coordinates $r_i = \begin{pmatrix} x_i, y_i, z_i \end{pmatrix}$, and its velocity $\textbf{v}_i$ is also a vector $\begin{pmatrix}v_{x_i}, v_{y_i}, v_{z_i} \end{pmatrix}$ of the three velocity components. Thus, if we specify, at any instant in time, these six numbers, we know everything there is to know about the state of particle $i$.
The particles in our system have a finite kinetic energy and are therefore in constant motion, driven by the forces they exert on each other (and any external forces which may be present). At a given instant in time $t$, the Cartesian positions of the particles are $\textbf{r}_1(t), \ldots, \textbf{r}_N(t)$, and the velocities at $t$ are related to the positions by differentiation:
$\textbf{v}_i(t) = \dfrac{d \textbf{r}_i}{dt} = \dot{\textbf{r}}_i \label{2.12}$
In order to determine the positions and velocities as function of time, we need the classical laws of motion, particularly, Newton’s second law. Newton’s second law states that the particles move under the action of the forces the particles exert on each other and under any forces present due to external fields. Let these forces be denoted $\textbf{F}_1, \textbf{F}_2, \ldots, \textbf{F}_N$ . Note that these forces are functions of the particle positions:
$\textbf{F}_i = \textbf{F}_i(\textbf{r}_1, \ldots, \textbf{r}_N) \label{2.13}$
which is known as a force field (because it is a function of positions). Given these forces, the time evolution of the positions of the particles is then given by Newton’s second law of motion:
$m_i \ddot{\textbf{r}}_i = \textbf{F}_i (\textbf{r}_1, \ldots, \textbf{r}_N) \nonumber$
where $\textbf{F}_1, \ldots, \textbf{F}_N$ are the forces on each of the $N$ particles due to all the other particles in the system. The notation $\ddot{\textbf{r}}_i = d^2 \textbf{r}_i/dt^2$ denotes the second time derivative of the position, i.e., the acceleration.
Newton’s equations of motion for the $N$ particles constitute a set of $3N$ coupled second-order differential equations. In order to solve these, it is necessary to specify a set of appropriate initial conditions on the coordinates and their first time derivatives, $\begin{Bmatrix} \textbf{r}_1(0), \ldots, \textbf{r}_N(0), \dot{\textbf{r}}_1(0), \ldots, \dot{\textbf{r}}_N(0) \end{Bmatrix}$. The solution of Newton’s equations then gives the complete set of coordinates and velocities for all time $t$.
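As an illustration of how such a trajectory can be generated in practice, the following minimal Python sketch integrates Newton's equations with the velocity Verlet algorithm for a toy harmonic force law; the force, masses, and time step are arbitrary choices for demonstration, not part of the text. The final line anticipates the kinetic definition of temperature discussed below.

```python
import numpy as np

# Minimal velocity-Verlet sketch for N particles bound by an illustrative
# harmonic force F_i = -k r_i (a toy force law, not a realistic force field).
rng = np.random.default_rng(0)
N, dim = 10, 3
m, k_spring, dt, n_steps = 1.0, 1.0, 0.01, 1000

r = rng.normal(size=(N, dim))          # initial positions
v = rng.normal(size=(N, dim))          # initial velocities

def forces(r):
    return -k_spring * r               # F_i(r_1, ..., r_N); here the particles are uncoupled

F = forces(r)
for step in range(n_steps):
    # velocity Verlet: update r(t+dt), then F(t+dt), then v(t+dt)
    r += v * dt + 0.5 * (F / m) * dt**2
    F_new = forces(r)
    v += 0.5 * (F + F_new) / m * dt
    F = F_new

# Instantaneous "kinetic temperature" from the average kinetic energy (k_B = 1 in reduced units)
T_kin = (2.0 / (3.0 * N)) * np.sum(0.5 * m * v**2)
print("kinetic temperature (reduced units):", T_kin)
```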
The Ensemble Concept (Heuristic Definition)
For a typical macroscopic system, the total number of particles is $N \sim 10^{23}$. Since an essentially infinite amount of precision is needed in order to specify the initial conditions (due to exponentially rapid growth of errors in this specification), the amount of information required to specify a trajectory is essentially infinite. Even if we content ourselves with quadruple precision, however, the amount of memory needed to hold just one phase space point would be about $128$ bytes $= \: 2^7 \sim 10^2$ bytes for each number or $10^2 \times 6 \times 10^{23} \sim 10^{17}$ gigabytes, which is also $10^2$ yottabytes! The largest computers we have today have perhaps $10^6$ gigabytes of memory, so we are off by $11$ orders of magnitude just to specify $1$ classical state.
Fortunately, we do not need all of this detail. There are enormous numbers of microscopic states that give rise to the same macroscopic observable. Let us again return to the connection between temperature and kinetic energy:
$\dfrac{3}{2} NkT = \sum_{i=1}^N \dfrac{1}{2}m_i \textbf{v}_i^2 \label{2.14}$
which can be solved to give:
$T = \dfrac{2}{3k} \left( \dfrac{1}{N} \sum_{i=1}^N \dfrac{1}{2}m_i \textbf{v}_i^2 \right) \label{2.15}$
Here we see that $T$ is related to the average kinetic energy of all of the particles. We can imagine many ways of choosing the particle velocities so that we obtain the same average. One is to take a set of velocities and simply assign them in different ways to the $N$ particles, which can be done in $N!$ ways. Apart from this, there will be many different choices of the velocities, themselves, all of which give the same average.
Since, from the point of view of macroscopic properties, precise microscopic details are largely unimportant, we might imagine employing a construct known as the ensemble concept in which a large number of systems with different microscopic characteristics but similar macroscopic characteristics is used to “wash out” the microscopic details via an averaging procedure. This is an idea developed by individuals such as Gibbs, Maxwell, and Boltzmann.
Consider a large number of systems each described by the same set of microscopic forces and sharing a common set of macroscopic thermodynamic variables (e.g. the same total energy, number of moles, and volume). Each system is assumed to evolve under the microscopic laws of motion from a different initial condition so that the time evolution of each system will be different from all the others. Such a collection of systems is called an ensemble. The ensemble concept then states that macroscopic observables can be calculated by performing averages over the systems in the ensemble. For many properties, such as temperature and pressure, which are time-independent, the fact that the systems are evolving in time will not affect their values, and we may perform averages at a particular instant in time.
The questions that naturally arise are:
1. How do we construct an ensemble?
2. How do we perform averages over an ensemble?
3. How do we determine which thermodynamic properties characterize an ensemble?
4. How many different types of ensembles can we construct, and what distinguishes them?
These questions will be addressed in the sections ahead.
Thermal energy
A confined monatomic gas can be seen as a box containing a large number of atoms. Each of these particles can be in one of the particle-in-a-box states labeled by the quantum numbers $(n_1, n_2, n_3)$. If all of them have large $n$ values there is obviously a lot of kinetic energy in the system. The lowest energy is when all atoms have 1,1,1 as quantum numbers. Boltzmann realized that this should relate to temperature. When we add energy to the system (by heating it up without changing the volume of the box), the temperature goes up. At higher temperatures we would expect higher quantum numbers, and at lower $T$, lower ones. But how exactly are the atoms distributed over the various states?
This is a good example of a problem involving a discrete probability distribution. The probability that a certain level (e.g., $n = ( n_1,n_2,n_3)$ with energy $E_i$) is occupied should be a function of temperature: $P_i(T)$. Boltzmann postulated that you could look at temperature as a form of energy. The thermal energy of a system is directly proportional to the absolute temperature.
$E_{thermal} = k T \nonumber$
The proportionality constant $k$ (or $k_B$) is named after him: the Boltzmann constant. It plays a central role in all statistical thermodynamics. The Boltzmann factor is used to approximate the fraction of particles occupying a state of energy $E_i$ in a large system. The Boltzmann factor is given by:
$e^{-\beta E_i} \label{17.1}$
where:
• $E_i$ is the energy in the state $i$,
• $T$ is the kelvin temperature, and
• $k$ is Boltzmann constant.
As the following section demonstrates, the term $\beta$ in Equation $\ref{17.1}$ is expressed as:
$\beta=\frac{1}{k T} \nonumber$
The rates of many physical processes are also determined by the Boltzmann factor. The thermal energy of a typical particle is a small multiple of $k T$. In an activation process, particles must carry enough energy to cross a characteristic energy barrier, usually called the activation energy. An increase in temperature therefore results in more particles having sufficient energy to cross the barrier. For example, the fraction of molecules that have sufficient energy to escape the surface of a material is approximately proportional to the Boltzmann factor.
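A quick numerical illustration (not from the text) is sketched below in Python: it evaluates the Boltzmann factor for a hypothetical barrier of 0.1 eV at several temperatures, showing how steeply the accessible fraction grows with temperature.

```python
import numpy as np

k_B = 1.380649e-23  # J/K

def boltzmann_factor(E, T):
    """exp(-E / kT) for an energy E (in joules) at temperature T (in kelvin)."""
    return np.exp(-E / (k_B * T))

# Relative weight of a level 0.1 eV above the ground state (illustrative barrier height)
E = 0.1 * 1.602e-19  # J
for T in (200.0, 300.0, 600.0):
    print(f"T = {T:5.0f} K   exp(-E/kT) = {boltzmann_factor(E, T):.3e}")
```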
Problems
1. How are temperature and the average energy per particle for a system related?
2. What does the Boltzmann factor tell you? Why is it important?
3. When is it possible for particles to get extra energy?
4. Give three examples of activation processes.
5. What do "separate arrangements" mean? What are the differences between these arrangements?
Consider an N-particle ensemble. The particles are not necessarily indistinguishable and possibly have mutual potential energy. Since this is a large system, there are many different ways to arrange its particles and yet yield the same thermodynamic state. Only one arrangement can occur at a time. Because each separate arrangement is equally probable, the probability of the thermodynamic state is the sum of the probabilities of the individual arrangements. Then the probability of a system is:
$p_N=W_N p_i \nonumber$
where $p_N$ is the probability of the system, $W_N$ is the total number of different possible arrangements of the N particles in the system, and $p_i$ is the probability of each separate arrangement. Heisenberg's uncertainty principle states that it is impossible to simultaneously know the momentum and the position of an object with complete precision. In agreement with the uncertainty principle, the total possible number of combinations can be defined as the total number of distinguishable rearrangements of the N particles.
The most practical ensemble is the canonical ensemble with $N$, $V$, and $T$ fixed. We can imagine a collection of boxes with equal volumes and number of particles with the entire collection kept in thermal equilibrium. Based on the Boltzmann factor, we know that for a system that has states with energies $e_1,e_2,e_3$..., the probability $p_j$ that the system will be in the state $j$ with energy $E_j$ decreases exponentially with the energy of state $j$. The partition function of the system plays a very important role in calculating the properties of a system; for example, it can be used to calculate the probability, as well as the energy, heat capacity, and pressure.
The Boltzmann Distribution
We are ultimately interested in the probability that a given distribution will occur. The reason for this is that we must have this information in order to obtain useful thermodynamic averages. Let's consider an ensemble of $A$ systems. We will define $a_j$ as the number of systems in the ensemble that are in the quantum state $j$. For example, $a_1$ represents the number of systems in the quantum state 1. The total number of possible microstates is:
$W(a_1,a_2,...) = \frac{A!}{a_1!a_2!...} \nonumber$
The overall probability $P_j$ that a system is in the jth quantum state is obtained by averaging $a_j/A$ over all the allowed distributions. Thus, $P_j$ is given by:
\begin{align*} P_j &= \dfrac{\langle a_j \rangle}{A} \[4pt] &= \dfrac{1}{A} \dfrac{ \displaystyle \sum_a W(a) a_j(a)}{\displaystyle \sum_a W(a)} \end{align*}
where the angle brackets indicate an ensemble average. Using this definition we can calculate any average property (i.e. any thermodynamic property):
$\langle M \rangle = \sum_j M_j P_j \label{avg}$
The method of the most probable distribution is based on the idea that the average over $P_j$ is identical to the most probable distribution. Physically, this results from the fact that we have so many particles in a typical system that the fluctuations from the mean are extremely (immeasurably) small. The equivalence of the average probability of an occupation number and the most probable distribution is expressed as follows:
$P_j = \dfrac{\langle a_j \rangle}{A} = \dfrac{a_j}{A} \nonumber$
The probability function is subject to the following constraints:
• Constraint 1: Conservation of energy requires: $E_{total} = \sum_j a_j e_j \label{con1}$ where $e_j$ is the energy of the jth quantum state.
• Constraint 2: Conservation of mass requires: $A = \sum_j a_j \label{con2}$ which says only that the total number of all of the systems in the ensemble is $A$.
As we will learn in later chapters, the system will tend towards the distribution of $a_j$ that maximizes the total number of microstates. At the maximum, the variation of $\ln W$ with respect to the occupation numbers vanishes:
$d \ln W = \sum_j \left(\dfrac{\partial \ln W }{\partial a_j}\right) da_j = 0 \nonumber$
In differential form, our constraints become:
$\sum_j e_j da_j =0 \nonumber$
$\sum_j da_j =0 \nonumber$
The method of Lagrange multipliers (named after Joseph Louis Lagrange) is a strategy for finding the local maxima and minima of a function subject to equality constraints. Using the method of Lagrange undetermined multipliers we have:
$\sum_j \left[ \left(\dfrac{\partial \ln W }{\partial a_j}\right)da_j + \alpha da_j - \beta e_j da_j \right] = 0 \nonumber$
We can use Stirling's approximation:
$\ln x! \approx x\ln x – x \nonumber$
to evaluate:
$\left(\dfrac{\partial \ln W }{\partial a_j}\right) \nonumber$
to get:
$\left(\dfrac{\partial \ln W }{\partial a_j}\right) = -\ln \left(\dfrac{a_j}{A}\right) \nonumber$
as outlined below.
Application of Stirling's Approximation
First step is to note that:
$\ln W = \ln A! - \sum_j \ln a_j! \approx A \ln A - A - \sum_j a_j \ln a_j + \sum_j a_j \nonumber$
Since (from Equation $\ref{con2}$):
$A = \sum_j a_j \nonumber$
these two cancel to give:
$\ln W = A \ln A - \sum_j a_j \ln a_j \nonumber$
The derivative is:
$\left(\dfrac{\partial \ln W}{\partial a_j} \right) = \dfrac{\partial A \ln A}{\partial a_j} - \sum_i \dfrac{\partial a_i \ln a_i}{\partial a_j} \nonumber$
Therefore we have:
$\left(\dfrac{\partial A \ln A}{\partial a_j} \right) = \dfrac{\partial A }{\partial a_j} \ln A + \dfrac{\partial A }{\partial a_j} = \ln A +1 \nonumber$
$\left(\dfrac{\partial a_i \ln a_i}{\partial a_j} \right) = \dfrac{\partial a_i }{\partial a_j} \ln a_i + \dfrac{\partial a_i }{\partial a_j} = \ln a_j +1 \nonumber$
These latter derivatives result from the fact that:
$\left( \dfrac{\partial a_i}{\partial a_i} \right) = 1 \nonumber$
$\left( \dfrac{\partial a_j}{\partial a_i}\right)=0 \quad (i \neq j) \nonumber$
The simple expression that results from these manipulations is:
$- \ln \left( \dfrac{a_j}{A} \right) + \alpha - \beta e_j =0 \nonumber$
The most probable distribution is:
$\dfrac{a_j}{A} = e^{\alpha} e^{-\beta e_j} \label{Eq3}$
Now we need to find the undetermined multipliers $\alpha$ and $\beta$.
Summing Equation $\ref{Eq3}$ over all states $j$ and using the constraint $\sum_j a_j/A = 1$ fixes $e^{\alpha} = 1/\sum_j e^{-\beta e_j}$. Thus, we have:
$P_j= \dfrac{a_j}{A} = \dfrac{ e^{-\beta e_j}} {\sum_j e^{-\beta e_j}} \nonumber$
This determines $\alpha$ and defines the Boltzmann distribution. We will show that the multiplier $\beta$ from the Lagrange optimization procedure is:
$\beta=\dfrac{1}{kT} \nonumber$
This identification will show the importance of temperature in the Boltzmann distribution. The distribution represents a thermally equilibrated most probable distribution over all energy levels (Figure 17.2.1 ).
Boltzmann Distribution
The Boltzmann distribution represents a thermally equilibrated most probable distribution over all energy levels. There is always a higher population in a state of lower energy than in one of higher energy.
Once we know the probability distribution for energy, we can calculate thermodynamic properties like the energy, entropy, free energies and heat capacities, which are all average quantities (Equation $\ref{avg}$). To calculate $P_j$, we need the energy levels of a system (i.e., $\{e_i\}$). The energy levels of a system can be built up from the quantum energy levels of its constituent particles.
It must always be remembered that no matter how large the energy spacing is, there is always a non-zero probability of the upper level being populated. The only exception is a system that is at absolute zero. This situation is however hypothetical as absolute zero can be approached but not reached.
Partition Function
The sum over all factors $e^{-\beta e_j}$ is given a name. It is called the molecular partition function, $q$:
$q = \sum_j e^{-\beta e_j} \nonumber$
The molecular partition function $q$ gives an indication of the average number of states that are thermally accessible to a molecule at the temperature of the system. The partition function is a sum over states (with $\beta$ multiplying the energy in each exponent) and is a pure number. The larger the value of $q$, the larger the number of states available for the molecular system to occupy (Figure 17.2.2 ).
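As a concrete illustration (not from the text), the following Python sketch evaluates $q$ and the level populations for a hypothetical ladder of evenly spaced energy levels, showing that more states become thermally accessible as the temperature rises.

```python
import numpy as np

k_B = 1.380649e-23  # J/K

def partition_function_and_populations(energies, T):
    """q = sum_j exp(-e_j / kT) and the corresponding Boltzmann populations P_j."""
    weights = np.exp(-np.asarray(energies) / (k_B * T))
    q = weights.sum()
    return q, weights / q

# Hypothetical ladder of ten evenly spaced levels, 1e-21 J apart (illustrative values)
energies = np.arange(10) * 1.0e-21
for T in (50.0, 300.0, 1000.0):
    q, P = partition_function_and_populations(energies, T)
    print(f"T = {T:6.1f} K  q = {q:5.2f}  P(ground) = {P[0]:.3f}")
```

At low temperature $q$ is close to 1 (only the ground state is populated); at high temperature $q$ approaches the total number of levels in the ladder.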
We distinguish here between the partition function of the ensemble, $Q$ and that of an individual molecule, $q$. Since $Q$ represents a sum over all states accessible to the system it can written as:
$Q(N,V,T) = \sum_{i,j,k ...} e^{-\beta ( e_i + e_j +e_k ...)} \nonumber$
where the indices $i,\,j,\,k$ represent energy levels of different particles.
Regardless of the type of particle the molecular partition function, $q$ represents the energy levels of one individual molecule. We can rewrite the above sum as:
$Q = q_iq_jq_k… \nonumber$
or:
$Q = q^N \nonumber$
for $N$ particles. Note that $q_i$ means a sum over states or energy levels accessible to molecule $i$ and $q_j$ means the same for molecule $j$. The molecular partition function, $q$, counts the energy levels accessible to molecule $i$ only. $Q$ counts not only the states of all of the molecules, but all of the possible combinations of occupations of those states. However, if the particles are not distinguishable, then we will have overcounted the states by a factor of $N!$. The factor of $N!$ is exactly how many times we can swap the indices in $Q(N,V,T)$ and get the same value (again, provided that the particles are not distinguishable).
Problems
1. Complete the justification of Boltzmann's distribution law by computing the proportionality constant $a$.
2. A system contains two energy levels $E_1, E_2$. Using Boltzmann statistics, express the average energy of the system in terms of $E_1, E_2$.
3. Consider a system contains N energy levels. Redo problem #2.
4. Using the properties of the exponential function, derive equation (17.9).
5. What are the uses of partition functions?
We will be restricting ourselves to the canonical ensemble (constant $N$, $V$, and $T$). Consider a collection of $N$ molecules. The probability of finding a molecule with energy $E_i$ is equal to the fraction of the molecules with energy $E_i$. That is, in a collection of $N$ molecules, the probability that a molecule has energy $E_i$ is:
$P_i = \dfrac{n_i}{N} \nonumber$
This is the directly obtained from the Boltzmann distribution, where the fraction of molecules $n_i /N$ having energy $E_i$ is:
$P_i = \dfrac{n_i}{N} = \dfrac{e^{-E_i/kT}}{Q} \label{BD1}$
The average energy is obtained by multiplying $E_i$ with its probability and summing over all $i$:
$\langle E \rangle = \sum_i E_i P_i \label{Mean1}$
Equation $\ref{Mean1}$ is the standard average over a distribution commonly found in quantum mechanics as expectation values. The quantum mechanical version of this Equation is
$\langle \psi | \hat{H} | \psi \rangle \nonumber$
where $|\psi|^2$ is the distribution function that the Hamiltonian operator (e.g., energy) is averaged over; this equation is also the starting point in the variational method approximation.
Equation $\ref{Mean1}$ can be solved by plugging in the Boltzmann distribution (Equation $\ref{BD1}$):
$\langle E \rangle = \sum_i{ \dfrac{E_ie^{-E_i/ kT}}{Q}} \label{Eq1}$
Where $Q$ is the partition function:
$Q = \sum_i{e^{-\dfrac{E_i}{kT}}} \nonumber$
We can take the derivative of $\ln{Q}$ with respect to temperature, $T$:
$\left(\dfrac{\partial \ln{Q}}{\partial T}\right) = \dfrac{1}{kT^2}\sum_i{\dfrac{E_i e^{-E_i/kT}}{Q}} \label{Eq2}$
Comparing Equation $\ref{Eq1}$ with $\ref{Eq2}$, we obtain:
$\langle E \rangle = kT^2 \left(\dfrac{\partial \ln{Q}}{\partial T}\right) \nonumber$
It is common to write these equations in terms of $\beta$, where:
$\beta = \dfrac{1}{kT} \nonumber$
The partition function becomes:
$Q = \sum_i{e^{-\beta E_i}} \nonumber$
We can take the derivative of $\ln{Q}$ with respect to $\beta$:
$\left(\dfrac{\partial \ln{Q}}{\partial\beta}\right) = -\sum_i{\dfrac{E_i e^{-\beta E_i}}{Q}} \nonumber$
And obtain:
$\langle E \rangle = -\left(\dfrac{\partial \ln{Q}}{\partial\beta}\right) \nonumber$
Replacing $1/kT$ with $\beta$ often simplifies the math and is easier to use.
It is not uncommon to find the notation changes: $Z$ instead of $Q$ and $\bar{E}$ instead of $\langle E \rangle$.
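To make the equivalence of the two routes concrete, the short Python sketch below (an illustration, not part of the text) computes $\langle E \rangle$ for a small set of made-up energy levels both as a direct Boltzmann average and as $-\partial \ln Q/\partial \beta$ evaluated by finite differences.

```python
import numpy as np

# Model system: a handful of arbitrary energy levels (illustrative values, in joules)
E = np.array([0.0, 1.0, 2.5, 4.0]) * 1.0e-21
k_B = 1.380649e-23  # J/K

def lnQ(beta):
    return np.log(np.sum(np.exp(-beta * E)))

T = 300.0
beta = 1.0 / (k_B * T)

# <E> by direct Boltzmann average
w = np.exp(-beta * E)
E_avg_direct = np.sum(E * w) / np.sum(w)

# <E> = -d lnQ / d beta, via a central finite difference
h = beta * 1e-6
E_avg_deriv = -(lnQ(beta + h) - lnQ(beta - h)) / (2 * h)

print(E_avg_direct, E_avg_deriv)  # the two values agree to several digits
```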
17.04: Heat Capacity at Constant Volume
The heat capacity at constant volume ($C_V$) is defined to be the change in internal energy with respect to temperature:
$C_V = \left( \dfrac{\partial U}{\partial T} \right)_{N, V} \label{Eq3.26}$
Since:
$E = -\dfrac{\partial \ln{Q(N, V, \beta)}}{\partial \beta} \label{Eq3.27}$
We see that:
\begin{align} C_V &= \dfrac{\partial U}{\partial T} \nonumber \[4pt] &= \dfrac{\partial U}{\partial \beta} \dfrac{\partial \beta}{\partial T} \nonumber \[4pt] &= \dfrac{1}{kT^2} \dfrac{\partial^2}{\partial \beta^2} \: \ln Q(N, V, \beta) \nonumber\[4pt] &= k \beta^2 \dfrac{\partial^2}{\partial \beta^2} \: \ln Q(N, V, \beta) \label{Eq3.28} \end{align}
where $k$ is the Boltzmann constant. Energy can be stored in materials/gases via populating the specific degrees of freedom that exist in the sample. Understanding how this occurs requires the usage of Quantum Mechanics.
Dulong-Petit Law on the Heat Capacities of Solids
Pierre Louis Dulong and Alexis Thérèse Petit conducted experiments in 1819 on three-dimensional solid crystals to determine the heat capacities of a variety of these solids (heat capacity is the solid's ability to absorb and retain heat). Dulong and Petit discovered that all investigated solids had a heat capacity of approximately $2.49 \times 10^{4}$ J kmol$^{-1}$ K$^{-1}$ (24.9 J mol$^{-1}$ K$^{-1}$) at around 298 K, or room temperature. The result from their experiment was explained by considering every atom inside the solid as an oscillator with six degrees of freedom (an oscillator can be thought of as a spring connecting all the atoms in the solid lattice). These springs extend into three-dimensional space. The more energy that is added to the solid, the more these springs vibrate. Each degree of freedom contributes an average energy of $\frac{1}{2}kT$, where $k$ is the Boltzmann constant and $T$ is the absolute temperature. Thus,
$C_V=\dfrac{6R}{2}=3R \label{1}\tag{Dulong-Petit Law}$
The number 6 in this equation is the number of degrees of freedom per atom. Petit and Dulong suggested that these results supported their model for the heat capacity of solids. The explanation for Petit and Dulong's experiment was no longer sufficient when it was discovered that heat capacity decreased as temperature approached absolute zero. In the classical picture, the degrees of freedom do not slow down or cease to move as the solid is cooled, so the model cannot account for this decrease. An additional model was needed to explain this deviance. Two main theories were developed to explain this puzzling deviance in the heat capacity experiments. The first was constructed by Einstein and the second was authored by Debye.
Einstein's Theory on the Heat Capacities of Solids
Einstein assumed three things when he investigated the heat capacity of solids. First, he assumed that each solid was composed of a lattice structure consisting of $N$ atoms. Each atom was treated as moving independently in three dimensions within the lattice (3 degrees of freedom). This meant that the entire lattice's vibrational motion could be described by a total of $3N$ motions, or degrees of freedom. Secondly, he assumed that the atoms inside the solid lattice did not interact with each other and thirdly, all of the atoms inside the solid vibrated at the same frequency. The third point highlights the main difference in Einstein's and Debye's two models.
Einstein's first point is accurate because the experimental data supported his hypothesis, however his second point is not because if atoms inside a solid could not interact, sound could not propagate through it. For example, a tuning fork's atoms, when struck, interact with one another to create sound which travels through air to the listener's ear. Atoms also interact in a solid when they are heated. Take for example a frying pan. If the pan is heated on one side, the heat transfers throughout the metal effectively warming the entire pan. Molecules that make up the frying pan interact to transfer heat. Much in the same way the oscillators in a solid interact when energy is added to the system. The extent of these interactions lead to the physically observed heat capacity.
The heat capacity of a solid at a constant volume is
\begin{align*} C_V &= \left(\dfrac{\partial{U}}{\partial{T}}\right)_V \[4pt] &=3Nk\left(\dfrac{\theta_E}{T}\right)^2 \dfrac{\exp \left(\dfrac{\theta_E}{T} \right)}{\left(\exp \left(\dfrac{\theta_E}{T}\right) -1\right)^2} \label{2} \end{align*}
where
• $\theta_E=\dfrac{h\nu}{k}$ is the Einstein temperature,
• $h$ is Planck's constant,
• $k$ is Boltzmann's constant, and
• $\nu$ is the oscillator frequency of the atoms inside the solid.
The accessibility of the vibrational energy levels inside a solid, as measured by the Einstein temperature, determines the heat capacity of that solid. The greater the accessibility, the greater the heat capacity. If the vibrational energy is easily accessible, the collisions in the solid have a greater probability of exciting an atom into an upper vibrational level.
So the Einstein temperature specifically indicates the probability that a molecule can store energy in its atomic oscillators (or bonds) through its degrees of freedom. Comparing the Einstein model to the traditional classical values of the heat capacity illustrates the specific strengths (high temperature) and weaknesses (low temperature) of the Einstein model.
Thus, in the high temperature limit ($\dfrac{\theta_E}{T}\ll 1$) (i.e., the temperature is very large compared to the Einstein temperature) then
$C_V \approx 3Nk=3nR. \label{4}$
Einstein's model reveals the accuracy of the Petit and Dulong model and models high temperatures accurately. However, just as Petit and Dulong's model decreased in accuracy as the temperature decreased, so followed Einstein's.
When examining the extremely low temperature limit: $\dfrac{\theta_E}{T}\gg1$, it can be seen:
$C_V=3Nk\left(\dfrac{\theta_E}{T}\right)^2e^\dfrac{-\theta_E}{T} \label{5}$
As the temperature ($T$) goes to zero, the exponential factor in the above equation goes to zero faster than $(\theta_E/T)^2$ grows, and therefore $C_V$ also approaches zero. This supports the experimental values: as the temperature approaches zero, the heat capacity of the solid likewise decreases to zero.
Einstein's theory also explains solids that exhibit a low heat capacity even at relatively high temperatures. An example of such a solid is diamond. The heat capacity of diamond approaches $3Nk$ as temperature greatly increases. Einstein's model supports this through the definition of an Einstein temperature. As the Einstein temperature increases, $\nu$ must increase likewise. This is the equivalent of each atom possessing more energy and therefore vibrating more rapidly within the solid itself. The oscillator frequency, $\nu$, can be written as:
$\nu=\dfrac{1}{2\pi}\sqrt{\dfrac{\kappa}{\mu}} \label{6}$
where $\kappa$ is the force constant and $\mu$ is the reduced mass. This formula better predicts solids with high force constants or low reduced masses. This corrects deviations from the Petit and Dulong model.
Essentially the Einstein temperature allows for the heat capacity equation and the vibrational frequencies in the solid to change as the temperature changes. This effectively adjusts for the deviations seen in the Petit/Dulong model. As the temperature increases or decreases, the Einstein temperature increases or decreases likewise to mirror the actual physical activity within the solid.
Einstein's model predicts moderately low temperatures well. However, below approximately 15 K, Einstein's model deviates from experimental values. This can also be observed, although not as dramatically, for temperatures from 25 K to 30 K. Clearly a term or correction is still missing from Einstein's model to increase its accuracy.
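A quick numerical illustration of these limits is sketched below in Python; the Einstein temperature used is only an assumed value of the order commonly quoted for diamond, chosen to show the approach to the Dulong-Petit limit at high $T$ and the fall toward zero at low $T$.

```python
import numpy as np

def einstein_cv_per_Nk(T, theta_E):
    """Einstein-model heat capacity divided by Nk: 3 (theta/T)^2 e^{theta/T} / (e^{theta/T} - 1)^2."""
    x = theta_E / np.asarray(T, dtype=float)
    return 3.0 * x**2 * np.exp(x) / np.expm1(x)**2

theta_E = 1320.0  # K; an assumed value of the order often quoted for diamond
for T in (50.0, 300.0, 1000.0, 3000.0):
    print(f"T = {T:7.1f} K   C_V/(Nk) = {einstein_cv_per_Nk(T, theta_E):.3f}")
# At high T the values approach 3 (the Dulong-Petit limit); at low T they fall toward zero.
```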
Pressure can also be derived from the canonical partition function. The average pressure is the sum of the probability times the pressure:
$\langle P \rangle = \sum_i{P_i(N,V,T)\,P_i(N,V)} \nonumber$
where $P_i(N,V,T)$ is the probability of state $i$ and $P_i(N,V)$ is the pressure of the system when it is in state $i$:
$P_i(N,V) = -\left(\dfrac{\partial E_i}{\partial V}\right)_N \nonumber$
So we can write the average pressure as:
$\begin{split} \langle P \rangle &= \sum_i{P_i(N,V,\beta)\left(-\dfrac{\partial E_i}{\partial V}\right)_N} \ &= \sum_i{\left(-\dfrac{\partial E_i}{\partial V}\right)_N\dfrac{e^{-E_i(N,V)/kT}}{Q(N,V,T)}} \end{split} \nonumber$
In a few steps, we will show that the pressure can be expressed in terms of the partition function. The canonical partition function is:
$Q(N,V,T) = \sum_i{e^{-E_i(N,V)/kT}} \nonumber$
Writing in terms of $\beta$ instead of temperature:
$Q(N,V,\beta) = \sum_i{e^{-\beta E_i(N,V)}} \nonumber$
The derivative of the partition function with respect to volume is:
$\left(\dfrac{\partial Q}{\partial V}\right)_{N,\beta} = -\beta \sum_i{\left(\dfrac{\partial E_i}{\partial V}\right)_{N}e^{-\beta E_i(N,V)}} \nonumber$
The average pressure can then be written as:
$\langle P \rangle = \dfrac{kT}{Q(N,V,\beta)}\left(\dfrac{\partial Q}{\partial V}\right)_{N,\beta} \nonumber$
This shows that the pressure can be expressed solely in terms of the partition function:
$\langle P \rangle = kT\left(\dfrac{\partial \ln{Q}}{\partial V}\right)_{N,\beta} \nonumber$
We can use this result to derive the ideal gas law. For $N$ particles of an ideal gas:
$Q(N,V,\beta) = \dfrac{[q(V,\beta)]^N}{N!} \nonumber$
where:
$q(V,\beta) = \left(\dfrac{2\pi m}{h^2\beta}\right)^{3/2}V \nonumber$
is the translational partition function. The utility of expressing the pressure as a logarithm is clear from the fact that we can write:
$\begin{split} \ln{Q} &= N\ln{q}-\ln{N!} \ &= \dfrac{3N}{2}\ln{\left(\dfrac{2\pi m}{h^2\beta}\right)}+N\ln{V}-\ln{N!} \end{split} \nonumber$
We have used the property of logarithms that $\ln{(AB)} = \ln{(A)} + \ln{(B)}$ and $\ln{(X^Y)} = Y\ln{(X)}$. Only one term in the ln $Q$ depends on $V$. Taking the derivative of $N\ln{V}$ with respect to $V$ gives:
$\left(\dfrac{\partial \ln{Q}}{\partial V}\right)_{N,\beta} = \dfrac{N}{V} \nonumber$
Substituting this into the above equation for the pressure gives:
$P = \dfrac{NkT}{V} \nonumber$
which is the ideal gas law. Recall that $Nk = nR$, where $N$ is the number of molecules and $n$ is the number of moles. $R$ is the universal gas constant (8.314 J mol$^{-1}$ K$^{-1}$), which is nothing more than $k$ multiplied by Avogadro's number. $N_Ak = R$ converts the constant from a "per molecule" to a "per mole" basis.
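The chain of steps above can be checked numerically. The Python sketch below (an illustration, not from the text) evaluates $\ln Q$ for an ideal monatomic gas with an argon-like mass, differentiates it with respect to $V$ by finite differences, and compares $kT\,\partial \ln Q/\partial V$ with $NkT/V$; the particle number, volume, and temperature are arbitrary assumed values.

```python
import numpy as np

k_B = 1.380649e-23   # J/K
h = 6.62607015e-34   # J s
m = 6.63e-26         # kg, roughly the mass of an argon atom
N = 1.0e22           # assumed number of particles
T = 300.0            # K
V = 1.0e-3           # m^3
beta = 1.0 / (k_B * T)

def lnQ(V):
    # ln Q = (3N/2) ln(2 pi m / (h^2 beta)) + N ln V - ln N!
    # the ln N! term does not depend on V, so it is omitted here
    return 1.5 * N * np.log(2 * np.pi * m / (h**2 * beta)) + N * np.log(V)

dV = V * 1e-6
P_numeric = k_B * T * (lnQ(V + dV) - lnQ(V - dV)) / (2 * dV)
P_ideal = N * k_B * T / V
print(P_numeric, P_ideal)   # the two agree: P = NkT/V
```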
Gas Compressed by a Piston
Let us consider a simple thought experiment: a system of $N$ particles is compressed by a piston pushing in the positive $z$ direction. Since this is a classical thought experiment, we think in terms of forces. The piston exerts a constant force of magnitude $F$ on the system. The direction of the force is purely in the positive $z$ direction, so that we can write the force vector $\bf{F}$ as $\bf{F} = \begin{pmatrix} 0, 0, F \end{pmatrix}$. At equilibrium (the piston is not moving), the system exerts an equal and opposite force on the piston of the form $\begin{pmatrix} 0, 0, -F \end{pmatrix}$. If the energy of the system is $E$, then the force exerted by the system on the piston will be given by the negative change in $E$ with respect to $z$:
$-F = -\dfrac{dE}{dz} \label{Eq3.29}$
or:
$F = \dfrac{dE}{dz} \label{Eq3.30}$
The force exerted by the system on the piston is manifest as an observable pressure $P$ equal to the force $F$ divided by the area $A$ of the piston, $P=F/A$. Given this, the observed pressure is just:
$P = \dfrac{dE}{Adz} \label{Eq3.31}$
Since the volume decreases when the system is compressed, we see that $Adz = -dV$. Hence, we can write the pressure as $P = -dE/dV$.
Of course, the relation $P = -dE/dV$ is a thermodynamic one, but we need a function of $x$ that we can average over the ensemble. The most natural choice is:
$p(x) = -\dfrac{d \mathcal{E} (x)}{dV} \label{Eq3.32}$
so that $P = \langle p(x) \rangle$. Setting up the average, we obtain:
\begin{align} P &= -\dfrac{C_N}{Q(N, V, T)} \int \dfrac{\partial \mathcal{E}}{\partial V} e^{-\beta \mathcal{E} (x)} \ &= \dfrac{C_N}{Q(N, V, T)} \dfrac{1}{\beta} \int \dfrac{\partial}{\partial V} e^{-\beta \mathcal{E} (x)} \ &= \dfrac{kT}{Q(N, V, T)} \dfrac{\partial}{\partial V} C_N \int e^{-\beta \mathcal{E} (x)} \ &= kT \left( \dfrac{\partial \: \text{ln} \: Q(N, V, T)}{\partial V} \right) \end{align} \label{Eq3.33}
Ideal Gas in the Canonical Ensemble
Recall that the mechanical energy for an ideal gas is:
$\mathcal{E} (x) = \sum_{i=1}^N \dfrac{\textbf{p}_i^2}{2m} \label{Eq3.36}$
where all particles are identical and have mass $m$. Thus, the expression for the canonical partition function $Q(N, V, T)$ is:
$Q(N, V, T) = \dfrac{1}{N!h^{3N}} \int dx \: e^{-\beta \sum_{i=1}^N \textbf{p}_i^2/2m} \nonumber$
Note that this can be expressed as:
$Q(N, V, T) = \dfrac{1}{N!h^{3N}}V^N \left[ \int dp \: e^{-\beta p^2/2m} \right]^{3N} \nonumber$
Evaluating the Gaussian integral gives us the final result immediately:
$Q(N, V, T) = \dfrac{1}{N!} \left[ \dfrac{V}{h^3} \left( \dfrac{2 \pi m}{\beta} \right)^{3/2} \right]^N \nonumber$
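If desired, the Gaussian integral used in this step can be verified numerically. The following sketch (in reduced units with $m = \beta = 1$, and assuming SciPy is available) compares a quadrature of the integrand with the closed form $(2\pi m/\beta)^{1/2}$.

```python
import numpy as np
from scipy import integrate

# Check the Gaussian momentum integral used above, in reduced units (m = beta = 1)
m, beta = 1.0, 1.0
val, _ = integrate.quad(lambda p: np.exp(-beta * p**2 / (2 * m)), -np.inf, np.inf)
print(val, np.sqrt(2 * np.pi * m / beta))   # both are approximately 2.5066
```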
The expressions for the energy:
$E = -\dfrac{\partial}{\partial \beta} \: \text{ln} \: Q(N, V, T) \nonumber$
which gives:
$E = \dfrac{3}{2}NkT = \dfrac{3}{2}nRT \label{Eq3.37}$
and pressure:
$P = kT \left( \dfrac{\partial \: \text{ln} \: Q(N, V, T)}{\partial V} \right) \nonumber$
giving:
$P = \dfrac{NkT}{V} = \dfrac{nRT}{V} \label{Eq3.38}$
which is the ideal gas law.
A system, such as a container of gas, can consist of a large number of subsystems. How is the partition function of the system built up from those of the subsystems? This depends on whether the subsystems are distinguishable or indistinguishable. Let's start with energy. Energy is additive so that:
$E_\text{tot}(N,V) = \epsilon_1(V) + \epsilon_2(V) + \epsilon_3(V) + \cdots \nonumber$
Each of the molecules can have their energy distributed over their respective energy states (e.g., vibrations, rotations, etc.). This means that each $\epsilon_i$ is already a summation over the states of the molecule. Let us assume that we can somehow distinguish all the molecules as: $a$, $b$, $c$, $d$… and denote the energy state they are in by $i$, $j$, $k$:
$E_l(N,V) = \epsilon_i^a (V) + \epsilon_j^b (V) + \epsilon_k^c (V) + \cdots \nonumber$
A good example would be the molecules in a molecular crystal. They only move around a fixed site and so we can distinguish by how far molecule '$a$' is from a given corner of the crystal. The systems partition function becomes:
$Q(N,V,\beta) = \sum_l e^{-\beta E_l} = \sum_{i,j,k,…} e^{-\beta (\epsilon_i^a + \epsilon_j^b + \epsilon_k^c)} \nonumber$
So far, we have done little effort to distinguish between the partition function of a molecular system $q$ and the whole ensemble $Q$ (e.g. the gas). If the entities that we called systems are distinguishable and independent, the whole ensemble partition function is the product of the molecular system partition functions. We get:
$Q(N,V,\beta) = \prod_i{q_i} \nonumber$
for $N$ distinguishable systems. We can split up the summation into a product of molecular partition functions:
$Q(N,V,\beta) = \prod_i^N q_i(V,\beta)= q_a(V,\beta) q_b(V,\beta) q_c(V,\beta) \cdots \nonumber$
This results in each molecular system partition function being summed over independently:
$Q(N,V,\beta) = \sum_{i} e^{-\beta \epsilon_i^a} \sum_{j} e^{-\beta \epsilon_j^b} \sum_{k} e^{-\beta \epsilon_k^c} \cdots \nonumber$
If the energy states of all the particles are the same, then the equation simplifies to:
$Q(N,V,\beta) = [q(V,\beta)]^N \nonumber$
We can do this if, for example, the particles are embedded in a crystal where we can distinguish them by their location. We will see in the next chapter that for indistinguishable particles, such as those in a gas, we get a different result.
17.07: Partition Functions of Indistinguishable Molecules
In the previous section, the definition of the partition function involved a sum-over-states formalism:
$Q = \sum_i e^{-\beta E_i} . \label{1}$
However, under most conditions, full knowledge of each member of an ensemble is missing and hence we have to operate with a more reduced knowledge. This is demonstrated via a simple model of two particles in a two-energy level system in Figure $1$. Each particle (red or blue) can occupy either the $E_1=0$ energy level or the $E_2=\epsilon$ energy level resulting in four possible states that describe the system. The corresponding partition function for this system is then (via Equation \ref{1}):
$Q_{\text{distinguishable}}=e^0+ e^{-\beta\epsilon} + e^{-\beta\epsilon} + e^{-2 \beta\epsilon}=q^2 \label{Q1}$
and is just the molecular partition function ($q$) squared.
However, if the two particles are indistinguishable (e.g., both the same color as in Figure $2$) then while four different combinations can be generated like in Figure $1$, there is no discernible way to separate the two middle states. Hence, there are effectively only three states observable for this system.
The corresponding partition function for this system (again using Equation \ref{1}) can be constructed:
$Q(N,V,\beta) =e^0+ e^{-\beta\epsilon} + e^{-2 \beta\epsilon} \neq q^2 \label{Q2}$
and this is not equal to the square of the molecular partition function. If Equation \ref{Q1} were used to describe the indistinguishable particle case, then it would overestimate the number of observable states. From combinatorics, using $q^N$ for a large $N$-particle system of indistinguishable particles will overestimate the number of states by a factor of $N!$. Therefore Equation \ref{1} requires a slight modification to account for this over counting.
$Q(N,V,\beta) = \dfrac{\sum_i{e^{-\beta E_i}}}{N!} \label{2}$
If we have $N$ molecules, we can perform $N!$ permutations that should not affect the outcome. To avoid over counting (making sure we do not count each state more than once), the partition function becomes:
$Q(N,V,\beta) = \dfrac{q(V,\beta)^N}{N!} \nonumber$
Note
As you may have noticed, using Equation \ref{2} to estimate of $Q$ for the two-indistinguishable particle discussed case above with $N=2$ is incorrect (i.e., Equation \ref{2} is not equal to Equation \ref{Q2}). That is because the $N!$ factor is only applicable for large $N$.
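The two-level, two-particle example can be reproduced by brute-force enumeration. The Python sketch below (an illustration, not part of the text) sums the Boltzmann factors over ordered and unordered assignments and compares them with $q^2$ and $q^2/2!$, making the overcounting issue explicit.

```python
import numpy as np
from itertools import product, combinations_with_replacement

beta, eps = 1.0, 1.0   # reduced units (illustrative choice)
levels = [0.0, eps]

# Distinguishable particles: all ordered assignments (4 states) -> Q = q^2
Q_dist = sum(np.exp(-beta * (e1 + e2)) for e1, e2 in product(levels, repeat=2))

# Indistinguishable particles: unordered assignments (3 states)
Q_indist = sum(np.exp(-beta * (e1 + e2))
               for e1, e2 in combinations_with_replacement(levels, 2))

q = sum(np.exp(-beta * e) for e in levels)
print(Q_dist, q**2)          # equal
print(Q_indist, q**2 / 2)    # not equal: the N! correction is only approximate for N = 2
```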
17.08: Partition Functions can be Decomposed
From the previous sections, the partition function for a system of $N$ indistinguishable and independent molecules is:
$Q(N,V,\beta) = \dfrac{\sum_i{e^{-\beta E_i}}}{N!} \label{ID1}$
And the average energy of the system is:
$\langle E \rangle = kT^2 \left(\dfrac{\partial \ln{Q}}{\partial T}\right) \label{ID2}$
We can combine these two equations to obtain:
$\begin{split} \langle E \rangle &= kT^2 \left(\dfrac{\partial \ln{Q}}{\partial T}\right)_{N,V} \ &= NkT^2 \left(\dfrac{\partial \ln{q}}{\partial T}\right)_V \ &= N\sum_i{\epsilon_i \dfrac{e^{-\epsilon_i/kT}}{q(V,T)}} \end{split} \label{ID3}$
The average energy is equal to:
$\langle E \rangle = N \langle \epsilon \rangle \label{aveE}$
where $\langle \epsilon \rangle$ is the average energy of a single particle. If we compare Equation $\ref{aveE}$ with Equation $\ref{ID3}$, we can see:
$\langle \epsilon \rangle = \sum_i{\epsilon_i \dfrac{e^{-\epsilon_i/kT}}{q(V,T)}} \nonumber$
The probability that a particle is in state $i$, $\pi_i$, is given by:
$\pi_i = \dfrac{e^{-\epsilon_i/kT}}{q(V,T)} = \dfrac{e^{-\epsilon_i/kT}}{\sum_i{e^{-\epsilon_i/kT}}} \nonumber$
The energy of a particle is a sum of the energy of each degree of freedom for that particle. In the case of a molecule, the energy is:
$\epsilon = \epsilon_\text{trans} + \epsilon_\text{rot} + \epsilon_\text{vib} + \epsilon_\text{elec} \nonumber$
The molecular partition function is the product of the degree of freedom partition functions:
$q(V,T) = q_\text{trans} q_\text{rot} q_\text{vib} q_\text{elec} \nonumber$
The partition function for each degree of freedom has the same Boltzmann form. For example, the vibrational partition function is:
$q_\text{vib} = \sum_i{e^{-\epsilon_i/kT}} \nonumber$
The average energy of each degree of freedom follows the same pattern as before. For example, the average vibrational energy is:
$\langle \epsilon_\text{vib} \rangle = kT^2\dfrac{\partial \ln{q_\text{vib}}}{\partial T} = -\dfrac{\partial \ln{q_\text{vib}}}{\partial \beta} \nonumber$
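As an illustration of these last two relations, the following Python sketch (not from the text) builds $q_\text{vib}$ by direct summation over harmonic-oscillator levels, measuring energies from the $v = 0$ level so that the zero-point energy is omitted, and evaluates $\langle \epsilon_\text{vib} \rangle$ by numerically differentiating $\ln q_\text{vib}$; the vibrational frequency is an arbitrary illustrative value.

```python
import numpy as np

k_B = 1.380649e-23   # J/K
h = 6.62607015e-34   # J s
nu = 6.0e13          # Hz, an illustrative vibrational frequency

def q_vib(T, n_max=200):
    # direct sum over harmonic-oscillator levels, with energies measured from v = 0
    v = np.arange(n_max)
    return np.sum(np.exp(-v * h * nu / (k_B * T)))

def avg_e_vib(T, dT=1e-3):
    # <e_vib> = k T^2 d ln(q_vib) / dT, evaluated by a central finite difference
    return k_B * T**2 * (np.log(q_vib(T + dT)) - np.log(q_vib(T - dT))) / (2 * dT)

for T in (300.0, 1000.0, 3000.0):
    print(f"T = {T:7.1f} K   q_vib = {q_vib(T):.4f}   <e_vib> = {avg_e_vib(T):.3e} J")
```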
These are homework exercises to accompany Chapter 17 of McQuarrie and Simon's "Physical Chemistry: A Molecular Approach" Textmap.
Q17.9
In Section 17-3, we derived an expression for the expectation value of energy, $\langle E \rangle$, by applying Equation 17.20 to $Q(N, V, T)$ given by Equation 17.22. Now, apply Equation 17.21 to $Q(N, V, T)$ to give the same result, except with $\beta$ replaced by $\dfrac{1}{k_bT}$.
S17.9
Beginning with Equation 17.21
$\langle E \rangle = k_bT^2 (\dfrac{\partial{ln Q}}{\partial T})_{N,V}$
We can use the partition function in the problem to find $(\dfrac{\partial{ ln Q}}{\partial T})_{N,V}$:
$Q(N, V, T) = \dfrac{V^N}{N!} ( \dfrac{2\pi mk_b}{h^2} )^{\dfrac{3N}{2}} T^{\dfrac{3N}{2}}$
$\ln Q = \dfrac{3N}{2} \ln T + \text{terms not involving } T$
$\left( \dfrac{\partial \ln Q}{\partial T} \right)_{N,V} = \dfrac{3N}{2T}$
Substituting this last result into Equation 17.21 yields
$\langle E \rangle = k_bT^2 \dfrac{3N}{2T} = \dfrac{3}{2}(Nk_bT)$
Q17.10
A gas adsorbed on a surface can sometimes be modeled as a two-dimensional ideal gas. We will learn later that the partition function of a two-dimensional ideal gas is:
$Q(N,A,T) = \dfrac{1}{N!}\big(\dfrac{2\pi mk_{B}T}{h^{2}}\big)^{N}A^{N}$
where $A$ is the area of the surface. Derive an expression for $\bar{C}_{V}$.
S17.10
$\bar{C}_{V} = \big(\dfrac{\partial U}{\partial T} \big)_{N,A}$ and $U= k_{B}T^{2} \big(\dfrac{\partial \ln Q}{\partial T} \big)_{N,A}$
Since $\ln Q = N \ln T + \text{terms not involving } T$,
$U = k_{B} T^{2} \cdot \frac{N}{T} = Nk_{B}T$
and thus
$\bar{C}_{V} = \big(\dfrac{\partial U}{\partial T} \big)_{N,A} = Nk_{B}$
which, on a molar basis, is $\bar{C}_{V} = R$.
Q17.11
Given the partition function of a monatomic van der Waals gas:
$Q(N,V,T)=\frac{1}{N!}(\frac{2\pi m k_B T}{h^2})^{\frac{3N}{2}} (V-Nb)^N e^{\frac{aN^2}{V k_b T}}$
what is the average energy of this gas?
S17.11
The average energy of a gas is given by
$<E> = k_b T^2 (\frac{\partial ln Q}{\partial T})_{N,V}$
taking the natural log of the partition function gives us
$\ln Q = \frac{3N}{2} \ln\left({\frac{2\pi m k_B T}{h^2}}\right) + \frac{aN^2}{V k_B T} + N \ln(V-Nb) - \ln N!$
Now we take the partial derivative with respect to T while holding N and V constant.
This yields
$\left(\dfrac{\partial \ln Q}{\partial T}\right)_{N,V} = \frac{3N}{2T} - \frac{aN^2}{V k_B T^2}$
Substituting this value into the Average Energy equation, we get
$<E> = \frac{3N k_b T}{2} - \frac{aN^2}{V}$.
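The differentiation can also be checked symbolically. The following SymPy sketch (assuming SymPy is installed) reproduces the result; the $-\ln N!$ term is dropped since it does not depend on $T$.

```python
import sympy as sp

T, V, N, m, h, kB, a, b = sp.symbols('T V N m h k_B a b', positive=True)

# ln Q for the monatomic van der Waals gas (the -ln N! term has no T dependence)
lnQ = sp.Rational(3, 2) * N * sp.log(2 * sp.pi * m * kB * T / h**2) \
      + N * sp.log(V - N * b) + a * N**2 / (V * kB * T)

E_avg = sp.simplify(kB * T**2 * sp.diff(lnQ, T))
print(E_avg)   # expected: 3*N*k_B*T/2 - N**2*a/V
```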
Q17.12
Given the following equation:
$Q(N,V,T) = \dfrac{(V)^{N}}{N!}\Big(\dfrac{2\pi mk_{B}}{h^{2}}\Big)^{3N/2}T^{3N/2}$
An approximate partition function for a gas of hard spheres can be obtained from the partition function of a monatomic gas by replacing $V$ in the given equation with $V-b$, where $b$ is related to the volume of the $N$ hard spheres. Derive expressions for the energy and the pressure of this system.
S17.12
We can use the partition function specified in the problem to find
$Q(N,V,T) = \dfrac{(V-b)^{N}}{N!}\Big(\dfrac{2\pi mk_{B}}{h^{2}}\Big)^{3N/2}T^{3N/2}$
$ln Q = \dfrac{3N}{2} ln T + terms \: not \: involving \: T$
After substitution into the given equation, we find that the energy (E) is the same as that for a monatomic ideal gas: $3Nk_{B}T/2$. We can use the partition function specified in the problem to find
$Q(N,V,T) = \dfrac{1}{N!}\Big(\dfrac{2\pi mk_{B}T}{h^{2}}\Big)^{3N/2}(V-b)^{N}$
$ln Q = N ln(V-b) + terms \: not \: involving \: V$
Similar substitution allows us to find
$\langle P \rangle = k_{B}T\Big(\dfrac{\partial \ln Q}{\partial V}\Big)_{N,\beta} = \dfrac{Nk_{B}T}{V-b}$
Q17.13
Using the partition function
$Q(N,A,T) = \dfrac{1}{N!}\Big(\dfrac{2\pi mk_{B}T}{h^{2}}\Big)^{N}A^{N}$,
calculate the heat capacity of a two-dimensional ideal gas.
S17.13
First, the energy must be found. This can be done by finding $\Big(\dfrac{\partial ln Q}{\partial T}\Big)_{N,\:V}$.
$ln[Q] = N ln[T] + ...$
where the ... refers to terms that do not depend on T. Because they do not depend on T, the partial with respect to those terms are 0.
The partial derivative is
$\Big(\dfrac{\partial ln Q}{\partial T}\Big)_{N,\:V} = \dfrac{N}{T}$.
The energy can be expressed as
$E = \Big(\dfrac{\partial ln Q}{\partial T}\Big)_{N,\:V}*k_{B}T^{2}$
$E = \dfrac{Nk_{B}T^{2}}{T}$
$E = Nk_{B}T$
In the case of our two-dimensional ideal gas,
$E = U$,
so we can use the relationship
$C_{v} = \Big(\dfrac{\partial U}{\partial T}\Big)_{N,V}$
$C_{v} = Nk_{B}$
Q17.14
How is the energy of a monatomic van der Waals gas related to the heat capacity of a monatomic van der Waals gas?
S17.14
The heat capacity, $C_V$, is a measure of how the energy changes with temperature (at constant amount and volume):
$C_{V}=\left(\frac{\partial \langle E \rangle}{\partial T}\right)_{N,V} = \left(\frac{\partial U}{\partial T}\right)_{N,V}$ because $\langle E \rangle = U$.
Q17.15
The partition function of the rigid rotator-harmonic oscillator model of an ideal diatomic gas is given by
$Q(N, V, \beta) = \frac{[q(V, \beta)]^N}{N!}$
where
$q(V, \beta) = \left(\frac{2\pi m}{h^2\beta}\right)^{3/2}V\,\frac{8\pi^2I}{h^2\beta}\,\frac{e^{-\beta h\nu/2}}{1-e^{-\beta h\nu}}$
Find the expression for the pressure of an ideal diatomic gas.
S17.15
$Q(N, V, \beta)= \frac{1}{N!}\left(\frac{2\pi m}{h^2\beta}\right)^{3N/2}V^N\left(\frac{8\pi^2I}{h^2\beta}\right)^N\left(\frac{e^{-\beta h\nu/2}}{1-e^{-\beta h\nu}}\right)^N$
Only the $V^N$ factor depends on $V$, so
$\ln Q = N\ln V + \text{terms not involving } V$
$\left(\dfrac{\partial \ln Q}{\partial V}\right)_{N,\beta} = \frac{N}{V}$
$\langle P\rangle = k_{B}T\left(\dfrac{\partial \ln Q}{\partial V}\right)_{N,\beta} = \frac{Nk_{B}T}{V}$, which is the ideal gas law.
Q17.19
a.) Prove that a simplified form of the molar heat capacity of system of independent particles can be written as:
$\overline{C_{V}} = R(\beta\varepsilon)^{2} \dfrac{e^{-\beta\varepsilon}}{(1+e^{-\beta\varepsilon})^{2}}$
b.) Verify by plotting $\overline{C_{V}}$ versus $\beta\varepsilon$ that the maximum value occurs when $\beta\varepsilon = 2.40$.
S17.19
a.) We can write the partition function of the system as:
$q = e^{-\dfrac{\varepsilon_{0}}{k_{B}T}} + e^{-\dfrac{\varepsilon_{1}}{k_{B}T}}$
If we assume that the ground quantum state, $\varepsilon_{0}$ is equal to zero, we get:
$q = e^{-\dfrac{0}{k_{B}T}} + e^{-\dfrac{\varepsilon_{1}}{k_{B}T}} = 1 + e^{-\dfrac{\varepsilon_{1}}{k_{B}T}}$
We can then write the average energy of the system in terms of q:
$\langle E \rangle = RT^{2} \left( \dfrac{\partial ln(q)}{\partial T} \right)_{V} = RT^{2} \left( \dfrac{\partial ln \left( 1 + e^{-\dfrac{\varepsilon_{1}}{k_{B}T}} \right)}{\partial T} \right)_{V} = RT^{2}\dfrac{1}{1 + e^{-\dfrac{\varepsilon_{1}}{k_{B}T}}} \left( -\dfrac{\varepsilon_{1}}{k_{B}} \right) \left( -\dfrac{1}{T^{2}} \right) e^{-\dfrac{\varepsilon_{1}}{k_{B}T}} = R \left( \dfrac{\varepsilon_{1}}{k_{B}} \right) \dfrac{e^{-\dfrac{\varepsilon_{1}}{k_{B}T}}}{1 + e^{-\dfrac{\varepsilon_{1}}{k_{B}T}}}$
Using the definition of molar heat capacity, we can then plug in our expression for $\langle E \rangle$:
$\overline{C_{V}} = \left( \dfrac{\partial \langle E \rangle}{\partial T} \right)_{V} = \left( \dfrac{\partial \left( R \left( \dfrac{\varepsilon_{1}}{k_{B}} \right) \dfrac{e^{-\dfrac{\varepsilon_{1}}{k_{B}T}}}{1 + e^{-\dfrac{\varepsilon_{1}}{k_{B}T}}} \right)}{\partial T} \right)_{V}$
$= R \left( \dfrac{\varepsilon_{1}}{k_{B}} \right) \left( -\dfrac{\varepsilon_{1}}{k_{B}} \right) \left( -\dfrac{1}{T^{2}} \right) e^{-\dfrac{\varepsilon_{1}}{k_{B}T}} \left( 1 + e^{-\dfrac{\varepsilon_{1}}{k_{B}T}} \right)^{-2} = R \left( \dfrac{\varepsilon_{1}}{k_{B}T} \right)^{2} \dfrac{e^{-\dfrac{\varepsilon_{1}}{k_{B}T}}}{\left( 1 + e^{-\dfrac{\varepsilon_{1}}{k_{B}T}} \right)^{2}}$
If we define $\varepsilon_{1} = \varepsilon$ and $\dfrac{1}{k_{B}T} = \beta$, we can simplify the above expression for the molar heat capacity as:
$\overline{C_{V}} = R(\beta\varepsilon)^{2} \dfrac{e^{-\beta\varepsilon}}{(1 + e^{-\beta\varepsilon})^{2}}$
which is what the original problem asked us to show.
b.) Plotting $\overline{C_{V}}$ versus $\beta\varepsilon$ shows a single maximum, and the maximum is verified to occur at $\beta\varepsilon = 2.40$.
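Since the original graph is not reproduced here, a short Python sketch that locates the maximum numerically is given below; it assumes SciPy is available and uses $R = 8.314$ J mol$^{-1}$ K$^{-1}$.

```python
import numpy as np
from scipy.optimize import minimize_scalar

R = 8.314  # J mol^-1 K^-1

def Cv(x):
    """Molar heat capacity of the two-level system, with x = beta * epsilon."""
    return R * x**2 * np.exp(-x) / (1.0 + np.exp(-x))**2

# Locate the maximum of Cv(x); it falls near x = 2.40
res = minimize_scalar(lambda x: -Cv(x), bounds=(0.1, 10.0), method='bounded')
print(f"maximum at beta*epsilon = {res.x:.2f}, Cv_max = {Cv(res.x):.3f} J mol^-1 K^-1")
```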
Q17.20
Each of the $N$ atoms of the crystal is assumed to vibrate independently about its lattice position, so that the crystal is pictured as $N$ independent harmonic oscillators, each vibrating in three directions. The partition function of a harmonic oscillator is
$q_{ho}(T) = \sum_{v=0}^\infty e^{-\beta (v+\frac{1}{2}) h\nu}$
$= e^{-\beta h\nu/2}\sum_{v=0}^\infty e^{-\beta v h\nu}$
This summation is easy to evaluate if you recognize it as the so-called geometric series
$\sum_{\nu=0}^\infty x^\nu = \dfrac{1}{1-x}$
Show that
$q_{ho}(T) = \dfrac{e^{-\beta h \nu/2}}{1-e^{-\beta h \nu}}$
and that
$Q = e^{-\beta U_0} \bigg( \dfrac{e^{-\beta h \nu/2}}{1-e^{-\beta h \nu}} \bigg)^{3N}$
where $U_0$ simply represents the zero of energy, where all $N$ atoms are infinitely separated.
S17.20
Let $x = e^{-\beta h \nu}$; then $\sum_{v=0}^\infty e^{-\beta v h \nu} = \dfrac{1}{1- e^{-\beta h \nu}}$, so that $q_{ho}(T) = \dfrac{e^{-\beta h \nu/2}}{1-e^{-\beta h \nu}}$.
$Q = q^N = (q_{vib}q_{elec})^N$
with $q_{elec}=e^{-\beta U_0 /N}$ and $q_{vib} = q_{ho}^3$, giving
$Q = e^{-\beta U_0}\bigg( \dfrac{e^{-\beta h \nu/2}}{1-e^{-\beta h \nu}} \bigg)^{3N}$
Q17.21
Evaluate
$S = \sum_{i=1}^{2} \,\sum_{j=0}^{1}\,x^{i}y^{j}= x(1+y)+x^2(1+y) = (x+x^2)(1+y)$
by summing over j first then over i. Now obtain the same result by writing S as a product of two separate summations.
S17.21
Summing over $j$ first and then over $i$ gives $S = x(1+y)+x^2(1+y)$. Alternatively, $S$ can be written as a product of two separate summations:
$S = \left(\sum_{i=1}^{2} x^{i}\right)\left(\sum_{j=0}^{1} y^{j}\right) = (x+x^2)(1+y)$
Q17.22
Evaluate:
$S = \sum_{i=0}^{4} \,\sum_{j=1}^{i}\,(5+j)$
S17.22
$(1+5) +[(1+5)+(2+5)]+[(1+5)+(2+5)+(3+5)]+[(1+5)+(2+5)+(3+5)+(4+5)]=6+13+21+30=70$
Q17.24
Consider a system of two noninteracting identical fermions, each of which has states with energies $\varepsilon_1$, $\varepsilon_2$ and $\varepsilon_3$. How many terms are there in the unrestricted evaluation of $Q(2,V,T)$?
S17.24
Q(2,V,T) = $\Sigma e^{-\beta(\varepsilon_i+\varepsilon_j)}$
Total terms = 9. From these 9 terms that will appear in the unrestricted evaluation of Q, only 3 of them will be allowed for 2 identical femions:
$\varepsilon_1$ + $\varepsilon_2$
$\varepsilon_1$ + $\varepsilon_3$
$\varepsilon_2$ + $\varepsilon_3$
The six remaining terms do not contribute: the three "diagonal" terms $2\varepsilon_1$, $2\varepsilon_2$, and $2\varepsilon_3$ are forbidden by the Pauli exclusion principle, and the three permuted terms merely duplicate the allowed combinations because the particles are indistinguishable:
$\varepsilon_1 + \varepsilon_2 = \varepsilon_2 + \varepsilon_1$
$\varepsilon_1 + \varepsilon_3 = \varepsilon_3 + \varepsilon_1$
$\varepsilon_2 + \varepsilon_3 = \varepsilon_3 + \varepsilon_2$
Q17.25
Looking at problem 17.24, how many allowed terms are there in the case of bosons instead of fermions?
S17.25
There are six allowable terms: the three that were found in Q17.24 plus an additional three. They are as follows:
$\epsilon_1 + \epsilon_3$
$\epsilon_1 + \epsilon_2$
$\epsilon_2 + \epsilon_3$
$\epsilon_1 + \epsilon_1$
$\epsilon_2 + \epsilon_2$
$\epsilon_3 + \epsilon_3$
Q17.26
Consider a system of three noninteracting identical bosons, each of which has states with energies $\varepsilon_{1}$, $\varepsilon_{2}$, and $\varepsilon_{3}$. How many terms are there in the unrestricted evaluation of $Q(3,V,T)?$ How many terms occur in $Q(3,V,T)$ when the boson restriction is taken into account? Enumerate the allowed total energies in the summation of Equation 17.37: $Q(N,V,T) = \sum_{i,j,k..} e^{-\beta(\varepsilon_{i}+\varepsilon_{j}+\varepsilon_{k}...)}$
S17.26
There are $3^3 = 27$ terms in the unrestricted evaluation of $Q(3,V,T)$. When the boson restriction is taken into account, ten allowed terms remain, given by the total energies
$1. \varepsilon_{1} + \varepsilon_{2} + \varepsilon_{3}$
$2. \varepsilon_{1} + \varepsilon_{1} + \varepsilon_{2}$
$3. \varepsilon_{1} + \varepsilon_{1} + \varepsilon_{3}$
$4. \varepsilon_{1} + \varepsilon_{2} + \varepsilon_{2}$
$5. \varepsilon_{2} + \varepsilon_{2} + \varepsilon_{3}$
$6. \varepsilon_{1} + \varepsilon_{3} + \varepsilon_{3}$
$7. \varepsilon_{2} + \varepsilon_{3} + \varepsilon_{3}$
$8. \varepsilon_{1} + \varepsilon_{1} + \varepsilon_{1}$
$9. \varepsilon_{2} + \varepsilon_{2} + \varepsilon_{2}$
$10. \varepsilon_{3} + \varepsilon_{3} + \varepsilon_{3}$
Q17.26
Consider a system of two noninteracting bosons, each of which has states with energies $\epsilon_{1},\epsilon_{2},\epsilon_{3},\epsilon_{4}$. How many terms are there in the unrestricted evaluation of $Q\big(2,V,T\big)$? Write the allowed energies in the summation.
S17.26
There are $4^2 = 16$ terms in the unrestricted evaluation. When the boson restriction is taken into account, there are 10 allowed terms, given by the total energies
$\epsilon_{1} +\epsilon_{2}$ $\epsilon_{2} +\epsilon_{2}$
$\epsilon_{1} +\epsilon_{3}$ $\epsilon_{2} +\epsilon_{3}$
$\epsilon_{1} +\epsilon_{4}$ $\epsilon_{2} +\epsilon_{4}$
$\epsilon_{1} +\epsilon_{1}$ $\epsilon_{3} +\epsilon_{4}$
$\epsilon_{4} +\epsilon_{4}$ $\epsilon_{3} +\epsilon_{3}$
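The counting in Problems 17.24-17.26 can be verified by brute-force enumeration. The following Python sketch (an illustration, not part of the original solutions) counts unrestricted, boson-allowed, and fermion-allowed terms for each case.

```python
from itertools import product, combinations, combinations_with_replacement

def count_states(n_levels, n_particles):
    levels = range(1, n_levels + 1)
    unrestricted = len(list(product(levels, repeat=n_particles)))            # all ordered assignments
    bosons = len(list(combinations_with_replacement(levels, n_particles)))   # unordered, repeats allowed
    fermions = len(list(combinations(levels, n_particles)))                  # unordered, no repeats (Pauli)
    return unrestricted, bosons, fermions

# Problems 17.24-17.26: 2 particles / 3 levels, 3 particles / 3 levels, 2 particles / 4 levels
for n_levels, n_particles in [(3, 2), (3, 3), (4, 2)]:
    u, b, f = count_states(n_levels, n_particles)
    print(f"{n_particles} particles, {n_levels} levels: "
          f"unrestricted = {u}, bosons = {b}, fermions = {f}")
```

The output (9/6/3, 27/10/1, and 16/10/6) matches the counts obtained by hand above.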
In chemistry, we are typically concerned with a collection of molecules. However, if the molecules are reasonably far apart as in the case of a dilute gas, we can approximately treat the system as an ideal gas system and ignore the intermolecular forces. The present chapter deals with systems in which intermolecular interactions are ignored. In ensemble theory, we are concerned with the ensemble probability density, i.e., the fraction of members of the ensemble possessing certain characteristics such as a total energy E, volume V, number of particles N or a given chemical potential μ and so on. The molecular partition function enables us to calculate the probability of finding a collection of molecules with a given energy in a system. The equivalence of the ensemble approach and a molecular approach may be easily realized if we treat part of the molecular system to be in equilibrium with the rest of it and consider the probability distribution of molecules in this subsystem (which is actually quite large compared to systems containing a small number of molecules of the order of tens or hundreds).
• 18.1: Translational Partition Functions of Monatomic Gases
The energy levels of translation are very closely spaced, so a large number of translational states are accessible and available for occupation by the molecules of a gas. This result is very similar to the result of the classical kinetic gas theory.
• 18.2: Most Atoms are in the Ground Electronic State
The energy difference between the ground electronic state of a system and its first excited state is typically much larger than the thermal energy, \(kT\). This means that most atoms are in their ground electronic state, unless the temperature of the system is very high.
• 18.3: The Energy of a Diatomic/Polyatomic Molecule Can Be Approximated as a Sum of Separate Terms
A reasonable partition function of a diatomic/polyatomic molecule is the product of the partition function for the translational, vibrational, rotational, and electronic degrees of freedom. The total energy of the molecule then becomes the sum of the translational, vibrational, rotational, and electronic energies.
• 18.4: Most Molecules are in the Ground Vibrational State
At room temperature, most molecules are in the ground vibrational state. This is because the vibrational energies of molecules are larger than the average thermal energy available.
• 18.5: Most Molecules are Rotationally Excited at Ordinary Temperatures
At room temperature, many rotational states will be populated. This is due to the smaller rotational energies compared to vibrational or electronic energies.
• 18.6: Rotational Partition Functions of Diatomic Gases Contain a Symmetry Number
Homonuclear diatomic molecules have a high degree of symmetry and rotating the molecule by 180° brings the molecule into a configuration which is indistinguishable from the original configuration. This leads to an overcounting of the accessible states. To correct for these symmetry factors, we divide the partition function by \(σ\), which is called the symmetry number.
• 18.7: Vibrational Partition Functions of Polyatomic Molecules Include the Partition Function for Each Normal Coordinate
The partition function for polyatomic molecules include the partition functions for translational, electronic, vibrational, and rotational states. For translational states, the number of states available is far greater than the number of molecules. For electronic states, we only consider the ground electronic state due to the large gap between electronic states. For vibrational states, we include all the normal modes of vibration.
• 18.8: Rotational Partition Functions of Polyatomic Molecules Depend on the Shape of the Molecule
For a polyatomic molecule containing $N$ atoms, the total number of degrees of freedom is $3N$. Out of these, three degrees of freedom are taken up by the translational motion of the molecule as a whole. The translational partition function was discussed previously, so here we consider the three rotational degrees of freedom and the $3N - 6$ vibrational degrees.
• 18.9: Molar Heat Capacities
The heat capacity of a substance is a measure of how much heat is required to raise the temperature of that substance by one degree Kelvin. For a simple molecular gas, the molecules can simultaneously store kinetic energy in the translational, vibrational, and rotational motions associated with the individual molecules. In this case, the heat capacity of the substance can be broken down into translational, vibrational, and rotational contributions.
• 18.10: Ortho and Para Hydrogen
The molecules of hydrogen can exist in two forms depending on the spins on the two hydrogen nuclei. If both the nuclear spins are parallel, the molecule is called ortho and if the spins are antiparallel, it is referred to as para (In disubstituted benzene, para refers to the two groups at two opposite ends, while in ortho, they are adjacent or “parallel” to each other).
• 18.11: The Equipartition Principle
The equipartition theorem states that every degree of freedom that appears only quadratically in the total energy has an average energy of ½kT in thermal equilibrium and contributes ½k to the system's heat capacity. Here, k is the Boltzmann constant, and T is the temperature in Kelvin. The law of equipartition of energy states that each quadratic term in the classical expression for the energy contributes ½kBT to the average energy.
• 18.E: Partition Functions and Ideal Gases (Exercises)
18: Partition Functions and Ideal Gases
Let us consider the translational partition function of a monatomic gas particle confined to a cubic box of length $L$. The particle inside the box has translational energy levels given by:
$E_\text{trans}= \dfrac{h^2 \left(n_x^2+ n_y^2+ n_z^2 \right)}{8 mL^2} \nonumber$
where $n_x$, $n_y$ and $n_z$ are the quantum numbers in the three directions. The translational partition function is given by:
$q_\text{trans} = \sum_i e^{−\epsilon_i/kT} \nonumber$
Because the total translational energy is a sum of independent contributions from the three directions, this sum factorizes, and we can write the translational partition function as the product of the partition functions for each direction:
\begin{align} q_\text{trans} &= q_{x} q_{y} q_{z} \label{times1} \[4pt] &= \sum_{n_x=1}^{\infty} e^{−\epsilon_x/kT} \sum_{n_y=1}^{\infty} e^{−\epsilon_y/kT} \sum_{n_z=1}^{\infty} e^{−\epsilon_z/kT} \label{sum1} \end{align}
Since the levels are very closely spaced (continuous), we can replace each sum in Equation $\ref{sum1}$ with an integral. For example:
\begin{align} q_{x} &= \sum_{n_x=1}^{\infty} e^{−\epsilon_x/kT} \[4pt] &\approx \int_{n_x=1}^{\infty} e^{−\epsilon_x/kT}\, dn_x \label{int1} \end{align}
and after substituting the energy for the relevant dimension:
$\epsilon_x = \dfrac{h^2 n_x^2}{8mL^2} \nonumber$
we can extend the lower limit of integration in the approximation of Equation $\ref{int1}$:
$q_x= \int_{1}^{\infty} e^{− \frac{h^2 n_x^2}{8mL^2 kT}}\, dn_x \approx \int_{0}^{\infty} e^{− \frac{h^2 n_x^2}{8mL^2 kT}}\, dn_x \nonumber$
We then use the following solved Gaussian integral:
$\int_0^{\infty} e^{-an^2} dn = \sqrt{\dfrac{\pi}{4a}} \nonumber$
with the following substitution:
$a = \dfrac{h^2}{8mL^2 kT} \nonumber$
we get:
$q_x= \dfrac{1}{2} \sqrt{\dfrac{π}{a}} = \left( \dfrac{2 π m kT }{ h^2 } \right)^{1/2} L \nonumber$
or more commonly presented as:
$q_x = \dfrac{L}{\Lambda} \nonumber$
where $\Lambda$ is the thermal de Broglie wavelength, given by
$\Lambda = \dfrac{h}{\sqrt{2 π m kT}} \nonumber$
Multiplying the expressions for $q_x$, $q_y$ and $q_z$ (Equation $\ref{times1}$) and using $V$ as the volume of the box $L^3$, we arrive at:
$q_\text{trans} = \left( \dfrac{2 π m kT}{h^2} \right)^{3/2} V = \dfrac{ V}{\Lambda^3} \label{parttransation}$
This is usually a very large number (on the order of $10^{20}$) for a volume of 1 cm$^3$ and a small molecular mass. This means that a very large number of translational states are accessible and available for occupation by the molecules of a gas. This result is very similar to the result of classical kinetic gas theory, which states that the observed energy of an ideal gas is:
$U=\dfrac{3}{2} nRT \nonumber$
We postulate, therefore, that the observed energy of a macroscopic system should equal the statistical average over the partition function, as shown above. In other words, if you know the particles your system is composed of and their energy states, you can use statistics to calculate what you should observe for the whole ensemble.
Example
Calculate the translational partition function of an $I_2$ molecule at 300K. Assume V to be 1 liter.
Solution
Mass of $I_2$ is $2 \times 127 \times 1.6606 \times 10^{-27} kg$
\begin{align*} 2πmkT &= 2 \times 3.1415 \times (2 \times 127 \times 1.6606 \times 10^{-27}\, kg) \times 1.3807 \times 10^{-23} \, J/K \times 300 K \[4pt] &= 1.0969 \times 10^{-44}\; J\, kg \end{align*} \nonumber
\begin{align*} Λ &= \dfrac{h}{\sqrt{2 π m kT}} \[4pt] &= \dfrac{6.6262 \times 10^{-34}\;J\, s}{ \sqrt{1.0969 \times 10^{-44}\, J \, kg}} = 6.326 \times 10^{-12}\;m \end{align*} \nonumber
Then, via Equation \ref{parttransation}:
$q_\text{trans}= \dfrac{V}{Λ^3}= \dfrac{1000 \times 10^{-6} m^3}{(6.326 \times 10^{-12} \; m)^3}= 3.95 \times 10^{30} \nonumber$
This means that $3.95 \times 10^{30}$ quantum states are thermally accessible to the molecular system.
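As a quick check, the arithmetic in this example can be reproduced with a few lines of code. The following is a minimal sketch (Python is used purely for illustration; the constants are standard values rounded to four figures, and the mass and volume are those assumed in the example above):

```python
# Rough numerical check of the I2 example above: Lambda and q_trans at 300 K in 1 L.
import math

h   = 6.626e-34    # Planck constant, J s
k   = 1.381e-23    # Boltzmann constant, J/K
amu = 1.661e-27    # atomic mass unit, kg

m = 2 * 127 * amu  # mass of I2, kg (as assumed in the example)
T = 300.0          # temperature, K
V = 1.0e-3         # volume, m^3 (1 liter)

Lam = h / math.sqrt(2 * math.pi * m * k * T)   # thermal de Broglie wavelength
q_trans = V / Lam**3

print(f"Lambda  = {Lam:.3e} m")     # ~6.3e-12 m
print(f"q_trans = {q_trans:.3e}")   # ~4e30
```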
Contributors and Attributions
• www.chem.iitb.ac.in/~bltembe/pdfs/ch_3.pdf
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(LibreTexts)/18%3A_Partition_Functions_and_Ideal_Gases/18.01%3A_Translational_Partition_Functions_of_Monotonic_Gases.txt
|
We write the electronic energies as $E_1, E_2 ,E_3, \ldots$ with corresponding degeneracies $g_1, g_2, g_3, \ldots$. The electronic partition function is then given by the following summation:
$q_{el}(T) = g_1 e^{-E_1/kT} + g_2 e^{-E_2/kT} + g_3 e^{-E_3/kT} + \ldots \label{Q1}$
Usually, the differences in electronic energies are significantly greater than thermal energy $kT$:
$k T \ll E_2 - E_1 < E_3 - E_1 < \ldots \nonumber$
If we treat the lowest energy electronic state $E_1$ as the reference value of zero of energy, the electronic partition function (Equation \ref{Q1}) can be approximated as:
$q_{elec}(T) = g_1 + g_2 e^{-E_2/kT} + g_3 e^{-E_3/kT} + \ldots \label{3.24}$
Typically, excited electronic states lie tens of thousands of wavenumbers above the ground state. For example, the first excited electronic state of nitric oxide (NO) is ~40,000 cm$^{-1}$ above the ground state. Using this value:
$\frac{E_2}{kT} = \frac{hc \times 40000\text{ cm}^{-1}}{kT} \approx \frac{5.8 \times 10^{4}\text{ K}}{T} \nonumber$
That means that even at 1,000 K, the value of the second term in Equation $\ref{3.24}$ is:
$g_2 e^{-58} \approx 10^{-25}\, g_2 \nonumber$
The result is that the higher electronic states are not accessible under ordinary temperatures. There are some cases, however, where the first excited state lies much closer to the ground state, but these are the exception rather than the rule.
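To get a feel for how strongly such a gap suppresses the excited-state population, the Boltzmann factor can be evaluated at a few temperatures. A minimal sketch (the 40,000 cm⁻¹ gap is the value quoted above; a degeneracy of one is assumed for simplicity):

```python
# Boltzmann factor exp(-E2/kT) for an electronic gap of 40,000 cm^-1.
import math

hc_over_k = 1.4388     # cm K (hc/k, the "second radiation constant")
E2 = 40000.0           # cm^-1, the gap quoted in the text

for T in (1000.0, 3000.0, 10000.0):
    print(f"T = {T:7.0f} K   exp(-E2/kT) = {math.exp(-E2 * hc_over_k / T):.2e}")
# Even at 1000 K the factor is ~1e-25, so the excited state is effectively unpopulated.
```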
Example 18.2.1
Find the electronic partition of $\ce{H_2}$ at 300 K.
Solution
The lowest electronic energy level of $\ce{H_2}$ is near $- 32\; eV$ and the next level is about $5\; eV$ higher. Taking -32 eV as the zero (or reference value of energy), then
$q_{el} = 1 + e^{-5\, eV/ kT} + ... \nonumber$
At 300 K, $kT \approx 0.025\; eV$, so $5\, eV/kT \approx 200$, and
\begin{align*} q_{el} &= 1 + e^{-200} +... \[4pt] &\approx 1.0 \end{align*} \nonumber
Where all terms other than the first are essentially 0. This implies that $q_{el} = 1$.
The physical meaning of the result from Example 18.2.1 is that only the ground electronic state is generally thermally accessible at room temperature.
Contributors and Attributions
• www.chem.iitb.ac.in/~bltembe/pdfs/ch_3.pdf
18.03: The Energy of a Diatomic Molecule Can Be Approximated as a Sum of Separate Terms
A monatomic gas has three degrees of freedom per molecule, all of them translational:
1. movement in x-direction
2. movement in y-direction
3. movement in z-direction
A polyatomic gas, including diatomic molecules, has other levels that you can 'stuff' energy into. Polyatomic molecules can rotate and vibrate, and if enough energy is available you could also excite the electrons involved in the $σ$ and $π$ bonds. A reasonable approximation of the partition function of the molecule would become:
$q_\text{tot}(V,T) = q_\text{trans}q_\text{vib}q_\text{rot}q_\text{elec}$
The partition function of a molecular ideal gas is then:
$Q(N,V,T) = \frac{(q_\text{trans}q_\text{vib}q_\text{rot}q_\text{elec})^N}{N!}$
The total energy of the molecule is then the sum of the translational, vibrational, rotational, and electronic energies:
$E_\text{tot} = E_\text{trans}+E_\text{vib}+E_\text{rot}+E_\text{elec}$
We will only scratch the surface of the additional degrees of freedom and their partition functions.
Electronic
At room temperature the system is usually in its ground electronic state. This means that the electronic partition function $q_\text{elec} = 1$. Usually we do not have to worry about these degrees of freedom. If we do, there are usually just a few levels to worry about, along with their degeneracy $g$. If there is a single state at a certain energy ($g=1$), two states ($g=2$), or more states, we must multiply the Boltzmann factor by this degeneracy number.
If there are more than one state to worry about, we could follow the same procedure as we did for the translational states:
1. Define the energy states and their degeneracies
2. Compose the partition function $q$ for the molecule and $Q$ for the gas
3. Use the ($\beta$ or $T$) derivative of $\ln Q$ to determine $\langle E\rangle$
4. Use the $T$ derivative of $U \approx\langle E\rangle$ to find the contribution to the heat capacity
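As an illustration of this recipe, the sketch below works through a hypothetical two-level system (a nondegenerate ground state at zero energy and an excited level at energy $\epsilon$ with degeneracy $g$); sympy is used only to carry out the $\beta$-derivatives symbolically. The system and symbol names are illustrative assumptions, not a specific molecule.

```python
# Four-step recipe for a hypothetical two-level molecule (levels 0 and epsilon,
# degeneracies 1 and g).
import sympy as sp

beta, eps, g, N, kB, T = sp.symbols('beta epsilon g N k_B T', positive=True)

# Steps 1-2: molecular partition function q; ln Q = N ln q (the N! term in Q
# does not depend on beta and therefore drops out of the energy).
q = 1 + g * sp.exp(-beta * eps)
lnQ = N * sp.log(q)

# Step 3: <E> = -d(ln Q)/d(beta)
E_avg = sp.simplify(-sp.diff(lnQ, beta))

# Step 4: C_V = d<E>/dT, with beta = 1/(k_B T)
C_V = sp.simplify(sp.diff(E_avg.subs(beta, 1 / (kB * T)), T))

print(E_avg)   # N*g*epsilon*exp(-beta*epsilon)/(1 + g*exp(-beta*epsilon))
print(C_V)     # the corresponding two-level (Schottky-type) heat capacity
```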
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(LibreTexts)/18%3A_Partition_Functions_and_Ideal_Gases/18.02%3A_Most_Atoms_are_in_the_Ground_Electronic_State.txt
|
The vibrational energy levels of a diatomic are given by:
$E_v = (v +1/2) h \nu$
where $\nu$ is the vibrational frequency and $v$ is the vibrational quantum number. In this case, it is easy to sum the geometric series shown below:
\begin{align*} q_\text{vib} &= \sum_{v=0}^{\infty} e^{-( v + 1/2) h\nu / k T} \[4pt] &= e^{-h \nu/ 2kT} \left( 1 + e^{-h \nu/ kT} + e^{-2 h \nu/ kT} + ... \right) \end{align*} \nonumber
or rewritten as:
$q_\text{vib} = e^{-h \nu / 2kT} \left( 1 +x + x^2 +x^3 + ... \right) \label{eq0}$
where $x = e^{-h \nu/kT}$. Given the following power series expansion:
$\dfrac{1}{1 - x} = 1 + x + x^2 + x^3 + x^4 + .... \nonumber$
Equation $\ref{eq0}$ can be rewritten as:
$q_\text{vib} = e^{-h \nu / 2kT} \left( \dfrac{1}{1-x} \right) \nonumber$
or:
$q_\text{vib} = \dfrac{e^{-h \nu / 2 k T}} {1 - e^{-h \nu/ k T}} \label{eq1}$
If the zero of the energy scale is taken at the zero-point energy $h \nu /2$, then Equation $\ref{eq1}$ can be rewritten as:
$q_\text{vib} \approx \dfrac{1} {1 - e^{-h \nu / kT}} \label{VIBQ}$
A vibrational temperature $Θ_\text{vib}$ may be defined as:
$Θ_\text{vib}= \dfrac{ hc \tilde{\nu}}{k} \nonumber$
where $\tilde{\nu}$ is the vibrational frequency in cm$^{-1}$. $Θ_\text{vib}$ expresses the stiffness of the vibrating bond as a temperature. Because the stiffness depends on the particular bond, this characteristic temperature lets us compare different molecules on a common reduced scale, just as the critical temperature did for non-ideal gases.
Table 18.4.1 : Representative molecular data for a few diatomics

| Molecule | $g$ | Bond Length (Å) | $ω$ (cm$^{-1}$) | $Θ_\text{vib}$ (K) | $\tilde{B}$ (cm$^{-1}$) | $Θ_\text{rot}$ (K) | Force constant $k$ ($10^5$ dyn/cm) | $D_0$ (kcal/mol) |
|---|---|---|---|---|---|---|---|---|
| $H_2$ | 1 | 0.7474 | 4400 | 6332 | 60.9 | 87.6 | 5.749 | 103.2 |
| $D_2$ | 1 | 0.7415 | 3118 | 4487 | 30.45 | 43.8 | 5.77 | 104.6 |
| $N_2$ | 1 | 1.097 | 2358 | 3393 | 2.001 | 2.99 | 22.94 | 225.1 |
| $O_2$ | 3 | 1.207 | 1580 | 2274 | 1.446 | 2.08 | 11.76 | 118.0 |
| $Cl_2$ | 1 | 1.987 | 560 | 805 | 0.244 | 0.351 | 3.2 | 57.0 |
| $CO$ | 1 | 1.128 | 2170 | 3122 | 1.931 | 2.78 | 19.03 | 255.8 |
| $NO$ | 2 | 1.15 | 1890 | 2719 | 1.695 | 2.45 | 15.7 | 150.0 |
| $HCl$ | 1 | 1.275 | 2938 | 4227 | 10.44 | 15.02 | 4.9 | 102.2 |
| $HI$ | 1 | 1.609 | 2270 | 3266 | 6.46 | 9.06 | 3.0 | 70.5 |
| $Na_2$ | 1 | 3.096 | 159 | 229 | 0.154 | 0.221 | 0.17 | 17.3 |
| $K_2$ | 1 | 3.979 | 92.3 | 133 | 0.0561 | 0.081 | 0.10 | 11.8 |
Example 18.4.1
The vibrational frequency of $I_2$ is $214.57\; cm^{-1}$. Calculate the vibrational partition function of $I_2$ at 300 K.
Solution:
$\dfrac{h\nu}{kT} = \dfrac{ 214.57}{209.7} = 1.0232 \nonumber$
so
$e^{-h\nu/kT} = 0.3595 \nonumber$
and
$q_\text{vib} = \dfrac{1}{1-0.3595} = 1.561 \nonumber \nonumber$
This implies, as before, that very few vibrational states are accessible: far fewer than rotational states, and many orders of magnitude fewer than translational states.
Vibrational Heat Capacity
The vibrational energy is given by the above expression and the molar heat capacity at constant volume, $\bar{C}_V$ is given by:
$\bar{C}_V= \left(\dfrac{\partial E}{\partial T} \right)_V \nonumber$
We have:
\begin{align} \dfrac{\partial }{\partial T} &= \dfrac{\partial \beta}{\partial T} \dfrac{∂}{\partial \beta} \[4pt] &= \dfrac{-1}{kT^2} \dfrac{\partial }{\partial \beta} \[4pt] &= \left(-k \beta^2 \right) \left(\dfrac{\partial }{\partial \beta}\right) \label{3.54} \end{align}
Therefore, per mole of oscillators:
$\bar{C}_V = -N_A k \beta^2 \left(\dfrac{\partial \langle \epsilon_\text{vib} \rangle}{\partial \beta}\right) \nonumber$
Introducing the average vibrational energy obtained from the partition function (Equation $\ref{VIBQ}$),
$\langle \epsilon_\text{vib} \rangle = \dfrac{h \nu\, e^{-h \nu / k T}}{1 - e^{-h \nu/ k T}} \nonumber$
and carrying out the differentiation gives:
$\bar{C}_V = R \left( \dfrac{ Θ_\text{vib} }{T} \right)^2 \dfrac{ e^{- Θ_\text{vib} /T} }{ \left( 1- e^{- Θ_\text{vib}/T} \right)^2 } \label{FinalQ}$
For large $T$, the $\bar{C}_V$ becomes:
$N_A k = R \nonumber$
and for small T, $\bar{C}_V$ goes to zero as demonstrated in Figure 18.4.1 .
The vibrational heat capacity is shown as a function of the reduced temperature $T/Θ$ to give a general picture valid for all diatomic gases. Compare the parameters in Table 18.4.1 to see the very different absolute scales involved for different gases. Clearly the vibrational contribution to the heat capacity depends on temperature. For many molecules (especially light ones), the vibrational contribution only kicks in at quite high temperatures.
The value of $Θ_\text{vib}$ is determined mostly by
1. the strength of a bond (the stronger the higher $Θ_\text{vib}$ )
2. the (effective) mass of the molecule (the lighter the higher $Θ_\text{vib}$ )
Molecules with low $Θ_\text{vib}$ often dissociate at lower temperatures, although the harmonic oscillator model is not sufficient to describe that phenomenon.
Vibrational Populations
We can calculate the fraction of molecules in each vibrational state. The fraction of molecules in the $v$th vibrational state is given by:
$f_v = \frac{e^{-h\nu(v+\frac{1}{2})/kT}}{q_\text{vib}}$
Substituting in Equation $\ref{eq1}$, we get:
$\begin{split} f_v &= \left(1-e^{-h\nu/kT}\right)e^{-h\nu v/kT} \ &= \left(1-e^{-\Theta_\text{vib}/T}\right)e^{-\Theta_\text{vib} v/T} \end{split} \label{vib1}$
We can plot the fraction of molecules in each vibrational state.
From the figure, we can see that most of the Cl2 molecules are in the ground vibrational state at room temperature (300 K). This is true for most molecules. Only molecules with very weak bonds and low vibrational temperatures will populate a significant fraction of molecules in excited vibrational states.
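The numbers behind this figure are easy to reproduce from Equation $\ref{vib1}$ and the vibrational temperature of Cl₂ in Table 18.4.1. A minimal sketch:

```python
# Vibrational populations of Cl2 at 300 K (Theta_vib = 805 K from Table 18.4.1).
import math

theta_vib = 805.0   # K
T = 300.0

for v in range(4):
    f_v = (1.0 - math.exp(-theta_vib / T)) * math.exp(-theta_vib * v / T)
    print(f"v = {v}:  f_v = {f_v:.4f}")
# Expected: f_0 ~ 0.93, f_1 ~ 0.06, higher levels essentially unpopulated.
```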
Contributors and Attributions
• www.chem.iitb.ac.in/~bltembe/pdfs/ch_3.pdf
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(LibreTexts)/18%3A_Partition_Functions_and_Ideal_Gases/18.04%3A_Most_Molecules_are_in_the_Ground_Vibrational_State.txt
|
The rotational quantum number is $J$ and the energy of the rotations is:
$E(J)= \tilde{B} J(J+1) \nonumber$
Where $\tilde{B}$ is in units of cm-1 and is equivalent to:
$\frac{h}{8\pi^2Ic} \nonumber$
Here, $c$ is the speed of light in cm/s, so that $\tilde{B}$ has units of cm$^{-1}$. With the exception of the ground state (i.e., $J=0$), multiple eigenstates exist for a given value of $J$. This is a form of degeneracy, $g$, and the number of levels per energy goes as 1, 3, 5, 7, 9, etc., or:
$g(J)=2J+1 \nonumber$
Therefore, the summation to find the partition function $q_\text{rot}(T)$ contains an extra factor $g=2J+1$. This factor is mathematically very handy, because it is the derivative of $J(J+1)$ with respect to $J$:
$\dfrac{d}{dJ}\, J(J+1) = 2J+1 \nonumber$
This makes it possible to change variables to $X=J(J+1)$ and the integration becomes very easy. Notice that just like in the translation partition function, we approximate the summation with an integral. However, because we are dealing with far fewer levels, this is less justified in the rotational case. How justified it is depends on the gas and the temperature we consider. Roughly speaking we should be at a temperature $T>> Θ_\text{rot}$.
Exercise 18.5.1
Look at table 18.2. When do we need to actually work out the (discrete) summation instead of making a (continuous) integration out of it?
Answer
Only for the lightest gases like $\ce{He}$ and $\ce{H2}$ do we need to worry and then only at pretty low cryogenic circumstances. At room temperature, we can take the rotational levels as a continuous set and use an integral. Notice that the vibrational $Θ$ values are much larger! At $T=300\,\text{K}$, usually only a single level is occupied and we are in the discrete limit. On the surface of the sun that would be a different matter of course.
The rotational constant $B$ is directly linked to the moment of inertia:
$I = μr^2$
Where $μ$ is the reduced mass of the molecule:
$\dfrac{1}{μ} = \dfrac{1}{m_1} + \dfrac{1}{m_2} \nonumber$
and $r$ is the bond length. Again we can scale the behavior of different systems to one and the same picture by introducing a characteristic temperature:
$\Theta_{rot} = \dfrac{hc\tilde{B}}{k} = \dfrac{\hbar^2}{2Ik} \nonumber$
As we did for the translations, we can calculate the average energy $\langle \epsilon \rangle$. For vibrations this is a relatively complicated function of temperature. For rotations, however, the average energy per molecule is simply $kT$ (i.e., $NkT$ for $N$ molecules), and this means that the rotational contribution to the molar $C_v$ of a diatomic is simply $R$. As the vibrational and electronic contributions to the heat capacity are typically negligible at room temperature, we get:
• monatomic gas $C_v= C_v^{trans} = \dfrac{3}{2} nR \nonumber$
• diatomic \begin{align} C_v &= C_v^{trans}+C_v^{rot} \[4pt] &= \dfrac{3}{2} nR + nR \[4pt] &= \dfrac{5}{2}nR \end{align} \nonumber
Rotational energies are less than vibrational and electronic energies and, at room temperature, many rotational states will be populated.
18.06: Rotational Partition Functions of Diatomic Gases
The rotational energy levels of a diatomic molecule are given by:
$E_\text{rot}(J) = \tilde{B} J (J + 1) \label{Eq0}$
where:
$\tilde{B} = \dfrac{h}{8 π^2 I c} \nonumber$
Here, $\tilde{B}$ is the rotational constant expressed in cm-1. The rotational energy levels are given by:
$E_J = \dfrac{J(J+1) h^2}{8 \pi^2 I} \nonumber$
where $I$ is the moment of inertia of the molecule given by $μr^2$ for a diatomic, and $μ$ is the reduced mass, and $r$ the bond length (assuming rigid rotor approximation). The energies can be also expressed in terms of the rotational temperature, $Θ_\text{rot}$, defined as:
$Θ_\text{rot} = \dfrac{h^2}{8 \pi^2 I k} \label{3.12}$
The interpretation of $θ_\text{rot}$ is as an estimate of the temperature at which thermal energy ($\approx kT$) is comparable to the spacing between rotational energy levels. At about this temperature the population of excited rotational levels becomes important. See Table 1.
Table 1: Select Rotational Temperatures. In each case the value refers to the most common isotopic species.
| Molecule | $H_2$ | $N_2$ | $O_2$ | $F_2$ | $HF$ | $HCl$ | $CO_2$ | $HBr$ | $CO$ |
|---|---|---|---|---|---|---|---|---|---|
| $\Theta_\text{rot}$ (K) | 87.6 | 2.88 | 2.08 | 1.27 | 30.2 | 15.2 | 0.561 | 12.2 | 2.78 |
The rotational partition function ($q_\text{rot}$), Equation $\ref{3.13}$, can be evaluated by explicit summation:
$q_\text{rot} = \sum_{J=0}^{\infty} (2J+1) e^{-E_J/ k T} \label{3.13}$
if only a finite number of terms contribute. The factor $(2J+1)$ for each term in the expansion accounts for the degeneracy of a rotational state $J$. For each allowed energy $E_J$ from Equation $\ref{Eq0}$ there are $(2 J + 1)$ eigenstates. The Boltzmann factor must be multiplied by $(2J+ 1)$ to properly account for the degeneracy of these states:
$(2J+ 1)e^{ -E_J / k T}$
If the rotational energy levels are lying very close to one another, we can integrate similar to what we did for $q_{trans}$ previously to get:
$q_\text{rot} = \int _0 ^{\infty} (2J+1) e^{-\tilde{B} J (J+1) / k T}\, dJ \nonumber$
This integration can easily be done by substituting $x = J ( J+1)$ and $dx = (2J + 1) dJ$:
$q_\text{rot} = \dfrac{kT}{\tilde{B}} \label{3.15}$
For a homonuclear diatomic molecule, rotating the molecule by 180° brings the molecule into a configuration which is indistinguishable from the original configuration. This leads to an overcounting of the accessible states. To correct for this, we divide the partition function by $σ$, which is called the symmetry number and is equal to the distinct number of ways by which a molecule can be brought into identical configurations by rotations. The rotational partition function becomes:
$q_\text{rot}= \dfrac{kT}{\tilde{B} σ} \label{3.16}$
or commonly expressed in terms of $Θ_\text{rot}$:
$q_\text{rot}= \dfrac{T}{ Θ_\text{rot} σ} \label{3.17}$
Example 18.6.1
What is the rotational partition function of $H_2$ at 300 K?
Solution
The value of $\tilde{B}$ for $H_2$ is 60.864 cm$^{-1}$. The value of $kT$ expressed in cm$^{-1}$ is obtained by dividing by $hc$: $kT/hc = 209.7\; cm^{-1}$ at 300 K. $σ = 2$ for a homonuclear molecule. Therefore, from Equation $\ref{3.16}$,
\begin{align*} q_\text{rot} &= \dfrac{kT}{\tilde{B} σ} \[4pt] &= \dfrac{209.7 \;cm^{-1} }{(2) (60.864\; cm^{-1})} \[4pt] &= 1.723 \end{align*} \nonumber
Since the rotational constant of $H_2$ is quite large, only the first few rotational states are accessible at 300 K.
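For a light molecule like H₂ it is instructive to compare the high-temperature formula with an explicit sum over $J$. The sketch below does this with the same numbers as the example; nuclear-spin (ortho/para) statistics are ignored here, and the sum is simply divided by $\sigma = 2$, so the comparison only illustrates how rough the continuum (integral) approximation is when $\Theta_\text{rot}$ is not small compared to $T$:

```python
# H2 at 300 K: high-temperature formula kT/(sigma*B) vs. an explicit sum over J.
import math

B  = 60.864   # cm^-1
kT = 209.7    # cm^-1 (kT/hc at 300 K, as used in the example)
sigma = 2

q_highT = kT / (sigma * B)
q_sum   = sum((2*J + 1) * math.exp(-B * J * (J + 1) / kT) for J in range(50)) / sigma

print(f"kT/(sigma*B) = {q_highT:.3f}")   # ~1.72
print(f"explicit sum = {q_sum:.3f}")     # ~1.9; the integral approximation is rough for H2
```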
Contributors and Attributions
• www.chem.iitb.ac.in/~bltembe/pdfs/ch_3.pdf
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(LibreTexts)/18%3A_Partition_Functions_and_Ideal_Gases/18.05%3A_Most_Molecules_are_Rotationally_Excited_at_Ordinary_Temperatures.txt
|
As with diatomic molecules, the energies of polyatomic molecules can be approximated by the sum of its individual degrees of freedom. Therefore, we can write the partition function as:
$Q(N,V,T) = \frac{[q(V,T)]^N}{N!}$
We can write the polyatomic analog to diatomic molecules:
$Q(N,V,T) = \frac{(q_\text{trans}q_\text{rot}q_\text{vib}q_\text{elec})^N}{N!}$
When derived for diatomic molecules, we assumed the rigid rotor model for rotations and the harmonic oscillator for vibrations. This allowed us to separate the rotational motion from the vibrational motion of the molecule. Polyatomic molecules are a bit more complicated, but we will still make use of these approximations.
The number of translational states available to any given molecule is far greater than the number of molecules in the system. $q_\text{trans}$ is given by:
$q_\text{trans} = \left[\frac{2\pi MkT}{h^2}\right]^{3/2}V$
where $M$ is the mass of the particle. Moving to the electronic partition function:
$q_\text{elec} = \sum_i{g_{i}e^{-E_i/kT}}$
where $E_i$ is the energy of the electronic state $i$ and $g_i$ is its degeneracy. Electronic states are typically spaced far apart from each other. The probability of the system being in any state but the ground state is extremely small. We can therefore simplify the electronic partition function to include only the ground electronic state:
$q_\text{elec} = g_{e1}e^{D_e/kT}$
where $-D_e$ is the energy of the ground electronic state. To complete the polyatomic partition function, we still need $q_\text{vib}$ and $q_\text{rot}$. We will finish this section with $q_\text{vib}$ and talk about $q_\text{rot}$ in the next.
The vibrational motion of diatomic molecules can be expressed as a set of independent harmonic oscillators. For polyatomic molecules, the independent vibrational motions are referred to as normal modes of vibration. The vibrational energy is then the sum of the energies for each normal mode:
$E_\text{vib} = \sum_i^\alpha{\left(v_i + \frac{1}{2}\right)h\nu_i}$
where $\nu_i$ is the vibrational frequency for the $i$th normal mode and $\alpha$ is the number of vibration degrees of freedom. A linear molecule has $3n-5$ vibrational degrees of freedom and a nonlinear molecule has $3n-6$ vibrational degrees of freedom. Because the normal modes are independent of each other, we can take out results from previous sections:
$q_\text{vib} = \prod_i^\alpha{\frac{e^{-\theta_{\text{vib},i}/2T}}{1-e^{-\theta_{\text{vib},i}/T}}}$
$E_\text{vib} = Nk \sum_i^\alpha{\left( \frac{\theta_{\text{vib},i}}{2} + \theta_{\text{vib},i} \frac{e^{-\theta_{\text{vib},i}/T}}{1-e^{-\theta_{\text{vib},i}/T}} \right)}$
$C_{V,\text{vib}} = Nk \sum_i^\alpha{\left( \left(\frac{\theta_{\text{vib},i}}{T}\right)^2 \frac{e^{-\theta_{\text{vib},i}/T}}{(1-e^{-\theta_{\text{vib},i}/T})^2} \right)}$
where $\theta_{\text{vib},i}$ is the characteristic vibrational temperature defined by:
$\theta_{\text{vib},i} =\frac{h\nu_i}{k}$
Vibrational Entropy
Thermodynamic functions calculated from the vibrational normal modes of a molecule have a great deal of utility. The vibrational energy and entropy depend on the shape of the multidimensional potential energy surface. If one performs a conformational search of a macromolecule, one obtains energies and structures but little direct information concerning the shape of the potential energy surface for each conformation. The vibrational entropy provides a means of determining whether there are significant entropic differences between the structures and therefore whether certain conformations will be favored on entropic grounds.
In Cartesian displacement coordinates, the classical vibrational Hamiltonian generally contains cross terms between the coordinates. However, it is possible to take appropriate linear combinations of the coordinates so that the cross terms are eliminated; both the classical Hamiltonian and the corresponding operator then contain no cross terms, and in terms of the new coordinates the Hamiltonian can be written as
$H = -\sum_{i=1}^{f} \dfrac{\hbar^2}{2 \mu_i} \dfrac{\partial^2}{\partial q_i^2}+ \sum_{i=1}^{f} \dfrac{k_i}{2} q_i^2 \label{3.81}$
Here, the degrees of freedom $f$ is $3N - 5$ for a linear molecule and $3N - 6$ for a nonlinear molecule. Here, $k_i$ is the force constant and $μ_i$ is the reduced mass for that particular vibrational mode which is referred to as a normal mode.
The Equation $\ref{3.81}$ represents $f$ linearly independent harmonic oscillators and the total energy for such a system is
$\epsilon_{vib} = \sum_{i=1}^{f} \left( v_i + \dfrac{1}{2} \right) h \nu_i \nonumber$
The vibrational frequencies are given by
$\nu_i = \dfrac{1}{2 \pi} \sqrt{\dfrac{k_i}{\mu_i}} \nonumber$
The vibrational partition function is given by the product of $f$ vibrational functions for each frequency.
$q_{vib} = \prod_{i=1}^f \dfrac{ e^{-\Theta_{vib,i}/2T} }{1- e^{-\Theta_{vib,i}/T}} \label{Qvib1}$
with
$\Theta_{vib,i} = \dfrac{h\nu_i}{k_B} \nonumber$
As with the previous discussion regarding simple diatomics, $\Theta_{vib,i}$ is called the characteristic vibrational temperature. The molar energies and the heat capacities are given by
$\langle E_{vib} \rangle = Nk \sum_{i=1}^f \left[ \dfrac{ \Theta_{vib,i} }{2} + \dfrac{ \Theta_{vib,i} e^{-\Theta_{vib,i}/T} }{1- e^{-\Theta_{vib,i}/T}} \right] \nonumber$
and
$\bar{C}_V = Nk_B \sum_{i=1}^f \left( \dfrac{ \Theta_{vib,i} }{T} \right)^2 \dfrac{ e^{-\Theta_{vib,i}/T} }{\left(1- e^{-\Theta_{vib,i}/T}\right)^2} \nonumber$
Example $NO_2$
The three characteristic vibrational temperatures for $\ce{NO2}$ are 1900 K, 1980 K and 2330 K. Calculate the vibrational partition function at 300 K.
Solution
The vibrational partition is (Equation $\ref{Qvib1}$)
$q_{vib} = \prod_{i=1}^f \dfrac{ e^{-\Theta_{vib,i}/2T} }{1- e^{-\Theta_{vib,i}/T}} \nonumber$
If we calculate $q_{vib}$ by taking the zero point energies as the reference points with respect to which the other energies are measured
\begin{align*} q_{vib} &= \prod_{i=1}^f \dfrac{ 1 }{1- e^{-\Theta_{vib,i}/T}} = \left( \dfrac{ 1 }{1- e^{-1900/300}} \right) \left( \dfrac{ 1 }{1- e^{-1980/300}} \right) \left( \dfrac{ 1 }{1- e^{-2330/300}} \right) \[4pt] & =(1.0018) ( 1.0014)(1.0004) = 1.0035 \end{align*} \nonumber
The implication is that very few vibrational states of $\ce{NO2}$ (other than the ground vibrational state) are accessible at 300 K. This is typical of the vibrations of most molecules.
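The product in this example takes only a few lines to evaluate; a minimal sketch using the three characteristic temperatures quoted above:

```python
# q_vib for NO2 at 300 K, zero of energy at the zero-point level.
import math

theta_vib = (1900.0, 1980.0, 2330.0)   # K, the three characteristic vibrational temperatures
T = 300.0

q_vib = 1.0
for th in theta_vib:
    q_vib *= 1.0 / (1.0 - math.exp(-th / T))
print(f"q_vib = {q_vib:.4f}")   # ~1.004
```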
Contributors and Attributions
• www.chem.iitb.ac.in/~bltembe/pdfs/ch_3.pdf
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(LibreTexts)/18%3A_Partition_Functions_and_Ideal_Gases/18.07%3A_Vibrational_Partition_Functions_of_Polyatomic_Molecules.txt
|
For a polyatomic molecule containing $N$ atoms, the total number of degrees of freedom is $3N$. Out of these, three degrees of freedom are taken up for the translational motion of the molecule as a whole. For nonlinear molecules, there are three rotational degrees of freedom and the $3N – 6$ vibrational degrees. For linear molecules, the rotational motion along the molecular axis is quantum mechanically not meaningful as the rotated configuration is indistinguishable from the original configuration. Therefore, linear molecules have two rotational degrees of freedom and $3N – 5$ vibrational degrees of freedom.
To investigate the rotational motion, we need to fix the center of mass of the molecule and calculate the three principal moments of inertia $I_A$, $I_B$, and $I_C$ of the ellipsoid of inertia. The center of mass is defined as the point for which the following identities hold:
$\sum_i m_i x_i = \sum_i m_i y_i = \sum_i m_i z_i = 0 \nonumber$
The inertia products are defined by:
$I_{xx} = \sum_i m_i \left(y_i^2+ z^2_i\right) \nonumber$
$I_{xy} = \sum_i m_i \left(x_i y_i\right) \nonumber$
The other components $I_{yy}$, $I_{xz}$, ... are defined analogously. To find the direction cosines $\alpha_i, \beta_i, \gamma_i$ of the three principal moments of inertia, we need to solve the following matrix equations:
$\alpha (I_{xx} - \eta) - \beta I_{xy} - \gamma I_{xz} =0\nonumber$
$-\alpha I_{xy} + \beta (I_{yy} - \eta) - \gamma I_{yz} = 0\nonumber$
$-\alpha I_{xz} - \beta I_{yz} + \gamma (I_{zz} - \eta ) =0\nonumber$
If the off-diagonal terms ($I_{xy}$, $I_{xz}$, $I_{yz}$) are zero in the above equations, then the $x$, $y$, $z$ axes are the principal axes. The energy of a rotor with the three moments of inertia $I_A$, $I_B$, and $I_C$ is given by:
\begin{align*} \epsilon &= \dfrac{1}{2} I_A \omega_A^2 + \dfrac{1}{2} I_B \omega_B^2 + \dfrac{1}{2} I_C \omega_C^2 \[4pt] &= \dfrac{L_A^2}{2I_A} + \dfrac{L_B^2}{2I_B} + \dfrac{L_C^2}{2I_C} \end{align*} \nonumber
Each of the rotational degrees of freedom will have a characteristic rotational temperature in terms of the moment of inertia:
$\Theta_{rot,i} = \frac{\hbar^2}{2I_ik} \qquad i=A,B,C$
There are many different shapes of molecules and these shapes affect the rotational behavior of the molecules. Molecules are therefore classified according to symmetry into different groups called tops. The three different tops are:
$\text{Spherical top} \qquad \Theta_{rot,A} = \Theta_{rot,B} = \Theta_{rot,C}$
$\text{Symmetric top} \qquad \Theta_{rot,A} = \Theta_{rot,B} \neq \Theta_{rot,C} \nonumber$
$\text{Asymmetric top} \qquad \Theta_{rot,A} \neq \Theta_{rot,B} \neq \Theta_{rot,C} \nonumber$
Spherical Tops
The spherical top can be solved exactly to give:
$E_J = \frac{J(J+1)\hbar^2}{2I}$
$g_J = (2J+1)^2 \qquad J=0,1,2,\ldots$
The rotational partition function is:
$\begin{split} q_\text{rot} &= \sum_{J=0}^\infty (2J+1)^2 e^{-\hbar^2J(J+1)/2IkT} \ &= \sum_{J=0}^\infty (2J+1)^2 e^{-\Theta_\text{rot}J(J+1)/T} \end{split}$
For almost all spherical top molecules:
$\Theta_\text{rot} \ll T \nonumber$
Therefore, we can convert the sum to an integral:
$q_\text{rot} = \frac{1}{\sigma}\int_{0}^\infty (2J+1)^2 e^{-\Theta_\text{rot}J(J+1)/T}\, dJ$
where we have now included the symmetry term, $\sigma$. Solving for this integral, we get:
$q_\text{rot} = \frac{\pi^{1/2}}{\sigma}\left(\frac{T}{\Theta_\text{rot}}\right)^{3/2}$
Symmetric Tops
In the rotational energy expression above, $\omega_A$, $\omega_B$, and $\omega_C$ are the three angular speeds and $L_A$, $L_B$, and $L_C$ are the corresponding angular momenta. For a symmetric top molecule such as ammonia or chloromethane, two components of the moments of inertia are equal, i.e., $I_B = I_C$. The rotational energy levels of such a molecule are specified by two quantum numbers $J$ and $K$. The total angular momentum is determined by $J$ and the component of this angular momentum along the unique molecular axis is determined by $K$. The energy levels are given by:
$\epsilon_{J,K} = \tilde{B} J(J+1) + (\tilde{A} - \tilde{B}) K^2\nonumber$
with rotational constants in units of wavenumbers:
$\tilde{B} = \dfrac{h}{8\pi^2 c I_B}\nonumber$
and:
$\tilde{A} = \dfrac{h}{8\pi^2 c I_A}\nonumber$
where
• $J$ takes on values $0,1,2,.... \infty$ and
• $K = -J, -J + 1, - J + 2,...0, 1, 2,...J$.
The rotational partition function is given by:
$q_{rot} = \dfrac{1}{\sigma} \sum_{J=0}^{\infty} (2J+1) e^{-\tilde{B} J(J+1)/kT} \sum_{K=-J}^{J} e^{-(\tilde{A} - \tilde{B} )K^2/kT} \nonumber$
This can be converted to an integral and the result is:
$q_{rot} = \dfrac{ \sqrt{\pi}}{\sigma} \left( \dfrac{8 \pi^2 I_B kT}{h^2} \right) \left( \dfrac{8 \pi^2 I_A kT}{h^2} \right)^{1/2} \nonumber$
Asymmetric Tops
For asymmetric tops, the expressions for rotational energies are more complex and the conversions to integrations are not easy. One can actually calculate the sum of terms using a computer. An intuitive answer can be obtained by integrating over the angular momenta $L_A$, $L_B$ and $L_C$ as:
$\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} e^{-H(p,q)/kT} \,dL_A \,dL_B \,dL_C = \sqrt{ 2\pi I_A kT} \sqrt{ 2\pi I_B kT} \sqrt{ 2\pi I_C kT} \nonumber$
And then multiplying by a factor of $8 π^2 /σ h^3$, we get the rotational partition function. The factor of $8π^2$ accounts for the angular integration. For any axis chosen in a molecule, a complete rotation contributes a factor of $2π$. Integration over all possible orientations of this axis contribute another factor of $4π$. The factor of $h^3$ is for the conversion from the classical phase space to the quantum mechanical phase space.
Symmetry Number
As discussed previously, the symmetry number $σ$ corrects for overcounting of rotational configurations; it equals the number of distinct orientations, reached by proper rotations, that are indistinguishable from the original one. For a linear molecule with no center of symmetry (e.g. $\ce{HCN}$), $\sigma = 1$, whereas for a linear molecule with a center of symmetry (e.g. $\ce{CO2}$), $\sigma = 2$. Nonlinear molecules can have larger symmetry numbers (for example, $\sigma = 2$ for bent $\ce{NO2}$ and $\sigma = 3$ for $\ce{NH3}$).
The final result is:
$q_{rot} = \dfrac{\sqrt{\pi}}{\sigma} \sqrt{\dfrac{8 \pi^2 I_A kT}{h^2}} \sqrt{\dfrac{8 \pi^2 I_B kT}{h^2}} \sqrt{\dfrac{8 \pi^2 I_C kT}{h^2}} \label{big0}$
Classical Derivation (Optional)
We can explicitly obtain the classical rotational partition function of an asymmetric top by writing the classical expression for the rotational energy in terms of the Euler angles. The orientation of a rigid rotor can be specified by three Euler angles $θ$, $φ$, and $ψ$, with ranges $0$ to $π$, $0$ to $2π$, and $0$ to $2π$, respectively. The rotational Hamiltonian for the kinetic energy can be written in terms of the angles and their conjugate momenta ($p_{\theta}$, $p_{\phi}$, $p_{\psi}$):
$H = \dfrac{\sin^2 \psi}{2I_A} \left( p_{\theta} - \dfrac{\cos \psi}{\sin \theta \sin \psi} ( p_{\phi} - \cos \theta p_{\psi} ) \right)^2 \nonumber$
$+ \dfrac{\cos^2 \psi}{2I_B} \left( p_{\theta} - \dfrac{\sin \psi}{\cos \theta \cos \psi} ( p_{\phi} - \cos \theta p_{\psi} ) \right)^2 + \dfrac{1}{2I_C} p_{\psi}^2\nonumber$
The classical rotational partition function is given by
$q_{rot} = \int_{-\infty}^ {\infty} \int_{-\infty}^ {\infty} \int_{-\infty}^ {\infty} \int_{0}^ {\pi} \int_{0}^ {2\pi} \int_{0}^ {2\pi} \dfrac{1}{h^3} e ^{-H(p,q)/kT} dp_{\theta}\, dp_{\phi} \, dp_{\psi} \,d \theta \,d \phi \,d \psi \nonumber$
The integrations can be simplified by rewriting $H(p,q) / kT$ as
$\dfrac{H}{kT} = \dfrac{1}{2I_AkT} \left( \dfrac{\sin^2 \psi}{I_A} + \dfrac{\cos^2 \psi}{I_B} \right) \left( p_{\theta} + \left( \dfrac{1}{I_B} - \dfrac{1}{I_A} \right) \dfrac{\sin \psi \cos \psi}{\sin \theta \left(\dfrac{\sin^2 \psi}{I_A} + \dfrac{\cos^2 \psi}{I_B} \right) } (p_{\phi} - \cos \theta p_{\psi}) \right)^2\nonumber$
$+ \dfrac{1}{2 kT I_AI_B \sin^2 \theta} \left( \dfrac{1}{\sin \theta \left(\dfrac{\sin^2 \psi}{I_A} + \dfrac{\cos^2 \psi}{I_B} \right) } (p_{\phi} - \cos \theta p_{\psi}) \right)^2 + \dfrac{1}{2kT I_C} p^2_{\psi}\nonumber$
Using the following integral,
$\int_{-\infty}^ {\infty} e^{-a(x+b)^2} dx = \int_{-\infty}^{\infty} e ^{-ax^2} dx = \sqrt{\dfrac{\pi}{a}}\nonumber$
Integration over $p_θ$ gives using the above expression
$\sqrt{ 2 \pi kT} \left( \dfrac{\sin^2 \psi}{I_A} + \dfrac{\cos^2 \psi}{I_B} \right)^{-1/2} \label{SA1}$
Integration over $p_φ$ gives the factor,
$\sqrt{ 2 \pi kT I_AI_B} \sin \theta \left( \dfrac{\sin^2 \psi}{I_A} + \dfrac{\cos^2 \psi}{I_B} \right)^{1/2}\nonumber$
This cancels partly the second square root in Equation $\ref{SA1}$. Integration over $p_ψ$ gives the factor
$\sqrt{2 \pi k T I_C}\nonumber$
Integration over $θ$, $φ$ and $ψ$ gives a factor of $8 π^ 2$.
$\int _0^{\pi} \sin \theta \,d\theta =2\nonumber$
$\int_0^{2\pi} d\phi = 2 \pi\nonumber$
$\int_0^{2\pi} d \psi = 2\pi\nonumber$
Combining all the integrals, we finally recover Equation $\ref{big0}$ after reintroducing the symmetry number $σ$, as was done for diatomic molecular rotation.
We can simplify calculations by defining a characteristic rotational temperature for each axis of rotation:
$\Theta _A = \dfrac{h^2}{8 \pi^2 I_A k} \nonumber$
$\Theta _B = \dfrac{h^2}{8 \pi^2 I_B k} \nonumber$
$\Theta_C = \dfrac{h^2}{8 \pi^2 I_C k} \nonumber$
The polyatomic rotational partition function expressed in Equation $\ref{big0}$ can then be re-expressed as:
$q_{rot} = \dfrac{\sqrt{\pi}}{\sigma}\sqrt{\dfrac{T^3}{ \Theta_A \Theta_B \Theta_C }} \label{RotQ1}$
or alternatively:
$\ln q_{rot} = \dfrac{1}{2} \ln \dfrac{\pi T^3}{ \Theta_A \Theta_B \Theta_C \sigma^2}\nonumber$
Example: Nitrogen Dioxide
The three characteristic rotational temperatures for $\ce{NO_2}$ are $11.5\, K$, $0.624\, K$ and $0.590\, K$. Calculate the rotational partition function at 300 K.
Solution
The rotational partition function is given by Equation $\ref{RotQ1}$:
$q_{rot} = \dfrac{\sqrt{\pi}}{\sigma} \sqrt{\dfrac{T^3}{ \Theta_A \Theta_B \Theta_C }} \nonumber$
The rotational partition function becomes,
$q_{rot} = \dfrac{1.772}{2} \sqrt{\dfrac{(300\; K)^3}{ (11.5\; K)(0.624\,K)(0.590\;K)}} \approx 2.24 \times 10^{3} \nonumber$
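The same number can be checked directly from Equation $\ref{RotQ1}$; a minimal sketch:

```python
# q_rot for NO2 at 300 K from the three rotational temperatures, sigma = 2.
import math

theta_A, theta_B, theta_C = 11.5, 0.624, 0.590   # K
sigma, T = 2, 300.0

q_rot = (math.sqrt(math.pi) / sigma) * math.sqrt(T**3 / (theta_A * theta_B * theta_C))
print(f"q_rot = {q_rot:.0f}")   # ~2.2e3
```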
Thermodynamic Properties
The molar thermodynamic functions can be readily calculated including average rotation energy and molar heat capacity:
$E_{rot} = \dfrac{3}{2} RT\nonumber$
and:
$\bar{C_V}= \dfrac{3}{2} R\nonumber$
Improvements over the classical approximation for the rotational partition function derived above have been obtained. One of the improved versions (with no derivation) is:
$q_{rot} = q_{rot}^0 \left[ 1+ \dfrac{h^2}{96 \pi^2 kT} \left( \dfrac{2}{I_A} + \dfrac{2}{I_B} + \dfrac{2}{I_C} - \dfrac{I_A}{I_BI_C} - \dfrac{I_B}{I_AI_C} - \dfrac{I_C}{I_AI_B} \right) \right]\nonumber$
where $q_{rot}^0$ is the classical approximation in Equation $\ref{big0}$.
Comparing this result with the earlier vibrational partition function calculation ($q_{vib} = 1.0035$) shows that while many rotational states are accessible at room temperature, very few vibrational states (other than the ground vibrational state) are accessible.
Contributors and Attributions
• www.chem.iitb.ac.in/~bltembe/pdfs/ch_3.pdf
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(LibreTexts)/18%3A_Partition_Functions_and_Ideal_Gases/18.08%3A_Rotational_Partition_Functions_of_Polyatomic_Molecules.txt
|
Learning Objectives
• Goal: Specific heat capacity data for a wide range of elements are used to assess the accuracy and limitations of the Dulong-Petit Law.
• Prerequisites: An introductory knowledge of statistical thermodynamics including the derivation of the vibrational (harmonic oscillator) contributions to the heat capacity are recommended.
• Resources you will need: This exercise should be carried out within a data analysis software environment which is capable of graphing and generating a best-fit line for an x-y data set.
The heat capacity ($C$) of a substance is a measure of how much heat is required to raise the temperature of that substance by one degree Kelvin. For a simple molecular gas, the molecules can simultaneously store kinetic energy in the translational, vibrational, and rotational motions associated with the individual molecules. In this case, the heat capacity of the substance can be broken down into translational, vibrational, and rotational contributions;
$C = C_{trans} + C_{vib} + C_{rot}$
Monoatomic crystalline solids represent a much simpler case. Einstein proposed a simple model for such substances whereby the atoms only have vibrational energy (each atom can vibrate in three perpendicular directions around its lattice position). Specifically, the ‘Einstein Solid Model’ assumes that the atoms act like three-dimensional harmonic oscillators (with the vibrational motion of each atom in each perpendicular dimension entirely independent). Statistical mechanics provides a relatively simple expression for the constant volume molar heat capacity ($C_{V,m}$) of a one-dimensional harmonic oscillator
$C_{V,m}^{1-D} = R \left( \dfrac{\Theta_v }{T} \right)^2 \left( \dfrac{e^{-\Theta_v/2T} }{1- e^{-\Theta_v/T}} \right) ^2 \label{1}$
where $R$ is the universal gas constant, $T$ is absolute temperature, and $Θ_v$ is called the ‘characteristic vibrational temperature’ of the oscillator and depends on the vibrational frequency ($ν$) according to
$\Theta_v = \dfrac{h\nu}{k} \label{2}$
with $h$ representing Planck’s constant and $k$ representing Boltzmann’s constant.
Since the vibrations in each dimension are assumed to be independent, the expression for the constant volume molar heat capacity of a 'three-dimensional' Einstein Solid is obtained by simply multiplying Equation \ref{1} by three;
$C_{V,m}^{3-D} = 3R \left( \dfrac{\Theta_v }{T} \right)^2 \left( \dfrac{e^{-\Theta_v/2T} }{1- e^{-\Theta_v/T}} \right) ^2 \label{3}$
The temperature variation of the heat capacity of most metallic solids is well described by Equation \ref{3}. Furthermore, plots of Equation \ref{3} as a function of temperature for metals with widely varying vibrational frequencies reveal that the heat capacity always approaches the same asymptotic limit of $3R$ at high temperatures. Stated another way, at high temperatures
$\lim_{T \to \infty} \left[ \left( \dfrac{\Theta_v }{T} \right)^2 \left( \dfrac{e^{-\Theta_v/2T} }{1- e^{-\Theta_v/T}} \right) ^2 \right] = 1 \label{4}$
and Equation \ref{3} reduces to
$\lim_{T \to \infty} \left[ C_{V,m}^{3D} \right] = 3R \label{5}$
(You will be asked to verify this result in the exercise below). According to Equation \ref{5}, the molar heat capacities of metallic solids should approach 24.9 J/(K mol) at high temperatures, regardless of the identity of the metal.
The vibrational frequencies of most metallic solids are usually small enough so that $Θ_v$ lies considerably below room temperature ($Θ_v \ll 298\, K$). For these substances, the limits implied by Equations \ref{4} and \ref{5} are well approximated even at room temperature, leading to the result that $C_{v,m} = 24.9\, J/(K·mol)$ for most metals at room temperature.
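The approach to this limit is easy to visualize numerically. The sketch below evaluates Equation \ref{3} (in the equivalent form $3R(\Theta_v/T)^2\, e^{-\Theta_v/T}/(1-e^{-\Theta_v/T})^2$) for an illustrative, assumed value $\Theta_v = 300$ K that is not tied to any particular element:

```python
# 3-D Einstein heat capacity vs. temperature, approaching 3R at high T.
import math

R = 8.314            # J K^-1 mol^-1
theta_v = 300.0      # K (illustrative value only)

def c_v(T):
    x = theta_v / T
    return 3 * R * x**2 * math.exp(-x) / (1 - math.exp(-x))**2

for T in (50, 100, 300, 1000, 3000):
    print(f"T = {T:5d} K   C_V,m = {c_v(T):6.2f} J/(K mol)")
print(f"3R       = {3*R:.2f} J/(K mol)")   # the Dulong-Petit value, ~24.9
```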
In the early 1800s, two French scientists by the names of Pierre Louis Dulong and Alexis Therese Petit empirically discovered the same remarkable result. The Dulong-Petit Law is normally expressed in terms of the specific heat capacity ($C_s$) and the molar mass ($M$) of the metal
$C_s M = C_{V,m} \approx 25 (J\, K^{-1} \, mol^{-1}) \label{6}$
where $C_s$ represents how much heat is required to raise the temperature of 'one gram' of that substance by one degree Kelvin. Dulong and Petit, as well as other scientists of their time, used this famous relationship as a means of establishing more accurate values for the atomic weight of metallic elements (by instead measuring the specific heat capacity of the element and using the Dulong-Petit relationship, which is a relatively simple method of establishing weights in comparison to the more disputable gravimetric methods that were being used at the time to establish the equivalent weights of elements).
In the exercise below, you will look up the specific heat capacities of a number of elements that exist as simple monoatomic solids at room temperature and assess the accuracy of the Dulong-Petit law.
Experimental Data
Consult the CRC Handbook of Chemistry and Physics (CRC Press: Boca Raton, FL) and compile a table of specific heat capacities for a large number of elements that are known to exist as monoatomic solids at room temperature. Also look up and record the molar mass of these elements. The elements that you consider should be restricted to those appearing in groups 1-14 of the periodic table. Make sure you generate a fairly large list which includes a number of elements that are normally considered as metallic in character (such as copper, iron, sodium, lithium, gold, platinum, barium, and aluminum), but also some non-metallic elements that are nonetheless monoatomic isotropic solids (such as carbon-diamond, beryllium, boron, and silicon). Heat capacities that are usually reported in the literature are not actual constant volume heat capacities ($C_v$), but are instead constant pressure heat capacities ($C_p$). Fortunately, $C_p$ and $C_v$ are essentially equal for simple solids (within the level of precision that we consider in this exercise), and you can assume that the values from the CRC Handbook represent $C_s$.
Exercises
1. Enter the element name, the specific heat capacity, and the molar mass of each element in a spreadsheet. Calculate the product of specific heat and molar mass for each element and calculate how much this product differs from the Dulong-Petit prediction (express your result as a percent difference relative to $3R$).
2. Assess the generality of the Dulong-Petit law in an alternate way by generating a plot of specific heat as a function of reciprocal Molar Mass ($C_s$ versus $1/M$), which should be linear with a slope equal to 3R if the data behave according to Equation \ref{6}.
3. Inspect your results from 1 and 2 above and identify any elements that significantly deviate from the Dulong-Petit law. When they occur, do deviations tend to be smaller or larger than 3R? Does the degree of deviation from the Dulong-Petit law seem to correlate with periodic trends in metallic (or covalent) bonding for these elements? Do deviations tend to occur more readily for elements of smaller or higher atomic weight? Explain how the type of bonding and the magnitude of the atomic weight can lead to deviations from the arguments made in Equations \ref{4}-\ref{6} above.
4. Use the plotting method that you employed in step 2 above as a means of determining a value for the universal gas constant ($R$) - but make sure you throw-out any specific heat data for elements that you suspect do not fall within the limit $Θ_v \ll 298 \,K$. Calculate the percent error in the value of $R$ that you determine.
5. Verify that the limit expressed in Equation \ref{4} above is true (HINT: expand each of the exponential terms in a power series and note that higher-order terms are negligible in the limit $T \gg Θ_v$).
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(LibreTexts)/18%3A_Partition_Functions_and_Ideal_Gases/18.09%3A_Molar_Heat_Capacities.txt
|
The molecules of hydrogen can exist in two forms depending on the spins on the two hydrogen nuclei. If both the nuclear spins are parallel, the molecule is called ortho and if the spins are antiparallel, it is referred to as para (in disubstituted benzene, para refers to the two groups at two opposite ends, while in ortho, they are adjacent or “parallel” to each other). The spin on the hydrogen nucleus has a magnitude of $½ ħ$. The presence of nuclear spins leads to very interesting consequences for the populations of the rotational states and on a macroscopic scale, has consequences on measured entropies and heat capacities as well. The total partition function of $\ce{H_2}$ can be written as:
$q_\text{tot} = q_\text{elec} q_\text{vib} q_\text{rot} q_\text{trans} q_\text{nuc} \nonumber$
where the subscripts refer to the respective motions. After “half” a rotation, the nuclei are superimposed on each other. Since a proton is a spin-half nucleus, the total wave function must be antisymmetric with respect to the exchange of the particles, i.e.:
$\psi(1,2) = - \psi (2,1) \nonumber$
The translational motion refers to the motion of the molecular center of mass and has no influence on the symmetry of the nuclear wave function. Vibrational motion depends on the magnitude of the internuclear distance and has no effect on the particle exchange. The electronic motion also has no effect on the symmetry properties of the nuclear wave function. Therefore, the product of the nuclear spin and rotational wave functions must be antisymmetric with respect to the particle exchange. For the nuclear spin functions, there are four combinations. One combination is a singlet:
$| \psi_{\nu,s} \rangle = \alpha(1) \beta(2) - \alpha (2) \beta (1) \nonumber$
And the other three combinations are the three states of a triplet:
\begin{align*} | \psi_{\nu,s} \rangle &= \alpha(1) \alpha(2) \[4pt] | \psi_{\nu,s} \rangle &= \alpha(1) \beta(2) + \alpha (2) \beta (1) \[4pt] | \psi_{\nu,s} \rangle &= \beta(1) \beta(2) \end{align*}
The rotational wavefunctions ($ψ_r$) are given in terms of the associated Legendre polynomials $P^{|m|}_l (x)$ where $x = \cos θ$:
$| ψ_r \rangle = e^{im\phi} P^{|m|}_l (\cos \theta) \label{3.91}$
with:
\begin{align} P_{l}^{|m|}(x) &=\left(1-x^{2}\right)^{|m| / 2} \frac{d^{|m|} P_{l}(x)}{d x^{|m|}} \[4pt] P_{l}(x) &=\frac{1}{2^{l} l !} \frac{d^{l}\left(x^{2}-1\right)^{l}}{d x^{l}} \label{3.92} \end{align}
When the nuclei are interchanged, $θ$ becomes $π - θ$ and $φ$ is changed to $φ + π$. The polynomials change as:
$P_{l}(-x)=(-1)^{l} P_{l}(x) ; \quad P_{l}^{|m|}(-x)=(-1)^{l-|m|} P_{l}^{|m|}(x) \label{3.93}$
The exponential function changes as:
$e^{i m(\phi+\pi)}=e^{i m \pi} e^{i m \varphi}=(-1)^{|m|} e^{i m \varphi} \label{3.94}$
Therefore, the rotational wave function changes as:
$| \psi_r \rangle \;\rightarrow\; (-1)^{J-|m|}\, (-1)^{|m|}\, | \psi_r \rangle = (-1)^{J} | \psi_r \rangle \label{3.95}$
• $| \psi_r \rangle$ is symmetrical for even $J$, and
• $| \psi_r \rangle$ is antisymmetrical for odd $J$
Combining the nuclear spin and the rotational parts, we see that the product $\psi_{r}\, \psi_{ns}$ of the rotational and nuclear spin wave functions must be antisymmetrical (with respect to the exchange of nuclei) for half-integral nuclear spins and symmetrical for integral spins. To accomplish this, the singlet nuclear states (para) must be combined with the even rotational functions and the triplet nuclear states (ortho) must be combined with the odd rotational states. The rotational partition functions for ortho and para hydrogen are, thus:
$q_{\text{ortho}}=q_{\nu, t} q_{r, \text{odd}}=3 \sum_{J=1,3,5, \ldots}(2 J+1) e^{-J(J+1) \Theta_{R} / T} \label{3.97}$
and:
$q_{\text{para}}=q_{\nu, s}\, q_{r, \text{even}}=1 \sum_{J=0,2,4, \ldots}(2 J+1) e^{-J(J+1) \Theta_{R} / T} \label{3.98}$
where $Θ_R$ is the rotational temperature defined previously. The total partition function consisting both ortho and para hydrogens is given by:
$q_{\text{rot}, \nu}=1 \sum_{J=0,2,4,\ldots}(2 J+1) e^{-J(J+1) \Theta_{R} / T}+3 \sum_{J=1,3,5,\ldots}(2 J+1) e^{-J(J+1) \Theta_{R} / T} \label{3.99}$
The ratio of ortho to para hydrogens at thermal equilibrium is given by:
$\frac{N_{o}}{N_{p}}=\frac{3 \sum_{J=1,3,5, \ldots}(2 J+1) e^{-J(J+1) \Theta_{R} / T}}{\sum_{J=0,2,4, \ldots}(2 J+1) e^{-J(J+1) \Theta_{R} / T}} \label{3.100}$
At high temperature, the two summations become equal and therefore, the high temperature limit of $N_{o} / N_{p}$ is 3. At low temperature, the ratio becomes:
$\frac{N_{o}}{N_{p}}=\frac{3\left(3 e^{-2 \Theta_{R} / T}+\ldots\right)}{1+5 e^{-6 \Theta_{R} / T} + \ldots } \rightarrow 0, \text { as } T \rightarrow 0 \label{3.101}$
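The full temperature dependence of the equilibrium ratio follows directly from Equation $\ref{3.100}$. A minimal sketch using $\Theta_R = 87.6$ K for H₂ (the value tabulated earlier):

```python
# Equilibrium ortho:para ratio of H2 as a function of temperature.
import math

theta_R = 87.6   # K, rotational temperature of H2

def ortho_para(T, jmax=60):
    odd  = sum((2*J + 1) * math.exp(-J*(J + 1)*theta_R/T) for J in range(1, jmax, 2))
    even = sum((2*J + 1) * math.exp(-J*(J + 1)*theta_R/T) for J in range(0, jmax, 2))
    return 3 * odd / even

for T in (20, 77, 150, 300, 1000):
    print(f"T = {T:5d} K   N_o/N_p = {ortho_para(T):.3f}")
# The ratio tends to 3 at high temperature and to 0 as T -> 0.
```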
A good experimental verification of the above analysis is a comparison between the measured rotational heat capacities at constant volume and the values of $(C_V)_{\text{rot},\nu}$ calculated from:
$C_V= \dfrac{\partial \langle E \rangle }{\partial T} \nonumber$
where:
$\langle E \rangle = -\dfrac{\partial \ln q_{\text{rot},\nu} }{\partial \beta} \nonumber$
The heat capacities are shown as a function of temperature in Figure 18.10.1 .
The purple curve marked 3:1 gives the experimental data; the curve labeled eq represents the data for a mixture of ortho- and para-hydrogen equilibrated at each temperature. The curves o- and p- represent the heat capacities of pure ortho- and pure para-hydrogen calculated from the partition functions given by Equations $\ref{3.97}$ and $\ref{3.98}$, respectively. Initially it was a puzzle as to why the experimental data differ from the calculated values. In fact, the experimental data agree very well with the following equation:
$(C_V)_{\text{rot},\nu} = \dfrac{3}{4} (C_V)_{\text{rot},\nu} (\text{ortho}) + \dfrac{1}{4} (C_V)_{\text{rot},\nu} (\text{para}) \label{3.102}$
The reason for this is that, when $\ce{H_2}$ is cooled down from a higher temperature, the ortho:para ratio remains at 0.75:0.25 (the high-temperature value) because the $\text{ortho} \rightarrow \text{para}$ interconversion rate is very small, and we do not reach the equilibrium composition unless a catalyst such as activated charcoal is added to the gas mixture. Equation $\ref{3.102}$ corresponds to a “frozen high-temperature mixture” of ortho and para hydrogen. In the presence of the catalyst, the experiments also reproduce the curve labeled eq in the graph. This is indeed a very nice case where experiment supports not only the methods of statistical thermodynamics but also the antisymmetry principle for bosons and fermions. If we consider the case of $\ce{^{16}O2}$, where the nuclear spins are zero, the rotational wave function has to be symmetric, as only symmetric wave functions are permitted for bosons. Thus, only even rotational states contribute to the partition function:
$q_{\text{rot}, V} (^{16}\ce{O_2}) = \sum_{J=0,2,4,\ldots} (2J+1) e^{-J(J+1) \theta_R/T} \nonumber$
Contributors and Attributions
• www.chem.iitb.ac.in/~bltembe/pdfs/ch_3.pdf
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(LibreTexts)/18%3A_Partition_Functions_and_Ideal_Gases/18.10%3A_Ortho_and_Para_Hydrogen.txt
|
The equipartition theorem, also known as the law of equipartition, equipartition of energy, or simply equipartition, states that every degree of freedom that appears only quadratically in the total energy has an average energy of $½k_BT$ in thermal equilibrium and contributes $½k_B$ to the system's heat capacity. Here, $k_B$ is the Boltzmann constant, and $T$ is the temperature in Kelvin. The law of equipartition of energy states that each quadratic term in the classical expression for the energy contributes $½k_BT$ to the average energy. For instance, the motion of an atom has three degrees of freedom (number of ways of absorbing energy), corresponding to the x, y and z components of its momentum. Since these momenta appear quadratically in the kinetic energy, every atom has an average kinetic energy of $\frac{3}{2}k_BT$ in thermal equilibrium. The number of degrees of freedom of a polyatomic gas molecule is $3N$, where $N$ is the number of atoms in the molecule. This is equal to the number of coordinates for the system; e.g. for two atoms you would have x, y, z for each atom.
Translations
The translational contribution to the average energy is obtained from the derivative of the translational partition function:
$\langle E_{trans} \rangle = - \dfrac{1}{q_{trans}} \dfrac{\partial q_{trans}}{\partial \beta} \label{Eq1}$
Introducing the translational partition function derived earlier, Equation $\ref{Eq1}$ becomes
$= - \dfrac{\Lambda^3}{V} \dfrac{\partial }{\partial \beta} \dfrac{V}{\Lambda^3} = \dfrac{3}{\Lambda} \dfrac{\partial \Lambda}{\partial \beta} = \dfrac{3}{2} k_BT \nonumber$
Thus, the three translational degrees of freedom in three dimensions satisfy the equipartition theorem with each translational degree providing $½ k_BT$ of energy.
Rotations
Consider the molecular partition functions. The rotational contribution to the average energy is derived in terms of the derivative of the rotational partition function:
$\langle E_{rot} \rangle = - \dfrac{1}{q_{rot}} \dfrac{\partial q_{rot}}{\partial \beta} \label{Eq2}$
which when you introduce the rotational partition function, Equation $\ref{Eq2}$ becomes
$\langle E_{rot} \rangle = -\sigma \beta \tilde{B} \dfrac{1}{\sigma \tilde{B}} \dfrac{\partial}{\partial \beta} \dfrac{1}{\beta} = \dfrac{1}{\beta} = k_BT \nonumber$
The classical expression for the rotational energy of a diatomic molecule is
$E_{rot}^{(classical)}= \dfrac{1}{2} I (\omega_x^2 + \omega_y^2) \nonumber$
where $I$ is the moment of inertia and $\omega _x$ and $\omega _y$ are the angular velocities in the $x$ and $y$ directions. The rotation along the molecular axis (the $z$ axis here) has no meaning in quantum mechanics because the rotations along the molecular axis lead to configurations which are indistinguishable from the original configuration. The two rotational degrees of freedom have thus given a value of $k_BT$, with each rotational degree providing $\tfrac{1}{2} k_BT$ of energy.
Vibrations
Consider vibrational motions. The vibrational contribution to the average energy is derived in terms of the derivative of the vibrational partition function:
$\langle E_{vib} \rangle = - \dfrac{1}{q_{vib}} \dfrac{\partial q_{vib}}{\partial \beta} \label{Eq3}$
which when you introduce the partition function for vibration, Equation $\ref{Eq3}$ becomes
$\langle E_{vib} \rangle = \dfrac{-1}{q_{vib}} \left( -hc\tilde{\nu}\dfrac{ e^{-hc \tilde{\nu}/k_BT}}{(1-e^{-hc \tilde{\nu}/k_BT})^2 } \right) = hc\tilde{\nu} \dfrac{ e^{-hc \tilde{\nu}/k_BT}}{\left(1-e^{-hc \tilde{\nu}/k_BT}\right) } \label{18.1.7}$
This can be simplified by dividing both numerator and denominator of Equation $\ref{18.1.7}$ by $e^{-hc \tilde{\nu}/k_BT}$
$\langle E_{vib} \rangle = hc \tilde{\nu} \left( \dfrac{ 1 }{e^{hc \tilde{\nu}/k_BT} -1} \right) \label{18.1.7B}$
Equation $\ref{18.1.7B}$ is applicable at all temperatures, but if $hc \tilde{\nu}/k_BT \ll 1$ (i.e., the high temperature limit), then the exponential in the denominator can be expanded
$e^{hc \tilde{\nu}/k_BT} -1 \approx 1 + hc \tilde{\nu}/k_BT -1 = hc \tilde{\nu}/k_BT \label{expansion}$
and Equation $\ref{18.1.7B}$ becomes
$\langle E_{vib} \rangle \approx \cancel{hc \tilde{\nu}} \left( \dfrac{1}{\cancel{ hc \tilde{\nu}}/k_BT}\right) \nonumber$
$\langle E_{vib} \rangle \approx k_BT \label{18.1.10}$
with each vibrational mode providing $k_BT$ of energy (since there are two quadratic terms in the Hamiltonian for a harmonic oscillator: kinetic energy and potential energy).
Compare Equation $\ref{18.1.10}$ with the classical expression for the vibrational energy
$E_{vib}^{(classical)} = \tfrac{1}{2} kx^2 + \tfrac{1}{2} \mu v_x^2 \nonumber$
At high temperature the equipartition theorem is valid, but at low temperature, the expansion in Equation $\ref{expansion}$ fails (or more terms are required). In this case, only a few vibrational states are occupied and the equipartition principle is not typically applicable.
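To see how Equation $\ref{18.1.7B}$ approaches the equipartition value, it can be evaluated numerically. The short Python sketch below uses an illustrative vibrational wavenumber of $2000\ cm^{-1}$ (an assumed value, not from the text) and $k_B = 0.695\ cm^{-1}\,K^{-1}$; the average vibrational energy only approaches $k_BT$ at temperatures far above those where real molecules exist intact.

import math

# Sketch: <E_vib> from Eq. (18.1.7B) compared with the equipartition value k_B*T.
# The wavenumber 2000 cm^-1 is an assumed, illustrative value for a typical stretch.
k_B = 0.69504      # Boltzmann constant in cm^-1 K^-1
nu = 2000.0        # vibrational wavenumber in cm^-1 (assumed)

for T in (300, 1000, 3000, 10000, 30000):
    x = nu / (k_B * T)                 # hc*nu/(k_B*T), dimensionless
    e_vib = nu / math.expm1(x)         # <E_vib> in cm^-1, Eq. (18.1.7B)
    print(f"T = {T:6d} K   <E_vib> = {e_vib:8.1f} cm^-1   k_B*T = {k_B*T:8.1f} cm^-1")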
Heat Capacity
Heat capacity at constant volume $C_v$, is defined as
$C_v = \left(\dfrac{\partial U}{\partial T} \right)_v \nonumber$
The equipartition theorem requires that each degree of freedom that appears only quadratically in the total energy has an average energy of $\tfrac{1}{2}k_BT$ in thermal equilibrium and, thus, contributes $\tfrac{1}{2}k_B$ to the system's heat capacity. Thus the three translational degrees of freedom each contribute $\tfrac{1}{2}R$, for a total of $\tfrac{3}{2}R$. The contribution of rotational kinetic energy will be $R$ for linear molecules and $\tfrac{3}{2}R$ for nonlinear molecules. For vibration, an oscillator has quadratic kinetic and potential terms, making the contribution of each vibrational mode $R$. However, $k_BT$ has to be much greater than the spacing between the quantum energy levels. If this is not satisfied, the heat capacity is reduced, dropping to zero at low temperatures. The corresponding degree of freedom is said to be frozen out; this is the situation for the vibrational degrees of freedom at room temperature, and that is why the usual assumption is that they will not contribute.
Example: CO2 vs. NO2
For comparing the molar heat capacities of nitrogen dioxide and carbon dioxide at constant volume (at room temperature), let us use the law of equipartition and assume the vibrations to be frozen out at room temperature. The predicted molar heat capacity for the linear $CO_2$ (with three translational and two rotational degrees of freedom) is $5/2R$ ($20.8\, J\,K^{-1}\,mol^{-1}$).
The estimated molar heat capacity for $NO_2$ (a bent molecule, with three translational and three rotational degrees of freedom) is $3R \,(25.0\, J\,K^{-1}\,mol^{-1})$. These estimates are close to the experimental values:
• 30.1 JK-1 mol-1 for $CO_2$
• 29.5 JK-1 mol-1 for $NO_2$
Especially for $CO_2$, the deviation is significant. This suggests that, although not all vibrational degrees of freedom are fully available, they cannot be totally ignored. The larger deviation for carbon dioxide is probably due to its low-frequency bending vibrations.
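A quick way to reproduce this comparison is to tabulate the frozen-vibration equipartition estimates next to the quoted experimental values. The Python sketch below simply restates the numbers given in the text.

R = 8.314  # J K^-1 mol^-1

# Equipartition estimates with vibrations assumed frozen out (translations + rotations only).
predicted = {"CO2 (linear)": 5 / 2 * R, "NO2 (bent)": 3 * R}
experimental = {"CO2 (linear)": 30.1, "NO2 (bent)": 29.5}   # J K^-1 mol^-1, values quoted above

for molecule, cv_pred in predicted.items():
    cv_exp = experimental[molecule]
    print(f"{molecule:13s} predicted C_V = {cv_pred:5.1f}  experimental = {cv_exp:5.1f}  "
          f"deviation = {cv_exp - cv_pred:5.1f} J K^-1 mol^-1")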
Contributors and Attributions
• Zane Sterkewolf (UC Davis)
• www.chem.iitb.ac.in/~bltembe/pdfs/ch_3.pdf
• Sudarson S Sinha (Tel Aviv University)
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(LibreTexts)/18%3A_Partition_Functions_and_Ideal_Gases/18.11%3A_The_Equipartition_Principle.txt
|
These are homework exercises to accompany Chapter 18 of McQuarrie and Simon's "Physical Chemistry: A Molecular Approach" Textmap.
1) If the nucleus has a spin of $s_n$, then its spin degeneracy is $g_n = 2 s_n + 1$. The diatomic molecule formed from such a nucleus will have $g_n^2$ spin functions which have to be combined to form symmetric and antisymmetric functions. Carry out an analysis similar to that of $\ce{H2}$ for $\ce{D2}$, where the deuterium nucleus has a spin of 1.
2) Derive the thermodynamic functions from the polyatomic rotational partition function.
3) Carry out the integration for the rotational partition function of the symmetric top.
4) Calculate the total partition function and the thermodynamic functions of water at 1000 K. The three moments of inertia of water are 1.02, 1.91 and 2.92 in units of $10^{-47}\ \mathrm{kg\,m^2}$. The symmetry number is 2. The vibrational data is given in Fig. 3.5. Assume a non-degenerate electronic ground state.
5) Verify that the symmetry numbers for methane, benzene and $\ce{SF6}$ are 12, 12 and 24, respectively.
6) The ground state of Na is a doublet (two states with the same energy). Assuming this to be the zero of energy, and assuming that the next energy level is 2 eV higher than the ground state, calculate $q_{el}$.
7) The bond length $r_{eq}$ of $\ce{O2}$ is 1.2 Å. The moment of inertia $I$ is $m r_{eq}^2 / 2$, where the mass of O is $16 \times 1.66 \times 10^{-27}\; kg$. Calculate $\tilde{B}$ and the rotational partition function of $\ce{O2}$ at 300 K.
8) The vibrational frequency $\nu$ of ICl is 384 cm$^{-1}$. What is its vibrational partition function at 300 K? What is the fraction of molecules in the ground state ($n = 0$) and the first excited state ($n = 1$)?
9) Calculate the translational partition function of $\ce{N2}$ at 300 K. For the volume, use the molar volume at 300 K.
10) An isotope exchange reaction between isotopes of bromine is
$\ce{^{79}Br^{79}Br + ^{81}Br^{81}Br <=> 2\,^{79}Br^{81}Br} \nonumber$
The fundamental vibrational frequency of $\ce{^{79}Br^{81}Br}$ is 323.33 cm$^{-1}$. All the molecules can be assumed to have the same bond length and a singlet ground electronic state. Calculate the equilibrium constant at 300 K and 1000 K.
11) For the reaction $\ce{I2} \leftrightarrow 2\ce{I}$, calculate the equilibrium constant at 1000 K. The relevant data are as follows. The ground electronic state of I is $^2P_{3/2}$, whose degeneracy is 4. The rotational constant and vibrational frequency of $\ce{I2}$ are 0.0373 cm$^{-1}$ and 214.36 cm$^{-1}$, respectively. The dissociation energy of $\ce{I2}$ is 1.5422 eV.
12) Representative molecular data for a few molecules are given in Table 3.1. Using the relevant data, calculate the equilibrium constant for the reaction $\ce{H2 + Cl2 <=> 2 HCl}$ at 1000 K. What is the value of the equilibrium constant as $T \rightarrow \infty$?
13) Eq. (3.50) is related to the Giauque function. Estimate the total molar Giauque function for molecules that behave as harmonic oscillators-rigid rotors.
14) The energy of a molecule in the rigid rotor–harmonic oscillator approximation is $E_{vib,rot} = (n + \tfrac{1}{2}) h\nu + B J(J+1)$. Real molecules deviate from this behaviour due to the existence of anharmonicity (anharmonicity constant $x_e$), centrifugal distortion (centrifugal distortion constant $D$) and the interaction between vibration and rotation ($\alpha$ is the coupling constant between the vibrational and rotational modes). The expression for the energy when these effects are included is
$E_{vib,rot} = (n + \tfrac{1}{2}) h\nu + B J(J+1) - x_e (n + \tfrac{1}{2})^2 h\nu - D J^2(J+1)^2 - \alpha (n + \tfrac{1}{2}) J(J+1) \nonumber$
Here, the third term is due to anharmonicity, the fourth term is due to centrifugal distortion and the last term is due to the interaction between vibration and rotation. Calculate the $q_{vib,rot}$ which includes the effects of these distortions.
Q18.4
Using the data in Table 8.6, calculate the fraction of sodium atoms in the first excited state at temperatures 300 K, 1000 K, and 2000 K.
S18.4
Using Equation 18.10, we can calculate the fraction of sodium atoms in the first excited state, with $g_{e1} = 2 , g_{e2} = 2 , g_{e3} = 4 , g_{e4} = 2$ :
$f_2 = \dfrac{2e^{-\beta \epsilon_{e2}}} {2+2e^{-\beta \epsilon_{e2}} + 4e^{-\beta \epsilon_{e3}} + 2e^{-\beta \epsilon_{e4}} + ... }$
Using the data in Table 8.6, the numerator of this fraction becomes
$2 exp [ - \dfrac{16956.183 cm^{-1}}{(0.6950 cm^{-1} K^{-1} ) T} ]$
and the denominator becomes
$2+ 2 exp [ - \dfrac{16956.183 cm^{-1}}{(0.6950 cm^{-1} K^{-1} ) T} ] + 4 exp [ - \dfrac{16973.379 cm^{-1}}{(0.6950 cm^{-1} K^{-1} ) T} ] + 2 exp [ - \dfrac{25739.86 cm^{-1}}{(0.6950 cm^{-1} K^{-1} ) T} ] + ...$
Using these values, we can find the values of $f_2$ at the different temperatures.
$f_2 ( T = 300\ K) = 4.8 \times 10^{-36}$
$f_2 ( T = 1000\ K) = 2.5 \times 10^{-11}$
$f_2 ( T = 2000\ K) = 5.0 \times 10^{-6}$
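A short Python sketch of this evaluation, using only the degeneracies and term energies quoted above, is:

import math

kB = 0.6950  # cm^-1 K^-1

# Degeneracies and term energies (cm^-1) of Na quoted in the solution above.
g   = [2, 2, 4, 2]
eps = [0.0, 16956.183, 16973.379, 25739.86]

def f2(T):
    """Fraction of Na atoms in the first excited state (second term of q_el)."""
    boltz = [gi * math.exp(-e / (kB * T)) for gi, e in zip(g, eps)]
    return boltz[1] / sum(boltz)

for T in (300, 1000, 2000):
    print(f"T = {T:5d} K   f_2 = {f2(T):.2e}")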
Q18.5
Using the data in the table, calculate the fraction of hydrogen atoms in the first excited state at $400\ K$, $1800\ K$, and $2100\ K$.
Electronic Configuration Term Symbol Degeneracy $g_{e} = 2J+1$ energy/$cm^{-1}$
$1s$ $^{2}S_{1/2}$ $2$ $0$
$2p$ $^{2}P_{1/2}$ $2$ $82\ 258.907$
$2s$ $^{2}S_{1/2}$ $2$ $82\ 258.942$
$2p$ $^{2}P_{3/2}$ $4$ $82\ 259.272$
S18.5
Use the equation:
$f_{2} = \dfrac{g_{e2}e^{- \beta \varepsilon _{e2} }}{g_{e1}+g_{e2}e^{- \beta \varepsilon _{e2} }+g_{e3}e^{- \beta \varepsilon _{e3} }+ \cdots}$
and
$\beta \approx \dfrac{1}{T\ 0.6950\ cm^{-1}K^{-1}}$
to get:
$f_{2} = \dfrac{2e^{- \dfrac{ 82\,258.907\ cm^{-1}}{T\ 0.6950\ cm^{-1}K^{-1}}}}{2+2e^{- \dfrac{ 82\,258.907\ cm^{-1}}{T\ 0.6950\ cm^{-1}K^{-1}} }+2e^{- \dfrac{ 82\,258.942\ cm^{-1}}{T\ 0.6950\ cm^{-1}K^{-1}} }+ 4e^{- \dfrac{ 82\,259.272\ cm^{-1}}{T\ 0.6950\ cm^{-1}K^{-1}}}}$
$f_{2}(400\ K) = 0.2498$
$f_{2}(1800\ K) = 0.2499$
$f_{2}(2100\ K) = 0.2500$
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(LibreTexts)/18%3A_Partition_Functions_and_Ideal_Gases/18.E%3A_Partition_Functions_and_Ideal_Gases_%28Exercises%29.txt
|
• 19.1: Overview of Classical Thermodynamics
Joule was able to show that work and heat can have the same effect on matter – a change in temperature! It would then be reasonable to conclude that heating, as well as doing work on a system, will increase its energy content, and thus its ability to perform work in the surroundings. This leads to an important construct of the First Law of Thermodynamics: The capacity of a system to do work is increased by heating the system or doing work on it.
• 19.2: Pressure-Volume Work
Work in general is defined as a product of a force $\textbf{F}$ and a path element $d\textbf{s}$. In the case of a cylinder, the movement of the piston is constrained to one direction, the one in which we apply pressure (\(P\) being force \(F\) per area \(A\)). We can therefore introduce the area of the piston, \(A\), and forget about the vectorial nature of force. This form of work is called pressure-volume (\(PV\)) work, and it plays an important role in the development of our theory.
• 19.3: Work and Heat are not State Functions
Heat and work are both path functions: they depend on the path taken. In order to calculate the heat transfer or work being done on/by a system, the path taken must be known.
• 19.4: Energy is a State Function
Unlike heat and work, energy is a state function. That is, it is independent of the path taken. Any path can be used to calculate the change in energy between two states.
• 19.5: An Adiabatic Process is a Process in which No Energy as Heat is Transferred
Work is a path function as it always depends on the path taken, even if it is done reversibly.
• 19.6: The Temperature of a Gas Decreases in a Reversible Adiabatic Expansion
In an adiabatic process, no heat transfer occurs. During the adiabatic expansion of a gas, the internal energy of the gas is converted to work being done by the system, decreasing the temperature of the gas.
• 19.7: Work and Heat Have a Simple Molecular Interpretation
Changes in the internal energy of a system, \(dU\), are exchanged with the system's surroundings through work and heat.
• 19.8: Pressure-Volume Work
An important point is that pressure-volume work −PdV is only one kind of work. It is the important one for gases but for most other systems we are interested in other kinds of work (e.g. electrical work in a battery).
• 19.9: Heat Capacity is a Path Function
• 19.10: Relative Enthalpies Can Be Determined from Heat Capacity Data and Heats of Transition
• 19.11: Enthalpy Changes for Chemical Equations are Additive
As enthalpy and energy are state functions we should expect additivity of U and H when we study chemical reactions. This additivity is expressed in Hess's Law. The additivity has important consequences and the law finds wide spread application in the prediction of heats of reaction. The reverse reaction has the negative enthalpy of the forward one. If we can do a reaction in two steps we can calculate the enthalpy of the combined reaction by adding them up.
• 19.12: Heats of Reactions Can Be Calculated from Tabulated Heats of Formation
Reaction enthalpies are important, but difficult to tabulate. However, because enthalpy is a state function, it is possible to use Hess’ Law to simplify the tabulation of reaction enthalpies. Hess’ Law is based on the addition of reactions. By knowing the reaction enthalpy for constituent reactions, the enthalpy of a reaction that can be expressed as the sum of the constituent reactions can be calculated.
• 19.13: The Temperature Dependence of ΔH
It is often required to know thermodynamic functions (such as enthalpy) at temperatures other than those available from tabulated data. Fortunately, the conversion to other temperatures isn’t difficult.
• 19.E: The First Law of Thermodynamics (Exercises)
• Enthalpy is a State Function
Enthalpy is the energy transferred as heat in an isobaric process when no P-V work other than expansion work is involved.
Thumbnail: A thermite reaction using iron(III) oxide. The sparks flying outwards are globules of molten iron trailing smoke in their wake. (CC SA-BY 3.0; Nikthestunned).
19: The First Law of Thermodynamics
One of the pioneers in the field of modern thermodynamics was James P. Joule (1818 - 1889). Among the experiments Joule carried out, was an attempt to measure the effect on the temperature of a sample of water that was caused by doing work on the water. Using a clever apparatus to perform work on water by using a falling weight to turn paddles within an insulated canister filled with water, Joule was able to measure a temperature increase in the water.
Thus, Joule was able to show that work and heat can have the same effect on matter – a change in temperature! It would then be reasonable to conclude that heating, as well as doing work on a system, will increase its energy content, and thus its ability to perform work in the surroundings. This leads to an important construct of the First Law of Thermodynamics:
The capacity of a system to do work is increased by heating the system or doing work on it.
The internal energy (U) of a system is a measure of its capacity to supply energy that can do work within the surroundings, making U the ideal variable to keep track of the flow of heat and work energy into and out of a system. Changes in the internal energy of a system ($\Delta U$) can be calculated by
$\Delta U = U_f - U_i \label{FirstLaw}$
where the subscripts $i$ and $f$ indicate initial and final states of the system. $U$, as it turns out, is a state variable. In other words, the amount of energy available in a system to be supplied to the surroundings is independent of how that energy came to be available. That’s important because the manner in which energy is transferred is path dependent.
There are two main methods energy can be transferred to or from a system. These are suggested in the previous statement of the first law of thermodynamics. Mathematically, we can restate the first law as
$\Delta U = q + w \nonumber$
or
$dU = dq + dw \nonumber$
where q is defined as the amount of energy that flows into a system in the form of heat and w is the amount of energy lost due to the system doing work on the surroundings.
Heat
Heat is the kind of energy that in the absence of other changes would have the effect of changing the temperature of the system. A process in which heat flows into a system is endothermic from the standpoint of the system ($q_{system} > 0$, $q_{surroundings} < 0$). Likewise, a process in which heat flows out of the system (into the surroundings) is called exothermic ($q_{system} < 0$, $q_{surroundings} > 0$). In the absence of any energy flow in the form of work, the flow of heat into or out of a system can be measured by a change in temperature. In cases where it is difficult to measure temperature changes of the system directly, the amount of heat energy transferred in a process can be measured using a change in temperature of the surroundings. (This concept will be used later in the discussion of calorimetry).
An infinitesimal amount of heat flow into or out of a system can be related to a change in temperature by
$dq = C\, dT \nonumber$
where C is the heat capacity and has the definition
$C = \dfrac{dq}{dT} \nonumber$
Heat capacities generally have units of (J mol-1 K-1) and magnitudes equal to the number of J needed to raise the temperature of 1 mol of substance by 1 K. Similar to a heat capacity is a specific heat which is defined per unit mass rather than per mol. The specific heat of water, for example, has a value of 4.184 J g-1 K-1 (at constant pressure – a pathway distinction that will be discussed later.)
Example $1$: Heat required to Raise Temperature
How much energy is needed to raise the temperature of 5.0 g of water from 21.0 °C to 25.0 °C?
Solution
\begin{align*} q &=mC \Delta T \\[4pt] &= (5.0 \,\cancel{g}) \left(4.184 \dfrac{J}{\cancel{g} \, \cancel{°C}}\right) (25.0 \cancel{°C} - 21.0 \cancel{°C}) \\[4pt] &= 84\, J \end{align*}
What is a partial derivative?
A partial derivative, like a total derivative, is a slope. It gives a magnitude as to how quickly a function changes value when one of the dependent variables changes. Mathematically, a partial derivative is defined for a function $f(x_1,x_2, \dots x_n)$ by
$\left( \dfrac{ \partial f}{\partial x_i} \right)_{x_j \neq i} = \lim_{\Delta x_i \rightarrow 0} \left( \dfrac{f(x_1, x_2, \dots, x_i +\Delta x_i, \dots, x_n) - f(x_1,x_2, \dots, x_i, \dots, x_n) }{\Delta x_i} \right) \nonumber$
Because it measures how much a function changes for a change in a given dependent variable, infinitesimal changes in the function can be described by
$df = \sum_i \left( \dfrac{\partial f}{\partial x_i} \right)_{x_j \neq i} dx_i \nonumber$
So that each contribution to the total change in the function $f$ can be considered separately.
For simplicity, consider an ideal gas. The pressure can be calculated for the gas using the ideal gas law. In this expression, pressure is a function of temperature and molar volume.
$p(V,T) = \dfrac{RT}{V} \nonumber$
The partial derivatives of p can be expressed in terms of $T$ and $V$ as well.
$\left( \dfrac{\partial p}{ \partial V} \right)_{T} = - \dfrac{RT}{V^2} \label{max1}$
and
$\left( \dfrac{\partial p}{ \partial T} \right)_{V} = \dfrac{R}{V} \label{max2}$
So that the change in pressure can be expressed
$dp = \left( \dfrac{\partial p}{ \partial V} \right)_{T} dV + \left( \dfrac{\partial p}{ \partial T} \right)_{V} dT \label{eq3}$
or by substituting Equations \ref{max1} and \ref{max2}
$dp = \left( - \dfrac{RT}{V^2} \right ) dV + \left( \dfrac{R}{V} \right) dT \nonumber$
Macroscopic changes can be expressed by integrating the individual pieces of Equation \ref{eq3} over appropriate intervals.
$\Delta p = \int_{V_1}^{V_2} \left( \dfrac{\partial p}{ \partial V} \right)_{T} dV + \int_{T_1}^{T_2} \left( \dfrac{\partial p}{ \partial T} \right)_{V} dT \nonumber$
This can be thought of as two consecutive changes. The first is an isothermal (constant temperature) expansion from $V_1$ to $V_2$ at $T_1$ and the second is an isochoric (constant volume) temperature change from $T_1$ to $T_2$ at $V_2$. For example, suppose one needs to calculate the change in pressure for an ideal gas expanding from 1.0 L/mol at 200 K to 3.0 L/mol at 400 K. The set up might look as follows.
$\Delta p = \underbrace{ \int_{V_1}^{V_2} \left( - \dfrac{RT}{V^2} \right ) dV}_{\text{isothermal expansion}} + \underbrace{ \int_{T_1}^{T_2}\left( \dfrac{R}{V} \right) dT}_{\text{isochoric heating}} \nonumber$
or
\begin{align*} \Delta p &= \int_{1.0 \,L/mol}^{3.0 \,L/mol} \left( - \dfrac{R( 200\,K)}{V^2} \right ) dV + \int_{200 \,K}^{400\,K }\left( \dfrac{R}{3.0 \, L/mol} \right) dT \\[4pt] &= \left[ \dfrac{R(200\,K)}{V} \right]_{ 1.0\, L/mol}^{3.0\, L/mol} + \left[ \dfrac{RT}{3.0 \, L/mol} \right]_{ 200\,K}^{400\,K} \\[4pt] &= R \left[ \left( \dfrac{200\,K}{3.0\, L/mol} - \dfrac{200\,K}{1.0\, L/mol}\right) + \left( \dfrac{400\,K}{3.0\, L/mol} - \dfrac{200\,K}{3.0\, L/mol}\right) \right] \\[4pt] &= -5.47 \, atm \end{align*}
Alternatively, one could calculate the change as an isochoric temperature change from $T_1$ to $T_2$ at $V_1$ followed by an isothermal expansion from $V_1$ to $V_2$ at $T_2$:
$\Delta p = \int_{T_1}^{T_2}\left( \dfrac{R}{V} \right) dT + \int_{V_1}^{V_2} \left( - \dfrac{RT}{V^2} \right ) dV \nonumber$
or
\begin{align*} \Delta p &= \int_{200 \,K}^{400\,K }\left( \dfrac{R}{1.0 \, L/mol} \right) dT + \int_{1.0 \,L/mol}^{3.0 \,L/mol} \left( - \dfrac{R( 400\,K)}{V^2} \right ) dV \\[4pt] &= \left[ \dfrac{RT}{1.0 \, L/mol} \right]_{ 200\,K}^{400\,K} + \left[ \dfrac{R(400\,K)}{V} \right]_{ 1.0\, L/mol}^{3.0\, L/mol} \\[4pt] &= R \left[ \left( \dfrac{400\,K}{1.0\, L/mol} - \dfrac{200\,K}{1.0\, L/mol}\right) + \left( \dfrac{400\,K}{3.0\, L/mol} - \dfrac{400\,K}{1.0\, L/mol}\right) \right] \\[4pt] &= -5.47 \, atm \end{align*}
This result demonstrates an important property of pressure: pressure is a state variable, and so the calculation of changes in pressure does not depend on the pathway!
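The path independence can also be checked numerically. The Python sketch below evaluates both orders of integration for the example above using the analytic antiderivatives; both give the same $\Delta p$.

R = 0.08206              # L atm K^-1 mol^-1
T1, T2 = 200.0, 400.0    # K
V1, V2 = 1.0, 3.0        # L/mol

# Path 1: isothermal expansion at T1 (V1 -> V2), then isochoric heating at V2 (T1 -> T2)
dp_path1 = (R * T1 / V2 - R * T1 / V1) + (R * T2 / V2 - R * T1 / V2)

# Path 2: isochoric heating at V1 (T1 -> T2), then isothermal expansion at T2 (V1 -> V2)
dp_path2 = (R * T2 / V1 - R * T1 / V1) + (R * T2 / V2 - R * T2 / V1)

print(f"path 1: {dp_path1:.2f} atm, path 2: {dp_path2:.2f} atm")   # both about -5.47 atm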
Work
Work can take several forms, such as expansion against a resisting pressure, extending length against a resisting tension (like stretching a rubber band), stretching a surface against a surface tension (like stretching a balloon as it inflates) or pushing electrons through a circuit against a resistance. The key to defining the work that flows in a process is to start with an infinitesimal amount of work defined by what is changing in the system.
Table 3.1.1: Changes to the System
Type of work Displacement Resistance dw
Expansion dV (volume) -pext (pressure) -pextdV
Electrical dQ (charge) W (resistance) -W dQ
Extension dL (length) -t (tension) t dL
Stretching dA -s (surf. tens.) sdA
The pattern followed is always an infinitesimal displacement multiplied by a resisting force. The total work can then be determined by integrating along the pathway the change follows.
Example $2$: Work from a Gas Expansion
What is the work done by 1.00 mol an ideal gas expanding from a volume of 22.4 L to a volume of 44.8 L against a constant external pressure of 0.500 atm?
Solution
$dw = -p_{ext} dV \nonumber$
since the pressure is constant, we can integrate easily to get total work
\begin{align*} w &= -p_{ext} \int_{V_1}^{V_2} dV \\[4pt] &= -p_{ext} ( V_2-V_1) \\[4pt] &= -(0.500 \,atm)(44.8 \,L - 22.4 \,L) \left(\dfrac{8.314 \,J}{0.08206 \,atm\,L}\right) \\[4pt] &= -1130 \,J = -1.13 \;kJ \end{align*}
Note: The ratio of gas law constants can be used to convert between atm∙L and J quite conveniently!
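A one-line check of Example 2, including the gas-constant-ratio unit conversion mentioned in the note, might look like this in Python:

p_ext = 0.500          # atm
V1, V2 = 22.4, 44.8    # L

w_atm_L = -p_ext * (V2 - V1)          # work in atm*L for constant external pressure
w_J = w_atm_L * 8.314 / 0.08206       # convert atm*L -> J using the ratio of gas constants

print(f"w = {w_atm_L:.2f} atm*L = {w_J:.0f} J")   # about -1.13e3 J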
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(LibreTexts)/19%3A_The_First_Law_of_Thermodynamics/19.01%3A_Overview_of_Classical_Thermodynamics.txt
|
Work in general is defined as a product of a force $\textbf{F}$ and a path element $\textbf{ds}$. Both are vectors and work is computed by integrating over their inner product:
$w = \int \textbf{F} \cdot \textbf{ds} \nonumber$
Moving an object against the force of friction as done in the above dissipation experiment is but one example of work:
$w_{friction} = \int \textbf{F}_\text{friction} \cdot \textbf{ds} \nonumber$
We could also think of electrical work. In that case we would be moving a charge e (e.g. the negative charge of an electron) against an electrical (vector) field $\textbf{E}$. The work would be:
$w_{electical} = \int e \textbf{E} \cdot \textbf{ds} \nonumber$
Other examples are the stretching of a rubber band against the elastic force or moving a magnet in a magnetic field etc, etc.
Pressure-volume ($PV$) work
In the case of a cylinder with a piston, the gas molecules inside the cylinder exert a pressure $P$ on the piston, while the gas molecules external to the piston exert a pressure $P_\text{ext}$; the two push against each other. Pressure, $P$, is the force, $F$, exerted by the particles per area, $A$:
$P=\frac{F}{A} \nonumber$
We can assume that all the forces generated by the pressure of the particles operate parallel to the direction of motion of the piston. That is, the force moves the piston up or down as the movement of the piston is constrained to one direction. The piston moves as the molecules of the gas rapidly equilibrate to the applied pressure such that the internal and external pressures are the same. The result of this motion is work:
$w_{volume} = \int \left( \dfrac{F}{A} \right) (A\,ds) = \int P\,dV \label{Volume work}$
This particular form of work is called pressure-volume ($PV$) work and will play an important role in the development of our theory. Notice however that volume work is only one form of work.
Sign Conventions
It is important to create a sign convention at this point: positive heat, positive work is always energy you put into the system. If the system decides to remove energy by giving off heat or work, that gets a minus sign.
In other words: you pay the bill.
To comply with this convention we need to rewrite volume work (Equation $\ref{Volume work}$) as
$w_{PV} = - \int \left( \dfrac{F}{A} \right) (A\,ds) = - \int P\,dV \nonumber$
Hence, to decrease the volume of the gas ($\Delta V$ is negative), we must put in (positive) work.
Thermodynamics would not have come very far without cylinders to hold gases, in particular steam. The following figure shows when the external pressure, $P_\text{ext}$, is greater than and less than the internal pressure, $P$, of the piston.
If the pressure, $P_\text{ext}$, being exerted on the system is constant, then the integral becomes:
$w = -P_\text{ext}\int_{V_{initial}}^{V_{final}}dV = -P_\text{ext}\Delta V \label{irreversible PV work}$
Since the system pressure (inside the piston) is not the same as the pressure exerted on the system, the system is not in a state of equilibrium and cannot be shown directly on a $PV$ diagram. This type of process is called an irreversible process. For a system that undergoes irreversible work at constant external pressure, we can show the amount of work being done on a $PV$ diagram despite not being able to show the process itself.
Note that the external pressure, $P_\text{ext}$, exerted on the system is constant. If the external pressure changes during the compression, we must integrate over the whole range:
$w = - \int_{V_{initial}}^{V_{final}}P_\text{ext}(V)\, dV \nonumber$
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(LibreTexts)/19%3A_The_First_Law_of_Thermodynamics/19.02%3A_Pressure-Volume_Work.txt
|
Heat and work are path functions
Heat ($q$) and work ($w$) are path functions, not state functions:
1. They are path dependent.
2. They are energy transfer → they are not intrinsic to the system.
Path Functons
Functions that depend on the path taken, such as work ($w$) and heat ($q$), are referred to as path functions.
Reversible versus irreversible
Let's consider a piston that is being compressed at constant temperature (isothermal) to half of its initial volume:
1. Start with cylinder 1 liter, both external and internal pressure 1 bar.
2. Peg the piston in a fixed position.
3. Put cylinder in a pressure chamber with $P_{ext}=2$ bar.
4. Suddenly pull the peg.
The piston will shoot down till the internal and external pressures balance out again and the volume is 1/2 L. Notice that the external pressure was maintained constant at 2 bar during the peg-pulling and that the internal and external pressures were not balanced at all times. In a $P-V$ diagram of an ideal gas, $P$ is a hyperbolic function of $V$ under constant temperature (isothermal), but this refers to the internal pressure of the gas. It is the external one that counts when computing work, and they are not necessarily the same. As long as $P_{external}$ is constant, work is represented by a rectangle.
The amount of work being done is equal to the shaded region and in equation:
$w=-\int^{V_2}_{V_1}{P_{ext}\,dV}=-P_{ext}(V_2-V_1)=-P_{ext}\Delta V \nonumber$
This represents the maximum amount of work that can be done for an isothermal compression. Work is being done on the system, so the overall work being done is positive. Let's repeat the experiment, but this time the piston will compress reversibly over infinitesimally small steps where the $P_{ext}=P_{system}$:
For an ideal gas, the amount of work being done along the reversible compression is:
$w=-\int^{V_2}_{V_1}{P\,dV}=-nRT\int^{V_2}_{V_1}{\frac{dV}{V}}=-nRT\ln{\left(\frac{V_2}{V_1}\right)} \nonumber$
The amount of work being done on the system is not the same in the two diagrams (see the gray areas). Work is not a state function but a path function, as it depends on the path taken. You may say, what's the big difference? In both cases, the system is compressed from state 1 to state 2. The key is the word suddenly. By pegging the piston in place for the first compression, we have created a situation where the external pressure is higher than the internal pressure ($P_{ext}>P$). Because the compression happens suddenly when the peg is pulled out, the internal pressure is struggling to catch up with the external one. During the second compression, we have $P_{ext}=P$ at all times. It's a bit like falling off a cliff versus gently sliding down a hill. Path one is called an irreversible path, the second a reversible path.
Reversible vs. Irreversible Processes
A reversible path is a path that follows a series of states at rest (i.e., the forces are allowed to balance at all times). In an irreversible one the forces only balance at the very end of the process.
Notice that less work is being done on the reversible isothermal compression than the one-step irreversible isothermal compression. In fact, the minimum amount of work that can be done during a compression always occurs along the reversible path.
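The two compressions can be compared numerically. The Python sketch below uses the numbers of the piston example above ($V$: 1 L to 0.5 L, initial pressure 1 bar, so $nRT = 1\ L\,bar$ along the isotherm) and confirms that the reversible path requires less work on the gas than the sudden one-step compression.

import math

# Isothermal compression of an ideal gas from 1.0 L to 0.5 L at constant T.
# P1 = 1 bar initially, so nRT = P1*V1 = 1.0 L*bar along the isotherm.
nRT = 1.0          # L*bar
V1, V2 = 1.0, 0.5  # L
P_ext = 2.0        # bar, constant external pressure for the one-step (irreversible) path

w_irrev = -P_ext * (V2 - V1)          # one sudden step against constant P_ext
w_rev = -nRT * math.log(V2 / V1)      # quasi-static path with P_ext = P_system at all times

print(f"irreversible: w = {w_irrev:+.3f} L*bar, reversible: w = {w_rev:+.3f} L*bar")
# The reversible compression requires less work done on the gas (0.693 vs 1.0 L*bar).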
Isothermal Expansion
Let's consider a piston that is being expanded at constant temperature (isothermal) to twice of its initial volume:
1. Start with cylinder 1 liter in a pressure chamber with both an external and internal pressure of 2 bar.
2. Peg the piston in a fixed position.
3. Take the cylinder out of the pressure chamber with Pext= 1 bar.
4. Suddenly pull the peg.
The piston will shoot up till the internal and external pressures balance out again and the volume is 2 L. Notice that the external pressure was maintained constant at 1 bar during the peg-pulling and that the internal and external pressures were not balanced at all times.
The amount of irreversible work being done is again equal to the shaded region and the equation:
$w=-\int_{V_1}^{V_2}{P_{ext}\,dV}=-P_{ext}(V_2-V_1)=-P_{ext}\Delta V \nonumber$
This represents the minimum amount of work that can be done for an isothermal expansion. Work is being done by the system, so the overall work is negative. Let's repeat the experiment, but this time the piston will expand reversibly over infinitesimally small steps where $P_{ext}=P_{system}$:
Notice that not only is more work being done than in the one-step irreversible isothermal expansion, but it is the same amount of work (with opposite sign) as in the reversible isothermal compression. This is the maximum amount of work that can be done during an expansion.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(LibreTexts)/19%3A_The_First_Law_of_Thermodynamics/19.03%3A_Work_and_Heat_are_not_State_Functions_but_Energy_is_a_State_Function.txt
|
Work and heat are not state functions
A Better Definition of the First law of thermodynamics
The change in internal energy of a system, which is a state function, is the sum of $w$ and $q$.
The realization that work and heat are both forms of energy transfer undergoes quite an extension by saying that internal energy is a state function. It means that although heat and work can be produced and destroyed (and transformed into each other), energy is conserved. This allows us to do some serious bookkeeping! We can write the law as:
$\Delta U = w + q \nonumber$
But the (important!) bit about the state function is better represented if we talk about small changes of the energy:
$dU = \delta w +\delta q \nonumber$
We write a straight Latin $d$ for $U$ to indicate the change in a state function, whereas the changes in work and heat are path-dependent. This is indicated by the 'crooked' $\delta$. We can represent changes as integrals, but only for $U$ can we say that, regardless of path, we get $\Delta U = U_2-U_1$ if we go from state one to state two (i.e., it only depends on the end points, not the path).
Notice that when we write $dU$ or $\delta q$, we always mean infinitesimally small changes, i.e. we are implicitly taking a limit for the change approaching zero. To arrive at a macroscopic difference like $\Delta U$ or a macroscopic (finite) amount of heat $q$ or work $w$ we need to integrate.
We will now invoke the first law of thermodynamics:
• $dU=\delta q+\delta w$
• $\oint{dU}=0$
• Internal energy is conserved
These are all ways of saying that internal energy is a state function.
19.05: An Adiabatic Process is a Process in which No Energy as Heat is Transferred
Isothermal expansion of an ideal gas
For a monatomic ideal gas we have seen that the average energy $\langle E \rangle$, observed macroscopically as $U$, is $U= \frac{3}{2} nRT$. This means that the energy depends only on temperature, and if a gas is compressed isothermally, then the internal energy does not change:
$ΔU_{isothermal-ideal gas} = 0 \nonumber$
This means that the reversible work must cancel the reversible heat:
$ΔU_{rev} = w_{rev} + q_{rev} = 0 \nonumber$
Therefore
$w_{rev} = - q_{rev} \nonumber$
so from the expression of the reversible work for expansion in the last section
$q_{rev}= nRT\ln \dfrac{V_2}{V_1} \nonumber$
If $V_2>V_1$ (expansion), then you (or the environment) must put heat into the system because this is a positive number.
Adiabatic expansion of an ideal gas
Now suppose you make sure that no heat can enter the cylinder. (Put it in styrofoam or so). Then the path can still be reversible (slow pulling) but the process is then adiabatic.
This bat- part comes from a Greek verb βαινω (baino) that means walking, compare acrobat, someone who goes high places (acro-). The δια (dia) part means 'through' (cf. diagram, diorama, diagonal etc.) and the prefix α- (a-) denies it all (compare atypical versus typical).
So the styrofoam prevents the heat from walking through the wall. When expanding the gas from V1 to V2 it still does reversible work but where does that come from? It can only come from the internal energy itself. So in this case any energy change should consist of work (adiabatic means: $δq=0$).
$dU = δw_{rev} \nonumber$
This implies that the temperature must drop, because if $U$ changes, then $T$ must change.
The change of energy with temperature at constant volume is known as the heat capacity (at constant volume) $C_v$
$C_v =\left( \dfrac{\partial U}{\partial T} \right)_V \nonumber$
For an ideal gas $U$ only changes with temperature, so that
$dU = C_V\, dT \nonumber$
or:
$C_V\, dT = \delta w_{rev} = -P\,dV \nonumber$
We can now compare two paths to go from state $P_1,V_1,T_1$ to state $P_2,V_2,T_1$:
1. Reversible isothermal expansion A
2. Reversible adiabatic expansion B followed by reversible isochoric heating C
Notice that the temperature remains $T_1$ for path A (isotherm!), but that it drops to $T_2$ on the adiabat B, so that the cylinder has to be isochorically warmed up, C, to regain the same temperature.
$\Delta U_{tot}$ should be the same for both path A and the combined path B+C, because the end points are the same ($U$ is a state function!). As the end points are at the same temperature and $U$ only depends on $T$:
$\Delta U_{tot}=0 \nonumber$
Along adiabat B:
$q_{rev}=0 \nonumber$
Along the isochoric heating C, there is no volume work because the volume is kept constant, so that:
$q_C = \Delta U_C \nonumber$
This is the only reversible heat involved in path B+C. However, we know that $\Delta U_{tot}$ for path A is zero (isothermal!). This means that the volume work along B must cancel the heat along C, as the following bookkeeping shows.
The book keeping looks as follows, all paths are reversible:
$\Delta U_{B+C} = \Delta U_A = 0 = q_B + w_B + q_C + w_C \nonumber$
We know that $q_B=0$ since it is an adiabat and $w_C=0$ since it is an isochore:
$\Delta U_{B+C} = \Delta U_A = 0 = 0 + w_B + q_C + 0 \nonumber$
Therefore:
$w_B=-q_C \nonumber$
We had already seen before that along the isotherm A:
$w_A = - q_A = - nRT \ln \dfrac{V_1}{V_2} \nonumber$
As you can see, $w_A$ and $w_B$ are not the same. Work is a path function, even if reversible. As we are working with an ideal gas we can be more precise about $w_B$ and $q_C$ as well. The term $w_B$ along the adiabat is reversible volume work. Since there is no heat along B we can write a straight $d$ instead of $\delta$ for the work contribution (it is the only contribution and must be identical to the state function $dU$):
$dU = dw_B = C_V\, dT \nonumber$
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(LibreTexts)/19%3A_The_First_Law_of_Thermodynamics/19.04%3A_Energy_is_a_State_Function.txt
|
We can make the same argument for the heat along C. If we do the three processes A and B+C only to a tiny extent, we can write, along the adiabat B, $C_V\,dT = -P\,dV$.
We can now integrate from $V_1$ to $V_2$ over the reversible adiabatic work along B and from $T_1$ to $T_2$ for the reversible isochoric heat along C. To separate the variables we do need to bring the temperature to one side of the equation. With $C_V = \tfrac{3}{2}nR$ and $P = nRT/V$:
$\dfrac{3}{2}nR\, \dfrac{dT}{T} = -nR\,\dfrac{dV}{V} \nonumber$
which integrates to
$\left(\dfrac{T_2}{T_1}\right)^{3/2} = \dfrac{V_1}{V_2} \nonumber$
The latter expression is valid for a reversible adiabatic expansion of a monatomic ideal gas (say argon) because we used the $C_V$ expression for such a system. We can use the gas law $PV=nRT$ to translate this expression into one that relates pressure and volume (see Eq. 19.23).
We can mathematically show that the temperature of a gas decreases during an adiabatic expansion. Assuming an ideal gas, the internal energy along an adiabatic path is:
$\begin{split} d\bar{U} &= \delta q+\delta w \\ &= 0-Pd\bar{V} \\ &= -Pd\bar{V} \end{split} \nonumber$
The constant volume heat capacity is defined as:
${\bar{C}}_V=\left(\frac{\partial\bar{U}}{\partial T}\right)_V \nonumber$
We can rewrite this for internal energy:
$d\bar{U}={\bar{C}}_VdT \nonumber$
Combining these two expressions for internal energy, we obtain:
${\bar{C}}_VdT=-Pd\bar{V} \nonumber$
Using the ideal gas law for pressure of an ideal gas:
${\bar{C}}_VdT=-\frac{RT}{\bar{V}}d\bar{V} \nonumber$
Separating variables:
$\frac{\bar{C}_V}{T}dT=-\frac{R}{\bar{V}}d\bar{V} \nonumber$
This is an expression for an ideal gas along a reversible, adiabatic path that relates temperature to volume. To find our path along a PV surface for an ideal gas, we can start in the TV surface and convert to a PV surface. Let's go from ($T_1,V_1$) to ($T_2,V_2$).
$\int_{T_1}^{T_2}{\frac{\bar{C}_V}{T}dT=-\int_{\bar{V}_1}^{\bar{V}_2}{\frac{R}{\bar{V}}d\bar{V}}} \nonumber$
$\bar{C}_V\ln{\left(\frac{T_2}{T_1}\right)}=-R\ln{\left(\frac{{\bar{V}}_2}{{\bar{V}}_1}\right)}=R\ln{\left(\frac{{\bar{V}}_1}{{\bar{V}}_2}\right)} \nonumber$
$\ln{\left(\frac{T_2}{T_1}\right)}=\frac{R}{\bar{C}_V}\ln{\left(\frac{\bar{V}_1}{\bar{V}_2}\right)} \nonumber$
$\left(\frac{T_2}{T_1}\right)=\left(\frac{\bar{V}_1}{\bar{V}_2}\right)^{\frac{R}{\bar{C}_V}} \nonumber$
We know that:
$R={\bar{C}}_P-{\bar{C}}_V \nonumber$
$\frac{R}{\bar{C}_V}=\frac{\bar{C}_P-\bar{C}_V}{\bar{C}_V}=\frac{\bar{C}_P}{\bar{C}_V}-1 \nonumber$
$\frac{R}{{\bar{C}}_V}=\gamma-1 \nonumber$
Therefore:
$\left(\frac{T_2}{T_1}\right)=\left(\frac{\bar{V}_1}{\bar{V}_2}\right)^{\gamma-1} \nonumber$
This expression shows that volume and temperature are inversely related along an adiabat. That is, as the volume increases from $V_1$ to $V_2$, the temperature must decrease from $T_1$ to $T_2$.
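For a concrete feel, the relation $T_2/T_1 = (V_1/V_2)^{\gamma-1}$ can be evaluated for a monatomic ideal gas ($\gamma = 5/3$); the starting temperature of 300 K in the Python sketch below is an arbitrary illustrative choice.

# Reversible adiabatic expansion of a monatomic ideal gas (gamma = 5/3):
# T2/T1 = (V1/V2)^(gamma - 1)
gamma = 5.0 / 3.0      # monatomic ideal gas
T1 = 300.0             # K, assumed starting temperature for illustration
for ratio in (2, 5, 10):               # V2/V1 expansion ratios
    T2 = T1 * (1.0 / ratio) ** (gamma - 1)
    print(f"V2/V1 = {ratio:2d}  ->  T2 = {T2:6.1f} K")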
19.07: Work and Heat Have a Simple Molecular Interpretation
Statistical interpretation
We can use what we know about the statistical side of thermodynamics to give a simple interpretation to a change $dU$. Writing $U = \sum_i p_i E_i$, a small change is
$dU = \sum_i E_i\, dp_i + \sum_i p_i\, dE_i \nonumber$
Because $\delta w_{rev} = -PdV$ shifts the energy levels (the $E_i$ depend on the volume), the work term corresponds to $\sum_i p_i\, dE_i$, while the heat term corresponds to a redistribution of the populations over the levels, $\sum_i E_i\, dp_i$.
See also Section 17-5: in that section it is shown that we can manipulate the partition function to find the pressure of a system by calculating the above moment of the distribution. Again we take the derivative of the logarithm of the partition function $Q$, this time versus $V$, and show that the result resembles the last equation pretty closely (apart from a factor).
Once again we can find an important quantity of our system by manipulating the partition function $Q$.
19.08: Pressure-Volume Work
Enthalpy
An important point is that pressure-volume work $-PdV$ is only one kind of work. It is the important one for gases but for most other systems we are interested in other kinds of work (e.g. electrical work in a battery).
A good way to measure $ΔU$'s is to make sure there are no work terms at all. If so:
$ΔU_{no work} = q +w = q+0 = q \nonumber$
However, this means that the $-PdV$ volume work term should also be zero and this implies we must keep volumes the same. That can actually be hard. Therefore we define a new state function, ENTHALPY:
$H ≡ U + PV \nonumber$
( The $≡$ symbol is used to show that this equality is actually a definition.)
If we differentiate we get:
$dH = dU + d(PV) = dU + PdV + VdP \nonumber$
We know that under reversible conditions we have
$dU = δw +δq = -PdV + δq \nonumber$
(+ other work terms that we assume zero)
Thus,
$dH = -PdV +δq + PdV + VdP \nonumber$
$dH = δq + VdP \nonumber$
That means that as long as there is no other work and we keep the pressure constant:
$ΔH = q_P \nonumber$
instead of
$ΔU= q_V \nonumber$
Working at constant $P$ is a lot easier to do than at constant $V$. This means that the enthalpy is a much easier state function to deal with than the energy U.
For example, when we melt ice, volumes change whether we like it or not, but as long as the weather does not change too much, pressure is constant. So if we measure how much heat we need to add to melt a mole of ice, we get the molar heat of fusion, $\Delta_{fus}H = q_P$.
Such enthalpies are measured and tabulated.
In this case the volume change is actually quite small, as it usually is for condensed matter. Only if we are dealing with gases is the difference between enthalpy and energy really important.
So $U \approx H$ for condensed matter, but $U$ and $H$ differ for gases.
A good example of this is the difference between the heat capacity at constant V and at constant P. For most materials there is not much of a difference, but for an ideal gas we have
$C_P = C_V + nR \nonumber$
Needless to say, the heat capacity is a path function: it depends on what you keep constant.
19.09: Heat Capacity is a Path Function
Determining enthalpies from heat capacities.
The functions $H$ and $C_p$ are related by differentiation:
$\left ( \dfrac{\partial H}{ \partial T} \right)_P = C_P \nonumber$
This means that we can:
1. measure $C_p$ as a function of temperature
2. integrate this function and find $H(T)$
However, there are problems with this approach:
1. Reference point: we have to deal with the lower limit of integration.
Ideally we start at zero Kelvin (but we cannot get there), but how do we compare one compound to the other?
2. At temperatures where there is a phase transition there is a sudden jump in enthalpy. E.g. when ice melts we have to first add the heat of fusion until all ice is gone before the temperature can go up again (assuming all is done under reversible well-equilibrated conditions).
3. At the jumps in H, the Cp is infinite.
It should be stressed that there are no absolute enthalpies.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(LibreTexts)/19%3A_The_First_Law_of_Thermodynamics/19.06%3A_The_Temperature_of_a_Gas_Decreases_in_a_Reversible_Adiabatic_Expansion.txt
|
It should be stressed that there are no absolute enthalpies. All that is properly defined are differences in enthalpy ΔH and these are only defined for processes
When dealing with enthalpies
Define the Process
For example for the process of:
Process 1: heating ice from −20 °C to 0 °C
We could write $\Delta H_1 = \int_{253\,K}^{273\,K} C_{P,ice}\, dT = H(273\,K) - H(253\,K)$. But before moving beyond the melting point, first a different process needs to take place, that of
Process 2: melting
This gives us ΔfusH = Hliquid - Hsolid (both at 273K!). When we heat the liquid water further to say +20oC we would have to integrate over the heat capacity of the liquid.
Process 3: heating water from 0oC to +20oC
We could write
$ΔH_3 = \int_{273\,K}^{293\,K} C_{P,water}\, dT = H(293\,K) - H(273\,K) \nonumber$
The total change in enthalpy between -20 and +20 would be the sum of the three enthalpy changes.
$ΔH_{total process} = ΔH_1 + Δ_{fus}H + ΔH_3 \nonumber$
Of course we could consider doing the same calculation for any temperature between -20 and +20 and summarize all our results in a graph. The three processes can thus schematically be shown in Figure 19.10.1 .
Figure 19.10.1 : Schematic enthalpy function showing the jump at the melting point
Notice that the slopes (i.e. the heat capacities!) before and after the melting point differ. The slope for the liquid is a little steeper because the liquid has more degrees of freedom, and therefore the heat capacity of the liquid tends to be higher than that of the solid. In the figure the enthalpy curves are shown as straight lines. This would be the case if the heat capacities were constant over the temperature interval. Although $C_p$ is typically a 'slow' or 'weak' function of temperature, it usually does change a bit, which means that the straight lines for $H$ become curves.
$C_p$ is typically a 'slow' or 'weak' function of temperature and is often well approximated as a constant.
Notice that for process two the temperature is constant; that means that $ΔT$ or $dT$ is zero, but $ΔH$ is finite, so $ΔH/ΔT$ is infinitely large. Taking the limit for $ΔT$ going to zero, we get a derivative:
$\left( \dfrac{\partial H}{ \partial T} \right)_p = C_p \nonumber$
This derivative, the heat capacity, must undergo a singularity: the slope is infinitely large (i.e., the $H$ curve goes straight up). When there are more phase transitions, more discontinuities in $H$ and singularities in $C_p$ result (Figure 19.10.1 ). Note that $H(T)-H(0)$, not $H(T)$, is plotted to avoid the question what the absolute enthalpy is.
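A sketch of the enthalpy curve in Figure 19.10.1 can be generated by summing the three processes, assuming constant heat capacities. The numerical values in the Python sketch below (heat capacities of ice and liquid water and the molar heat of fusion) are representative literature values assumed for illustration, not taken from the text.

# Sketch of H(T) - H(253 K) for one mole of water heated from -20 C to +20 C,
# assuming constant heat capacities (representative assumed values).
Cp_ice = 38.0     # J mol^-1 K^-1 (assumed)
Cp_liq = 75.3     # J mol^-1 K^-1 (assumed)
dH_fus = 6010.0   # J mol^-1 at 273 K (assumed)

def H_rel(T):
    """Enthalpy relative to the solid at 253 K, in J/mol."""
    if T <= 273.0:
        return Cp_ice * (T - 253.0)                       # process 1: heat the ice
    return Cp_ice * 20.0 + dH_fus + Cp_liq * (T - 273.0)  # + process 2 (melt) + process 3

for T in (253, 273, 273.0001, 293):
    print(f"T = {T:9.4f} K   H(T) - H(253 K) = {H_rel(T):8.1f} J/mol")
# The jump of dH_fus at 273 K is the discontinuity in H; its slope (C_p) is singular there.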
Scanning calorimetry
There is technique that allows us to measure the heat capacity as a function of temperature fairly directly. It is called Differential Scanning Calorimetry (DSC). You put a sample in a little pan and put the pan plus an empty reference pan in the calorimeter. The instrument heats up both pans with a constant heating rate. Both pans get hotter by conduction, but the heat capacity of the filled pan is obviously bigger. This means that the heat flow into the sample pan must be a bit bigger than into the empty one. This differential heat flow induces a tiny temperature difference ΔT between the two pans that can be measured. This temperature difference is proportional to the heat flow difference which is proportional to the heat capacity difference.
$\Delta T \propto \Delta\Phi \propto \Delta_{\text{between pans}}C_p = C_p^{sample} \quad \text{(if the pans cancel)} \nonumber$
However, there are a number of serious broadening issues with the technique. If you melt something you will never get to see the infinite singularity of the heat capacity. Instead it broadens out into a peak. If you integrate the peak you get the $\Delta_{fusion}H$, and the onset is calibrated to give you the melting point.
It is even possible to heat the sample with a rate that fluctuates with a little sine wave. This "Modulated DSC" version can even give you the (small) difference in $C_p$ before and after the melting event.
19.11: Enthalpy Changes for Chemical Equations are Additive
Hess's law
As enthalpy and energy are state functions we should expect additivity of U and H when we study chemical reactions. This additivity is expressed in Hess's Law. The additivity has important consequences and the law finds wide spread application in the prediction of heats of reaction.
1. The reverse reaction has the negative enthalpy of the forward one.
2. If we can do a reaction in two steps we can calculate the enthalpy of the combined reaction by adding up:
Reaction Enthalpy
C(s) + ½ O2(g) -> CO(g) ΔrH = -110.5 kJ
CO(g) + ½ O2(g) -> CO2(g) ΔrH = -283.0 kJ
Adding the two reactions gives:
C(s) + O2(g) -> CO2(g) ΔrH = -393.5 kJ
By this mechanism it is often possible to calculate the heat of a reaction even if this reaction is hard to carry out. E.g. we could burn both graphite and diamond and measure the heats of combustion for both. The difference would give us the heat of the transformation reaction from graphite to diamond.
Reaction-as-written convention (caution!)
The enthalpy is for the reaction-as-written. That means that if we write:
$2C(s) + O_2(g) \rightarrow 2CO(g) \nonumber$
with ΔrH = -221 kJ (not: -110.5 kJ)
Reverse reactions
Because H is a state function the reverse reaction has the same enthalpy but with opposite sign
$2CO(g) \rightarrow 2C(s) + O_2(g) \nonumber$
with ΔrH = +221 kJ
Combining values
It is quite possible that you cannot really do a certain reaction in practice. For many reactions we can arrive at enthalpy values by doing some bookkeeping. For example, we can calculate the enthalpy for the reaction of PCl3 with chlorine if we know the two reactions that the elements phosphorous and chlorine can undergo.
You do have to make sure you balance your equations properly!
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(LibreTexts)/19%3A_The_First_Law_of_Thermodynamics/19.10%3A_Relative_Enthalpies_Can_Be_Determined_from_Heat_Capacity_Data_and_Heats_of_Transition.txt
|
Reaction enthalpies are important, but difficult to tabulate. However, because enthalpy is a state function, it is possible to use Hess’ Law to simplify the tabulation of reaction enthalpies. Hess’ Law is based on the addition of reactions. By knowing the reaction enthalpy for constituent reactions, the enthalpy of a reaction that can be expressed as the sum of the constituent reactions can be calculated. The key lies in the canceling of reactants and products that occur in the “data” reactions but not in the “target” reaction.
Example $1$:
Find $\Delta H_{rxn}$ for the reaction
$2 CO(g) + O_2(g) \rightarrow 2 CO_2(g) \nonumber$
Given
$C(gr) + ½ O_2(g) \rightarrow CO(g) \nonumber$
with $\Delta H_1 = -110.53 \,kJ$
$C(gr) + O_2(g) \rightarrow CO_2(g) \nonumber$
with $\Delta H_2 = -393.51\, kJ$
Solution
The target reaction can be generated from the data reactions.
${\color{red} 2 \times} \left[ CO(g) \rightarrow C(gr) + ½ O_2(g) \right] \nonumber$
plus
${ \color{red} 2 \times} \left[ C(gr) + O_2(g) \rightarrow CO_2(g) \right] \nonumber$
equals
$2 CO(g) + O_2(g) \rightarrow 2 CO_2(g) \nonumber$
so
${ \color{red} -2 \times} \Delta H_1 = 221.06 \, kJ \nonumber$
${ \color{red} 2 \times} \Delta H_2 = -787.02\, kJ \nonumber$
${ \color{red} -2 \times} \Delta H_1 + { \color{red} 2 \times} \Delta H_2 = -565.96 \,kJ \nonumber$
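The bookkeeping can be written out explicitly; a minimal Python sketch of the combination used above:

# Combine the two data reactions to get the target reaction, as in the example above.
dH1 = -110.53   # kJ, C(gr) + 1/2 O2 -> CO
dH2 = -393.51   # kJ, C(gr) +     O2 -> CO2

# Target: 2 CO + O2 -> 2 CO2 = 2 x (reverse of reaction 1) + 2 x (reaction 2)
dH_rxn = -2 * dH1 + 2 * dH2
print(f"Delta_r H = {dH_rxn:.2f} kJ")   # -565.96 kJ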
Standard Enthalpy of Formation
One of the difficulties with many thermodynamic state variables (such as enthalpy) is that while it is possible to measure changes, it is impossible to measure an absolute value of the variable itself. In these cases, it is necessary to define a zero to the scale defining the variable. For enthalpy, the definition of a zero is that the standard enthalpy of formation of a pure element in its standard state is zero. All other enthalpy changes are defined relative to this standard. Thus it is essential to very carefully define a standard state.
Definition: the Standard State
The standard state of a substance is the most stable form of that substance at 1 atmosphere pressure and the specified temperature.
Using this definition, a convenient reaction for which enthalpies can be measured and tabulated is the standard formation reaction. This is a reaction which forms one mole of the substance of interest in its standard state from elements in their standard states. The enthalpy of a standard formation reaction is the standard enthalpy of formation ($\Delta H_f^o$). Some examples are
• $NaCl(s)$: $Na(s) + ½ Cl_2(g) \rightarrow NaCl(s) \nonumber$ with $\Delta H_f^o = -411.2\, kJ/mol$
• $C_3H_8(g)$: $3 C(gr) + 4 H_2(g) \rightarrow C_3H_8(g) \nonumber$ with $\Delta H_f^o = -103.8\, kJ/mol$
It is important to note that the standard state of a substance is temperature dependent. For example, the standard state of water at -10 °C is solid, whereas the standard state at room temperature is liquid. Once these values are tabulated, calculating reaction enthalpies becomes a snap. Consider the heat of combustion ($\Delta H_c$) of methane (at 25 °C) as an example.
$CH_4(g) + 2 O_2(g) \rightarrow CO_2(g) + 2 H_2O(l) \nonumber$
The reaction can expressed as a sum of a combination of the following standard formation reactions.
$C(gr) + 2 H_2(g) \rightarrow CH_4(g) \nonumber$
with $\Delta H_f^o = -74.6\, kJ/mol$
$C(gr) + O_2(g) \rightarrow CO_2(g) \nonumber$
with $\Delta H_f^o = -393.5\, kJ/mol$
$H_2(g) + ½ O_2(g) \rightarrow H_2O(l) \nonumber$
with $\Delta H_f^o = -285.8 \,kJ/mol$
The target reaction can be generated from the following combination of reactions
${ \color{red} -1 \times} \left[ C(gr) + 2 H_2(g) \rightarrow CH_4(g)\right] \nonumber$
$CH_4(g) \rightarrow C(gr) + 2 H_2(g) \nonumber$
with $\Delta H_f^o ={ \color{red} -1 \times} \left[ -74.6\, kJ/mol \right]= 74.6\, kJ/mol$
$C(gr) + O_2(g) \rightarrow CO_2(g) \nonumber$
with $\Delta H_f^o = -393.5\, kJ/mol$
${ \color{red} 2 \times} \left[ H_2(g) + ½ O_2(g) \rightarrow H_2O(l) \right] \nonumber$
$2H_2(g) + O_2(g) \rightarrow 2H_2O(l) \nonumber$
with $\Delta H_f^o = {\color{red} 2 \times} \left[ -285.8 \,kJ/mol \right] = -571.6\, kJ/mol$.
$CH_4(g) + 2 O_2(g) \rightarrow CO_2(g) + 2 H_2O(l) \nonumber$
with $\Delta H_c^o = -890.5\, kJ/mol$
Alternately, the reaction enthalpy could be calculated from the following relationship
$\Delta H_{rxn} = \sum_{products} \nu \cdot \Delta H_f^o - \sum_{reactants} \nu \cdot \Delta H_f^o \nonumber$
where $\nu$ is the stoichiometric coefficient of a species in the balanced chemical reaction. For the combustion of methane, this calculation is
\begin{align} \Delta H_{rxn} & = (1\,mol) \left(\Delta H_f^o(CO_2)\right) + (2\,mol) \left(\Delta H_f^o(H_2O)\right) - (1\,mol) \left(\Delta H_f^o(CH_4)\right) \nonumber \\ & = (1\,mol) (-393.5 \, kJ/mol) + (2\,mol) \left(-285.8 \, kJ/mol \right) - (1\,mol) \left(-74.6 \, kJ/mol \right) \nonumber \\ & = -890.5 \, kJ \end{align} \nonumber
A note about units is in order. Note that reaction enthalpies have units of kJ, whereas enthalpies of formation have units of kJ/mol. The reason for the difference is that enthalpies of formation (or for that matter enthalpies of combustion, sublimation, vaporization, fusion, etc.) refer to specific substances and/or specific processes involving those substances. As such, the total enthalpy change is scaled by the amount of substance used. General reactions, on the other hand, have to be interpreted in a very specific way. When examining a reaction like the combustion of methane
$CH_4(g) + 2 O_2(g) \rightarrow CO_2(g) + 2 H_2O(l) \nonumber$
the value $\Delta H_{rxn} = -890.5\, kJ$ is interpreted as follows: the reaction of one mole of CH4(g) with two moles of O2(g) to form one mole of CO2(g) and two moles of H2O(l) releases 890.5 kJ at 25 °C.
Ionization Reactions
Ionized species appear throughout chemistry. The energy changes involved in the formation of ions can be measured and tabulated for several substances. In the case of the formation of positive ions, the enthalpy change to remove a single electron at 0 K is defined as the ionization potential.
$M(g) \rightarrow M^+(g) + e^- \nonumber$
with $\Delta H (0 K) \equiv 1^{st} \text{ ionization potential (IP)}$
The removal of subsequent electrons requires energies called the 2nd Ionization potential, 3rd ionization potential, and so on.
$M^+(g) \rightarrow M^{2+}(g) + e^- \nonumber$
with $\Delta H(0 K) ≡ 2^{nd} IP$
$M^{2+}(g) \rightarrow M^{3+}(g) + e^- \nonumber$
with $\Delta H(0 K) ≡ 3^{rd} IP$
An atom can have as many ionization potentials as it has electrons, although since very highly charged ions are rare, only the first few are important for most atoms.
Similarly, the electron affinity can be defined for the formation of negative ions. In this case, the first electron affinity is defined by
$X(g) + e^- \rightarrow X^-(g) \nonumber$
with $-\Delta H(0 K) \equiv 1^{st} \text{ electron affinity (EA)}$
The minus sign is included in the definition in order to make electron affinities mostly positive. Some atoms (such as noble gases) will have negative electron affinities since the formation of a negative ion is very unfavorable for these species. Just as in the case of ionization potentials, an atom can have several electron affinities.
$X^-(g) + e^- \rightarrow X^{2-}(g) \nonumber$
with $-\Delta H(0 K) ≡ 2^{nd} EA$.
$X^{2-}(g) + e^- \rightarrow X^{3-}(g) \nonumber$
with $-\Delta H(0 K) ≡ 3^{rd} EA$.
Average Bond Enthalpies
In the absence of standard formation enthalpies, reaction enthalpies can be estimated using average bond enthalpies. This method is not perfect, but it can be used to get ball-park estimates when more detailed data is not available. A bond dissociation energy $D$ is defined by
$XY(g) \rightarrow X(g) + Y(g) \nonumber$
with $\Delta H \equiv D(X-Y)$
In this process, one adds energy to the reaction to break bonds, and extracts energy for the bonds that are formed.
$\Delta H_{rxn} = \sum (\text{bonds broken}) - \sum (\text{bonds formed}) \nonumber$
As an example, consider the combustion of ethanol:
In this reaction, five C-H bonds, one C-C bond, one C-O bond, and one O=O bond must be broken, while four C=O bonds and two O-H bonds are formed.
Bond Average Bond Energy (kJ/mol)
C-H 413
C-C 348
C-O 358
O=O 495
C=O 799
O-H 463
The reaction enthalpy is then given by
\begin{align} \Delta H_c = \, &5(413 \,kJ/mol) + 1(348\, kJ/mol) + 1(358 \,kJ/mol) \nonumber \ & + 1(495\, kJ/mol) - 4(799 \,kJ/mol) - 2(463\, kJ/mol) \nonumber \ =\,& -856\, kJ/mol \end{align} \nonumber
Because the bond energies are defined for gas-phase reactants and products, this method does not account for the enthalpy change of condensation to form liquids or solids, and so the result may be off systematically due to these differences. Also, since the bond enthalpies are averaged over a large number of molecules containing the particular type of bond, the results may deviate due to the variance in the actual bond enthalpy in the specific molecule under consideration. Typically, reaction enthalpies derived by this method are only reliable to within ± 5-10%.
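The same estimate can be reproduced numerically. Here is a minimal sketch that uses the bond counts from the text and the average bond enthalpies from the table above; it inherits the same ± 5-10% reliability:

```python
# Average bond enthalpies (kJ/mol) from the table above.
D = {"C-H": 413, "C-C": 348, "C-O": 358, "O=O": 495, "C=O": 799, "O-H": 463}

broken = {"C-H": 5, "C-C": 1, "C-O": 1, "O=O": 1}   # bonds broken (counts from the text)
formed = {"C=O": 4, "O-H": 2}                        # bonds formed

dH = sum(n * D[b] for b, n in broken.items()) - sum(n * D[b] for b, n in formed.items())
print(f"Estimated Delta H = {dH} kJ/mol")   # -856 kJ/mol
```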
It is often required to know thermodynamic functions (such as enthalpy) at temperatures other than those available from tabulated data. Fortunately, the conversion to other temperatures is not difficult.
At constant pressure
$dH = C_p \,dT \nonumber$
And so for a temperature change from $T_1$ to $T_2$
$\Delta H = \int_{T_1}^{T_2} C_p\, dT \label{EQ1}$
Equation \ref{EQ1} is often referred to as Kirchhoff's Law. If $C_p$ is independent of temperature, then
$\Delta H = C_p \,\Delta T \label{intH}$
If the temperature dependence of the heat capacity is known, it can be incorporated into the integral in Equation \ref{EQ1}. A common empirical model used to fit heat capacities over broad temperature ranges is
$C_p(T) = a+ bT + \dfrac{c}{T^2} \label{EQ15}$
After combining Equations \ref{EQ15} and \ref{EQ1}, the enthalpy change for the temperature change can be obtained by a simple integration
$\Delta H = \int_{T_1}^{T_2} \left(a+ bT + \dfrac{c}{T^2} \right) dT \label{EQ2}$
Solving the definite integral yields
\begin{align} \Delta H &= \left[ aT + \dfrac{b}{2} T^2 - \dfrac{c}{T} \right]_{T_1}^{T_2} \ &= a(T_2-T_1) + \dfrac{b}{2}(T_2^2-T_1^2) - c \left( \dfrac{1}{T_2} - \dfrac{1}{T_1} \right) \label{ineq} \end{align}
This expression can then be used with experimentally determined values of $a$, $b$, and $c$, some of which are shown in the following table.
Table $1$: Empirical Parameters for the temperature dependence of $C_p$
Substance a (J mol$^{-1}$ K$^{-1}$) b (J mol$^{-1}$ K$^{-2}$) c (J mol$^{-1}$ K)
C(gr) 16.86 4.77 × 10$^{-3}$ -8.54 × 10$^{5}$
CO2(g) 44.22 8.79 × 10$^{-3}$ -8.62 × 10$^{5}$
H2O(l) 75.29 0 0
N2(g) 28.58 3.77 × 10$^{-3}$ -5.0 × 10$^{4}$
Pb(s) 22.13 1.172 × 10$^{-2}$ 9.6 × 10$^{4}$
Example $1$: Heating Lead
What is the molar enthalpy change for a temperature increase from 273 K to 353 K for Pb(s)?
Solution
The enthalpy change is given by Equation \ref{EQ1} with the temperature dependence of $C_p$ given by Equation \ref{EQ15}, using the parameters in Table $1$. This results in the integral form (Equation \ref{ineq}):
$\Delta H = a(T_2-T_1) + \dfrac{b}{2}(T_2^2-T_1^2) - c \left( \dfrac{1}{T_2} - \dfrac{1}{T_1} \right) \nonumber$
when substituted with the relevant parameters of Pb(s) from Table $1$.
\begin{align*} \Delta H = \,& \left(22.13\, \dfrac{J}{mol\,K}\right) ( 353\,K - 273\,K) \ & + \dfrac{1.172 \times 10^{-2} \frac{J}{mol\,K^2}}{2} \left( (353\,K)^2 - (273\,K)^2 \right) \ &- 9.6 \times 10^4 \dfrac{J\,K}{mol} \left( \dfrac{1}{(353\,K)} - \dfrac{1}{(273\,K)} \right) \ \Delta H = \, & 1770.4 \, \dfrac{J}{mol}+ 293.5\, \dfrac{J}{mol}+ 79.7 \, \dfrac{J}{mol} \ = & \,2143.6 \,\dfrac{J}{mol} \end{align*}
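A numerical check of this integration is straightforward; the sketch below assumes SciPy is available and uses the Pb(s) parameters from Table 1:

```python
from scipy.integrate import quad

# Empirical heat capacity C_p(T) = a + b*T + c/T**2 for Pb(s) (Table 1).
a, b, c = 22.13, 1.172e-2, 9.6e4   # J/(mol K), J/(mol K^2), J K/mol

Cp = lambda T: a + b * T + c / T**2
dH, _ = quad(Cp, 273.0, 353.0)      # J/mol
print(f"Delta H = {dH:.1f} J/mol")  # ~2.14e3 J/mol
```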
For chemical reactions, the reaction enthalpy at differing temperatures can be calculated from
$\Delta H_{rxn}(T_2) = \Delta H_{rxn}(T_1) + \int_{T_1}^{T_2} \Delta C_p \, dT \nonumber$
Example $2$: Enthalpy of Formation
The enthalpy of formation of NH3(g) is -46.11 kJ/mol at 25 °C. Calculate the enthalpy of formation at 100 °C.
Solution
$\ce{N2(g) + 3 H2(g) \rightleftharpoons 2 NH3(g)} \nonumber$
with $\Delta H \,(298\, K) = -46.11\, kJ/mol$
Compound Cp (J mol-1 K-1)
N2(g) 29.12
H2(g) 28.82
NH3(g) 35.06
\begin{align*} \Delta H (373\,K) & = \Delta H (298\,K) + \Delta C_p\Delta T \ & = -46110\, \dfrac{J}{mol} + \left[ 2 \left(35.06\, \dfrac{J}{mol\,K}\right) - \left(29.12\, \dfrac{J}{mol\,K}\right) - 3\left(28.82\, \dfrac{J}{mol\,K}\right) \right] (373\,K -298\,K) \ & = -49.5\, \dfrac{kJ}{mol} \end{align*}
19.E: The First Law of Thermodynamics (Exercises)
In the mid 1920's the German physicist Werner Heisenberg showed that if we try to locate an electron within a region $Δx$; e.g. by scattering light from it, some momentum is transferred to the electron, and it is not possible to determine exactly how much momentum is transferred, even in principle. Heisenberg showed that consequently there is a relationship between the uncertainty in position $Δx$ and the uncertainty in momentum $Δp$.
$\Delta p \Delta x \ge \frac {\hbar}{2} \label {5-22}$
You can see from Equation $\ref{5-22}$ that as $Δp$ approaches 0, $Δx$ must approach ∞, which is the case of the free particle discussed previously.
This uncertainty principle, which also is discussed in Chapter 4, is a consequence of the wave property of matter. A wave has some finite extent in space and generally is not localized at a point. Consequently there usually is significant uncertainty in the position of a quantum particle in space. Activity 1 at the end of this chapter illustrates that a reduction in the spatial extent of a wavefunction to reduce the uncertainty in the position of a particle increases the uncertainty in the momentum of the particle. This illustration is based on the ideas described in the next section.
Exercise $1$
Compare the minimum uncertainty in the positions of a baseball (mass = 140 gm) and an electron, each with a speed of 91.3 miles per hour, which is characteristic of a reasonable fastball, if the standard deviation in the measurement of the speed is 0.1 mile per hour. Also compare the wavelengths associated with these two particles. Identify the insights that you gain from these comparisons.
Enthalpy is a State Function
Our expression for internal energy at constant pressure:
$\Delta U=q_P+w=q_P-P\Delta V \nonumber$
Rearrange:
$q_P=\Delta U+P\Delta V=U_2-U_1+P(V_2-V_1) \nonumber$ $q_P=\left(U_2+PV_2\right)-\left(U_1+PV_1\right) \nonumber$
We can define this term as enthalpy:
$H\equiv U+PV \nonumber$
This is a new state function.
There are many spontaneous events in nature. Consider two cases: in the first, a gas-filled chamber is connected by a valve to an evacuated chamber; in the second, two different gases are separated by a valve. If you open the valve, a spontaneous event occurs in both cases: the gas fills the evacuated chamber in the first, and the gases mix in the second. The state functions $U$ and $H$ do not give us a clue what will happen. You might think that only those events are spontaneous that produce heat.
Not so:
• If you dissolve $\ce{KNO3}$ in water, it does so spontaneously, but the solution gets cold.
• If you dissolve $\ce{KOH}$ in water, it does so spontaneously, but the solution gets hot.
Clearly the first law is not enough to describe nature.
Two items left on our wish list
The development of the new state function entropy has brought us much closer to a complete understanding of how heat and work are related:
1. the spontaneity problem
we now have a criterion for spontaneity for isolated systems
2. the asymmetry between work->heat (dissipation) and heat->work (power generation)
at least we can use the new state function to predict the limitations on the latter.
Two problems remain:
1. we would like a spontaneity criterion for all systems (not just isolated ones)
2. we have a new state function $S$, but what is it?
Entropy on a microscopic scale
Let us start with the latter. Yes, we can use $S$ to explain the odd paradox that $w$ and $q$ are both forms of energy, yet the conversion is easier in one direction than the other, but we have introduced the concept of entropy purely as a phenomenon in its own right. Scientifically there is nothing wrong with such a phenomenological theory, except that experience tells us that if you can understand the phenomenon itself better, your theory becomes more powerful.
To understand entropy better we need to leave the macroscopic world and look at what happens on a molecular level and do statistics over many molecules. First, let us do a bit more statistics of the kind we will need.
Permutations
If we have $n$ distinguishable objects, say playing cards, we can arrange them in a large number of ways. For the first object in our series we have $n$ choices, for number two we have $n-1$ choices (the first one being spoken for), etc. This means that in total we have
$W=n(n-1)(n-2)\cdots 4\cdot 3\cdot 2\cdot 1 = n!\;\text{ choices.} \nonumber$
The quantity $W$ is usually called the number of realizations in thermodynamics.
The above is true if the objects are all distinguishable. If they fall into groups within which they are not distinguishable, we have to correct for all the swaps within these groups that do not produce a distinguishably new arrangement. This means that $W$ becomes $\frac{n!}{a!b!c!\cdots z!}$, where $a, b, c, \ldots, z$ stand for the sizes of the groups. (Obviously $a+b+c+\cdots+z = n$.)
In thermodynamics our 'group of objects' is typically an ensemble of systems, with sizes on the order of Avogadro's number, so the factorials become horribly large. This makes it necessary to work with logarithms. Fortunately there is a good approximation (due to Stirling) for the logarithm of a factorial:
$\ln N! \approx N \ln N-N \nonumber$
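A quick numerical comparison (a minimal sketch) shows how good Stirling's approximation becomes for large $N$:

```python
import math

# Compare ln(N!) with the Stirling approximation N ln N - N.
for N in (10, 100, 1000, 10_000):
    exact = math.lgamma(N + 1)            # ln(N!) without overflowing
    stirling = N * math.log(N) - N
    print(N, exact, stirling, f"relative error {(exact - stirling) / exact:.2%}")
```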
Causality vs. Correlation
In Europe, nosey little children who are curious to find out where their newborn little brother or sister came from often get told that the stork brought it during the night. When you look at the number of breeding pairs of this beautiful bird in e.g. Germany since 1960, you see a long decline to about 1980, when the bird almost went extinct. After that the numbers go up again, due mostly to breeding programs. The human birth rate in the country follows a pretty much identical curve, and the correlation between the two is very high (>0.98 or so) (Dr. H. Sies, Nature, volume 332, page 495; 1988). Does this prove that storks indeed bring babies?
Answer:
No, it does not show causality, just a correlation due to a common underlying factor. In this case that is the choices made by the German people. First they concentrated on working hard and having few children to get themselves out of the poverty WWII had left behind, and neglected the environment; then they turned to protecting the environment and opened the doors to immigration of people, mostly from Muslim countries like Turkey or Morocco, that usually have larger families. The lesson from this is that you can only conclude causality if you are sure that there are no other intervening factors.
Changing the size of the box with the particles in it
The expansion of an ideal gas against vacuum is really a wonderful model experiment, because nothing else happens but a spontaneous expansion and a change in entropy. No energy change, no heat, no work, no change in mass, no interactions, nothing. In fact, it does not even matter whether we consider it an isolated process or not. We might as well do so.
Physicists and Physical Chemists love to find such experiments that allows them to retrace causality. All this means that if we look at what happens at the atomic level, we should be able to retrace the cause of the entropy change. As we have seen before, the available energy states of particles in a box depend on the size of the box.
$E_{kin} = \dfrac{h^2}{8ma^2} \left[ n_1^2 + n_2^2 + n_3^2 \right] \nonumber$
Clearly if the side $a$, and therefore the volume of the box, changes, the energy spacing between the states will become smaller. Therefore during our expansion against vacuum, the energy states inside the box are changing. Because $U$ does not change, the average energy $\langle E \rangle$ is constant. Of course this average is taken over a great number of molecules (systems) in the gas (the ensemble), but let's look at just two of them, and for simplicity let us assume that the energy states are equidistant (rather than quadratic in the quantum numbers $n$).
As you can see there is more than one way to skin a cat, or in this case to realize the same average $\langle E \rangle$ of the complete ensemble (of only two particles, admittedly). Before expansion, three realizations $W_1$, $W_2$, $W_3$ are shown that add up to the same $\langle E \rangle$. After expansion, however, there are more energy states available and the schematic figure shows twice as many realizations $W$ in the same energy interval.
Boltzmann was the first to postulate that this is what is at the root of the entropy function, not so much the (total) energy itself (that stays the same!), but the number of ways the energy can be distributed in the ensemble. Note that because the ensemble average (or total) energy is identical, we could also say that the various realizations $W$ represent the degree of degeneracy $Ω$ of the ensemble.
Boltzmann considered a much larger (canonical) ensemble consisting of a great number of identical systems (e.g. molecules, but it could also be planets or so). If each of our systems already has a large number of energy states the systems can all have the same (total) energy but distributed in rather different ways. This means that two systems within the ensemble can either have the same distribution or a different one. Thus we can divide the ensemble A in subgroups aj having the same energy distribution and calculate the number of ways to distribute energy in the ensemble $A$ as
$W= \dfrac{A!}{a_1!a_2!\cdots} \nonumber$
Boltzmann postulated that entropy was directly related to the number of realizations $W$, that is, the number of ways the same energy can be distributed in the ensemble. This leads immediately to the concept of order versus disorder: e.g., if the number of realizations is $W=1$, all systems must be in the same state ($W = A!/(A!\,0!\,0!\,0!\cdots) = 1$), which is a very orderly arrangement of energies.
If we were to add two ensembles to each other, the total number of possible arrangements $W_{tot}$ becomes the product $W_1W_2$, but the entropies should be additive. As logarithms transform products into sums, Boltzmann assumed that the relation between $W$ and $S$ should be logarithmic and wrote:
$S= k \ln W \nonumber$
Again, if we consider a very ordered state, e.g. where all systems are in the ground state, the number of realizations is $A!/A!= 1$, so that the entropy is zero. If we have a very messy system, where the number of ways to distribute energy over the many different states is very large, then $S$ becomes very large.
This immediately gives us the driving force for the expansion of a gas into vacuum or the mixing of two gases. We simply get more energy states to play with; this increases $W$, which means an increase in $S$, and this leads to a spontaneous process.
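A minimal sketch of this counting for a toy ensemble (the occupation numbers below are invented purely for illustration) shows how spreading the same systems over more states raises $W$ and hence $S$:

```python
import math

k_B = 1.380649e-23  # J/K

def realizations(occupations):
    """W = A!/(a1! a2! ...) for a list of occupation numbers a_j."""
    A = sum(occupations)
    denom = math.prod(math.factorial(a) for a in occupations)
    return math.factorial(A) // denom

# Toy ensemble of 10 systems: everything piled in the ground state vs. spread out.
ordered = [10, 0, 0, 0]   # W = 1, S = 0
spread = [4, 3, 2, 1]     # energy spread over four states

for occ in (ordered, spread):
    W = realizations(occ)
    print(occ, "W =", W, "S =", k_B * math.log(W), "J/K")
```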
In spontaneous processes for an isolated system, there is a competition between minimizing the energy of the system and increasing the dispersal of energy within the system. If energy is constant, then the system will evolve to maximize energy dispersal. If energy dispersal is not a factor, the system will evolve to minimize its energy. We already have a quantitative basis for the energy of a system and we need to do the same for energy dispersal. Suppose we have a small reversible change $dU$ in the energy of an ideal gas. We know that $U$ only depends on temperature:
$dU = C_vdT \nonumber$
We also know that any reversible work would be volume work.
$δw_{rev} = -PdV \nonumber$
This means that we can write:
\begin{align*} δq_{rev} &= dU - δw_{rev} \[4pt] &= C_vdT + PdV \end{align*} \nonumber
Let us examine if this represents an exact differential. If $\delta q$ were an exact differential, we could write the total differential:
$\delta q_{rev} =\left(\frac{\partial q}{\partial T}\right)_V dT + \left(\frac{\partial q}{\partial V}\right)_T dV \nonumber$
And the following would be true:
$\frac{\partial^2 q_{rev}}{\partial T\partial V}=\frac{\partial^2 q_{rev}}{\partial V\partial T} \nonumber$
From our equation above, we know that:
$\frac{\partial q_{rev}}{\partial T}=C_V \nonumber$
$\frac{\partial q_{rev}}{\partial V}=P \nonumber$
Therefore, the following should be true:
$\frac{\partial C_V}{\partial V}=\frac{\partial P}{\partial T} \nonumber$
However,
$\dfrac{\partial C_v}{\partial V}=0 \nonumber$
because $C_V$ does not depend on volume; like $U$, it depends only on $T$ ($C_V$ is the temperature derivative of $U$). And:
$\dfrac{\partial P}{\partial T} = \dfrac{\partial}{\partial T}\left(\dfrac{nRT}{V}\right) = \dfrac{nR}{V} \nonumber$
which is not zero. Clearly, $δq_{rev}$ is not an exact differential (heat is not a state function), but look what happens if we multiply everything by an 'integrating factor' $1/T$:
$\dfrac{δq_{rev}}{T} = \dfrac{C_v}{T}dT + \dfrac{P}{T}dV \nonumber$
$\dfrac{\partial (C_V/T)}{\partial V} = 0 \nonumber$
Because $\dfrac{C_v}{T}$ does not depend on volume. However,
$\dfrac{\partial (P/T)}{\partial T} = \dfrac{\partial (nR/V)}{\partial T} = 0 \nonumber$
Thus, the quantity $dS = \dfrac{δq_{rev}}{T}$ is an exact differential, so $S$ is a state function and it is called entropy. Entropy is the dispersal of energy, or equivalently, the measure of the number of possible microstates of a system.
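The exactness test above can also be checked symbolically. The sketch below assumes SymPy and a monatomic ideal gas with $n = 1$ and $C_V = \tfrac{3}{2}R$ (an illustrative choice, not required by the argument):

```python
import sympy as sp

T, V = sp.symbols("T V", positive=True)
R = sp.Symbol("R", positive=True)
Cv = sp.Rational(3, 2) * R      # monatomic ideal gas, n = 1
P = R * T / V                    # ideal gas law, n = 1

# delta q_rev = Cv dT + P dV : compare the mixed partial derivatives
print(sp.diff(Cv, V), "vs", sp.diff(P, T))          # 0 vs R/V  -> not exact

# (delta q_rev)/T = (Cv/T) dT + (P/T) dV : compare again
print(sp.diff(Cv / T, V), "vs", sp.diff(P / T, T))  # 0 vs 0    -> exact
```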
20.03: Unlike heat Entropy Is a State Function
Circular integrals
Because entropy is a state function, it integrates to zero over any circular path going back to initial conditions, just like $U$ and $H$:
$\oint dS =0 \nonumber$
$\oint dH =0 \nonumber$
$\oint dU =0 \nonumber$
As discussed previously, we can use this fact to revisit the isotherm + isochore + adiabat circular path (Figure 20.3.1 ).
Along adiabat B and isochore C:
• There is no heat transfer along adiabat B:
$q_{rev,B} = 0 \nonumber$
• There is no work along isochore C:
$\delta w=0 \nonumber$
• But the temperature changes from $T_2$ back to $T_1$. This requires heat:
$q_{rev,C}=C_V\Delta T \nonumber$
Along the isotherm A:
• We have seen that
$q_{rev,A} = nRT \ln \dfrac{V_2}{V_1} \nonumber$
The quantities $q_{rev,A}$, $q_{rev,B}$, and $q_{rev,C}$ are not the same, which once again underlines that heat is a path function. How about entropy?
First, consider the combined paths of B and C:
$q_{rev,B+C} = \int _{T_2}^{T_1} C_v dT \nonumber$
$\int dS_{B+C} = \int \dfrac{dq_{rev,B+C}}{T} = \int _{T_2}^{T_1} \dfrac{C_v}{T} dT \nonumber$
We had seen this integral before from Section 19-6, albeit from $T_1$ to $T_2$:
$\Delta S_{B+C} = nR\ln \dfrac{V_2}{V_1} \label{19.21}$
(Notice that the sign in Equation \ref{19.21} is positive.)
Along the isotherm A:
$q_{rev,A} = nRT \ln\frac{V_2}{V_1} \nonumber$
$T$ is a constant, so we can just divide $q_{rev,A}$ by $T$ to get $\Delta S_A$:
$\Delta S_A = nR\ln \dfrac{V_2}{V_1} \nonumber$
We took two different paths that start and end at the same points. Both paths have the same change in entropy. Clearly entropy is a state function, while $q_{rev}$ is not.
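A numeric illustration of this path independence for one mole of a monatomic ideal gas (the volumes and starting temperature below are arbitrary choices):

```python
import numpy as np
from scipy.integrate import quad

R = 8.314            # J/(mol K)
Cv = 1.5 * R         # monatomic ideal gas, n = 1 mol
T1, V1, V2 = 300.0, 1.0e-3, 2.0e-3   # arbitrary illustration values

# Path A: reversible isothermal expansion V1 -> V2 at T1
dS_A = R * np.log(V2 / V1)

# Paths B + C: reversible adiabat V1 -> V2 (T1 -> T2), then isochore back to T1
T2 = T1 * (V1 / V2) ** (R / Cv)               # T V^(R/Cv) = const along the adiabat
dS_BC, _ = quad(lambda T: Cv / T, T2, T1)     # only the isochore carries heat

print(dS_A, dS_BC)   # both ~ 5.76 J/K
```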
Spontaneity of an isolated system
An isolated system is a little more than just adiabatic. In the latter heat cannot get in or out. In an isolated system nothing gets in or out, neither heat nor mass nor even any radiation, such as light. The isolated system is like a little universe all to itself.
Let us consider a zeroth-law process. We have two identical blocks of metal, say aluminum. They are each at thermal equilibrium, but at different temperatures. They are brought into contact with each other but isolated from the rest of the universe.
• Zeroth law: Heat will flow from hot to cold
• First law: There is no change in total energy
so:
$dU_A= -dU_B \nonumber$
There is also no work so:
$dU_A = \delta q_A + 0 \nonumber$
Because $U$ is a state function, this makes $q$ a state function as well; otherwise this equality would not hold. As there is only one term on the right, there is only one path (along $q$), so we could write:
$dU_A = dq_A \nonumber$
This implies that we do not need to worry about reversible and irreversible paths as there is only one path. Since:
$dS = \dfrac{\delta q_{rev}}{T} \nonumber$
In this particular case:
$TdS = \delta q_{rev} = dU \nonumber$
Thus we get:
$dS = \dfrac{dU_A}{T_A} + \dfrac{dU_B}{T_B} = \dfrac{dU_A}{T_A} - \dfrac{dU_A}{T_B} = dU_A \left( \dfrac{1}{T_A} - \dfrac{1}{T_B} \right) \nonumber$
Clearly as long as the two temperatures are not the same $dS$ is not zero and entropy is not conserved. Instead it is increasing. Over time, the temperatures will become the same (if the blocks are identical, the final temperature is the average of $T_A$ and $T_B$) and the entropy will reach a maximum.
For our two identical blocks of metal (with same heat capacity, $C_V$), we can, in fact, derive that the entropy change:
$\Delta S = C_V \ln \left[\dfrac{(T_A + T_B)^2}{4T_AT_B}\right]. \nonumber$
This is indeed a positive quantity. In general, we can say for an isolated system:
$dS \ge 0 \nonumber$
Thus if we are dealing with a spontaneous (and isolated) process $dS >0$ and entropy is being produced. This gives us a criterion for spontaneity.
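A minimal sketch with illustrative numbers (an aluminum-like heat capacity and two arbitrary starting temperatures) confirms that the entropy production is positive:

```python
import math

C_V = 24.2                 # J/(mol K), roughly the molar heat capacity of Al (illustrative)
T_A, T_B = 350.0, 290.0    # initial temperatures of the two identical blocks (K)

T_f = 0.5 * (T_A + T_B)    # final common temperature of identical blocks

# Direct sum of the heating and cooling entropy changes ...
dS = C_V * (math.log(T_f / T_A) + math.log(T_f / T_B))
# ... equals the closed-form expression quoted above
dS_formula = C_V * math.log((T_A + T_B) ** 2 / (4 * T_A * T_B))

print(dS, dS_formula)      # both positive and equal
```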
Entropy exchange of an open system
In an isolated system $dS$ represents the produced entropy $dS_{prod}$ and this is a good criterion for spontaneity. Of course the requirement that the system is isolated is very restrictive and makes the criterion as good as useless... What happens in a system that can exchange heat with the rest of the universe? We do have entropy changes in that case, but part of them may have nothing to do with production, because we also have to consider the heat that is exchanged.
$dS = dS_{prod} + dS_{exchange} \nonumber$
If the process is reversible (that is completely non-spontaneous) we are dealing with $\delta q_{rev}$ so that $dS_{exchange} = \delta q_{rev} /T$, but that is also what $dS_{tot}$ is equal to (by definition). This leaves no room for entropy production.
So we have:
Isolated: $dS = dS_{prod} + 0$
Reversible $dS = 0 + \delta q_{rev}/T$
Notice that this demonstrates that for non-isolated systems, entropy change by itself is not a good criterion for spontaneity at all. In the case that the heat exchange is irreversible, part of the entropy change is entropy produced by the system:
Irreversible: $dS = dS_{prod} + \delta q_{irrev}/T$
$dS > \delta q_{irrev}/T$ in this case.
Generalizing the isolated, irreversible and reversible cases we may say:
$dS \ge \dfrac{δq}{T} \nonumber$
This is the Clausius inequality.
20.05: The Famous Equation of Statistical Thermodynamics
A common interpretation of entropy is that it is somehow a measure of chaos or randomness. There is some utility in that concept. Given that entropy is a measure of the dispersal of energy in a system, the more chaotic a system is, the greater the dispersal of energy will be, and thus the greater the entropy will be. Ludwig Boltzmann (1844 – 1906) (O'Connor & Robertson, 1998) understood this concept well, and used it to derive a statistical approach to calculating entropy. Boltzmann proposed a method for calculating the entropy of a system based on the number of energetically equivalent ways a system can be constructed.
Boltzmann proposed an expression, which in its modern form is:
$S = k_b \ln(W) \label{Boltz}$
$W$ is the number of available microstates in a macrostate (ensemble of systems) and can be taken as the quantitative measure of energy dispersal in a macrostate:
$W=\frac{A!}{\prod_j{a_j}!} \nonumber$
Where $a_j$ is the number of systems in the ensemble that are in state $j$ and $A$ represents the total number of systems in the ensemble:
$A=\sum_j{a_j} \nonumber$
Equation \ref{Boltz} is a rather famous equation etched on Boltzmann’s grave marker in commemoration of his profound contributions to the science of thermodynamics (Figure $1$).
Example $1$:
Calculate the entropy of a carbon monoxide crystal, containing 1.00 mol of $\ce{CO}$, and assuming that the molecules are randomly oriented in one of two equivalent orientations.
Solution:
Using the Boltzmann formula (Equation \ref{Boltz}), and noting that each of the $N = nN_A$ molecules can take one of two orientations, the crystal has $W = 2^N$ microstates, so
$S = k_B \ln (2^N) = N k_B \ln 2 \nonumber$
and the calculation is straightforward.
\begin{align*} S &= \left(1.00 \, mol \cdot \dfrac{6.022\times 10^{23}}{1\,mol} \right) (1.38 \times 10^{-23} J/K) \ln 2 \ &= 5.76\, J/K \end{align*} \nonumber
The second law of thermodynamics can be formulated in many ways, but in one way or another they are all related to the fact that the state function entropy, $S$, tends to increase over time in isolated systems. For a long time, people have looked at the entire universe as an example of an isolated system and concluded that its entropy must be steadily increasing until it reaches a maximum and $dS_{universe}$ becomes zero. As we will see below, the second law has important consequences for the question of how we can use heat to do useful work.
Of late, cosmologists such as the late Stephen Hawking have begun to question the assumption that the entropy of the universe is steadily increasing. The key problem is the role that gravity and relativity play in creating black holes.
Vacuum Expansion
Let's compare two expansions from $V_1$ to $V_2$ for an ideal gas, both are isothermal. The first is an irreversible one, where we pull a peg an let the piston move against vacuum:
The second one is a reversible isothermal expansion from $V_1$ to $V_2$ (and $P_1$ to $P_2$) that we have examined before. In both cases, there is no change in internal energy since $T$ does not change. During the irreversible expansion, however, there is also no volume work because the piston is expanding against a vacuum and the following integral:
$\int -P_{ext}dV = 0 \nonumber$
integrates to zero. The piston has nothing to perform work against until it slams into the right hand wall. At this point $V=V_2$ and then $dV$ becomes zero. This is not true for the reversible isothermal expansion as the external pressure must always equal the internal pressure.
No energy and no work means no heat!
Clearly the zero heat is irreversible heat ($q_{irr} = 0$) and this makes it hard to calculate the entropy of this spontaneous process. But then this process ends in the same final state as the reversible expansion from $V_1$ to $V_2$. We know that $dU$ is still zero, but now $δw_{rev} = -δq_{rev}$ is nonzero. We calculated its value before:
$q_{rev} = nRT \ln \left(\dfrac{V_2}{V_1} \right) \label{Vacuum}$
The Clausius definition of entropy change can be used to find $\Delta S$ (under constant temperature).
$\Delta S = \dfrac{q_{rev}}{T} \label{Clausius}$
Substituting Equation $\ref{Vacuum}$ into Equation \ref{Clausius} results in
$\Delta S = nR \ln \left(\dfrac{V_2}{V_1} \right) \nonumber$
As $S$ is a state function this equation also holds for the irreversible expansion against vacuum.
Always calculate the entropy difference between two points along a reversible path.
For the irreversible expansion into vacuum we see that
\begin{align*}\Delta S_\text{total} &= \Delta S_\text{sys} + \Delta S_\text{surr} \[4pt] &= nR\ln \left( \dfrac{V_2}{V_1}\right) + 0 \[4pt] &= nR\ln \left( \dfrac{V_2}{V_1}\right) \end{align*}
For the reversible expansion, heat is transferred to the system while the system does work on the surroundings in order to keep the process isothermal:
\begin{align*}\Delta S_\text{sys} &= nR\ln \left( \dfrac{V_2}{V_1}\right) \end{align*}
The entropy change for the surrounding is the opposite of the system:
\begin{align*}\Delta S_\text{surr} &= -nR\ln \left( \dfrac{V_2}{V_1}\right) \end{align*}
This is because the amount of heat transferred to the system is the same as the heat transferred from the surroundings and this process is reversible so the system and surroundings are at the same temperature (equilibrium). Heat is related to entropy by the following equation:
$dS = \frac{\delta q_{rev}}{T} \nonumber$
Therefore, the total entropy change for the reversible process is zero:
\begin{align*}\Delta S_\text{total} &= \Delta S_\text{sys} + \Delta S_\text{surr} \[4pt] &= nR\ln \left( \dfrac{V_2}{V_1}\right)- nR\ln \left( \dfrac{V_2}{V_1}\right) \[4pt] &=0 \end{align*}
The Mixing of Two Gases
Consider two ideal gases at the same pressure separated by a thin wall that is punctured. Both gases behave as if the other one is not there, and again we get a spontaneous process, mixing in this case.
If the pressure is the same the number of moles of each gas should be proportional to the original volumes, $V_A$ and $V_B$, and the total number of moles to the total volume $V_{tot}$.
For gas A we can write:
$\Delta S_A = n_A R \ln \dfrac{V_{tot}}{V_A} = n_A R \ln \dfrac{n_{tot}}{n_A} \nonumber$
and similarly for gas B we can write:
$\Delta S_B = n_B R \ln \dfrac{V_{tot}}{V_B} = n_B R \ln \dfrac{n_{tot}}{n_B} \nonumber$
The total entropy change is therefore the sum of constituent entropy changes:
$\Delta S = \Delta S_A + \Delta S_B \nonumber$
and the total entropy change per mole of gas is:
$\dfrac{\Delta S}{n_{tot}} =R \dfrac{\left[n_B \ln \dfrac{n_{tot}}{n_B}+ n_A \ln \dfrac{n_{tot}}{n_A} \right ]}{n_{tot}} \label{EqTot}$
Equation $\ref{EqTot}$ can be simplified using mole fractions:
$\chi_A = \dfrac{n_A}{n_{tot}} \nonumber$
and the mathematical relationship of logarithms that:
$\ln \left( \dfrac{x}{y} \right)= - \ln \left( \dfrac{y}{x} \right) \nonumber$
to:
$\Delta \bar{S} = -R \left [\chi_A\ln \chi_A +\chi_B \ln \chi_B \right] \label{Molar Entropy}$
In the case of mixing of more than two gases, Equation $\ref{Molar Entropy}$ can be expressed as:
$\Delta \bar{S} = -R \sum \chi_i\ln \chi_i \label{Sum Entropy}$
This entropy expressed in Equations $\ref{Molar Entropy}$ and $\ref{Sum Entropy}$ is known as the entropy of mixing; its existence is the major reason why there is such a thing as diffusion and mixing when gases, and also solutions (even solid ones), are brought into contact with each other.
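Equation $\ref{Sum Entropy}$ is easy to evaluate numerically; a minimal sketch:

```python
import numpy as np

R = 8.314  # J/(mol K)

def mixing_entropy(x):
    """Molar entropy of mixing, -R * sum(x_i ln x_i), for mole fractions x."""
    x = np.asarray(x, dtype=float)
    return -R * np.sum(x * np.log(x))

print(mixing_entropy([0.5, 0.5]))          # R ln 2 ~ 5.76 J/(mol K)
print(mixing_entropy([0.78, 0.21, 0.01]))  # a roughly air-like composition
```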
Heat and work are both forms of transferring energy, and under the right circumstance, one form may be transformed into the other. However, the second law of thermodynamics puts a limitation on this. To go from work to heat is called dissipation and there is no limitation on this at all. In fact it was through dissipation (by friction) that we discovered that heat and work were both forms of energy. There is, however, a limitation on converting heat to work.
The Carnot Cycle
Let's consider a circular, reversible path of an ideal gas on a PV diagram:
This cycle forms the 4-stage Carnot cycle heat engine. A heat engine converts heat energy into work. The cycle consists of:
1. Isothermal expansion at the hot temperature, $T_h$: $\Delta U_1=w_1+q_h=0 \nonumber$
2. Adiabatic cooling from $T_h$ to $T_c$: $\Delta U_2=w_2 \nonumber$
3. Isothermal compression at the cold temperature, $T_c$: $\Delta U_3=w_3+q_c=0 \nonumber$
4. Adiabatic heating from $T_c$ to $T_h$: $\Delta U_4=w_4 \nonumber$
The total four-step process produces work because $|w_{hot}| > |w_{cold}|$. The work is the integral under the upper isotherm minus the one under the lower curve, i.e., the surface area in between.
Sadi Carnot
Sadi Carnot was a French engineer at the beginning of the 19th century. He considered a cyclic process involving a cylinder filled with gas. This cycle, the Carnot cycle, contributed greatly to the development of thermodynamics and the improvement of the steam engine. Carnot demonstrated that the cold temperature on the right is as important as the heat source on the left in defining the possible efficiency of a heat engine.
Efficiency
Of course we spend good money on the fuel to start the cycle by heating things up. So how much work do we get for the heat we put in? In other words, we want to know how efficient our heat engine is. The efficiency, $\eta$ of a heat engine is:
$\eta=\frac{|w_\text{cycle}|}{q_h}=\frac{q_h+q_c}{q_h}=1+\frac{q_c}{q_h} \nonumber$
To get the work of the cycle, we can make use of internal energy as a state function. As the path is circular the circular integrals for $U$ is zero:
\begin{align*} \oint{dU} &= \Delta U_{\text{cycle}} \[4pt] &=\sum{\Delta U_i} \[4pt] &=w_1+q_h+w_2+w_3+q_c+w_4 \[4pt] &=0 \end{align*}
Rearranging:
\begin{align*} q_h + q_c &=-w_1-w_2-w_3-w_4 \[4pt] &=-w_\text{cycle} \end{align*}
An ideal engine would take $q_h\rightarrow q_c$ with 100% efficiency. The work of the cycle will be equivalent to the heat transfer. For ideal gases:
1. $w_1=-RT_h\ln{\left(\frac{V_B}{V_A}\right)}=-q_h$
2. $dU=\delta w=\bar{C}_VdT\rightarrow w_2={\bar{C}}_V\left(T_c-T_h\right)$
3. $w_3=-RT_c\ln{\left(\frac{V_D}{V_C}\right)}=-q_c$
4. $w_4={\bar{C}}_V\left(T_h-T_c\right)$
Finding an expression for $w_\text{cycle}$:
$\begin{split} w_\text{cycle} &= -RT_h\ln{\left(\frac{V_B}{V_A}\right)}+{\bar{C}}_V\left(T_c-T_h\right)-RT_c\ln{\left(\frac{V_D}{V_C}\right)}+{\bar{C}}_V\left(T_h-T_c\right) \ &= -RT_h\ln{\left(\frac{V_B}{V_A}\right)}-RT_c\ln{\left(\frac{V_D}{V_C}\right)}= -R(T_h-T_c)\ln{\left(\frac{V_B}{V_A}\right)} \end{split} \nonumber$
We have an expression for work, so we can evaluate the efficiency, $\eta$. The efficiency of the Carnot engine is:
$\eta=\frac{|w_\text{cycle}|}{q_h}=\frac{R\left(T_h-T_c\right)\ln{\left(\frac{V_B}{V_A}\right)}}{RT_h\ln{\left(\frac{V_B}{V_A}\right)}}=\frac{T_h-T_c}{T_h}=1-\frac{T_c}{T_h} \nonumber$
Paths (2) and (4) are adiabats, so we can also use entropy, $S$, to get the same solution:
$\oint{dS}= \frac{q_h}{T_h}+\frac{q_c}{T_c}=0 \nonumber$
Therefore:
$\dfrac{q_c}{q_h} = -\dfrac{T_c}{T_h} \nonumber$
And we get that:
$η= 1+ \dfrac{q_c}{q_h} = 1-\dfrac{T_c}{T_h} \label{eff}$
As you see, we can only get full efficiency if $T_{cold}$ is 0 K, which never happens (i.e., we always waste energy). Another implication is that if $T_c = T_h$, then no work can be obtained, no matter how much energy is available in the form of heat. In other words, if one dissipates work into heat isothermally, none of it can be retrieved. Equation $\ref{eff}$ is not very forgiving at all. Just imagine that you have a heat source of say 400 K (a superheated pool of water, e.g. a geyser) and you are dumping into a river at room temperature, 300 K. The best efficiency you'll ever get is:
$η= 1-\dfrac{300}{400}= 25\% \nonumber$
Sadly, you'd be dumping three quarters of your energy as heat in the river (and that is best case scenario as there are always more losses, e.g. due to friction). The arrow saying $Q_C$ in Figure 20.7.3 should then be three times as fat as the one that says $w$.
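The geyser estimate is a one-line calculation; a minimal sketch:

```python
def carnot_efficiency(T_hot, T_cold):
    """Maximum efficiency of a heat engine operating between two temperatures (K)."""
    return 1.0 - T_cold / T_hot

print(f"{carnot_efficiency(400.0, 300.0):.0%}")   # 25%
```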
Heat pumps
So getting our work from heat is hard and always less than 100% successful. The other way around should be easy. After all, we can dissipate work into heat freely even under isothermal conditions!
What happens if we let the heat engine run backwards? Consider reversing all the flows in the above diagram. Obviously we must put in work to make the cycle run in reverse. The heat will now flow from cold to hot, say from your cold garden into your nice and warm apartment. The amount of heat you get in your humble abode will be the sum of all the work (say 100 Joules) you dissipate plus the heat you pumped out of the garden (say 300 Joules). Thus if you are willing to pay for the energy you dissipate (100 Joules), you may well end up with a total of 400 Joules of heat in your apartment! Obviously if it is heat you are after, this is a better deal than just dissipating the work in your apartment (by burning some oil). Then you'd only get 100 J for your precious buck.
Dissipating it as electrical heating is even worse because you would
1. first burn (a lot more!) oil to generate heat
2. use this heat to produce electrical work at great expense, because a lot of the heat gets dumped at the low temperature side (the river or so)
3. dissipate the work again in your apartment (without using it to pump any heat out of the garden)
Refrigerators are also heat pumps. They heat the kitchen by pumping heat from its innards to the kitchen. If I keep the door open, however, all it does is dissipate precious electrical work, because the pumped heat will flow back into its innards and spoil the milk.
Stirling Engine
Let's consider another type of heat engine, the Stirling engine. The Stirling engine uses a circular reversible path in an ideal PV diagram:
The path consists of four steps:
1. Isothermal expansion at the hot temperature $T_h$: $q_1=q_h =-w \nonumber$ and we get work out (i.e., negative work): $w_h =-RT_h \ln \left( \dfrac{V_B}{V_A} \right) \nonumber$
2. Isochoric heating from $T_c$ to $T_h$ (with constant $C_V$): $q_2= C_V\,ΔT \label{q2}$
3. Isothermal compression at the cold temperature $T_c$: $q_3=q_c =-w \nonumber$ and we must put work in (i.e., positive work): $w_c =-RT_c \ln \left( \dfrac{V_D}{V_C}\right)=RT_c \ln \left( \dfrac{V_B}{V_A} \right) \nonumber$
4. Isochoric cooling from $T_h$ to $T_c$ (with constant $C_V$): $q_4= -C_V\,ΔT \label{q4}$
Notice that this gray area vanishes if $T_{h}= T_{c}$. Obviously, how cold the cold side is matters greatly! The amount of work is also equal to the difference between the heat picked up at high temperature, $q_h$, and the heat dumped at low temperature, $q_{c}$. The isochoric heats cancel. The problem is that $q_{c}$ is only zero if the cold temperature is 0 K. That means that we can never get all the heat we pick up at high temperatures to come out as work.
20.08: Entropy Can Be Expressed in Terms of a Partition Function
We have seen that the partition function of a system gives us the key to calculate thermodynamic functions like energy or pressure as a moment of the energy distribution. We can extend this formalism to calculate the entropy of a system once its $Q$ is known. We can start with Boltzmann's (statistical) definition of entropy:
$S = k \ln(W) \label{Boltz}$
with
$W=\frac{A!}{\prod_j{a_j}!} \nonumber$
Combining these equations, we obtain:
$S_{ensemble} = k \ln\frac{A!}{\prod_j{a_j}!} \nonumber$
Rearranging:
$S_{ensemble} = k \ln{A!}-k \sum_j{\ln{a_j!}} \nonumber$
Using Stirling's approximation:
$\begin{split}S_{ensemble} &= k A\ln{A}-k A - k \sum_j{a_j\ln{a_j}} + k \sum_j{a_j} \ &= k A\ln{A}- k \sum_j{a_j\ln{a_j}}\end{split} \nonumber$
Since:
$\sum_j{a_j}=A \nonumber$
The probability of finding a system of the ensemble in state $j$ is:
$p_j=\frac{a_j}{A} \nonumber$
Rearranging:
$a_j = p_jA \nonumber$
Plugging in:
$S_{ensemble}=k A\ln{A}- k \sum_j{p_jA\ln{p_jA}} \nonumber$
Rearranging:
$S_{ensemble}=k A\ln{A}- k \sum_j{p_jA\ln{p_j}}- k \sum_j{p_jA\ln{A}} \nonumber$
If $A$ is constant, then:
$k \sum_j{p_jA\ln{A}} = k A\ln{A}\sum_j{p_j} \nonumber$
Since:
$\sum_j{p_j} = 1$
We get:
$S_{ensemble}=k A\ln{A}- k \sum_j{p_jA\ln{p_j}}- k A\ln{A} \nonumber$
The first and last term cancel out:
$S_{ensemble}=- k \sum_j{p_jA\ln{p_j}} \nonumber$
We can divide by $A$ to get the entropy of the system:
$S_{system}=- k \sum_j{p_j\ln{p_j}} \label{eq10}$
If all the $p_j$ are zero except for one, then the system is perfectly ordered and the entropy of the system is zero. The probability of being in state $j$ is
$p_j=\frac{e^{-\beta E_j}}{Q} \label{eq15}$
Plugging Equation \ref{eq15} into Equation \ref{eq10} results in
\begin{align*}S &= - k \sum_j{\frac{e^{-\beta E_j}}{Q}\ln{\frac{e^{-\beta E_j}}{Q}}} \[4pt] &= - k \sum_j{\frac{e^{-\beta E_j}}{Q}\left(-\beta E_j- \ln{Q}\right)} \[4pt] &= \beta k \sum_j{\frac{E_je^{-\beta E_j}}{Q}}+\frac{k\ln{Q}}{Q}\sum_j{e^{-\beta E_j}} \end{align*}
Making use of:
$\beta k=\frac{1}{T} \nonumber$
And:
$\sum{\frac{e^{-\beta E_j}}{Q}}=\sum{p_j}=1 \nonumber$
We obtain:
$S= \dfrac{U}{T} + k\ln Q \label{20.42}$
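As a consistency check, a minimal sketch for a hypothetical two-level system (the level spacing below is invented for illustration) compares Equation \ref{20.42} with the equivalent form $S = -k\sum_j p_j \ln p_j$:

```python
import numpy as np

k_B = 1.380649e-23            # J/K
T = 300.0                     # K
E = np.array([0.0, 2.0e-21])  # two energy levels (J); made-up spacing

beta = 1.0 / (k_B * T)
Q = np.sum(np.exp(-beta * E))             # partition function
p = np.exp(-beta * E) / Q                 # Boltzmann probabilities
U = np.sum(p * E)                         # average energy

S_from_Q = U / T + k_B * np.log(Q)        # Equation 20.42
S_from_p = -k_B * np.sum(p * np.log(p))   # Gibbs/Boltzmann form

print(S_from_Q, S_from_p)                 # identical
```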
20.09: The Statistical Definition of Entropy is Analogous to the Thermodynamic Definitio
We learned earlier in 20-5 that entropy, $S$, is related to the number of microstates, $W$, in an ensemble with $A$ systems:
$S_{ensemble}=k_B \ln{W} \label{eq1}$
and
$W=\frac{A!}{\prod_j{a_j}!} \label{eq2}$
Combining Equations \ref{eq1} and \ref{eq2} to get:
$\begin{split} S_{ensemble} &= k_B \ln{\frac{A!}{\prod_j{a_j}!}} \ &= k_B \ln{A!}-k_B\sum_j{\ln{a_j}!} \end{split} \nonumber$
Using Stirling's approximation: $\ln{A!} \approx A\ln{A}-A \nonumber$
We obtain:
$S_{ensemble} = k_B A \ln{A}-k_BA-k_B\sum_j{a_j\ln{a_j}}+k_B\sum{a_j} \nonumber$
Since:
$A=\sum{a_j} \nonumber$
The expression simplifies to:
$S_{ensemble} = k_B A \ln{A}-k_B\sum_j{a_j\ln{a_j}} \nonumber$
We can make use of the fact that the number of systems in state $j$ is equal to the total number of systems multiplied by the probability of finding the system in state $j$, $p_j$:
$a_j=p_jA \nonumber$
Plugging in, we obtain
$\begin{split}S_{ensemble} &= k_B A \ln{A}-k_B\sum_j{p_jA\ln{p_jA}} \ &= k_B A \ln{A}-k_B\sum_j{p_jA\ln{p_j}}-k_B\sum_j{p_jA\ln{A}} \end{split} \nonumber$
Since $A$ is a constant and the sum of the probabilities of finding the system in state $j$ is always 1:
$\sum{p_j}=1 \nonumber$
The first and last term cancel out:
$S_{ensemble} = -k_BA\sum_j{p_j\ln{p_j}} \nonumber$
We can use that the entropy of the system is the entropy of the ensemble divided by the number of systems:
$S_{system}=S_{ensemble}/A \nonumber$
Dividing by $A$, we obtain:
$S_{system} = -k_B\sum_j{p_j\ln{p_j}} \nonumber$
We can differentiate this equation and drop the subscript:
$dS = -k_B\sum_j{\left(dp_j+\ln{p_j}dp_j\right)} \nonumber$
Since $\sum_j{p_j}=1$, the derivative $\sum_j{dp_j}=0$:
$dS = -k_B\sum_j{\ln{p_j}dp_j} \nonumber$
Using the Boltzmann distribution for $p_j$, it can be shown that:
$\sum_j{\ln{p_j}dp_j}=-\frac{\delta q_{rev}}{k_BT} \nonumber$
Plugging in:
$dS = \frac{\delta q_{rev}}{T} \nonumber$
Learning Objectives
• Define entropy and its relation to energy flow.
Entropy versus temperature
We can put together the first and the second law for a reversible process with no other work than volume ($PV$) work and obtain:
$dU= δq_{rev} + δw_{rev} \nonumber$
Entropy is the dispersal of energy and is related to heat:
$δq_{rev}= TdS \nonumber$
Work is related to the change in volume:
$δw_{rev}= -PdV \nonumber$
Plugging these into our expression for $dU$ for reversible changes:
$dU= TdS -PdV \nonumber$
We no longer have any path functions in the expression, as $U$, $S$ and $V$ are all state functions. This means this expression must be an exact differential. We can generalize the expression to hold for irreversible processes, but then the expression becomes an inequality:
$dU≤ TdS - PdV \nonumber$
This equality expresses $U$ as a function of two variables, entropy and volume: $U(S,V)$. $S$ and $V$ are the natural variables of $U$.
Entropy and heat capacity
At constant volume, $dU$ becomes:
$dU=TdS \nonumber$
Recall that internal energy is related to constant volume heat capacity, $C_V$:
$C_V=\left(\frac{\partial U}{\partial T}\right)_V \nonumber$
Combining these two expressions, we obtain:
$dS=\frac{C_V}{T}dT \nonumber$
Integrating:
$\Delta S=\int_{T_1}^{T_2}{\frac{C_V(T)}{T}dT} \nonumber$
If we know how $C_V$ changes with temperature, we can calculate the change in entropy, $\Delta S$. Since heat capacity is always a positive value, entropy must increase as the temperature increases. There is nothing to stop us from expressing $U$ in other variables, e.g. $T$ and $V$. In fact, we can derive some interesting relationships if we do.
Example 21.1.1
1. Write $U$ as a function of $T$ and $V$.
2. Write $U$ as a function of its natural variables.
3. Rearrange (2) to find an expression for $dS$.
4. Substitute (1) into (3) and rearrange. This is the definition of $C_V$.
5. Write out $S$ as a function of $T$ and $V$.
We can also derive an expression for the change in entropy as a function of constant pressure heat capacity, $C_P$. To start, we need to change from internal energy, $U$, to enthalpy, $H$:
\begin{align*} H &= U + PV \[4pt] dH &= dU +d(PV) \[4pt] &= dU + PdV + VdP \end{align*} \nonumber
For reversible processes:
\begin{align*} dH &= dU + PdV + VdP \[4pt] &= TdS -PdV + PdV + VdP \[4pt] &= TdS + VdP\end{align*} \nonumber
The natural variables of the enthalpy are $S$ and $P$ (not: $V$). A similar derivation as above shows that the temperature change of entropy is related to the constant pressure heat capacity:
$dH=TdS+VdP \nonumber$
At constant pressure:
$dH=TdS+VdP \nonumber$
Recall that:
$C_P=\left(\frac{\partial H}{\partial T}\right)_P \nonumber$
Combining, we obtain:
$dS=\frac{C_P}{T}dT \nonumber$
Integrating:
$\Delta S=\int_{T_1}^{T_2}{\frac{C_P(T)}{T}dT} \nonumber$
This means that if we know the heat capacities as a function of temperature we can calculate how the entropy changes with temperature. Usually it is easier to obtain data under constant $P$ conditions than for constant $V$, so that the route with $C_p$ is the more common one.
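As with enthalpy, the integral can be evaluated numerically once a heat-capacity fit is available. A minimal sketch, reusing the empirical $a$, $b$, $c$ parameters for Pb(s) from the Kirchhoff's law discussion:

```python
from scipy.integrate import quad

# Empirical C_p(T) = a + b*T + c/T**2 for Pb(s) (same parameters as before)
a, b, c = 22.13, 1.172e-2, 9.6e4   # J/(mol K), J/(mol K^2), J K/mol

dS, _ = quad(lambda T: (a + b * T + c / T**2) / T, 273.0, 353.0)
print(f"Delta S = {dS:.2f} J/(mol K)")   # ~6.9 J/(mol K)
```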
21.02: Absolute Entropy
In the unlikely case that we have $C_p$ data all the way from 0 K we could write:
$S(T) = S(0) + \int_{0}^{T} C_p dT \nonumber$
It is tempting to set $S(0)$ to zero and thus make the entropy an absolute quantity. As we have seen with enthalpy, it was not really possible to set $H(0)$ to zero. All we did was define $ΔH$ for a particular process, although if $C_p$ data are available we could construct an enthalpy function (albeit with a floating zero point) by integration of $C_P$ (instead of $C_P/T$!).
Still, for entropy $S$ the situation is a bit different than for $H$. Here we can actually put things on an absolute scale. Both Nernst and Planck proposed to do so. Nernst postulated that for a pure and perfect crystal, $S$ should indeed go to zero as $T$ goes to zero. For example, sulfur has two solid crystal structures, rhombic and monoclinic. At 368.5 K, the entropy of the phase transition from rhombic to monoclinic, $S_{(rh)}\rightarrow S_{(mono)}$, is:
$\Delta_\text{trs}S(\text{368.5 K})=1.09\;\sf\frac{J}{mol\cdot K} \nonumber$
As the temperature is lowered to 0 K, the entropy of the phase transition approaches zero:
$\lim_{T\rightarrow 0}\Delta_\text{trs}S=0 \nonumber$
This shows that the entropies of the two crystalline forms become the same as $T \rightarrow 0$. The only way this can hold in general is if all species have the same absolute entropy at 0 K. For energy dispersal at 0 K:
1. Energy is as concentrated as it can be.
2. Only the very lowest energy states can be occupied.
For a pure, perfect crystalline solid at 0 K, there is only one energetic arrangement to represent its lowest energy state. We can use this to define a natural zero, giving entropy an absolute scale. The third Law of Thermodynamics states that the entropy of a pure substance in a perfect crystalline form is 0 $\sf\frac{J}{mol\cdot K}$ at 0 K:
${\bar{S}}_{0\text{ K}}^\circ=0 \nonumber$
This is consistent with our molecular formula for entropy:
$S=k\ln{W} \nonumber$
For a perfect crystal at 0 K, the number of ways the total energy of a system can be distributed is one ($W=1$). The $\ln{W}$ term goes to zero, resulting in the perfect crystal at 0 K having zero entropy.
It is certainly true that for the great majority of materials we end up with a crystalline material at sufficiently low temperatures (although there are odd exceptions like liquid helium). However, it should be mentioned that a completely perfect crystal could only be grown at zero Kelvin, and it is not possible to grow anything at 0 K. At any finite temperature the crystal always incorporates defects, more so if grown at higher temperatures. When cooled down very slowly, the defects tend to be ejected from the lattice as the crystal reaches a new equilibrium with fewer defects. This tendency towards less and less disorder upon cooling is what the third law is all about.
However, the ordering process often becomes impossibly slow, certainly when approaching absolute zero. This means that real crystals always have some frozen in imperfections. Thus there is always some residual entropy. Fortunately, the effect is often too small to be measured. This is what allows us to ignore it in many cases (but not all).
We could state the Third law of thermodynamics as follows:
The entropy of a perfect crystal approaches zero when T approaches zero (but perfect crystals do not exist).
Another complication arises when the system undergoes a phase transition, e.g. the melting of ice. As we can write:
$Δ_{fus}H= q_P \nonumber$
If ice and water are in equilibrium with each other the process is quite reversible and so we have:
$Δ_{fus}S = \dfrac{q_{rev}}{T}= \dfrac{Δ_{fus}H}{T} \nonumber$
This means that at the melting point the curve for $S$ makes a sudden jump by this amount, because all this happens at one and the same temperature. Entropies are typically calculated from calorimetric data ($C_P$ measurements, e.g.) and tabulated in standard molar form. For gases, the standard state at any temperature is the hypothetical corresponding ideal gas at one bar.
In Table 21.3 a number of such values are shown. There are some clear trends. For example, as the noble gases get heavier, the molar entropy increases. This is a direct consequence of the particle in a box formula: it has mass in the denominator, and therefore the energy levels get more crowded together when $m$ increases: more energy levels, more entropy.
The energy levels get more crowded together when $m$ increases: more energy levels, more entropy
Tabulation of $H$, $S$ and $G$. Frozen entropy
There are tables for $H^\circ(T)-H^\circ(0)$, $S^\circ(T)$ and $G^\circ(T)-H^\circ(0)$ as a function of temperature for numerous substances. As we discussed before, the plimsoll symbol ($^\circ$) defines our standard state in terms of pressure (1 bar) and of concentration reference states, if applicable, but the temperature is the one of interest.
For most substances the Third Law assumption that $\lim_{T\rightarrow 0} S^\circ(T) = 0$ is a reasonable one, but there are notable exceptions, such as carbon monoxide. In the solid form, carbon monoxide molecules should ideally be fully ordered at absolute zero, but because the sizes of the carbon and oxygen atoms are very close and the dipole of the molecule is small, it is quite possible to put a molecule in 'upside down', i.e. with the oxygen on a carbon site and vice versa. At the higher temperatures at which the crystal is formed, this lowers the Gibbs energy ($G$) because it increases entropy. In fact, if we can put each molecule into the lattice in two different ways, the number of ways $W_{disorder}$ we can put $N$ molecules into the lattice is $2^N$. This leads to an additional contribution to the entropy of
$S_{disorder} = k\,\ln{W_{disorder}}= N k \ln 2 = R \ln 2 = 5.76\, \frac{\text{J}}{\text{mol}\cdot\text{K}}\;\;(\text{per mole, } N = N_A) \nonumber$
Although at lower temperatures the entropy term in $G = H-TS$ becomes less and less significant and the ordering of the crystal to a state of lower entropy should become a spontaneous process, in reality the kinetics are so slow that the ordering process does not happen and solid CO therefore has a non-zero entropy when approaching 0 K.
In principle all crystalline materials have this effect to some extent, but CO is unusual because the concentration of 'wrongly aligned' entities is of the order of 50%, rather than say 1 in $10^{13}$ (a typical defect concentration in, say, single crystal silicon).
Phase transitions (e.g. melting) often occur under equilibrium conditions. We have seen that both the $H$ and the $S$ curves undergo a discontinuity at constant temperature during melting, because there is an enthalpy of fusion to overcome. For a general phase transition at equilibrium at constant $T$ and $P$, we can say that:
$Δ_{trs}G = Δ_{trs}H - T_{trs}Δ_{trs}S = 0 \nonumber$
$Δ_{trs}H = T_{trs}Δ_{trs}S \nonumber$
$\dfrac{Δ_{trs}H}{T_{trs}}=Δ_{trs}S \nonumber$
For melting of a crystalline solid, we now see why there is a sudden jump in entropy: the liquid has a much less ordered structure than the crystalline solid. The decrease in order implies a finite $Δ_{trs}S$ (and, through $Δ_{trs}H = T_{trs}Δ_{trs}S$, a jump in enthalpy as well). We should stress at this point that we are talking about first order transitions here. The reason for this terminology is that the discontinuity is in a function like $S$, that is a first order derivative of $G$ (or $A$):
$\left(\frac{\partial\bar{G}}{\partial T}\right)_P=-\bar{S} \nonumber$
Second order derivatives (e.g. the heat capacity) will display a singularity (+∞) at the transition point.
Every phase transition will have a change in entropy associated with it. The different types of phase transitions that can occur are:
$l \rightarrow g$ Vaporization / boiling
$g \rightarrow l$ Condensation
$s \rightarrow l$ Fusion / melting
$l \rightarrow s$ Freezing
$s \rightarrow g$ Sublimation
$g \rightarrow s$ Deposition
$s \rightarrow s$ Solid to solid phase transition
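As a quick numerical illustration of $\Delta_{trs}S = \Delta_{trs}H/T_{trs}$, the short Python sketch below evaluates the molar entropy of fusion of ice from its enthalpy of fusion (about 6.01 kJ/mol) and its normal melting point; the input values are approximate literature numbers.

```python
# Entropy of a phase transition from the equilibrium relation Delta_S = Delta_H / T.
# Approximate values for the fusion of ice.
delta_H_fus = 6010.0   # J/mol, enthalpy of fusion of ice
T_fus = 273.15         # K, normal melting point

delta_S_fus = delta_H_fus / T_fus
print(f"Delta_fus S = {delta_S_fus:.1f} J/(mol K)")  # about 22 J/(mol K)
```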
21.04: The Third Law of Thermodynamics
The Debye $T^3$ law describes the heat capacity of nonmetallic crystals at low temperatures, between roughly 0 K and 15 K:
$\bar{C}_P=aT^3 \nonumber$
The constant $a$ can be found by requiring that ${\bar{C}}_P$ be continuous with the lowest-temperature ${\bar{C}}_P$ data point. The law is named after the Dutch physical chemist Peter Debye, who derived this relationship theoretically. Metallic crystals obey a slightly different equation at low temperature:
$\bar{C}_P=aT+bT^3 \nonumber$
where $a$ and $b$ are constants.
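A minimal sketch of how these low-temperature laws are used in practice for a nonmetallic crystal: the constant $a$ is pinned to the lowest measured heat-capacity point, and the entropy accumulated between 0 K and that temperature follows from integrating $C_P/T = aT^2$. The numbers below are invented for illustration.

```python
# Pin the Debye T^3 law to the lowest measured heat-capacity point, then
# integrate C_P/T = a*T^2 from 0 K up to that temperature.
T_low = 15.0       # K, lowest temperature with measured data (illustrative)
Cp_low = 0.80      # J/(mol K), measured molar C_P at T_low (illustrative)

a = Cp_low / T_low**3          # makes C_P = a*T^3 continuous with the data
S_low = a * T_low**3 / 3.0     # integral of a*T^2 dT from 0 to T_low = Cp_low/3
print(f"a = {a:.3e} J/(mol K^4), S(0 -> {T_low:.0f} K) = {S_low:.3f} J/(mol K)")
```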
21.05: Practical Absolute Entropies Can Be Determined Calorimetrically
From section 21.1, we learned that the entropy at constant pressure changes with temperature by:
$\Delta S=\int_{T_1}^{T_2}{\frac{C_P(T)}{T}dT} \nonumber$
From section 21.3, we learned that the entropy of a phase transition is:
$\Delta_{trs} S=\frac{\Delta_{trs}H}{T_{trs}} \nonumber$
Both the heat capacity and enthalpy of transition can be experimentally determined using calorimetry. Using experimental values with the two above expressions and the convention that the entropy at absolute zero (0 K) is zero, we can calculate the practical absolute entropy of a substance for any temperature. For example, the entropy of CO2 gas at 300 K can be calculated by:
$S(T)=\int_{0 K}^{T_{sub}}{\frac{C_P^s(T)}{T}dT}+\frac{\Delta_{sub}H}{T_{sub}}+\int_{T_{sub}}^{300\text{ K}}{\frac{C_P^g(T)}{T}dT} \nonumber$
Where the temperature of sublimation ($T_{sub}$) is 194.7 K.
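The sketch below assembles such a practical absolute entropy numerically: a Debye extrapolation below the lowest data point, trapezoidal integration of $C_P/T$ within each phase, and the sublimation term. The heat-capacity values are placeholders, not real CO2 measurements.

```python
import numpy as np

def trapz(y, x):
    """Trapezoidal rule for tabulated y(x)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# --- placeholder data (not real CO2 heat capacities) ---
T_sol = np.linspace(15.0, 194.7, 200)     # K, solid phase
Cp_sol = 5.0 + 0.25 * T_sol               # J/(mol K), made-up solid C_P
T_gas = np.linspace(194.7, 300.0, 100)    # K, gas phase
Cp_gas = np.full_like(T_gas, 37.1)        # J/(mol K), made-up gas C_P
dH_sub, T_sub = 25200.0, 194.7            # J/mol and K (illustrative)

S = Cp_sol[0] / 3.0                 # Debye T^3 extrapolation, 0 K -> 15 K
S += trapz(Cp_sol / T_sol, T_sol)   # solid: integral of C_P/T dT
S += dH_sub / T_sub                 # sublimation entropy
S += trapz(Cp_gas / T_gas, T_gas)   # gas: integral of C_P/T dT

print(f"S(300 K) ~ {S:.1f} J/(mol K)  (illustrative numbers only)")
```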
21.06: Practical Absolute Entropies of Gases Can Be Calculated from Partition Functions
Recall that the entropy of a system, $S$, can be calculated if the partition function, $Q$, is known:
$S=\frac{U}{T}+k_B \ln{Q} \nonumber$
where $Q$ is:
$Q=\sum_j{e^{-\frac{E_j}{kT}}} \nonumber$
The internal energy of the system can also be calculated from the partition function:
$U=k_BT^2\left(\frac{\partial \ln{Q}}{\partial T}\right) \nonumber$
Combining these equations, we obtain:
$S=k_B\ln{Q}+k_BT\left(\frac{\partial \ln{Q}}{\partial T}\right) \nonumber$
The third law of thermodynamics states that the entropy of a perfect crystal at absolute zero is zero. We can show that our equation for entropy in terms of the partition function is consistent with the third law. Plugging our expression for the partition function into our equation for entropy, we obtain:
$S=k_B\ln{\sum_j{e^{-\frac{E_j}{kT}}}}+k_BT\left(\frac{\partial}{\partial T}\ln{\sum_j{e^{-\frac{E_j}{kT}}}}\right) \nonumber$
As $T$ goes to zero, the system settles into its lowest-energy states. If the ground state is $n$-fold degenerate, then $n$ states share the lowest energy:
$E_1=E_2=...=E_n \nonumber$
The same holds for the higher levels. For example, if the next level is $m$-fold degenerate, then:
$E_{n+1}=E_{n+2}=...=E_{n+m} \nonumber$
This gives us the result:
$\sum_j{e^{-\frac{E_j}{kT}}} = ne^{-\frac{E_1}{kT}}+me^{-\frac{E_{n+1}}{kT}}+... \nonumber$
As $T \rightarrow 0$, the terms with higher energies vanish relative to the ground-state term, so $Q \rightarrow n e^{-E_1/k_BT}$ and $U \rightarrow E_1$. Substituting into $S = U/T + k_B\ln{Q}$ gives $S \rightarrow k_B \ln n$. For a perfect crystal the ground state is non-degenerate ($n = 1$), so $S \rightarrow 0$, consistent with the third law.
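To see this limit numerically, the sketch below evaluates $S = U/T + k_B\ln Q$ for a single particle with a small, arbitrary set of energy levels whose ground state is $n$-fold degenerate; as $T\rightarrow 0$ the entropy approaches $k_B\ln n$, which is zero for a non-degenerate ground state.

```python
import numpy as np

k_B = 1.380649e-23  # J/K

def entropy(levels, degeneracies, T):
    """S = U/T + k_B ln Q for one particle with the given energy levels (J)."""
    E = np.array(levels, dtype=float)
    g = np.array(degeneracies, dtype=float)
    boltz = g * np.exp(-E / (k_B * T))
    Q = boltz.sum()
    U = (E * boltz).sum() / Q            # mean energy
    return U / T + k_B * np.log(Q)

levels = [0.0, 1.0e-21]                  # J; ground state at 0, arbitrary spacing
for n in (1, 2):                         # ground-state degeneracy
    for T in (300.0, 10.0, 0.1):
        S = entropy(levels, [n, 1], T)
        print(f"n = {n}, T = {T:6.1f} K:  S/k_B = {S / k_B:.4f}")
# As T -> 0, S/k_B -> ln(n): 0 for n = 1 and ln 2 ~ 0.693 for n = 2.
```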
21.07: Standard Entropies Depend Upon Molecular Mass and Structure
Entropy is related to the number of microstates a collection of particles can occupy. As both the molecular mass and molecular structure of the particles will affect the number of available microstates, they also affect the entropy of the collection of particles.
From quantum theory, we know that increasing the molecular mass of a particle decreases the energy spacing between states. For a given temperature, more states are therefore thermally accessible, increasing the number of available microstates and hence the entropy of the system. The table below shows the molar entropies of the noble gases. As the mass increases, so does the molar entropy.
Noble Gas He Ne Ar Kr Xe Rn
Mass $\left(\frac{\text{g}}{\text{mol}}\right)$ 4.0 20.2 39.9 83.8 131.3 222.0
$S^\circ_{g, \text{1 bar}}\;\left(\frac{\text{J}}{\text{mol}\cdot\text{K}}\right)$ $126.15^1$ $146.33^1$ $154.84^1$ $164.08^1$ $169.68^1$ $176.2^1$
The same is true for the number of atoms in a molecule. A molecule with more atoms will, in general, have more degrees of freedom to take up energy, increasing its number of available microstates and its entropy.
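For a monatomic ideal gas the translational partition function can be summed in closed form, which leads to the Sackur-Tetrode expression for the molar entropy. The sketch below uses it (at 298.15 K and 1 bar) and reproduces the mass trend of the noble-gas table above to within rounding.

```python
import numpy as np

# Physical constants (SI)
k_B = 1.380649e-23    # J/K
h = 6.62607015e-34    # J s
N_A = 6.02214076e23   # 1/mol
R = k_B * N_A         # J/(mol K)

def sackur_tetrode(M_g_per_mol, T=298.15, P=1.0e5):
    """Molar entropy (J/(mol K)) of a monatomic ideal gas at T and P."""
    m = M_g_per_mol * 1e-3 / N_A                            # mass per atom, kg
    inv_lambda3 = (2 * np.pi * m * k_B * T / h**2) ** 1.5   # 1 / (thermal wavelength)^3
    return R * (np.log(inv_lambda3 * k_B * T / P) + 2.5)

for gas, M in [("He", 4.0), ("Ne", 20.2), ("Ar", 39.9), ("Kr", 83.8), ("Xe", 131.3)]:
    print(f"{gas}: {sackur_tetrode(M):6.1f} J/(mol K)")
# He ~126, Ne ~146, Ar ~155, Kr ~164, Xe ~170 J/(mol K), matching the table.
```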
21.08: Spectroscopic Entropies Sometimes Disagree with Calorimetric Entropies
The entropy of gases can be experimentally measured using calorimetry ($S^\circ_{\text{exp}}$) or calculated using spectroscopic methods ($S^\circ_{\text{calc}}$). For most molecules, the experimental and calculated values are in good agreement; however, this is not true for all molecules. The discrepancy is referred to as the residual entropy:
$\bar{S}_{\text{calc}}-\bar{S}_{\text{exp}} \nonumber$
Residual entropy arises in materials that can be frozen into many different configurations at 0 K. The third law of thermodynamics assigns zero entropy only to a perfect crystal at absolute zero. Substances such as glass, ice, and carbon monoxide can be trapped in many different configurations as they are cooled; they are not perfect crystals at 0 K and therefore retain a nonzero entropy. This remaining entropy is the residual entropy.
21.09: Standard Entropies Can Be Used to Calculate Entropy Changes of Chemical Reactions
Entropy is a state function, so we can calculate values for a process using any path. This allows us to calculate the entropy change of a chemical reaction using standard entropies. Specifically, we sum the entropies of the products and subtract the entropies of the reactants:
$\Delta_{rxn}S^\circ = \sum_{\text{Products}}{v_i S^\circ_i} - \sum_{\text{Reactants}}{v_i S^\circ_i} \nonumber$
Where $v_i$ is the stoichiometric coefficient. Let's look at the combustion of methane:
$\ce{CH_4} \left( g \right) + 2 \ce{O_2} \left( g \right) \rightarrow 2 \ce{H_2O} \left( g \right) + \ce{CO_2}\left(g\right) \nonumber$
The standard entropies are:
Molecule Entropy $\left(\frac{\text{J}}{\text{mol}\cdot\text{K}}\right)$
$\ce{CH_4}$ $186.25^1$
$\ce{O_2}$ $205.15^1$
$\ce{H_2O}$ $188.84^1$
$\ce{CO_2}$ $213.79^1$
The entropy for the combustion of methane is:
$\Delta S^\circ = \left[ 2 \left( 188.84 \right) + 1 \left( 213.79 \right) \right] - \left[ 1 \left( 186.25 \right) + 2 \left( 205.15 \right) \right] = -5.08 \: \frac{\text{J}}{\text{mol} \cdot \text{K}} \nonumber$
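The same bookkeeping is easy to automate. In the sketch below the standard entropies are stored in a dictionary and the stoichiometric coefficients are signed, negative for reactants and positive for products:

```python
# Standard molar entropies at 298 K in J/(mol K), from the table above
S_std = {"CH4(g)": 186.25, "O2(g)": 205.15, "H2O(g)": 188.84, "CO2(g)": 213.79}

# Stoichiometric coefficients: negative for reactants, positive for products
nu = {"CH4(g)": -1, "O2(g)": -2, "H2O(g)": 2, "CO2(g)": 1}

dS_rxn = sum(nu[sp] * S_std[sp] for sp in nu)
print(f"Delta_rxn S = {dS_rxn:.2f} J/(mol K)")  # about -5.08 J/(mol K)
```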
21.E: Entropy and the Third Law of Thermodynamics (Exercises)
Problem 1
Liquid water has a nearly constant molar heat capacity of $\bar{C}_P=75.4\;\text{J}\cdot\text{mol}^{-1}\text{K}^{-1}$. What is the change in entropy as 200. g of water are cooled from 70.0°C to 20.0°C?
22.01: Helmholtz Energy
We have answered the question of what entropy is, but we still do not have a general criterion for spontaneity, only one that works in an isolated system. We will now consider what happens when we hold volume and temperature constant. As discussed previously, the expression for the change in internal energy:
$dU = TdS -PdV \nonumber$
is only valid for reversible changes. Let us consider a spontaneous change. If we assume constant volume, the $-PdV$ work term drops out. From the Clausius inequality $dS \ge \dfrac{δq}{T}$ we get:
$\underset{ \text{constant V} }{dU \le TdS } \nonumber$
$\underset{ \text{constant V} }{ dU-TdS \le 0 } \nonumber$
Consider a new state function, Helmholtz energy, A:
$A ≡ U -TS \nonumber$
$dA = dU -TdS - SdT \label{diff1}$
If we also set $T$ constant, we see that Equation $\ref{diff1}$ becomes
$\underset{ \text{constant V and T} }{ dA=dU-TdS \le 0 } \nonumber$
This means that the Helmholtz energy, $A$, is a decreasing quantity for spontaneous processes (regardless of isolation!) when $T$ and $V$ are held constant. $A$ becomes constant once a reversible equilibrium is reached.
Example 22.1.1 : What A stands for
A good example is the case of the mixing of two gases. Let's assume isothermal conditions and keep the total volume constant. For this process $\Delta U$ is zero (isothermal, ideal), but the molar entropy of mixing is
$\Delta S_{molar} = -y_1R\ln y_1-y_2 R \ln y_2 \nonumber$
This means that
$\Delta A_{molar} = RT (y_1\ln y_1+y_2\ln y_2). \nonumber$
This is a negative quantity because the mole fractions are smaller than unity, so indeed this spontaneous process has a negative $\Delta A$. If we look at $\Delta A = \Delta U - T\Delta S$, we see that the latter term is the same thing as $-q_{rev}$, so we have:
$\Delta A = \Delta U - q_{rev} = w_{rev} \nonumber$
This is, however, the maximal work that a system is able to produce, and so the Helmholtz energy is a direct measure of how much work one can get out of a system; $A$ is therefore often called the Helmholtz free energy. Interestingly, this work cannot be volume work, as the volume is constant. It therefore stands for the maximal other work (e.g. electrical work) that can be obtained under the unlikely condition that volume is constant.
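A quick numerical check of this mixing example, assuming an equimolar mixture of two ideal gases at 298 K:

```python
import numpy as np

R, T = 8.314, 298.15      # J/(mol K), K
y1, y2 = 0.5, 0.5         # mole fractions

dS_mix = -R * (y1 * np.log(y1) + y2 * np.log(y2))       # J/(mol K), positive
dA_mix = R * T * (y1 * np.log(y1) + y2 * np.log(y2))    # J/mol, negative
print(f"Delta_mix S = {dS_mix:.2f} J/(mol K), Delta_mix A = {dA_mix:.0f} J/mol")
# Delta_mix A is about -1700 J/mol: mixing is spontaneous at constant T and V.
```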
Natural variables of A
Because $A≡ U-TS$ we can write
$dA = dU -TdS -SdT \nonumber$
$dA = TdS -PdV -TdS -SdT = -PdV - SdT \nonumber$
The natural variables of $A$ are volume $V$ and temperature $T$.
22.02: Gibbs Energy
The Helmholtz energy $A$ was developed for isochoric changes, but as we have often said before it is much easier to deal with isobaric ones, where $P$ is constant. We can therefore repeat the above treatment for the enthalpy and introduce another state function, the Gibbs energy:
\begin{align*} G &≡ H -TS \\[4pt] &= U + PV - TS \\[4pt] &= A + PV \end{align*}
If we take both $T$ and $P$ constant we get
$dU-TdS + PdV \le 0 \nonumber$
$dG \le 0 \nonumber$
$G$ either decreases (spontaneously) or is constant (at equilibrium). Calculating the state function between two end points we get:
$ΔG = ΔH - TΔS ≤ 0 (T,P\text{ constant}) \nonumber$
This quantity is key to the question of spontaneity under the conditions we usually work under. If for a process $\Delta G$ is positive, it does not occur spontaneously and can only be made to occur if it is 'pumped', i.e. coupled with a process that has a negative $\Delta G$. The latter is spontaneous.
If $\Delta G=0$ then the system is at equilibrium.
Direction of the spontaneous change
Because the $\Delta S$ term carries the temperature $T$ as a coefficient, the spontaneous direction of a process, e.g. a chemical reaction, can change with temperature, depending on the values of the enthalpy and entropy changes $\Delta H$ and $\Delta S$. This is true for the melting process (below 0 °C liquid water spontaneously freezes to ice; above this temperature ice melts to water), and it also holds for chemical reactions.
Example
Consider
$\ce{NH3(g) + HCl(g) <=> NH4Cl(s)} \nonumber$
$\Delta_r H$ at 298K / 1 bar is -176.2 kJ. The change in entropy is -0.285 kJ/K so that at 298K $\Delta G$ is -91.21 kJ. Clearly this is a reaction that will proceed to the depletion of whatever is the limiting reagent on the left.
However, at 618 K this is a different story: above this temperature $\Delta G$ is positive (assuming the enthalpy and entropy have remained the same, which is almost but not completely true) and the reaction will not proceed. Instead the reverse reaction proceeds spontaneously: the salt on the right decomposes into the two gases, base and acid, on the left.
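The crossover temperature follows from setting $\Delta G = \Delta H - T\Delta S = 0$, i.e. $T = \Delta H/\Delta S$. A one-line check with the numbers quoted above:

```python
dH = -176.2e3   # J, reaction enthalpy near 298 K
dS = -285.0     # J/K, reaction entropy near 298 K

T_cross = dH / dS              # Delta G changes sign at T = dH / dS
dG_298 = dH - 298.15 * dS      # negative: spontaneous at room temperature
print(f"Delta G(298 K) = {dG_298/1e3:.1f} kJ, sign change near T = {T_cross:.0f} K")
```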
Meaning of the $\Delta G$ term
As we have seen, $\Delta A$ can be related to the maximal amount of work that a system can perform at constant $V$ and $T$. We can make an analogous argument for $\Delta G$, except that $V$ is not constant, so we have to consider volume work (which was zero at constant volume).
$dG = d(U+PV-TS) = dU -TdS - SdT - PdV +VdP \nonumber$
As $dU = TdS + δw_{rev}$
$dG = δw_{rev} -SdT + VdP + PdV \nonumber$
As the last term, $PdV$, is equal to $-δw_{volume}$,
$dG = δw_{rev} -SdT + VdP - δw_{volume} \nonumber$
At constant $T$ and $P$ the two middle terms drop out
$dG = δw_{rev} - δw_{volume} = δw_{\text{other useful work}} \nonumber$
Note
$\Delta G$ stands for the (maximal) reversible, isobaric isothermal non-$PV$ work that a certain spontaneous change can perform. The volume work may not be zero, but is corrected for.
Natural variables of G
Because $G ≡ H-TS$, we can write
$dG = dH -TdS -SdT \nonumber$
$dG = TdS +VdP -TdS -SdT = VdP - SdT \nonumber$
The natural variables of $G$ are pressure $P$ and temperature $T$. This is what makes this function the most useful of the four $U$, $H$, $A$, and $G$: these are the natural variables of most of your laboratory experiments!
Summary
We now have developed the basic set of concepts and functions that together form the framework of thermodynamics. Let's summarize four very basic state functions:
state function natural variables
$dU = -PdV + TdS$ $U(V,S)$
$dH = +VdP + TdS$ $H(P,S)$
$dA = -PdV - SdT$ $A(V,T)$
$dG = +VdP - SdT$ $G(P,T)$
Note:
1. The replacement of $δq$ by $TdS$ was based on reversible heat. This means that in the irreversible case the expressions for dU and dH become inequalities
2. We only include volume work in the above expressions. If other work (elastic, electrical e.g.) is involved extra terms need to be added: $dU = TdS - PdV + xdX$ etc.
We are now ready to begin applying thermodynamics to a number of very diverse situations, but we will first develop some useful partial differential machinery.
22.03: The Maxwell Relations
Modeling how the Gibbs and Helmholtz functions behave with varying temperature, pressure, and volume is fundamentally useful. But in order to do that, a little bit more development is necessary. To see the power and utility of these functions, it is useful to combine the First and Second Laws into a single mathematical statement. In order to do that, one notes that since
$dS = \dfrac{dq}{T} \nonumber$
for a reversible change, it follows that
$dq= TdS \nonumber$
And since
$dw = - pdV \nonumber$
for a reversible expansion in which only p-V work is done, it also follows that (since $dU=dq+dw$):
$dU = TdS - pdV \nonumber$
This is an extraordinarily powerful result. This differential for $dU$ can be used to simplify the differentials for $H$, $A$, and $G$. But even more useful are the constraints it places on the variables T, S, p, and V due to the mathematics of exact differentials!
Maxwell Relations
The above result suggests that the natural variables of internal energy are $S$ and $V$ (or the function can be considered as $U(S, V)$). So the total differential ($dU$) can be expressed:
$dU = \left( \dfrac{\partial U}{\partial S} \right)_V dS + \left( \dfrac{\partial U}{\partial V} \right)_S dV \nonumber$
Also, by inspection (comparing the two expressions for $dU$) it is apparent that:
$\left( \dfrac{\partial U}{\partial S} \right)_V = T \label{eq5A}$
and
$\left( \dfrac{\partial U}{\partial V} \right)_S = -p \label{eq5B}$
But the value doesn’t stop there! Since $dU$ is an exact differential, the Euler relation must hold that
$\left[ \dfrac{\partial}{\partial V} \left( \dfrac{\partial U}{\partial S} \right)_V \right]_S= \left[ \dfrac{\partial}{\partial S} \left( \dfrac{\partial U}{\partial V} \right)_S \right]_V \nonumber$
By substituting Equations \ref{eq5A} and \ref{eq5B}, we see that
$\left[ \dfrac{\partial}{\partial V} \left( T \right) \right]_S= \left[ \dfrac{\partial}{\partial S} \left( -p \right) \right]_V \nonumber$
or
$\left( \dfrac{\partial T}{\partial V} \right)_S = - \left( \dfrac{\partial p}{\partial S} \right)_V \nonumber$
This is an example of a Maxwell Relation. These are very powerful relationships that allow one to substitute partial derivatives when one is more convenient (perhaps it can be expressed entirely in terms of $\alpha$ and/or $\kappa_T$, for example).
A similar result can be derived based on the definition of $H$.
$H \equiv U +pV \nonumber$
Differentiating (and using the chain rule on $d(pV)$) yields
$dH = dU +pdV + Vdp \nonumber$
Making the substitution using the combined first and second laws ($dU = TdS - pdV$) for a reversible change involving only expansion (p-V) work:
$dH = TdS – \cancel{pdV} + \cancel{pdV} + Vdp \nonumber$
This expression can be simplified by canceling the $pdV$ terms.
$dH = TdS + Vdp \label{eq2A}$
And much as in the case of internal energy, this suggests that the natural variables of $H$ are $S$ and $p$. Or
$dH = \left( \dfrac{\partial H}{\partial S} \right)_p dS + \left( \dfrac{\partial H}{\partial p} \right)_S dp \label{eq2B}$
Comparing Equations \ref{eq2A} and \ref{eq2B} shows that
$\left( \dfrac{\partial H}{\partial S} \right)_p= T \label{eq6A}$
and
$\left( \dfrac{\partial H}{\partial p} \right)_S = V \label{eq6B}$
It is worth noting at this point that both (Equation \ref{eq5A})
$\left( \dfrac{\partial U}{\partial S} \right)_V \nonumber$
and (Equation \ref{eq6A})
$\left( \dfrac{\partial H}{\partial S} \right)_p \nonumber$
are equal to $T$, so they are equal to each other:
$\left( \dfrac{\partial U}{\partial S} \right)_V = \left( \dfrac{\partial H}{\partial S} \right)_p \nonumber$
Moreover, the Euler relation must also hold
$\left[ \dfrac{\partial}{\partial p} \left( \dfrac{\partial H}{\partial S} \right)_p \right]_S= \left[ \dfrac{\partial}{\partial S} \left( \dfrac{\partial H}{\partial p} \right)_S \right]_p \nonumber$
so
$\left( \dfrac{\partial T}{\partial p} \right)_S = \left( \dfrac{\partial V}{\partial S} \right)_p \nonumber$
This is the Maxwell relation on $H$. Maxwell relations can also be developed based on $A$ and $G$. The results of those derivations are summarized in Table 6.2.1.
Table 6.2.1: Maxwell Relations
Function Differential Natural Variables Maxwell Relation
$U$ $dU = TdS - pdV$ $S, \,V$ $\left( \dfrac{\partial T}{\partial V} \right)_S = - \left( \dfrac{\partial p}{\partial S} \right)_V$
$H$ $dH = TdS + Vdp$ $S, \,p$ $\left( \dfrac{\partial T}{\partial p} \right)_S = \left( \dfrac{\partial V}{\partial S} \right)_p$
$A$ $dA = -pdV - SdT$ $V, \,T$ $\left( \dfrac{\partial p}{\partial T} \right)_V = \left( \dfrac{\partial S}{\partial V} \right)_T$
$G$ $dG = Vdp - SdT$ $p, \,T$ $\left( \dfrac{\partial V}{\partial T} \right)_p = - \left( \dfrac{\partial S}{\partial p} \right)_T$
The Maxwell relations are extraordinarily useful in deriving the dependence of thermodynamic variables on the state variables of p, T, and V.
Example $1$
Show that
$\left( \dfrac{\partial U}{\partial V} \right)_T = T\dfrac{\alpha}{\kappa_T} - p \nonumber$
Solution
Start with the combined first and second laws:
$dU = TdS - pdV \nonumber$
Divide both sides by $dV$ and constraint to constant $T$:
$\left.\dfrac{dU}{dV}\right|_{T} = \left.\dfrac{TdS}{dV}\right|_{T} - p \left.\dfrac{dV}{dV} \right|_{T} \nonumber$
Noting that
$\left.\dfrac{dU}{dV}\right|_{T} =\left( \dfrac{\partial U}{\partial V} \right)_T \nonumber$
$\left.\dfrac{TdS}{dV}\right|_{T} = T\left( \dfrac{\partial S}{\partial V} \right)_T \nonumber$
$\left.\dfrac{dV}{dV} \right|_{T} = 1 \nonumber$
The result is
$\left( \dfrac{\partial U}{\partial V} \right)_T = T \left( \dfrac{\partial S}{\partial V} \right)_T -p \nonumber$
Now, employ the Maxwell relation on $A$ (Table 6.2.1)
$\left( \dfrac{\partial p}{\partial T} \right)_V = \left( \dfrac{\partial S}{\partial V} \right)_T \nonumber$
to get
$\left( \dfrac{\partial U}{\partial V} \right)_T = T \left( \dfrac{\partial p}{\partial T} \right)_V -p \nonumber$
and since
$\left( \dfrac{\partial p}{\partial T} \right)_V = \dfrac{\alpha}{\kappa_T} \nonumber$
It is apparent that
$\left( \dfrac{\partial U}{\partial V} \right)_T = T\dfrac{\alpha}{\kappa_T} - p \nonumber$
Note: How cool is that? This result was given without proof in Chapter 4, but can now be proven analytically using the Maxwell Relations!
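As a further illustration (not part of the example above), the same relation can be checked symbolically. The sketch below uses sympy and, as an arbitrary choice of equation of state, the van der Waals expression for $p$; the resulting internal pressure $\left(\partial U/\partial V\right)_T$ comes out as $an^2/V^2$, which vanishes only when $a = 0$ (the ideal gas).

```python
import sympy as sp

T, V, n, R, a, b = sp.symbols("T V n R a b", positive=True)

# van der Waals pressure (an illustrative, non-ideal equation of state)
p = n * R * T / (V - n * b) - a * n**2 / V**2

# (dU/dV)_T = T (dp/dT)_V - p, the relation derived in the example above
internal_pressure = sp.simplify(T * sp.diff(p, T) - p)
print(internal_pressure)   # -> a*n**2/V**2
```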
22.04: The Enthalpy of an Ideal Gas
How does pressure affect enthalpy $H$? As we showed above we have the following relations of first and second order for $G$
$\left( \dfrac{\partial G}{\partial T} \right)_P = -S \nonumber$
$\left( \dfrac{\partial G}{\partial P} \right)_T = V \nonumber$
$-\left (\dfrac{\partial S}{\partial P }\right)_T = \left (\dfrac{\partial V}{\partial T} \right)_P \nonumber$
We also know that by definition:
$G = H - TS \label{def}$
Consider an isothermal change in pressure, so taking the partial derivative of each side of Equation $\ref{def}$, we get:
$\left( \dfrac{\partial G}{\partial P}\right)_T = \left( \dfrac{\partial H}{ \partial P}\right)_T -T \left( \dfrac{\partial S}{\partial P}\right)_T \nonumber$
Substituting $\left( \dfrac{\partial G}{\partial P}\right)_T = V$ and the Maxwell relation above gives: $\left( \dfrac{\partial H}{\partial P}\right)_T = V -T \left( \dfrac{\partial V}{\partial T}\right)_P \label{Eq12}$
For an ideal gas
$\left(\dfrac{\partial V}{\partial T}\right)_P = \dfrac{nR}{P} \nonumber$
so Equation $\ref{Eq12}$ becomes
$\left( \dfrac{\partial H}{\partial P}\right)_T = V - T \left( \dfrac{nR}{P}\right) = 0 \nonumber$
As we can see for an ideal gas, there is no dependence of $H$ on $P$.
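A brief symbolic check of this conclusion, using sympy with $V = nRT/P$:

```python
import sympy as sp

T, P, n, R = sp.symbols("T P n R", positive=True)
V = n * R * T / P                               # ideal gas molar relation
dH_dP = sp.simplify(V - T * sp.diff(V, T))      # (dH/dP)_T = V - T (dV/dT)_P
print(dH_dP)                                    # -> 0
```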
22.05: Thermodynamic Functions have Natural Variables
The fundamental thermodynamic equations follow from five primary thermodynamic definitions and describe internal energy, enthalpy, Helmholtz energy, and Gibbs energy in terms of their natural variables. Here they will be presented in their differential forms.
Introduction
The fundamental thermodynamic equations describe the thermodynamic quantities U, H, G, and A in terms of their natural variables. The term "natural variable" simply denotes a variable that is one of the convenient variables to describe U, H, G, or A. When considered as a whole, the four fundamental equations demonstrate how four important thermodynamic quantities depend on variables that can be controlled and measured experimentally. Thus, they are essentially equations of state, and using the fundamental equations, experimental data can be used to determine sought-after quantities like $G$ or $H$.
First Law of Thermodynamics
The first law of thermodynamics is represented below in its differential form
$dU = đq+đw \nonumber$
where
• $U$ is the internal energy of the system,
• $q$ is heat flow of the system, and
• $w$ is the work of the system.
The "đ" symbol represent inexact differentials and indicates that both $q$ and $w$ are path functions. Recall that $U$ is a state function. The first law states that internal energy changes occur only as a result of heat flow and work done.
It is assumed that w refers only to PV work, where
$w = -\int{pdV} \nonumber$
The fundamental thermodynamic equation for internal energy follows directly from the first law and the principle of Clausius:
$dU = đq + đw \nonumber$
$dS = \dfrac{\delta q_{rev}}{T} \nonumber$
Combining these for a reversible process, we have
$dU = TdS + \delta w \nonumber$
Since only $PV$ work is performed,
$dU = TdS - pdV \label{DefU}$
The above equation is the fundamental equation for $U$ with natural variables of entropy $S$ and volume $V$.
Principle of Clausius
The Principle of Clausius states that the entropy change of a system is equal to the ratio of heat flow in a reversible process to the temperature at which the process occurs. Mathematically this is written as
$dS = \dfrac{\delta q_{rev}}{T} \nonumber$
where
• $S$ is the entropy of the system,
• $q_{rev}$ is the heat flow of a reversible process, and
• $T$ is the temperature in Kelvin.
Enthalpy
Mathematically, enthalpy is defined as
$H = U + pV \label{DefEnth}$
where $H$ is the enthalpy of the system, $p$ is pressure, and $V$ is volume. The fundamental thermodynamic equation for enthalpy follows directly from its definition (Equation $\ref{DefEnth}$) and the fundamental equation for internal energy (Equation $\ref{DefU}$):
$dH = dU + d(pV) = dU + pdV + Vdp \nonumber$
Substituting $dU = TdS - pdV$:
$dH = TdS - pdV + pdV + Vdp = TdS + Vdp \nonumber$
The above equation is the fundamental equation for H. The natural variables of enthalpy are S and p, entropy and pressure.
Gibbs Energy
The mathematical description of Gibbs energy is as follows
$G = U + pV - TS = H - TS \label{Defgibbs}$
where $G$ is the Gibbs energy of the system. The fundamental thermodynamic equation for Gibbs Energy follows directly from its definition $\ref{Defgibbs}$ and the fundamental equation for enthalpy $\ref{DefEnth}$:
$dG = dH - d(TS) = dH - TdS - SdT \nonumber$
Since
$dH = TdS + VdP \nonumber$
$dG = TdS + VdP - TdS - SdT \nonumber$
$dG = VdP - SdT \label{EqGibbs1}$
The above equation is the fundamental equation for G. The natural variables of Gibbs energy are P and T.
Helmholtz Energy
Mathematically, Helmholtz energy is defined as
$A = U - TS \label{DefHelm}$
where $A$ is the Helmholtz energy of the system, which is often written as the symbol $F$. The fundamental thermodynamic equation for Helmholtz energy follows directly from its definition (Equation $\ref{DefHelm}$) and the fundamental equation for internal energy (Equation $\ref{DefU}$):
$dA = dU - d(TS) = dU - TdS - SdT \nonumber$
Since
$dU = TdS - pdV \nonumber$
$dA = TdS - pdV -TdS - SdT \nonumber$
$dA = -pdV - SdT \label{EqHelm1}$
Equation $\ref{EqHelm1}$ is the fundamental equation for A with natural variables of $V$ and $T$. For the definitions to hold, it is assumed that only PV work is done and that only reversible processes are used. These assumptions are required for the first law and the principle of Clausius to remain valid. Also, these equations do not include $n$, the number of moles, as a variable. When $n$ is included, the equations appear different, but the essence of their meaning is captured without the $n$-dependence.
Chemical Potential
The fundamental equations derived above were not dependent on changes in the amounts of species in the system. Below the n-dependent forms are presented1,4.
$dU = TdS - PdV + \sum_{i=1}^{N}\mu_idn_i \nonumber$
$dH = TdS + VdP + \sum_{i=1}^{N}\mu_idn_i \nonumber$
$dG = -SdT + VdP + \sum_{i=1}^{N}\mu_idn_i \nonumber$
$dA = -SdT - PdV + \sum_{i=1}^{N}\mu_idn_i \nonumber$
where $\mu_i$ is the chemical potential of species $i$ and $dn_i$ is the change in the number of moles of substance $i$.
Importance/Relevance of Fundamental Equations
The differential fundamental equations describe U, H, G, and A in terms of their natural variables. The natural variables become useful in understanding not only how thermodynamic quantities are related to each other, but also in analyzing relationships between measurable quantities (i.e. P, V, T) in order to learn about the thermodynamics of a system. Below is a table summarizing the natural variables for U, H, G, and A:
Thermodynamic Quantity Natural Variables
U (internal energy) S, V
H (enthalpy) S, P
G (Gibbs energy) T, P
A (Helmholtz energy) T, V
Maxwell Relations
The fundamental thermodynamic equations are the means by which the Maxwell relations are derived1,4. The Maxwell Relations can, in turn, be used to group thermodynamic functions and relations into more general "families"2,3.
As we said, $dA$ is an exact differential. Let's write it out in its natural variables (Equation $\ref{EqHelm1}$) and take a cross derivative. The $dA$ expression in natural variables is
$dA = \left( \dfrac{\partial A}{\partial V} \right)_T dV + \left( \dfrac{\partial A}{\partial T} \right) _V dT \nonumber$
The first-order partial derivatives of $A$ are already quite interesting. Comparing with Equation $\ref{EqHelm1}$, we see that the partial derivative of $A$ with respect to $V$ (at constant $T$) is the negative of the pressure:
$\left( \dfrac{\partial A}{\partial V} \right)_T = -P \nonumber$
Likewise we find the (isochoric) slope with temperature gives us the negative of the entropy. Thus entropy is one of the first order derivatives of A.
$\left( \dfrac{\partial A}{\partial T} \right) _V = -S \nonumber$
When we apply a cross derivative
$\left( \dfrac{\partial^2 A}{\partial V \partial T} \right) = \left( \dfrac{\partial (-S)}{\partial V} \right) _T = \left( \dfrac{\partial (-P)}{\partial T} \right) _V \nonumber$
we get what is known as a Maxwell relation:
$\left( \dfrac{\partial P}{\partial T} \right) _V = \left( \dfrac{\partial S}{\partial V} \right) _T \nonumber$
Exercise
What does the relation $\left( \dfrac{\partial A}{\partial T} \right)_V = -S$ imply for the heat capacity $C_V$?
A similar treatment of $dG$ (Equation $\ref{EqGibbs1}$) gives:
$\left( \dfrac{\partial G}{\partial T} \right) _P = -S \nonumber$
$\left( \dfrac{\partial G}{\partial P} \right) _T = V \nonumber$
and another Maxwell relation
$- \left( \dfrac{\partial S}{\partial P} \right) _T = \left( \dfrac{\partial V}{\partial T} \right) _P \nonumber$
Problems
1. If the assumptions made in the derivations above were not made, what effect would that have? Try to think of examples where these assumptions would be violated. Could the definitions, principles, and laws used to derive the fundamental equations still be used? Why or why not?
2. For what kind of system does the number of moles not change? This said, do the fundamental equations without n-dependence apply to a wide range of processes and systems?
3. Derive the Maxwell Relations.
4. Derive the expression
$\left (\dfrac{\partial H}{\partial P} \right)_{T,n} = -T \left(\dfrac{\partial V}{\partial T} \right)_{P,n} +V \nonumber$
Then apply this equation to an ideal gas. Does the result seem reasonable?
5. Using the definition of Gibbs energy and the conditions observed at phase equilibria, derive the Clapeyron equation.
Answers
1. If it was not assumed that PV-work was the only work done, then the work term in the second law of thermodynamics equation would include other terms (e.g. for electrical work, mechanical work). If reversible processes were not assumed, the Principle of Clausius could not be used. One example of such situations could be the movement of charged particles towards a region of like charge (electrical work) or an irreversible process like combustion of hydrocarbons or friction.
2. In general, a closed system of non-reacting components would fit this description. For example, the number of moles would not change for a closed system in which a gas is sealed (to prevent leaks) in a container and allowed to expand or contract.
3. See the Maxwell Relations section.
4. $(\dfrac{\partial H}{\partial P})_{T,n} = 0$ for an ideal gas. Since there are no interactions between ideal gas molecules, changing the pressure will not involve the formation or breaking of any intermolecular interactions or bonds.
5. See the third outside link.
Contributors and Attributions
• Andreana Rosnik, Hope College
22.06: The Standard State for a Gas is Ideal Gas
Tabulated data are expressed in terms of a pure ideal gas (no mixing) at 1 bar, known as standard state conditions (SSC). Standard states are indicated with the ∘ symbol. These values are tabulated at a specific temperature, but that temperature can vary and is not included in the definition of SSC. No real gas has perfectly ideal behavior, but this definition of the standard state allows corrections for non-ideality to be made consistently for all the different gases.
22.07: The Gibbs-Helmholtz Equation
Ideal gas
For a mole of ideal gas we can use the gas law to integrate volume over pressure and we get
$ΔG_{molar} = RT \ln \left(\dfrac{P_2}{P_1}\right) \nonumber$
It is customary to identify one of the pressures ($P_1$) with the standard state of 1 bar and use the plimsoll to indicate the fact that we are referring to a standard state by writing:
$G_{molar}(P) = G^o_{molar} + RT \ln \left(\dfrac{P}{1}\right)=G^o_{molar} + RT \ln[P] \nonumber$
The fact that we are making the function intensive (per mole) is usually indicated by putting a bar over the $G$ symbol, although this is often omitted for $G^\circ_{molar}$.
Solids
For solids the volume does not change very much with pressure (the isothermal compressibility $κ$ is small), so can assume it more or less constant:
$G(P_{final})=G(P_{initial})+ \int_{P_{initial}}^{P_{final}} V\,dP ≈ G(P_{initial})+ V \int_{P_{initial}}^{P_{final}} dP = G(P_{initial})+ VΔP \nonumber$
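To get a feeling for the numbers, the sketch below compares $V\Delta P$ for a condensed phase (taking roughly the molar volume of liquid water) with $RT\ln(P_2/P_1)$ for an ideal gas over the same pressure change; the chosen values are only illustrative.

```python
import numpy as np

R, T = 8.314, 298.15     # J/(mol K), K
V_m = 18e-6              # m^3/mol, molar volume of liquid water (roughly)
P1, P2 = 1e5, 100e5      # Pa: 1 bar -> 100 bar

dG_condensed = V_m * (P2 - P1)            # condensed phase: V * Delta P
dG_ideal_gas = R * T * np.log(P2 / P1)    # ideal gas: RT ln(P2/P1)
print(f"condensed: {dG_condensed:.0f} J/mol, ideal gas: {dG_ideal_gas:.0f} J/mol")
# ~180 J/mol versus ~11400 J/mol: pressure barely matters for condensed phases.
```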
The Gibbs-Helmholtz Expression
$\frac { G } { T } = \frac { H } { T } - S \nonumber$
Taking the derivative of each side with respect to temperature at constant pressure gives
$\left( \frac { \partial G / T } { \partial T } \right) _ { P } = - \frac { H } { T ^ { 2 } } + \frac { 1 } { T } \left( \frac { \partial H } { \partial T } \right) _ { P } - \left( \frac { \partial S } { \partial T } \right) _ { P } \nonumber$
We make use of the relationship between $C_p$ and $H$ and $C_p$ and $S$
\begin{align} \left( \frac { \partial G / T } { \partial T } \right) _ { P } &= - \frac { H } { T ^ { 2 } } + \cancel{ \frac { C _ { P } } { T }} - \cancel{\frac { C _ { P } } { T }} \\[4pt] &= - \frac { H } { T ^ { 2 } } \end{align} \nonumber
We said before that $S$ is a first order derivative of $G$. As you can see from this derivation the enthalpy $H$ is also a first order derivative, albeit not of $G$ itself, but of $G/T$.
$\left( \frac { \partial \Delta G / T } { \partial T } \right) _ { P } = - \frac { \Delta H } { T ^ { 2 } } \nonumber$
The last step in the derivation simply takes the previous expression twice (say, for the $G$ and $H$ at the beginning and end of a process) and subtracts the two identical equations, leading to the $Δ$ symbols. In this differential form the Gibbs-Helmholtz equation can be applied to any process.
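If $\Delta H$ can be taken as constant over the temperature interval, integrating the Gibbs-Helmholtz equation gives the standard result $\dfrac{\Delta G_2}{T_2} - \dfrac{\Delta G_1}{T_1} = \Delta H\left(\dfrac{1}{T_2}-\dfrac{1}{T_1}\right)$, which lets us transfer a $\Delta G$ value from one temperature to another. A small sketch with made-up numbers:

```python
# Integrated Gibbs-Helmholtz equation, assuming Delta H is constant from T1 to T2.
dG1 = -50.0e3            # J, Delta G at T1 (made-up value)
dH = -100.0e3            # J, Delta H, taken as temperature independent
T1, T2 = 298.15, 350.0   # K

dG2 = T2 * (dG1 / T1 + dH * (1.0 / T2 - 1.0 / T1))
print(f"Delta G({T2} K) = {dG2 / 1e3:.1f} kJ")
```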
Gibbs Energy as a Function of Temperature
If heat capacities are known from 0 K, we can determine both enthalpy and entropy by integration:
$S(T) = S(0) + \int_0^T \dfrac{C_p}{T} dT \nonumber$
$H(T) = H(0) + \int_0^T C_p\; dT \nonumber$
As we have seen we must be careful at phase transitions such as melting or vaporization. At these points the curves are discontinuous and the derivative $C_p$ is undefined.
$H ( T ) = H ( 0 ) + \int_0^{T_{fus}} C_p(T)_{solid} dT + \Delta H_{fus} + \int_{T_{fus}}^{T_{boil}} C_p(T)_{liquid} dT + \Delta H_{vap} + etc. \label{Hcurve}$
\begin{align} S ( T ) &= S ( 0 ) + \int_0^{T_{fus}} \dfrac{C_p(T)_{solid}}{T} dT + \Delta S_{fus} + \int_{T_{fus}}^{T_{boil}} \dfrac{C_p(T)_{liquid}}{T} dT + \Delta S_{vap} + etc. \\[4pt] &= S(0) + \int_0^{T_{fus}} \dfrac{C_p(T)_{solid}}{T} dT + \dfrac{\Delta H_{fus}}{T_{fus}} + \int_{T_{fus}}^{T_{boil}} \dfrac{C_p(T)_{liquid}}{T} dT + \dfrac{\Delta H_{vap}}{T_{boil}} + etc. \label{Scurve} \end{align}
with $H(T=0)= \text{undefined}$ and $S(T=0)=0$ from the third law of thermodynamics.
We also discussed the fact that the third law allows us to define $S(0)$ as zero in most cases. For the enthalpy we cannot do that so that our curve is with respect to an undefined zero point. We really should plot $H(T) - H(0)$ and leave $H(0)$ undefined.
Because the Gibbs free energy $G= H-TS$ we can also construct a curve for $G$ as a function of temperature, simply by combining the $H$ and the $S$ curves (Equations \ref{Hcurve} and \ref{Scurve}):
$G(T) = H(T) - TS(T) \nonumber$
Interestingly, if we do so, the discontinuities at the phase transition points drop out for $G$, because at these points $Δ_{trs}H = T_{trs}Δ_{trs}S$. Therefore, $G$ is always continuous.
The $H(0)$ problem does not disappear so that once again our curve is subject to an arbitrary offset in the y-direction. The best thing we can do is plot the quantity $G(T) - H(0)$ and leave the offset $H(0)$ undefined.
We have seen above that the derivative of $G$ with temperature is $-S$. As entropy is always positive, this means that the $G$ curve always descends with increasing temperature. It also means that although the curve is continuous even at the phase transitions, the slope of the $G$ curve is not, because the derivative $-S$ makes a jump there. Fig. 22.7 in the book shows an example of such a curve for benzene. Note the kinks in the curve at the melting point and the boiling point.
22.08: Fugacity Measures Nonideality of a Gas
We have seen that, for a closed system, the Gibbs energy is related to pressure and temperature as follows:
$dG =VdP - SdT \label{16.15}$
For a constant temperature process:
$dG = VdP \label {16.16}$
Equation $\ref{16.16}$ can be evaluated for an ideal gas:
$d\bar{G} =\dfrac{RT}{P} dP \label{16.17}$
At constant temperature, $T$:
$d\bar{G} = RT\;d\ln P \label{16.18}$
This expression by itself is strictly applicable to ideal gases. However, Lewis, in 1905, suggested extending the applicability of this expression to all substances by defining a new thermodynamic property called fugacity, $f$, such that:
$d\bar{G} = RT\;d\ln f \label {16.19}$
This definition implies that for ideal gases, $f$ must be equal to $P$. For non ideal gases, $f$ is not equal to $P$. The ratio between fugacity and pressure is:
$\phi = \frac{f}{P}$
where $\phi$ is the fugacity coefficient. The fugacity coefficient takes a value of unity when the substance behaves like an ideal gas. Therefore, the fugacity coefficient is also regarded as a measure of non-ideality: the closer its value is to unity, the closer the gas is to ideal behavior. In the zero-pressure limit every gas approaches ideal behavior, so the fugacity approaches the pressure:
$\lim_{P\rightarrow0}{\dfrac{f}{P}} = 1 \label {16.21b}$
For mixtures, this expression is written as:
$d\bar{G}_i = RT\;d\ln f_i \label {16.20}$
where $\bar{G}_i$ and $f_i$ are the partial molar Gibbs energy and fugacity of the i-th component, respectively. Fugacity can be readily related to chemical potential because of the one-to-one relationship of Gibbs energy to chemical potential, which we have discussed previously. Therefore, the definition of fugacity in terms of chemical potential becomes:
$d\bar{\mu}_i = RT\;d\ln f_i \label {16.21}$
Even though the concept of thermodynamic equilibrium is given in terms of chemical potentials, the above definitions allow us to restate the same principle in terms of fugacity. To do this, previous expressions can be integrated to obtain:
$\bar{\mu} = \bar{\mu}^\circ + RT\ln\frac{f}{P^\circ}$
At equilibrium between two or more phases, the chemical potential of each component is the same in every phase; by the relation above, the same must then be true of its fugacity:
$f_i^{\alpha} = f_i^{\beta} \nonumber$
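In practice the fugacity coefficient defined above has to be obtained from volumetric data. A standard result (not derived in this section) is $\ln\phi = \int_0^P (Z-1)\,\dfrac{dP'}{P'}$, where $Z = P\bar{V}/RT$ is the compressibility factor. The sketch below evaluates this integral numerically for a made-up $Z(P)$; the data are purely illustrative.

```python
import numpy as np

def trapz(y, x):
    """Trapezoidal rule for tabulated y(x)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Made-up compressibility-factor data Z(P); in practice Z comes from an
# equation of state or from experiment.
P = np.linspace(1.0e3, 50.0e5, 500)   # Pa (start just above 0 to avoid 1/P issues)
Z = 1.0 - 4.0e-9 * P                  # slightly attractive gas, Z < 1

ln_phi = trapz((Z - 1.0) / P, P)      # ln(phi) = integral of (Z - 1)/P dP
phi = np.exp(ln_phi)
f = phi * P[-1]                        # fugacity at the highest pressure
print(f"phi = {phi:.4f}, f = {f/1e5:.2f} bar at P = {P[-1]/1e5:.0f} bar")
```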
Contributors and Attributions
• Michael Adewumi, Professor, Petroleum & Natural Gas Engineering, The Pennsylvania State University.
22.E: Helmholtz and Gibbs Energies (Exercises)
In the mid 1920's the German physicist Werner Heisenberg showed that if we try to locate an electron within a region $Δx$; e.g. by scattering light from it, some momentum is transferred to the electron, and it is not possible to determine exactly how much momentum is transferred, even in principle. Heisenberg showed that consequently there is a relationship between the uncertainty in position $Δx$ and the uncertainty in momentum $Δp$.
$\Delta p \Delta x \ge \frac {\hbar}{2} \label {5-22}$
You can see from Equation $\ref{5-22}$ that as $Δp$ approaches 0, $Δx$ must approach ∞, which is the case of the free particle discussed previously.
This uncertainty principle, which also is discussed in Chapter 4, is a consequence of the wave property of matter. A wave has some finite extent in space and generally is not localized at a point. Consequently there usually is significant uncertainty in the position of a quantum particle in space. Activity 1 at the end of this chapter illustrates that a reduction in the spatial extent of a wavefunction to reduce the uncertainty in the position of a particle increases the uncertainty in the momentum of the particle. This illustration is based on the ideas described in the next section.
Exercise $1$
Compare the minimum uncertainty in the positions of a baseball (mass = 140 gm) and an electron, each with a speed of 91.3 miles per hour, which is characteristic of a reasonable fastball, if the standard deviation in the measurement of the speed is 0.1 mile per hour. Also compare the wavelengths associated with these two particles. Identify the insights that you gain from these comparisons.
Phase equilibria is the term used to describe situations in which two or more phases co-exist (in equilibrium). The stability of phases can be predicted by the chemical potential, in that the most stable form of the substance will have the minimum chemical potential at the given temperature and pressure. A key tool in exploring phase equilibria is a phase diagram, which is used to show the conditions (pressure, temperature, volume, etc.) at which thermodynamically distinct phases (such as solid, liquid or gaseous states) occur and coexist at equilibrium.
23: Phase Equilibria
A good map will take you to your destination with ease, provided you know how to read it. A map is an example of a diagram, a pictorial representation of a body of knowledge. In science they play a considerable role. Next to plots and tables diagrams are an important means of making information and/or theoretical knowledge accessible.
Constructing them takes quite a bit of thought. You want to represent as much of what you know as possible and give as accurate a picture of it as you can, without conveying anything incorrect. If the drawing can be made to scale, that makes it quite a bit more powerful, but this is not strictly necessary. A remark like 'not to scale' or 'schematic' does need to be given if applicable. A good caption or description is essential.
Thermodynamic Stability and Fluctuations
There are different kinds of equilibrium, besides the stable equilibrium that represents an absolute minimum in the $G$ function. Of course $G$ is potentially a function of a great number of variables, but let us look at a diagram in which $G$ is shown as a function of only one unspecified variable. You could think of the density, the mole fraction of one of the components of a mixture or an applied electrical field or whatever, but the argument is general.
Figure 23.1.1 is helpful to point out that besides a stable equilibrium ($A$) there can also be a metastable equilibrium ($B$) or an indifferent equilibrium ($C$). The local derivatives of $G$ (versus all variables, of which we show only one) are zero in all three cases, which means that changes in the variables are not spontaneous. For a labile equilibrium ($D$) the opposite is true: any small deviation will make the system roll downhill. (Note that the second derivative has the opposite sign compared to cases ($A$) and ($B$).) A labile equilibrium is seldom if ever observed, except in a circus where artists delight in balancing objects on their heads (because you pay for it). This usually requires continuous small corrections to maintain the precarious balance. All other points in our diagram represent states of instability, because locally $dG$ is not zero and a spontaneous process can take place.
The fact that $dG=0$ in the equilibrium points does not mean small deviations from the minimum cannot happen at times. We have seen e.g. that the Boltzmann distribution was simply the most likely distribution. The most likely one is the one that has the highest number of realizations W. Another way of saying that is that it is the one with the highest entropy S. A slightly less likely distribution may occur from time to time by chance. It will have a little less entropy, but the same $\langle E \rangle$. That means it will have a slightly higher $G$ ($G=H-TS$). From time to time therefore $G$ will fluctuate a bit. Such fluctuations are very small for large systems, but they are of greater relative importance for small systems (like a nanoparticle). (Statistical averaging works best on large ensembles.)
The fluctuations in $G$ mean that small fluctuations in its variables like density etc. can also occur. They are usually kept in check, because $dG$ is no longer zero when moving away from the equilibrium state. This drives the system back to the minimum spontaneously. You could picture the system wobbling a bit around in its G-well. This holds for stable and metastable equilibria alike.
In the indifferent case ($C$), however, the derivative is zero (or very close to zero) for a range of neighboring values of some variable; in contrast to $A$ and $B$, the second derivative is also zero. This means that there is little penalty for much larger deviations in the variable. If this variable is the density, the system becomes milky and shows opalescence, a strong scattering of light, because the refractive index depends on the strongly fluctuating density. This is observed near critical points and is called critical opalescence.
Unary Phase Diagrams
A unary phase diagram summarizes the equilibrium states of a single pure substance. We will see that we can also look at mixtures of two components (binary diagrams) or more (ternary, quaternary, quinary, senary etc.). Usually a phase diagram only maps out stable equilibria, but occasionally metastable ones may be given too (e.g., with a dashed line).
Liquid-Vapor Equilibrium Curve
We have seen that the Gibbs function $G$ depends strongly (logarithmically) on pressure for a gas, but only slightly (and linearly) for a liquid. The two curves intersect in a point representing the equilibrium vapor pressure of the liquid. At lower pressures the vapor is more stable, at higher ones the liquid. (For a solid the same holds as for the liquid). This means that except at the intersection point we only will observe one phase.
Figure 23.1.2 : A cylinder with one substance
It is important to stress that this holds in the absence of other matter, e.g., when we put only water into an evacuated cylinder (Figure 23.1.2 ). We may get three cases:
• we compress the cylinder until it only contains the liquid under hydrostatic pressure ($P>P_{eq}$)
• we expand the cylinder until all water has vaporized ($P<P_{eq}$)
• we let part of the water evaporate, just enough so that the space above the liquid is filled with an equilibrium vapor pressure ($P=P_{eq}$)
At room temperature $P_{eq}$ for water is only about 15 Torr. If we apply 1 bar -or let the atmosphere do the job- we will only have liquid water. If other gases are present, e.g., air, we must distinguish between the total pressure (e.g., 1 bar) and the equilibrium vapor pressure which will now be the partial pressure. In a cylinder with water and one bar of air just enough water will evaporate to establish equilibrium. The evaporation will be limited to the gas-liquid interface unless the partial pressure equals the total pressure. Then the liquid will boil.
(Do allow the volume to expand, though. Why? If the volume is constant, the pressure builds up and boiling will stop.)
If we consider the set of equilibrium pressures as a function of temperature and plot that in a P vs. T diagram we have one component of our phase diagram.
Gas-solid Equilibrium Curve
For solids the situation is similar as the $G(P)$ curve is once again an almost flat straight line. The intersection with the logarithmic curve for the gas will define an equilibrium pressure for gas-solid co-existence. Generally vapor pressures above solids are quite small, but not negligible. As for liquids we can construct a line representing the equilibrium pressures for sublimation as function of temperature and add it to the phase diagram.
Liquid-solid Equilibrium Curve
The $\text{solid} \rightleftharpoons \text{liquid}$ equilibrium, known as melting or freezing, is not very dependent on pressure. Usually melting points increase a little bit with pressure, although water is a peculiar exception: it expands upon freezing, and its melting point goes down (a bit) with pressure. In our diagram this will be represented by an almost vertical line, leaning a little forwards for most substances, but backwards for water and a few others.
Putting the Curve Together
The three lines come together in the triple point, the only point where all three phases are in equilibrium with each other. For water, the triple point temperature (273.16 K) is only 0.01 K above the normal melting point (273.15 K), and the triple point pressure is only 4.58 Torr. The intersection points with a line representing atmospheric pressure give the melting and boiling points at that pressure.
A typical unary diagram
If the triple point lies above the line that represents atmospheric pressure, this implies that a liquid is never observed. On earth $\ce{CO2}$ is such a substance. The intersection of the solid-vapor equilibrium line with the 1 bar line represents a state where the solid will 'boil' (evaporate from inside out). This is known as the sublimation point. The melting point at $P = 1$ bar is known as the standard melting point; the only slightly different one at 760 Torr = 1 atm is called the normal melting point. The same goes for boiling and sublimation points.
There is nothing magical about $P=1 \,bar$. It just happens to be the pressure of our home planet. On a planet with higher atmospheric pressure $CO_2$ may well be a liquid, and on such a planet all boiling points would be quite different (higher than on earth). The melting points would also differ, but only slightly. On Mars, where the atmospheric pressure is much lower, water cannot occur in liquid form; much like carbon dioxide on earth, it sublimes.
We should also realize that in a closed container (glass ampoule, hermetically sealed DSC pan), we can observe melting points at only very slightly different temperature values, but we will not see a boiling effect. Why?
To see a boiling point the container must be open to the (constant!) 1 bar pressure of the earth's atmosphere that defines it and causes the boiling phenomenon. If the ampoule is sealed it will generate its own (autogenous) pressure, depending on what you put in, how much of it in relation to the volume, how volatile it is, and the temperature. The autogenous pressure does not interfere much with the melting point (the melting line is almost vertical), but as $P$ changes with temperature you may never reach boiling conditions.
In DSC experiments it is possible to observe boiling points only if the pan has been carefully perforated with a hole of known size. It must be big enough that the pressure inside the pan does not build up above atmospheric, but small enough that it does not cause premature loss of mass during the run. The latter spoils the calculation of the intensive value (per mole, per gram) of the heat of vaporization.
The liquid evaporation line ends in a point that we have encountered before: the critical point $T_C$. As temperature increases the liquid and vapor phases in equilibrium with each other start to resemble each other more and more and at $T_c$ they coalesce. At this point the liquid-gas equilibrium becomes indifferent with respect to density and large fluctuations occur leading to critical opalescence.
Notice that there is a relationship of dimensionality between the objects in the diagram and the number of phases present:
• 2 D planes: one phase
• 1 D curves: two phases
• 0 D point: three phases
As you see the sum is always three.
Number of moles
So far we have typically considered one substance at a time, but for chemists it is imperative to deal with more than one, because we are typically changing one into the other in our reactions. This means that the number of moles $n$, which we often simply set equal to one, now becomes an important variable in its own right. Besides, we will actually have two (or more) of them: the number of moles of one component and that of the other. This makes $n$ a much less trivial variable.
This is already the case at a simple melting point, say when ice melts, because we are dealing with changing quantities of ice and water:
$n_{ice} + n_{water} = n_{total} \nonumber$
If all we do is turn water into ice or vice versa, we have $dn_{total}=0$, so that:
$dn_{ice} =- dn_{water} \nonumber$
To deal with changing n's, we need to expand our mathematical notation a bit.
Partial variables
So far we have simply divided our thermodynamic functions if they were extensive by the number of moles and arrived at intensive molar values:
$G_{molar} = \dfrac{G}{ n} \nonumber$
$V_{molar} = \dfrac{V}{n} \nonumber$
We have written such intensive molar values by putting a bar over the symbol $G$ or $V$. We should note that scaling the function this way rests on the assumption that $G$ depends on the variable $n$ as a straight line through the origin.
If we have the same pure compound in two phases, like ice and water we can still apply this principle and write:
$G_{molar}^{ice} = \dfrac{G^{ice}}{n^{ice}} \nonumber$
$V_{molar}^{ice} = \dfrac{V^{ice}}{n^{ice}} \nonumber$
$G_{molar}^{water} = \dfrac{G^{water}}{n^{water}} \nonumber$
$V_{molar}^{water} = \dfrac{V^{water}}{n^{water}} \nonumber$
If we have a mixture of two substances present as $n_1$ and $n_2$ moles, the dependency need not be linear on either if the two substances interact with each other. This is also true for functions like the volume of a liquid mixture. In the presence of interactions, volumes do not have to be linearly additive. We can define a partial molar value, e.g. for the volume:
$V_{\text{partial molar},1} = \left(\dfrac{\partial V}{\partial n_1}\right)_{n_2} \nonumber$
$V_{\text{partial molar},2} = \left(\dfrac{\partial V}{\partial n_2}\right)_{n_1} \nonumber$
The notation of putting a bar over the $V$ symbol is used for these partial quantities as well. Partial molar volumes have been measured for many binary systems. They are functions of the composition (mole fraction) as well as the temperature and to a lesser extent the pressure.
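To make the definition concrete, the sketch below differentiates a hypothetical (entirely made-up) model for the total volume of a binary liquid mixture; the cross term mimics a nonideal interaction. It also checks that $V = n_1\bar{V}_1 + n_2\bar{V}_2$, the standard Euler-theorem result for an extensive function that is homogeneous of degree one in the mole numbers.

```python
import sympy as sp

n1, n2 = sp.symbols("n1 n2", positive=True)

# Hypothetical mixture volume in cm^3: pure molar volumes 18 and 58 plus a
# nonideal cross term. Not data for any real system.
V = 18 * n1 + 58 * n2 - 3 * n1 * n2 / (n1 + n2)

V1_bar = sp.diff(V, n1)   # partial molar volume of component 1 (n2 constant)
V2_bar = sp.diff(V, n2)   # partial molar volume of component 2 (n1 constant)

at_equimolar = {n1: 1, n2: 1}
print(sp.simplify(V1_bar).subs(at_equimolar),
      sp.simplify(V2_bar).subs(at_equimolar))          # 69/4 and 229/4 cm^3/mol
print(sp.simplify(n1 * V1_bar + n2 * V2_bar - V))      # -> 0 (Euler check)
```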
The partial molar Gibbs free energy
The partial molar Gibbs free energy, $\left(\dfrac{∂G}{∂n_i}\right)_{P,T,n_{j\neq i}}$, is denoted $μ$ and is called the chemical potential (or thermodynamic potential).
When numbers of moles can change we can write the corresponding change in $G$ as:
$dG = -SdT + VdP + \sum_i^N \left( \dfrac{\partial G}{\partial n_i} \right)_{P,T,n_{j\neq i}}dn_i \nonumber$
$dG = -SdT + VdP + \sum_i^N μ_idn_i \nonumber$
over $N$ phases in the system. As you can see we are adding a set of conjugate variables $μ_in_i$ for each phase $i$. If we are considering a pure component (but in different modifications, like ice and steam), we can still write:
$μ_i = \left( \dfrac{\partial G}{\partial n_i} \right)_{P,T,n_{j\neq i}} = \dfrac{G_i}{n_i} \nonumber$
As soon as we are dealing with mixtures we really do have derivatives.
First Order Transitions
The following plot shows the Gibbs energy as a function of temperature, including phase changes from solid to liquid (melting) and liquid to gas (boiling).
Gibbs energy ($\bar{G}$) as a function of temperature ($T$).
Although the $G$ curve is continuous, its first-order derivative ($-S$) is discontinuous at the phase changes. This is why such a transition is called a first-order transition. We could say that:
• $G$ is continuous but has a kink
• The first order derivatives ($H$,$S$,..) are discontinuous (have a jump)
• The second order derivatives ($C_P$, ..) have a singularity (go to ∞)
Second Order Transitions
More subtle transitions where $G$ is continuous, $H$ and $S$ are also continuous but have a kink and the discontinuity is only found in the second order derivatives (such as $C_P$) also exist. They are called second order transitions. In such a case:
• $G$ is continuous and has no kink
• The first order derivatives ($H$,$S$,..) are continuous (but have a kink)
• The second order derivatives ($C_P$, ..) are discontinuous (have a jump)
Table 23.2.1: Properties of Phase Transitions
Derivative order Function(s) 1st order transition 2nd order transition
0 $G$, $A$ kink smooth
1 $H$, $S$, $V$, .. jump kink
2 $C_P$, $C_V$, $α$, $κ$ singularity (∞) jump
This classification goes back to Ehrenfest. Obviously it is based on the question: which order of derivative is the first to become discontinuous? Of course we could extend this principle and define third-order transitions, but there are reasons to be doubtful that such things exist. Another problem is that it is assumed that the order must be an integer: 1, 2, etc. Is it possible to have a transition of intermediate, non-integer order, say 1.3? Although derivatives of fractional order are beyond the scope of the chemistry curriculum, the mathematics does exist (Liouville).
Schematic comparison of $G$, $S$ and $C_P$ for 1st and 2nd order transitions
The Gibbs free energy is a particularly important function in the study of phases and phase transitions. The behavior of $G(N, P,T)$, particularly as a function of $P$ and $T$, can signify a phase transition and can tell us some of the thermodynamic properties of different phases.
Consider, first, the behavior of $G$ vs. $T$ between the solid and liquid phases of benzene: We immediately notice several things. First, although the free energy is continuous across the phase transition, its first derivative, $\partial G/\partial T$ is not: The slope of $G(T)$ in the solid region is different from the slope in the liquid region. When the first derivative of the free energy with respect to one of its dependent thermodynamic variables is discontinuous across a phase transition, this is an example of what is called a first order phase transition. The solid-liquid-gas phase transition of most substances is first order. When the free energy exhibits continuous first derivatives but discontinuous second derivatives, the phase transition is called second order. Examples of this type of phase transition are the order-disorder transition in paramagnetic materials.
Now, recall that
$S = -\dfrac{\partial G}{\partial T} \label{13.1}$
Consider the slopes in the solid and liquid parts of the graph:
$\dfrac{\partial G^\text{(solid)}}{\partial T} = -S^\text{(solid)}, \: \: \: \: \: \: \: \dfrac{\partial G^\text{(liquid)}}{\partial T} = -S^\text{(liquid)} \label{13.2}$
However, since
$\dfrac{\partial G^\text{(liquid)}}{\partial T} < \dfrac{\partial G^\text{(solid)}}{\partial T} \label{13.3}$
(note that the slopes are all negative, and the slope of the liquid line is more negative than that of the solid line), it follows that $-S^\text{(liquid)} < -S^\text{(solid)}$ or $S^\text{(liquid)} > S^\text{(solid)}$. This is what we might expect considering that the liquid phase is higher in entropy than the solid phase. The same argument can be made with regards to the gaseous phase.
Similarly, if we consider the dependence of $G$ on pressure, we obtain a curve like that shown in the figure below:
As noted previously, here again, we see that the first derivative of $\bar{G} (P)$ is discontinuous, signifying a first-order phase transition. Recalling that the average molar volume is
$\bar{V} = \dfrac{\partial \bar{G}}{\partial P} \label{13.4}$
From the graph, we see that the slopes obey
$\bar{V}^\text{(gas)} \gg \bar{V}^\text{(liquid)} > \bar{V}^\text{(solid)} \label{13.5}$
as one might expect for a normal substance like benzene at a temperature above its triple point. Because the temperature is above the triple point, the free energy follows a continuous path (even though it is not everywhere differentiable) from gas to liquid to solid.
On the other hand, for water, we see something a bit different, namely, that
$\bar{V}^\text{(gas)} \gg \bar{V}^\text{(solid)} > \bar{V}^\text{(liquid)} \label{13.6}$
at a temperature below the triple point. This, again, indicates, the unusual property of water that its solid phase is less dense than its liquid phase in the coexistence region.
Interestingly, if we look at how the plot of $G(P)$ changes with $T$, we obtain a plot like that shown below: Below the triple point, it is easy to see from the benzene phase diagram that the system proceeds directly from solid to gas. There is a liquid curve on this plot that is completely disconnected from the gas-solid curve, suggesting that, below the triple point, the liquid state can exist metastably if at all. At the triple point, the solid can transition into the liquid or gas phases depending on the value of the free energy. Near the critical temperature, we see the liquid-gas transition line, while the solid line is disconnected. Above the critical temperature, the system exists as a supercritical fluid, which is shown on the lower line, and this line now shows no derivative discontinuity.
Conjugate Variables
As discussed before, there are many other forms of work possible, such as electrical work, magnetic work or elastic work. These are commonly incorporated in the formalism of thermodynamics by adding other terms, e.g.:
$dG = -SdT + VdP + ℰde + MdH + FdL + γdA \nonumber$
1. ℰ stands for the electromotive force and $de$ for the amount of charge transported against it.
2. $M$ stands for the magnetization and $dH$ for the (change in) magnetic field.
3. $F$ stands for the elastic force of e.g. a rubber band, $dL$ for the length over which it is stretched.
4. $γ$ stands for the surface tension (e.g. of a soap bubble), $A$ for its surface area.
The terms always appear in a pair of what is known as conjugate variables. That is even clearer if we write out the state function rather than its differential form:
$G = U + PV -TS + ℰe + MH + FL + γA + ... \nonumber$
The PV term can also be generalized -and needs to be so- for a viscous fluid to a stress-strain conjugate pair. It then involves a stress tensor. We will soon encounter another conjugate pair: μdn that deals with changes in composition (n) and the thermodynamic potential μ.
23.03: The Chemical Potentials of a Pure Substance in Two Phases in Equilibrium
Chemical Potential of Two Phases in Equilibrium
Consider equilibrium between two phases, e.g. water ice ($ice$) and liquid water ($water$), at constant $T$ and $P$. Therefore:
$dG = \cancel{-SdT + VdP} +μ_{ice}dn_{ice}+μ_{water}dn_{water} \nonumber$
There is a relationship between the amount of ice and water:
$dn_{ice} = -dn_{water} \nonumber$
From this, we get:
$0 = [μ_{ice}-μ_{water}]dn_{water} = \Delta \mu\; dn_{water} \nonumber$
As $dn_{water}$ is not zero, this means that $\Delta \mu$ must be zero! This must hold true for any set of points where ice and water are in equilibrium. The statement is not just for liquid and solid water, but for any two phases in equilibrium. That is, any two phases in equilibrium will always have the same chemical potential.
The Clapeyron Equation
Consider, for example, the melting line of water: the almost vertical line in the phase diagram. Its points are not at the same $P$ and $T$, but we can find out where they should be by considering the thermodynamic potential $\mu$ as a function of $T$ and $P$:
$dμ = \left(\dfrac{∂μ}{∂P}\right)_T dP + \left(\dfrac{∂μ}{∂T}\right)_P dT \nonumber$
Because $\mu= \left(\frac{\partial G}{\partial n}\right)_{T,P} = \bar{G}$, it is not hard to identify the partial derivatives:
$\left(\dfrac{∂μ}{∂P}\right) = \left(\dfrac{∂\bar{G}}{∂P}\right) = \bar{V} \nonumber$
$\left(\dfrac{∂μ}{∂T}\right) = \left(\dfrac{∂\bar{G}}{∂T}\right) = -\bar{S} \nonumber$
This is true for both water and ice, or any two phases in equilibrium. As the $Δμ=0$, we can equate the $dμ$ expressions for both water and ice:
$\left(\dfrac{∂μ_{ice}}{∂P}\right)_T dP + \left(\dfrac{∂μ_{ice}}{∂T}\right)_P dT=\left(\dfrac{∂μ_{water}}{∂P}\right)_T dP + \left(\dfrac{∂μ_{water}}{∂T}\right)_P dT \nonumber$
Rearranging and identifying the partials gives:
$\bar{V}_{ice}dP -\bar{S}_{ice}dT= \bar{V}_{water}dP -\bar{S}_{water}dT \nonumber$
Solving for $dP/dT$ we get:
$\dfrac{dP}{dT} = \dfrac{Δ\bar{S}}{Δ\bar{V}} \nonumber$
As $\Delta \bar{G}= \Delta \bar{H}-T\Delta \bar{S} = 0$, we have:
$Δ\bar{S} = \dfrac{Δ\bar{H}}{T_m} \nonumber$
So:
$\dfrac{dP}{dT} = \dfrac{Δ\bar{H}}{TΔ\bar{V}} \nonumber$
This expression should be valid for all points along a phase boundary, such as the melt line. In fact, it tells us that the slope of the phase boundary is defined by $\Delta\bar{H}/T\Delta\bar{V}$. For water and ice, we immediately see why the melt line runs a little to the left: exceptionally, $\Delta\bar{V}$ is negative for melting ice, because liquid water is actually a little denser than ice.
The above expression(s) are named after Clapeyron. The values of $\Delta\bar{H}$ and $\Delta\bar{V}$ do not change much with pressure and can often be considered constants for the melting line. When gases are involved that is not really true.
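As a quick numerical illustration, here is a minimal Python sketch of the Clapeyron slope of the ice/water melting line. The densities and enthalpy of fusion below are approximate literature values used only for illustration; the steep negative slope follows directly from the negative $\Delta\bar{V}$.

```python
# Sketch: slope of the ice/water melting line from the Clapeyron equation,
# dP/dT = ΔH_fus / (T ΔV).  Density and enthalpy values are approximate
# literature numbers, used here only for illustration.
M = 18.02e-3                       # kg/mol, molar mass of water
rho_ice, rho_liq = 916.8, 999.8    # kg/m^3 near 0 °C (approximate)
dH_fus = 6010.0                    # J/mol, enthalpy of fusion (approximate)
T_m = 273.15                       # K, normal melting point

dV = M / rho_liq - M / rho_ice     # m^3/mol; negative because ice is less dense
dPdT = dH_fus / (T_m * dV)         # Pa/K

print(f"ΔV_fus = {dV*1e6:.2f} cm^3/mol")
print(f"dP/dT  = {dPdT/1e5:.0f} bar/K")   # about -135 bar/K: steep and negative
```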
Evaporation
In Section 23.3, the Clapeyron Equation was derived for melting points.
$\dfrac{dP}{dT} = \dfrac{ΔH_{molar}}{TΔV_{molar}} \nonumber$
However, our argument is actually quite general and should hold for vapor equilibria as well. The only problem is that the molar volumes of gases are by no means as nicely constant as they are for condensed phases (i.e., for condensed phases, both $α$ and $κ$ are pretty small).
We can write:
$\dfrac{dP}{dT} =\dfrac{ ΔH_{molar}}{TΔV_{molar}} = \dfrac{ΔH_{molar}}{T\left[V_{molar}^{gas}-V_{molar}^{liquid} \right]} \nonumber$
as
$V_{molar}^{gas} \gg V_{molar}^{liquid} \nonumber$
we can approximate
$V_{molar}^{gas}-V_{molar}^{liquid} \nonumber$
by just taking $V_{molar}^{gas}$. Furthermore, if the vapor is considered an ideal gas, then
$V_{molar}^{gas} = \dfrac{RT}{P} \nonumber$
We get
$\dfrac{dP}{dT} = \dfrac{ΔH_{molar}^{vap}\,P}{RT^2} \quad \Rightarrow \quad \dfrac{1}{P}\,\dfrac{dP}{dT} = \dfrac{d \ln P}{dT} = \dfrac{ΔH_{molar}^{vap}}{RT^2} \label{CCe}$
Equation $\ref{CCe}$ is known as the Clausius-Clapeyron equation. We can further work out the integration and find how the equilibrium vapor pressure changes with temperature:
$\ln \left( \dfrac{P_2}{P_1} \right)= \dfrac{-ΔH_{molar}^{vap}}{R} \left[\dfrac{1}{T_2}-\dfrac{1}{T_1} \right] \nonumber$
Thus if we know the molar enthalpy of vaporization we can predict the vapor lines in the diagram. Of course the approximations made are likely to lead to deviations if the vapor is not ideal or very dense (e.g., approaching the critical point).
The Clapeyron Equation
The Clapeyron equation attempts to answer the question of what the shape of a two-phase coexistence line is. In the $P-T$ plane, we see a function $P(T)$, which gives us the dependence of $P$ on $T$ along a coexistence curve.
Consider two phases, denoted $\alpha$ and $\beta$, in equilibrium with each other. These could be solid and liquid, liquid and gas, solid and gas, two solid phases, etc. Let $\mu_\alpha (P, T)$ and $\mu_\beta (P, T)$ be the chemical potentials of the two phases. We have just seen that
$\mu_\alpha (P, T) = \mu_\beta (P, T) \label{14.1}$
Next, suppose that the pressure and temperature are changed by $dP$ and $dT$. The changes in the chemical potentials of each phase are
$d \mu_{\alpha} (P, T) = d \mu_{\beta} (P, T) \label{14.2a}$
$\left( \dfrac{\partial \mu_{\alpha}}{\partial P} \right)_T dP + \left( \dfrac{\partial \mu_{\alpha}}{\partial T} \right)_P dT = \left( \dfrac{\partial \mu_{\beta}}{\partial P} \right)_T dP + \left( \dfrac{\partial \mu_{\beta}}{\partial T} \right)_P dT \label{14.2b}$
However, since $G(n, P, T) = n \mu (P, T)$, the molar free energy $\bar{G} (P, T)$, which is $G(n, P, T)/n$, is also just equal to the chemical potential
$\bar{G} (P, T) = \dfrac{G(n, P, T)}{n} = \mu (P, T) \label{14.3}$
Moreover, the derivatives of $\bar{G}$ are
$\left( \dfrac{\partial \bar{G}}{\partial P} \right)_T = \bar{V}, \: \: \: \: \: \: \: \left( \dfrac{\partial \bar{G}}{\partial T} \right)_P = -\bar{S} \label{14.4}$
Applying these results to the chemical potential condition in Equation $\ref{14.2b}$, we obtain
\begin{align} \left( \dfrac{\partial \bar{G}_\alpha}{\partial P} \right)_T dP + \left( \dfrac{\partial \bar{G}_\alpha}{\partial T} \right)_P dT &= \left( \dfrac{\partial \bar{G}_\beta}{\partial P} \right)_T dP + \left( \dfrac{\partial \bar{G}_\beta}{\partial T} \right)_P dT \[5pt] \bar{V}_\alpha dP - \bar{S}_\alpha dT &= \bar{V}_\beta dP - \bar{S}_\beta dT \end{align} \label{14.5}
Dividing through by $dT$, we obtain
\begin{align} \bar{V}_\alpha \dfrac{\partial P}{\partial T} - \bar{S}_\alpha &= \bar{V}_\beta \dfrac{\partial P}{\partial T} - \bar{S}_\beta \[5pt] (\bar{V}_\alpha - \bar{V}_\beta) \dfrac{\partial P}{\partial T} &= \bar{S}_\alpha - \bar{S}_\beta \[5pt] \dfrac{dP}{dT} &= \dfrac{\bar{S}_\alpha - \bar{S}_\beta}{\bar{V}_\alpha - \bar{V}_\beta} \end{align} \label{14.6}
The importance of the quantity $dP/dT$ is that it represents the slope of the coexistence curve on the phase diagram between the two phases. Now, in equilibrium $dG = 0$, and since $G = H - TS$, it follows that $dH = T \: dS$ at fixed $T$. In the narrow temperature range in which the two phases are in equilibrium, we can assume that $H$ is independent of $T$, hence, we can write $S = H/T$. Consequently, we can write the molar entropy difference as
$\bar{S}_\alpha - \bar{S}_\beta = \dfrac{\bar{H}_\alpha - \bar{H}_\beta}{T} \label{14.7}$
and the pressure derivative $dP/dT$ becomes
$\dfrac{dP}{dT} = \dfrac{\bar{H}_\alpha - \bar{H}_\beta}{T (\bar{V}_\alpha - \bar{V}_\beta)} = \dfrac{\Delta_{\alpha \beta} \bar{H}}{T \Delta_{\alpha \beta} \bar{V}} \label{14.8}$
a result known as the Clapeyron equation, which tells us that the slope of the coexistence curve is related to the ratio of the molar enthalpy between the phases to the change in the molar volume between the phases. If the phase equilibrium is between the solid and liquid phases, then $\Delta_{\alpha \beta} \bar{H}$ and $\Delta_{\alpha \beta} \bar{V}$ are $\Delta \bar{H}_\text{fus}$ and $\Delta \bar{V}_\text{fus}$, respectively. If the phase equilibrium is between the liquid and gas phases, then $\Delta_{\alpha \beta} \bar{H}$ and $\Delta_{\alpha \beta} \bar{V}$ are $\Delta \bar{H}_\text{vap}$ and $\Delta \bar{V}_\text{vap}$, respectively.
For the liquid-gas equilibrium, some interesting approximations can be made in the use of the Clapeyron equation. For this equilibrium, Equation $\ref{14.8}$ becomes
$\dfrac{dP}{dT} = \dfrac{\Delta \bar{H}_\text{vap}}{T (\bar{V}_g - \bar{V}_l)} \label{14.9}$
In this case, $\bar{V}_g \gg \bar{V}_l$, and we can approximate Equation $\ref{14.9}$ as
$\dfrac{dP}{dT} \approx \dfrac{\Delta \bar{H}_\text{vap}}{T \bar{V}_g} \label{14.10}$
Suppose that we can treat the vapor phase as an ideal gas. Certainly, this is not a good approximation so close to the vaporization point, but it leads to an example we can integrate. Since $PV_g = nRT$, $P \bar{V}_g = RT$, Equation $\ref{14.10}$ becomes
\begin{align} \dfrac{dP}{dT} &= \dfrac{\Delta \bar{H}_\text{vap} P}{RT^2} \[5pt] \dfrac{1}{P} \dfrac{dP}{dT} &= \dfrac{\Delta \bar{H}_\text{vap}}{RT^2} \[5pt] \dfrac{d \: \text{ln} \: P}{dT} &= \dfrac{\Delta \bar{H}_\text{vap}}{RT^2} \end{align} \label{14.11}
which is called the Clausius-Clapeyron equation. We now integrate both sides, which yields
$\text{ln} \: P = -\dfrac{\Delta \bar{H}_\text{vap}}{RT} + C \nonumber$
where $C$ is a constant of integration. Exponentiating both sides, we find
$P(T) = C' e^{-\Delta \bar{H}_\text{vap}/RT} \nonumber$
which actually has the wrong curvature for large $T$, but since the liquid-vapor coexistence line terminates in a critical point, as long as $T$ is not too large, the approximation leading to the above expression is not that bad.
If we, instead, integrate both sides, the left from $P_1$ to $P_2$, and the right from $T_1$ to $T_2$, we find
\begin{align} \int_{P_1}^{P_2} d \: \text{ln} \: P &= \int_{T_1}^{T_2} \dfrac{\Delta \bar{H}_\text{vap}}{RT^2} dT \[5pt] \text{ln} \: \left( \dfrac{P_2}{P_1} \right) &= -\dfrac{\Delta \bar{H}_\text{vap}}{R} \left( \dfrac{1}{T_2} - \dfrac{1}{T_1} \right) \[5pt] &= \dfrac{\Delta \bar{H}_\text{vap}}{R} \left( \dfrac{T_2 - T_1}{T_1 T_2} \right) \end{align} \label{14.12}
assuming that $\Delta \bar{H}_\text{vap}$ is independent of $T$. Here $P_1$ is the pressure of the liquid phase, and $P_2$ is the pressure of the vapor phase. Suppose we know $P_2$ at a temperature $T_2$, and we want to know $P_3$ at another temperature $T_3$. The above result can be written as
$\text{ln} \: \left( \dfrac{P_3}{P_1} \right) = -\dfrac{\Delta \bar{H}_\text{vap}}{R} \left( \dfrac{1}{T_3} - \dfrac{1}{T_1} \right) \label{14.13}$
Subtracting the two results, we obtain
$\text{ln} \: \left( \dfrac{P_2}{P_3} \right) = -\dfrac{\Delta \bar{H}_\text{vap}}{R} \left( \dfrac{1}{T_2} - \dfrac{1}{T_3} \right) \label{14.14}$
so that we can determine the vapor pressure at any temperature if it is known at one temperature.
In order to illustrate the use of this result, consider the following example:
Example 23.4.1
At $1 \: \text{bar}$, the boiling point of water is $373 \: \text{K}$. At what pressure does water boil at $473 \: \text{K}$? Take the heat of vaporization of water to be $40.65 \: \text{kJ/mol}$.
Solution
Let $P_1 = 1 \: \text{bar}$ and $T_1 = 373 \: \text{K}$. Take $T_2 = 473 \: \text{K}$, and we need to calculate $P_2$. Substituting in the numbers, we find
\begin{align} \text{ln} \: P_2(\text{bar}) &= -\dfrac{(40.65 \: \text{kJ/mol})(1000 \: \text{J/kJ})}{8.3145 \: \text{J/mol} \cdot \text{K}} \left( \dfrac{1}{473 \: \text{K}} - \dfrac{1}{373 \: \text{K}} \right) = 2.77 \[5pt] P_2(\text{bar}) &= (1 \: \text{bar}) \: e^{2.77} = 16 \: \text{bar} \end{align} \nonumber
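A short Python sketch that reproduces the arithmetic of this example, using the integrated Clausius-Clapeyron equation and the numbers quoted above:

```python
import math

# Minimal check of Example 23.4.1 using
# ln(P2/P1) = -(ΔH_vap/R) (1/T2 - 1/T1).
R = 8.3145            # J/(mol K)
dH_vap = 40.65e3      # J/mol
P1, T1 = 1.0, 373.0   # bar, K
T2 = 473.0            # K

lnP2 = math.log(P1) - dH_vap / R * (1.0 / T2 - 1.0 / T1)
print(f"P2 ≈ {math.exp(lnP2):.0f} bar")   # ≈ 16 bar, as in the worked example
```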
Learning Objectives
• Apply the Clausius-Clapeyron equation to estimate the vapor pressure at any temperature.
• Estimate the heat of phase transition from the vapor pressures measured at two temperatures.
The vaporization curves of most liquids have similar shapes with the vapor pressure steadily increasing as the temperature increases (Figure $1$).
A good approach is to find a mathematical model for the pressure increase as a function of temperature. Experiments showed that the vapor pressure $P$ and temperature $T$ are related,
$P \propto \exp \left(- \dfrac{\Delta H_{vap}}{RT}\right) \ \label{1}$
where $\Delta{H_{vap}}$ is the Enthalpy (heat) of Vaporization and $R$ is the gas constant (8.3145 J mol-1 K-1).
A simple relationship can be found by integrating Equation \ref{1} between two pressure-temperature endpoints:
$\ln \left( \dfrac{P_1}{P_2} \right) = \dfrac{\Delta H_{vap}}{R} \left( \dfrac{1}{T_2}- \dfrac{1}{T_1} \right) \label{2}$
where $P_1$ and $P_2$ are the vapor pressures at two temperatures $T_1$ and $T_2$. Equation \ref{2} is known as the Clausius-Clapeyron Equation and allows us to estimate the vapor pressure at another temperature, if the vapor pressure is known at some temperature, and if the enthalpy of vaporization is known.
Alternative Formulation
The order of the temperatures in Equation \ref{2} matters as the Clausius-Clapeyron Equation is sometimes written with a negative sign (and switched order of temperatures):
$\ln \left( \dfrac{P_1}{P_2} \right) = - \dfrac{\Delta H_{vap}}{R} \left( \dfrac{1}{T_1}- \dfrac{1}{T_2} \right) \label{2B}$
Example $1$: Vapor Pressure of Water
The vapor pressure of water is 1.0 atm at 373 K, and the enthalpy of vaporization is 40.7 kJ mol-1. Estimate the vapor pressure at temperature 363 and 383 K respectively.
Solution
Using the Clausius-Clapeyron equation (Equation $\ref{2B}$), we have:
\begin{align} P_{363} &= 1.0 \exp \left[- \left(\dfrac{40,700}{8.3145}\right) \left(\dfrac{1}{363\;K} -\dfrac{1}{373\; K}\right) \right] \nonumber \[4pt] &= 0.697\; atm \nonumber \end{align} \nonumber
\begin{align} P_{383} &= 1.0 \exp \left[- \left( \dfrac{40,700}{8.3145} \right)\left(\dfrac{1}{383\;K} - \dfrac{1}{373\;K} \right) \right] \nonumber \[4pt] &= 1.409\; atm \nonumber \end{align} \nonumber
Note that the increase in vapor pressure from 363 K to 373 K is 0.303 atm, but the increase from 373 to 383 K is 0.409 atm. The increase in vapor pressure is not a linear process.
Discussion
We can use the Clausius-Clapeyron equation to construct the entire vaporization curve. There is a deviation from the experimental values because the enthalpy of vaporization varies slightly with temperature.
The Clausius-Clapeyron equation can be also applied to sublimation; the following example shows its application in estimating the heat of sublimation.
Example $2$: Sublimation of Ice
The vapor pressures of ice at 268 K and 273 K are 2.965 and 4.560 torr respectively. Estimate the heat of sublimation of ice.
Solution
The enthalpy of sublimation is $\Delta{H}_{sub}$. Use a piece of paper and derive the Clausius-Clapeyron equation so that you can get the form:
\begin{align} \Delta H_{sub} &= \dfrac{ R \ln \left(\dfrac{P_{273}}{P_{268}}\right)}{\dfrac{1}{268 \;K} - \dfrac{1}{273\;K}} \nonumber \[4pt] &= \dfrac{8.3145 \ln \left(\dfrac{4.560}{2.965} \right)}{ \dfrac{1}{268\;K} - \dfrac{1}{273\;K} } \nonumber \[4pt] &= 52,370\; J\; mol^{-1}\nonumber \end{align} \nonumber
Note that the heat of sublimation is the sum of heat of melting (6,006 J/mol at 0°C and 101 kPa) and the heat of vaporization (45,051 J/mol at 0 °C).
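The same arithmetic can be scripted; the following Python sketch simply inverts the integrated Clausius-Clapeyron equation using the two ice vapor pressures quoted in this example:

```python
import math

# Extracting ΔH_sub from two measured vapor pressures of ice
# (values quoted in Example 2 above).
R = 8.3145                    # J/(mol K)
P1, T1 = 2.965, 268.0         # torr, K
P2, T2 = 4.560, 273.0         # torr, K

dH_sub = R * math.log(P2 / P1) / (1.0 / T1 - 1.0 / T2)
print(f"ΔH_sub ≈ {dH_sub:.0f} J/mol")     # ≈ 52,370 J/mol
```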
Exercise $2$
Show that the vapor pressure of ice at 274 K is higher than that of water at the same temperature. Note that the curve of vaporization is also called the curve of evaporation.
Example $3$: Vaporization of Ethanol
Calculate $\Delta{H_{vap}}$ for ethanol, given vapor pressure at 40 oC = 150 torr. The normal boiling point for ethanol is 78 oC.
Solution
Recognize that we have TWO sets of $(P,T)$ data:
• Set 1: (150 torr at 40+273K)
• Set 2: (760 torr at 78+273K)
We then directly use these data in Equation \ref{2B}
\begin{align*} \ln \left(\dfrac{150}{760} \right) &= \dfrac{-\Delta{H_{vap}}}{8.314} \left[ \dfrac{1}{313} - \dfrac{1}{351}\right] \[4pt] \ln 150 -\ln 760 &= \dfrac{-\Delta{H_{vap}}}{8.314} \left[ \dfrac{1}{313} - \dfrac{1}{351}\right] \[4pt] -1.623 &= \dfrac{-\Delta{H_{vap}}}{8.314} \left[ 0.003195 - 0.002849 \right] \end{align*}
Then solving for $\Delta{H_{vap}}$
\begin{align*} \Delta{H_{vap}} &= 3.90 \times 10^4 \text{ joule/mole} \[4pt] &= 39.0 \text{ kJ/mole} \end{align*}
Advanced Note
It is important to not use the Clausius-Clapeyron equation for the solid to liquid transition. That requires the use of the more general Clapeyron equation
$\dfrac{dP}{dT} = \dfrac{\Delta \bar{H}}{T \Delta \bar{V}} \nonumber$
where $\Delta \bar{H}$ and $\Delta \bar{V}$ is the molar change in enthalpy (the enthalpy of fusion in this case) and volume respectively between the two phases in the transition.
23.05: Chemical Potential Can be Evaluated From a Partition Function
The chemical potential can be given in terms of a partition function. Internal energy can be defined as:
$U=RT^2\left(\frac{\partial \ln Q}{\partial T}\right)_{n,V} \nonumber$
And entropy can be defined as:
$S=RT\left(\frac{\partial \ln Q}{\partial T}\right)_{n,V} + R\ln Q \nonumber$
We know that Helmholtz energy is:
$A=U-TS \nonumber$
Using our two equations above, we obtain:
$A=-RT\ln Q \nonumber$
Now, let's change gears a bit to show how Helmholtz energy is related to chemical potential. The total differential for Helmholtz energy is:
$dA = \left(\frac{\partial A}{\partial T}\right)_{n,V}dT + \left(\frac{\partial A}{\partial V}\right)_{n,T}dV + \left(\frac{\partial A}{\partial n}\right)_{V,T}dn \nonumber$
And the fundamental equation is:
$dA = -SdT-PdV+\left(\frac{\partial A}{\partial n}\right)_{V,T}dn \nonumber$
Using the relationship between Helmholtz energy and Gibbs energy:
$G=A+PV \nonumber$
We obtain:
$\begin{split}dG &= dA+d(PV) \ &= -SdT+VdP+\left(\frac{\partial A}{\partial n}\right)_{V,T}dn\end{split} \nonumber$
We know that the change in Gibbs energy is:
$\begin{split}dG &= -SdT+VdP+\left(\frac{\partial G}{\partial n}\right)_{P,T}dn \ &= -SdT+VdP+\mu dn \end{split} \nonumber$
Inspecting these equations, we see that:
$\mu = \left(\frac{\partial G}{\partial n}\right)_{P,T} = \left(\frac{\partial A}{\partial n}\right)_{V,T} \nonumber$
This shows us that, as long as the natural variables for each thermodynamic potential are held constant, the partial derivatives of Gibbs energy and Helmholtz energy with respect to the number of moles, $n$ are equal to the chemical potential. We can now plug in our expression above for Helmholtz energy in terms of the partition function:
$\mu = -RT\left(\frac{\partial \ln Q}{\partial n}\right)_{V,T} \nonumber$
We now have chemical potential written in terms of the partition function, $Q$.
So far, we have only discussed systems that are comprised of one component. Because a lot of chemistry occurs in mixtures or produces a mixture, chemists need to consider the thermodynamics of mixtures. A mixture can consist of many different components, however, for the sake of simplicity, we will restrict ourselves for now to two-component mixtures. Two-component mixtures can consist of two gases, two liquids, two solids, or even a liquid and a gas.
Partial Quantities and Scaling
Let's consider a two-component system where the volume and number of moles are changing. For example, we could start with a system of unit size and reduce its size stepwise in successive steps by taking half of it and throwing the other half away. The number of moles of each component, $n_1$ and $n_2$, will change as the (scaled) volume of the system, $V$, changes:
$dn_1=n_1dV \nonumber$
$dn_2=n_2dV \nonumber$
The extensive Gibbs free energy will be affected the same way:
$dG = GdV \nonumber$
At constant $T$ and $P$ we can write:
$dG = \cancel{-SdT} + \cancel{VdP} + \mu_1dn_1+\mu _2dn_2 \nonumber$
So:
$dG = \mu _1dn_1+\mu _2dn_2 \nonumber$
$GdV = \mu _1n_1dV+\mu _2n_2dV \nonumber$
If we integrate this from the original size, 1, down to 0 (or from 0 to 1, it does not matter), we get:
$\int_0^1 GdV= \int_0^1 \mu _1n_1dV+ \int_0^1 \mu _2n_2dV \nonumber$
$G \int_0^1 dV= \mu _1n_1 \int_0^1 dV+\mu _2n_2 \int_0^1 dV \nonumber$
$G=\mu _1n_1+\mu _2n_2 \nonumber$
By the same argument we have:
$V=\bar{V}_{1}n_1+\bar{V}_{2}n_2 \nonumber$
where $\bar{V}_i$ is the partial molar volume for component $i$. These partial molar volumes are generally a function of composition (and $P$, $T$) and have been tabulated for a number of liquid systems. They allow us to calculate the real volume of a binary mixture. Volumes are generally speaking not strictly additive. This fact is typically ignored in volumetric analysis and the use of molarities. Fortunately the deviations are often negligible in dilute solutions.
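As an illustration, here is a small Python sketch of the relation $V=\bar{V}_1 n_1+\bar{V}_2 n_2$. The partial molar volumes used below are assumed, illustrative numbers for a water/ethanol-like mixture (not tabulated data), chosen only to show that the mixture volume differs from the sum of the pure-component volumes:

```python
# Sketch: total volume of a binary liquid from partial molar volumes,
# V = V1_bar * n1 + V2_bar * n2.  All numbers below are assumed example values.
n1, n2 = 2.0, 1.0            # mol of component 1 (water) and 2 (ethanol)
V1_bar, V2_bar = 17.9, 55.3  # cm^3/mol, assumed partial molar volumes

V_mix = V1_bar * n1 + V2_bar * n2
V_additive = 18.07 * n1 + 58.4 * n2   # using (approximate) pure molar volumes

print(f"V(mixture)            ≈ {V_mix:.1f} cm^3")
print(f"V(additive, pure)     ≈ {V_additive:.1f} cm^3")  # volumes are not strictly additive
```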
For phase diagrams, molarity (moles per liter) is not a very suitable quantity to use for concentration due to its volume dependence. Usually we work with mole fractions or molalities (moles per kilogram), where there are no volume dependencies.
Gaseous Mixtures
Gases can always mix in any ratio and mixtures typically act close to ideal unless heavily compressed or brought to low temperatures. The only exception is if the gases react (e.g., HCl and NH3). Gas molecules experience little interaction with each other and, therefore, it does not matter much whether the molecules are different or the same. The total pressure can be computed by adding the partial pressures of the two components (Dalton's Law of Partial Pressures):
$P_{total} = P_1 + P_2 \label{PreDalton}$
Liquid Mixtures
There are binary liquid systems that are fully miscible and are said to act as ideal solutions. Liquid molecules typically experience strong interactions with their neighbors. For the solution to be ideal, the interactions must remain equally strong even when the neighboring substance is different. This means they must be chemically similar. For this reason, liquid binaries are often not ideal. The next nearest thing are regular solutions. Even these systems can display phase segregation and limited mutual solubilities at low temperatures. Many liquid-liquid binaries diverge from ideality even more than the regular solutions and many of them are hardly miscible at all.
Table 24.1.1: Solutions
Solution/mixture | Interactions | Miscibility
Ideal gas | none | complete
Ideal liquid | strong but similar | complete
Regular liquid | strong, modestly dissimilar | not always complete
Real liquid | often strongly dissimilar | partial or none
Solid Mixtures
Solid binaries tend to be even less miscible than liquid binaries to the point that immiscibility is the rule and miscibility is the exception. Even totally miscible systems like electrum (the alloys of silver and gold) are far from ideal.
Another point of practical (kinetic rather than thermodynamic) importance is that even if two compounds are able to form a homogeneous solid solution, it usually takes heating for prolonged periods to get them to mix because solid diffusion is typically very slow. Nevertheless, solid solubility is an important issue for many systems, particularly for metal alloys. Two molecular solid substances that differ vastly in shape, size, polarity and or hydrogen bonding (e.g. organic compounds) typically have negligible mutual solid solubility. The latter fact is frequently exploited in organic chemistry to purify compounds through recrystallization.
Note
Solid solutions are relatively infrequent and never ideal.
Ideal liquid/Ideal Gas Phase Diagrams
Let's mix two liquids together. Liquids typically have different boiling points, with one being more volatile than the other. The vapor pressure of a component scales simply with the equilibrium vapor pressure of the pure component. In the gas phase, Dalton's law is applicable:
$y_i= \dfrac{P_i}{P_{total}} \label{Dalton}$
This is a consequence of the fact that ideal gases do not interact. The latter implies that the total pressure is simply the sum of the partial ones:
$P_{total} = \sum_i^N P_i \nonumber$
If the liquid solution is ideal, then the vapor pressure of both components follow Raoult's law, which states that the equilibrium vapor pressure above the mixture is the equilibrium pressure of the pure component times the mole fraction:
$P_i = x_iP^*_i \label{Raoult}$
• $P_i$ is the vapor pressure of component $i$ in the mixture
• $P^*_i$ is equilibrium vapor pressure of the pure component $i$.
• $x_i$ is the mole fraction of $i^{th}$ component in the liquid phase.
Note that values for pure components are typically indicated by adding an asterisk * superscript.
The idea behind Raoult's law is that if the interactions are similar, it is a matter of random chance which component sits at the interface at any given moment. The equilibrium vapor pressure has to do with the probability that a molecule escapes from the interface into the gas phase and is dependent on both the substance's volatility and the number of its molecules that cover the surface. This leads to Raoult's Law, where we must multiply the vapor pressure of the pure liquid (volatility) by the mole fraction (number on the surface).
Note: Applicability of Raoult's Law
Raoult's law seldom holds exactly; it is most applicable when the two components are chemically very similar, such as two isomers like 1-propanol and 2-propanol.
The Pressure Phase Diagram
If we assume that temperature is constant, we can plot the total pressure for both Dalton and Raoult's laws versus composition (of gas: $y_1$ and liquid: $x_1$ on the same axis).
Liquid Phase:
$P_{total} = P_1 + P_2 = x_1P^*_1 +x_2P^*_2 = x_1P^*_1 +(1-x_1)P^*_2= P^*_2- x_1(P^*_2-P^*_1) \label{liquidus}$
Clearly this is a straight line going from $P^*_2$ at $x=0$ to $P^*_1$ at $x=1$.
However the composition of the vapor in equilibrium with a liquid at a given mole fraction $x$ is different than that of the liquid. So $y$ is not $x$. If we take Dalton's law (Equation $\ref{Dalton}$) and substitute Raoult's Law (Equation $\ref{Raoult}$) in the numerator and the straight line in the denominator we get:
$y_1 = \dfrac{x_1P^*_1}{P^*_2- x_1(P^*_2-P^*_1)} \label{vaporus1}$
Exercise
Suppose $P^*_1 = 50$ Torr and $P^*_2 = 25$ Torr. If $x_1 = 0.6$, what is the composition of the vapor?
We can rearrange Equations $\ref{liquidus}$ and $\ref{vaporus1}$ to plot the total pressure as function of $y_1$:
$P_{total}= \dfrac{P^*_1P^*_2 }{P^*_1 + (P^*_2-P^*_1)y_1} \label{vaporus2}$
This is not a straight line.
As you can see, when we plot both lines we get a diagram with three regions. At high pressures we just have a liquid. At low pressures we just have a gas. In between we have a phase gap or two phase region. Points inside this region represent states that the system cannot achieve homogeneously. The horizontal tie-line shows which two phases coexist. I used the same 25 and 50 Torr values for the pure equilibrium pressures as in the question above. If you try to make a system with overall composition x and impose a pressure that falls in the forbidden zone you get two phases: a gaseous one that is richer in the more volatile component and a liquid one that is poorer in the volatile component than the overall composition would indicate.
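The construction can be summarized in a few lines of Python. This sketch uses the same 25 and 50 Torr pure-component pressures; the liquid composition chosen is an arbitrary example value:

```python
# Sketch of the isothermal P-x-y construction:
# the liquidus is linear in x1 (Raoult), the vaporus follows from Dalton.
P1_star, P2_star = 50.0, 25.0    # Torr, pure-component vapor pressures

def P_liquidus(x1):
    """Total pressure versus liquid composition (Raoult's law)."""
    return P2_star - x1 * (P2_star - P1_star)

def y1_from_x1(x1):
    """Vapor composition in equilibrium with liquid of composition x1."""
    return x1 * P1_star / P_liquidus(x1)

x1 = 0.4                         # arbitrary liquid composition
print(f"P_total = {P_liquidus(x1):.0f} Torr, y1 = {y1_from_x1(x1):.2f}")
# The vapor is richer in the more volatile component (y1 > x1).
```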
The Temperature Phase Diagram
Note that the question: what phase do we have when? is really a function of both $P$ and $T$, so that if we want to represent all our knowledge in a diagram we should make it a three dimensional picture. This is not so easy to draw and not easy to comprehend visually either. This is why we usually look at a 2D cross section of the 3D space.
The above diagram is isothermal: we vary $P$, keeping $T$ constant. It is, however, more usual (and easier) to do it the other way around. We keep pressure constant (say 1 bar, that's easy) and start heating things up isobarically.
The boiling points of our mixtures can also be plotted against $x$ (the liquid composition) and $y$ (the gaseous one) on the same horizontal axis. Again because in general $y$ is not equal $x$ we get two different curves. Neither of them are straight lines in this case and we end up with a lens-shaped two phase region:
binary T-X diagram showing the lever rule
What happens to a mixture with a given overall composition $x (=x_1)$ when it is brought to a temperature where it boils can be seen at the intersection of a vertical line (an isopleth) at $x_{overall}$ and a horizontal one (an isotherm) at $T_{boil}$. If the intersection point lies inside the two phase region, a vapor phase and a liquid phase result that have a different composition from the overall one. The vapor phase is always richer in the more volatile component (the one with the lowest boiling point, on the left in the diagram). The liquid phase is enriched in the less volatile one.
The Lever Rule
How much of each phase is present is represented by the arrows in the diagram. The amount of liquid is proportional to the left arrow, the amount of gas to the right one (i.e. it works crosswise). The composition of the liquid in equilibrium with the vapor is:
$x_2 = \dfrac{n^{liq}_2}{n^{liq}_{1+2}} \nonumber$
$x_2 \, n^{liq}_{1+2} = n^{liq}_2 \nonumber$
The composition of the vapor is:
$y_2 = \dfrac{n^{gas}_2}{n^{gas}_{1+2}} \nonumber$
$y_2 \, n^{gas}_{1+2} = n^{gas}_2 \nonumber$
The overall composition is:
$x_{all} = \dfrac{n^{liq+gas}_2 }{n^{liq+gas}_{1+2}} \nonumber$
$x_{all} \, n^{liq+gas}_{1+2} = n^{gas}_2+n^{liq}_2 \nonumber$
$x_{all} \, n^{liq+gas}_{1+2} = y_2 \, n^{gas}_{1+2}+x_2 \, n^{liq}_{1+2} \nonumber$
$x_{all} \, n^{gas}_{1+2}+x_{all} \, n^{liq}_{1+2} = y_2 \, n^{gas}_{1+2}+x_2 \, n^{liq}_{1+2} \nonumber$
Thus:
$\dfrac{n^{liq}_{1+2} }{n^{gas}_{1+2}} = \dfrac{y_2-x_{all}}{ x_{all}-x_2} \nonumber$
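A minimal Python sketch of the lever rule; the compositions below are assumed example values read off a hypothetical tie-line:

```python
# Lever rule: given the liquid (x2) and vapor (y2) compositions from the
# tie-line and the overall composition x_all, the ratio of the phase amounts
# follows directly.  All compositions below are assumed example values.
x2, y2, x_all = 0.30, 0.55, 0.40   # mole fraction of component 2

ratio_liq_to_gas = (y2 - x_all) / (x_all - x2)   # n_liq / n_gas
frac_liq = ratio_liq_to_gas / (1.0 + ratio_liq_to_gas)

print(f"n_liq/n_gas = {ratio_liq_to_gas:.2f}")
print(f"fraction of material in the liquid phase = {frac_liq:.2f}")
```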
Distillation
The difference in composition between the gas and the liquid can be exploited to separate the two components, at least partially. We could trap the vapor and cool it down to form a liquid with a different composition. We could then boil it again and repeat the process. Each time the vapor will be more enriched in the volatile component whereas the residual liquid is more enriched in the less volatile one. This process is known as distillation. In practice the process is done on a fractionation column, which makes it possible to have a series of vapor-liquid equilibria at once.
A good degree of purity can be reached this way, although 100% purity would take an infinite number of distillation steps.
At equilibrium, there is no change in chemical potential for the system:
$\sum_i n_i d\mu_i = 0 \label{eq1}$
This is the Gibbs-Duhem relationship and it places a compositional constraint upon any changes in the chemical potential in a mixture at constant temperature and pressure for a given composition. This result is easily derived when one considers that $\mu_i$ represents the partial molar Gibbs function for component $i$. And as with other partial molar quantities:
$G_\text{tot} = \sum_i n_i \mu_i$
Taking the derivative of both sides yields:
$dG_\text{tot} = \sum_i n_i d \mu_i + \sum_i \mu_i d n_i$
But $dG$ can also be expressed as:
$dG = VdP - SdT + \sum_i \mu_i d n_i$
Setting these two expressions equal to one another:
$\sum_i n_i d \mu_i + \cancel{ \sum_i \mu_i d n_i } = VdP - SdT + \cancel{ \sum_i \mu_i d n_i}$
And after canceling terms, one gets:
$\sum_i n_i d \mu_i = VdP - SdT \label{eq41}$
For a system at constant temperature and pressure:
$VdP - SdT = 0 \label{eq42}$
Substituting Equation \ref{eq42} into \ref{eq41} results in the Gibbs-Duhem equation (Equation \ref{eq1}). This expression relates how the chemical potential can change for a given composition while the system maintains equilibrium.
Gibbs-Duhem for Binary Systems
For a binary system consisting of two components, $A$ and $B$:
$n_Bd\mu_B + n_Ad\mu_A = 0$
Rearranging:
$d\mu_B = -\dfrac{n_A}{n_B} d\mu_A$
Consider a Gibbs free energy that only includes the $\mu n$ conjugate variables, as we obtained it from our scaling experiment at constant $T$ and $P$:
$G = \mu_An_A + \mu_Bn_B \nonumber$
Consider a change in $G$:
$dG = d(\mu_An_A) + d(\mu_Bn_B) \nonumber$
$dG = n_Ad\mu_A+\mu_Adn_A + n_Bd\mu_B+\mu_Bdn_B \nonumber$
However, if we simply write out a change in $G$ due to the number of moles we have:
$dG = \mu_Adn_A +\mu_Bdn_B \nonumber$
Consequently the other terms must add up to zero:
$0 = n_Ad\mu_A+ n_Bd\mu_B \nonumber$
$d\mu_A= - \dfrac{n_B}{n_A}d\mu_B \nonumber$
$d\mu_A= - \dfrac{x_B}{x_A}d\mu_B \nonumber$
In the last step we have simply divided both denominator and numerator by the total number of moles. This expression is the Gibbs-Duhem equation for a 2-component system. It relates the change in one thermodynamic potential ($d\mu_A$) to the other ($d\mu_B$).
Gibbs-Duhem in the Ideal Case
In the ideal case we have:
$\mu_B = \mu^*_B + RT \ln x_B \nonumber$
Gibbs-Duhem gives:
$d\mu_A = - \dfrac{x_B}{x_A} d\mu_B \nonumber$
As:
$d\mu_B = \dfrac{RT}{x_B}\,dx_B \nonumber$
with $x_B$ being the only active variable at constant temperature, we get:
$d\mu_A = - \dfrac{x_B}{x_A} \dfrac{RT}{x_B}\,dx_B = -\dfrac{RT}{x_A}\,dx_B = \dfrac{RT}{x_A}\,dx_A \nonumber$
If we now wish to find $\mu_A$, we need to integrate $d\mu_A$, e.g. from the pure compound ($x_A=1$) to $x_A$. This produces:
$\mu_A = \mu^*_A + RT \ln x_A \nonumber$
This demonstrates that Raoult's law can only hold over the whole range for one component if it also holds for the other over the whole range.
24.03: Chemical Potential of Each Component Has the Same Value in Each Phase in Which the Component Ap
In much the same fashion as the partial molar volume is defined, the partial molar Gibbs function is defined for compound $i$ in a mixture:
$\mu_i = \left( \dfrac{\partial G}{\partial n_i} \right) _{P,T,n_j\neq i} \label{eq1}$
The partial molar function is of particular importance and is called the chemical potential. The chemical potential tells how the Gibbs function will change as the composition of the mixture changes. Since systems tend to seek a minimum aggregate Gibbs function, the chemical potential will point to the direction the system can move in order to reduce the total Gibbs function and reach equilibrium. In general, the total change in the Gibbs function ($dG$) can be calculated from:
$dG = \left( \dfrac{\partial G}{\partial P} \right) _{T,n_i} dP + \left( \dfrac{\partial G}{\partial T} \right) _{P, n_i }dT + \sum_i \left( \dfrac{\partial G}{\partial n_i} \right) _{P,T,n_j\neq i} dn_i$
Or, by substituting the definition for the chemical potential, and evaluating the pressure and temperature derivatives:
$dG = VdP - SdT + \sum_i \mu_i dn_i$
But as it turns out, the chemical potential can be defined as the partial molar quantity of any of the four major thermodynamic functions $U$, $H$, $A$, or $G$:
Table $1$: Chemical potential can be defined as the partial molar derivative any of the four major thermodynamic functions
$dU = TdS - PdV + \sum_i \mu_i dn_i$ $\mu_i = \left( \dfrac{\partial U}{\partial n_i} \right) _{S,V,n_j\neq i}$
$dH = TdS + VdP + \sum_i \mu_i dn_i$ $\mu_i = \left( \dfrac{\partial H}{\partial n_i} \right) _{S,P,n_j\neq i}$
$dA = -SdT - PdV + \sum_i \mu_i dn_i$ $\mu_i = \left( \dfrac{\partial A}{\partial n_i} \right) _{V,T,n_j\neq i}$
$dG = VdP - SdT + \sum_i \mu_i dn_i$ $\mu_i = \left( \dfrac{\partial G}{\partial n_i} \right) _{P,T,n_j\neq i}$
The last definition, in which the chemical potential is defined as the partial molar Gibbs function, is the most commonly used, and perhaps the most useful (Equation \ref{eq1}). As the partial molar Gibbs function, it is easy to show that:
$d\mu = VdP - SdT$
where $V$ is the molar volume, and $S$ is the molar entropy. Using this expression, it is easy to show that:
$\left( \dfrac{\partial \mu}{\partial P} \right) _{T} = V$
and so at constant temperature:
$\int_{\mu^o}^{\mu} d\mu = \int_{P^o}^{P} V\,dP \label{eq5}$
So, for a substance for which the molar volume is fairly independent of pressure at constant temperature (i.e., $\kappa_T$ is very small), Equation \ref{eq5} becomes:
$\int_{\mu^o}^{\mu} d\mu = V \int_{P^o}^{P} dP$
$\mu - \mu^o = V(P-P^o)$
or:
$\mu = \mu^o + V(P-P^o)$
Where $P^o$ is the standard state pressure (1 bar) and $\mu^o$ is the chemical potential at the standard pressure. If the substance is highly compressible (such as a gas) the pressure dependence of the molar volume is needed to complete the integral. If the substance is an ideal gas:
$V =\dfrac{RT}{P}$
So at constant temperature, Equation \ref{eq5} then becomes:
$\int_{\mu^o}^{\mu} d\mu = RT \int_{P^o}^{P} \dfrac{dP}{P} \label{eq5b}$
or:
$\mu = \mu^o + RT \ln \left(\dfrac{P}{P^o} \right)$
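As a small numerical illustration, the following Python sketch evaluates the change in chemical potential of an ideal gas with pressure from this expression; the temperature and pressures are arbitrary example values:

```python
import math

# Sketch: pressure dependence of the chemical potential of an ideal gas,
# μ = μ° + RT ln(P/P°).  Only the change μ - μ° is computed here.
R, T = 8.3145, 298.15    # J/(mol K), K
P0 = 1.0                 # bar, standard-state pressure

for P in (0.1, 1.0, 10.0):                      # bar, example pressures
    dmu = R * T * math.log(P / P0)              # J/mol
    print(f"P = {P:5.1f} bar  ->  μ - μ° = {dmu/1000:+.2f} kJ/mol")
```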
A lot of chemistry takes place in solution and therefore this topic is of prime interest for chemistry.
Thermodynamic potentials of solutions
The Gibbs free energy of an ideal gas depends logarithmically on pressure:
$G = G^o + RT \ln \dfrac{P}{P^o} \nonumber$
$P^o$ is often dropped out of the formula, and we write:
$G = G^o + RT \ln P \nonumber$
Notice however that although $P$ and $P/P^o$ have the same numerical value, the dimensions are different. $P$ usually has dimensions of bar, but $P/P^o$ is dimensionless.
If we have a gas mixture we can hold the same logarithmic argument for each partial pressure as the gases do not notice each other. We do need to take into account the number of moles of each and work with (partial) molar values, i.e. the thermodynamic potential:
$μ_j = μ_j^o + RT \ln \dfrac{P_j}{P^o} \label{B}$
If we are dealing with an equilibrium over an ideal liquid solution, the situation in the gas phase gives us a probe for the situation in the liquid. The equilibrium must hold for each component $j$ (say, two in a binary mixture). That means that for each of them the thermodynamic potential in the liquid and in the gas must be equal:
$μ_j^{sln} = μ_j^{gas} \nonumber$
for all $j$. Consider what happens to a pure component, e.g. $j=1$ in equilibrium with its vapor. We can write:
$μ_1^{pure \,liq}= μ_1^{pure\, vapor}=μ_1^o + RT \ln \dfrac{P^*_1}{P^o} \nonumber$
The asterisk in $P^*_1$ denotes the equilibrium vapor pressure of pure component 1 and we will use that to indicate the thermodynamic potential of pure compounds too:
$μ_1^{*liq}= μ_1^o + RT \ln \dfrac{P^*_1}{P^o} \label{A}$
Combining Equations $\ref{A}$ and $\ref{B}$ we find a relationship between the solution and the pure liquid:
$μ_j^{sln}=μ^*_j + RT \ln \dfrac{P_j}{P^*_j} \nonumber$
Notice that the gas and its pressure is used to link the mixture and the pure compound.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(LibreTexts)/24%3A_Solutions_I_-_Volatile_Solutes/24.02%3A_The_Gibbs-Duhem_Equation.txt
|
Liquids tend to be volatile, and as such will enter the vapor phase when the temperature is increased to a high enough value, provided they do not decompose first. A volatile liquid is one that has an appreciable vapor pressure at the specified temperature. An ideal mixture containing at least one volatile liquid can be described using Raoult’s Law:
$P_j = x_jP^*_j \nonumber$
Raoult’s law can be used to predict the total vapor pressure above a mixture of two volatile liquids. As it turns out, the composition of the vapor will be different than that of the two liquids, with the more volatile compound having a larger mole fraction in the vapor phase than in the liquid phase. This is summarized in the following diagram for an ideal mixture of two compounds, water and ethanol, at 75 °C. At this temperature, water has a pure vapor pressure of 384 Torr and ethanol has a pure vapor pressure of 945 Torr. In Figure 24.4.1, the composition of the liquid phase is represented by the solid line and the composition of the vapor phase is represented by the dashed line.
Often, it is desirable to depict the phase diagram at a single pressure so that temperature and composition are the variables included in the graphical representation. In such a diagram, the vapor (which exists at higher temperatures) is indicated at the top of the diagram, while the liquid is at the bottom. A typical temperature vs. composition diagram is depicted in Figure 24.4.2 for an ideal mixture of two volatile liquids.
In this diagram, $T_A$ and $T_B$ represent the boiling points of pure compounds $A$ and $B$. If a system having the composition indicated by $\chi_B^c$ has its temperature increased to that indicated by point $c$, the system will consist of two phases: a liquid phase with a composition indicated by $\chi_B^d$, and a vapor phase with a composition indicated by $\chi_B^b$. The relative amounts of material in each phase can be described by the lever rule, as described previously.
Further, if the vapor with composition $\chi_B^b$ is condensed (the temperature is lowered to that indicated by point b') and re-vaporized, the new vapor will have the composition consistent with $\chi_B^a$. This demonstrates how the more volatile liquid (the one with the lower boiling temperature, which is $A$ in the case of the above diagram) can be purified from the mixture by collecting and re-evaporating fractions of the vapor. If the liquid was the desired product, one would collect fractions of the residual liquid to achieve the desired result. This process is known as distillation.
The Gibbs energy of mixing is always negative
When we add $n_A$ moles of component $A$ and $n_B$ moles of component $B$ to form an ideal liquid solution, this is generally a spontaneous process. Let us consider the Gibbs free energy change of that process:
$\Delta_{mix}G = n_A\mu_A^{sln}+n_B\mu_B^{sln} - (n_A\mu_A^* + n_B\mu_B^*)$
Using:
$\mu_i^{sln} = \mu_i^* + RT\ln{x_i}$
this expression simplifies to:
$\Delta_{mix} G = nRTx_A\ln{x_A} + nRTx_B\ln{x_B}$
where $n$ is the total moles. Mole fraction, $x_i$, is always less than one, so the Gibbs energy of mixing is always negative; mixing is always spontaneous. We can generalize this to mixtures with more than two components:
$\Delta_{mix} G = nRT\sum_i{x_i\ln{x_i}}$
This expression looks suspiciously familiar. Apart from a factor of $-T$, it is just like the entropy of mixing:
$\Delta_{mix} S = -nR\sum_i{x_i\ln{x_i}}$
Recalling the relationship between Gibbs energy and entropy:
$\Delta_{mix} G = \Delta_{mix}H-T\Delta_{mix}S$
This leaves no room at all for an enthalpy effect:
$\Delta_{mix} H = 0$
Even though there are strong interactions between neighboring particles in liquids, there is no enthalpy change. This implies that it does not matter what the neighboring molecules are. If we represent the average interaction energy between molecule $i$ and $j$ by $U_{ij}$, we are assuming that $U_{ij}$ is always the same. In practice, this is seldom the case. It usually does matter and then the enthalpy term is not zero. As this affects the thermodynamics of the liquid solution, it should also affect the vapor pressures that are in equilibrium with it.
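A short Python sketch of these ideal-mixing expressions for an equimolar binary mixture; the amounts and temperature are arbitrary example values:

```python
import math

# Sketch: ideal Gibbs energy and entropy of mixing for a binary mixture,
# Δ_mix G = nRT Σ x_i ln x_i  and  Δ_mix S = -nR Σ x_i ln x_i  (so Δ_mix H = 0).
R, T = 8.3145, 298.15   # J/(mol K), K
n_A, n_B = 1.0, 1.0     # mol of each component (example amounts)

n = n_A + n_B
x_A, x_B = n_A / n, n_B / n
sum_xlnx = x_A * math.log(x_A) + x_B * math.log(x_B)

dG_mix = n * R * T * sum_xlnx      # J, always negative
dS_mix = -n * R * sum_xlnx         # J/K, always positive

print(f"Δ_mix G = {dG_mix:.0f} J,  Δ_mix S = {dS_mix:.1f} J/K")
```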
In the ideal case, volumes are additive
From the change of $G$ in its natural variables, we know that:
$\left (\dfrac{\partial G}{\partial P} \right)_T =V \nonumber$
This means that the pressure derivative of the Gibbs energy of mixing gives the volume change upon mixing:
$\left (\dfrac{\partial \Delta_{mix} G}{\partial P} \right)_T =\Delta V_{mix} \nonumber$
In the ideal case we get:
$\left (\dfrac{\partial \Delta G^{ideal}}{\partial P} \right)_T =\Delta V_{mix}^{ideal} \nonumber$
$\left (\dfrac{\partial RT ( n_1 \ln x_1 + n_2\ln x_2)}{\partial P} \right)_T =\Delta V_{mix}^{ideal} =0 \nonumber$
In the ideal case, volumes are additive and we need not worry about how the partial molar volumes change with composition.
24.05: Most Solutions are Not Ideal
If we plot the partial pressure of one component, $P_1$, above a mixture with a mole fraction $x_1$, we should get a straight line with a slope of $P^*_1$ (Raoult's law). Above non-ideal solutions the graph will no longer be a straight line but a curve. However towards $x_1=1$ the curve typically approaches the Raoult line. On the other extreme, there often is a more or less linear region as well, but with a different slope (Figure 24.5.1 ). This means that we can identify two limiting laws:
• For $x \rightarrow 0$: Henry's law:
$P_1 = K_H x_1 \nonumber$
• For $x \rightarrow 1$: Raoult's law:
$P_1 = P^*_1 x_1 \nonumber$
This implies that the straight line that indicates the Henry expression will intersect the y-axis at $x=1$ (pure compound) at a different point than $P^*$. For $x \rightarrow 0$ (low concentrations) we can speak of component 1 being the solute (the minority component). At the other end $x \rightarrow 1$ it plays the role of the solvent (majority component).
Another thing to note is that $P^*$ is a property of one pure component, the value of $K_H$ by contrast is a property of the combination of two components, so it needs to be measured for each solute-solvent combination.
As you can see, we have a description for both the high and the low end, but not in the middle. In general, the more modest the deviations from ideality, the larger the range of validity of the two limiting laws. The way to determine $K_H$ would be to actually measure vapor pressures. How about the other component? Do we need to measure it too? Fortunately, we can use thermodynamics to show that the answer is no. There is a handy expression that saves us the trouble.
The behaviors of ideal solutions of volatile compounds follow Raoult’s Law. Henry’s Law can be used to describe the deviations from ideality. Henry's law states:
$P_B = k_H \chi_B$
where the Henry’s Law constant ($k_H$) is determined for the specific combination of solute and solvent. Henry’s Law is often used to describe the solubilities of gases in liquids. The relationship to Raoult’s Law is summarized in Figure $1$.
Henry’s Law is depicted by the upper straight line and Raoult’s Law by the lower.
Example $1$: Solubility of Carbon Dioxide in Water
The solubility of $CO_2(g)$ in water at 25 oC is 3.32 x 10-2 M with a partial pressure of $CO_2$ over the solution of 1 bar. Assuming the density of a saturated solution to be 1 kg/L, calculate the Henry’s Law constant for $CO_2$.
Solution:
In one L of solution, there is 1000 g of water (assuming the mass of CO2 dissolved is negligible.)
$(1000 \,g) \left( \dfrac{1\, mol}{18.02\,g} \right) = 55.5\, mol\, H_2O$
The solubility of $CO_2$ can be used to find the number of moles of $CO_2$ dissolved in 1 L of solution also:
$\dfrac{3.32 \times 10^{-2} mol}{L} \cdot 1 \,L = 3.32 \times 10^{-2} mol\, CO_2$
and so the mol fraction of $CO_2$ is
$\chi_b = \dfrac{3.32 \times 10^{-2} mol}{55.5 \, mol} = 5.98 \times 10^{-4}$
And so
$10^5\, Pa = 5.98 \times 10^{-4} k_H$
or
$k_H = 1.67 \times 10^8\, Pa$
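The same calculation in a few lines of Python, using the numbers quoted in this example:

```python
# Re-check of Example 1: Henry's law constant for CO2 in water,
# k_H = P_CO2 / x_CO2, with the numbers from the text.
P_CO2 = 1.0e5                 # Pa (1 bar partial pressure of CO2)
n_water = 1000.0 / 18.02      # mol of water in 1 L of solution
n_CO2 = 3.32e-2               # mol of dissolved CO2 in 1 L

x_CO2 = n_CO2 / (n_water + n_CO2)
k_H = P_CO2 / x_CO2
print(f"x_CO2 = {x_CO2:.2e},  k_H ≈ {k_H:.2e} Pa")   # ≈ 1.67e8 Pa
```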
Azeotropes
An azeotrope is a mixture whose vapor and liquid phases have the same composition when the mixture boils.
Azeotropes can be either maximum boiling or minimum boiling, as shown in Figure $\PageIndex{2; left}$. Regardless, distillation cannot purify past the azeotrope point, since the vapor and the liquid phases have the same composition. If a system forms a minimum boiling azeotrope and has a range of compositions and temperatures at which two liquid phases exist, the phase diagram might look like Figure $\PageIndex{2; right}$:
Another possibility that is common is for two substances to form a two-phase liquid, form a minimum boiling azeotrope, but for the azeotrope to boil at a temperature below which the two liquid phases become miscible. In this case, the phase diagram will look like Figure $3$.
Example $1$:
In the diagram, the make-up of the system in each region is summarized below the diagram. The point e indicates the azeotrope composition and boiling temperature.
1. Single phase liquid (mostly compound A)
2. Single phase liquid (mostly compound B)
3. Single phase liquid (mostly A) and vapor
4. Single phase liquid (mostly B) and vapor
5. Vapor (miscible at all mole fractions since it is a gas)
Solution
Within each two-phase region (III, IV, and the two-phase liquid region), the lever rule will apply to describe the composition of each phase present. So, for example, the system with the composition and temperature represented by point b (a single-phase liquid which is mostly compound A, designated by the composition at point a, and vapor with a composition designated by that at point c) will be described by the lever rule using the lengths of tie lines lA and lB.
Gibbs-Duhem and Henry's law
What happens when Raoult does not hold over the whole range? Recall that in a gas:
$μ_j = μ_j^o + RT \ln \dfrac{P_j}{P^o} \label{B}$
or
$μ_j = μ_j^o + RT \ln P_j \nonumber$
after dropping $P^o=1\; bar$ out of the notation. Note that numerically this does not matter, since $P_j$ is now assumed to be dimensionless.
Let's consider $dμ_1$ at constant temperature:
$dμ_1 = RT\left(\dfrac{\partial \ln P_1}{ \partial x_1}\right)dx_1 \nonumber$
likewise:
$dμ_2 = RT\left(\dfrac{\partial \ln P_2}{ \partial x_2}\right)dx_2 \nonumber$
If we substitute into the Gibbs-Duhem expression we get:
$x_1 \left(\dfrac{∂\ln P_1}{ ∂x_1}\right) dx_1+x_2 \left(\dfrac{∂\ln P_2}{∂x_2} \right) dx_2=0 \nonumber$
Because $dx_1= -dx_2$:
$x_ 1 \left( \dfrac{∂\ln P_1}{ ∂x_1} \right) =x_2 \left( \dfrac{∂\ln P_2}{∂x_2} \right) \nonumber$
(This is an alternative way of writing Gibbs-Duhem).
If in the limit for $x_1 \rightarrow 1$ Raoult Law holds then
$P_1 \rightarrow x_1P^*_1 \nonumber$
Thus:
$\dfrac{∂ \ln P_1}{∂x_1} = \dfrac{1}{x_1} \nonumber$
and
$\dfrac{x_1}{x_1}=x_2 \dfrac{∂ \ln P_2}{∂x_2} \nonumber$
$1=x_2 \dfrac{∂ \ln P_2}{ ∂x_2} \nonumber$
$\dfrac{1}{x_2}= \dfrac{∂ \ln P_2}{∂x_2} \label{EqA12}$
We can integrate Equation $\ref{EqA12}$ to form a logarithmic impression, but it will have an integration constant:
$\ln P_2 =\ln x_2 + constant \nonumber$
This constant of integration can be folded into the logarithm as a multiplicative constant, $K$
$\ln P_2 = \ln \left(K x_2 \right) \nonumber$
So for $x_1 \rightarrow 1$ (i.e., $x_2 \rightarrow 0$), we get that
$P_2=K x_2 \nonumber$
where $K$ is some constant, but not necessarily $P^*$. What this shows is that when one component follows Raoult the other must follow Henry and vice versa. (Note that the ideal case is a subset of this case, in that the value of $K$ then becomes $P^*$ and the linearity must hold over the whole range.)
Margules Functions
Of course a big drawback of the Henry law is that it only describes what happens at the two extremes of the phase diagram and not in the middle. In cases of moderate non-ideality, it is possible to describe the whole range (at least in good approximation) using a Margules function:
$P_1= \left(x_1P^*_1 \right)f_{Mar} \nonumber$
The function $f_{Mar}$ has the shape:
$f_{Mar}= \text{exp} \left[ αx_2^2+βx_2^3+δx_2^4 + .... \right] \nonumber$
Notice that the Margules function involves the mole fraction of the opposite component. It is an exponential of a series expansion with the constant and linear terms missing. As you can see, the function has a number of parameters $α$, $β$, $δ$, etc. that need to be determined by experiment. In general, the more the system diverges from ideality, the more parameters you need. Using Gibbs-Duhem, it is possible to translate the expression for $P_1$ into the corresponding one for $P_2$.
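Here is a minimal Python sketch of a one-parameter (two-suffix) Margules correction, $P_1 = x_1 P_1^* \exp(\alpha x_2^2)$. Both $P_1^*$ and $\alpha$ are assumed values, chosen only to illustrate how the curve deviates from and then merges with the Raoult line:

```python
import math

# One-parameter Margules correction to Raoult's law (a sketch with assumed values).
P1_star = 50.0     # Torr, assumed pure-component vapor pressure
alpha = 0.8        # assumed Margules parameter (positive deviation)

def P1(x1):
    x2 = 1.0 - x1
    return x1 * P1_star * math.exp(alpha * x2 ** 2)

for x1 in (0.1, 0.5, 0.9, 1.0):
    print(f"x1 = {x1:.1f}:  Raoult {x1 * P1_star:5.1f} Torr,  Margules {P1(x1):5.1f} Torr")
# Toward x1 -> 1 the Margules curve merges with the Raoult line, as expected.
```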
The bulk of the discussion in this chapter dealt with ideal solutions. However, real solutions will deviate from this kind of behavior. Much as in the case of gases, where fugacity was introduced to allow us to use the ideal models, activity is used to allow for the deviation of real solutes from limiting ideal behavior. The activity of a solute is related to its concentration by
$a_B=\gamma \dfrac{m_B}{m^o} \nonumber$
where $\gamma$ is the activity coefficient, $m_B$ is the molality of the solute, and $m^o$ is unit molality. The activity coefficient is unitless in this definition, and so the activity itself is also unitless. Furthermore, the activity coefficient approaches unity as the molality of the solute approaches zero, ensuring that dilute solutions behave ideally. The use of activity to describe the solute allows us to use the simple model for chemical potential by inserting the activity of a solute in place of its mole fraction:
$\mu_B =\mu_B^o + RT \ln a_B \nonumber$
The problem that then remains is the measurement of the activity coefficients themselves, which may depend on temperature, pressure, and even concentration.
Activity Coefficients for Ionic Solutes
For an ionic substance that dissociates upon dissolving
$MX(s) \rightarrow M^+(aq) + X^-(aq) \nonumber$
the chemical potential of the cation can be denoted $\mu_+$ and that of the anion as $\mu_-$. For a solution, the total molar Gibbs function of the solutes is given by
$G = \mu_+ + \mu_- \nonumber$
where
$\mu = \mu^* + RT \ln a \nonumber$
where $\mu^*$ denotes the chemical potential of an ideal solution, and $a$ is the activity of the solute. Substituting this into the above relationship yields
$G = \mu^*_+ + RT \ln a_+ + \mu_-^* + RT \ln a_- \nonumber$
Using a molal definition for the activity coefficient
$a_i = \gamma_im_i \nonumber$
The expression for the total molar Gibbs function of the solutes becomes
$G = \mu_+^* + RT \ln \gamma_+ m_+ + \mu_-^* + RT \ln \gamma_- m_- \nonumber$
This expression can be rearranged to yield
$G = \mu_+^* + \mu_-^* + RT \ln m_+m_- + RT \ln \gamma_+\gamma _- \nonumber$
where all of the deviation from ideal behavior comes from the last term. Unfortunately, it is impossible to experimentally deconvolute the term into the specific contributions of the two ions. So instead, we use a geometric average to define the mean activity coefficient, $\gamma _\pm$.
$\gamma_{\pm} = \sqrt{\gamma_+\gamma_-} \nonumber$
For a substance that dissociates according to the general process
$M_xX_y(s) \rightarrow x M^{y+} (aq) + yX^{x-} (aq) \nonumber$
the expression for the mean activity coefficient is given by
$\gamma _{\pm} = (\gamma_+^x \gamma_-^y)^{1/(x+y)} \nonumber$
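A minimal sketch of this formula in Python; the individual ionic coefficients used below are hypothetical numbers (they cannot actually be measured separately), chosen only to show how the stoichiometric exponents enter.

```python
def mean_activity_coefficient(gamma_plus, gamma_minus, x, y):
    """Mean ionic activity coefficient for M_x X_y: (γ+^x · γ-^y)^(1/(x+y))."""
    return (gamma_plus**x * gamma_minus**y) ** (1.0 / (x + y))

# 1:1 salt (e.g. NaCl): reduces to the simple geometric mean
print(mean_activity_coefficient(0.80, 0.76, 1, 1))   # ≈ 0.78
# 2:3 salt (e.g. Al2(SO4)3): x = 2, y = 3
print(mean_activity_coefficient(0.50, 0.40, 2, 3))   # ≈ 0.44
```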
Debye-Hückel Law
In 1923, Debye and Hückel (Debye & Hückel, 1923) suggested a means of calculating the mean activity coefficients from experimental data. Briefly, they suggest that
$\log _{10} \gamma_{\pm} = -\dfrac{1.824 \times 10^6}{(\epsilon T)^{3/2}} |z_+z_- | \sqrt{I} \nonumber$
where $\epsilon$ is the dielectric constant of the solvent, $T$ is the temperature in K, $z_+$ and $z_-$ are the charges on the ions, and $I$ is the ionic strength of the solution. $I$ is given by
$I = \dfrac{1}{2} \dfrac{m_+ z_+^2 + m_-z_-^2}{m^o} \nonumber$
For a solution in water at 25 °C, the prefactor evaluates to approximately 0.509, so the expression reduces to $\log_{10} \gamma_{\pm} = -0.509\, |z_+z_-| \sqrt{I}$.
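As a rough numerical illustration (assuming water at 25 °C, so the prefactor is about 0.509, and complete dissociation of the salt), the ionic strength and the limiting-law estimate of $\gamma_\pm$ can be computed in a few lines; the 0.010 molal CaCl2 example is hypothetical.

```python
import math

def ionic_strength(molalities, charges):
    """I = (1/2) * sum(m_i * z_i^2), with molalities in units of m°."""
    return 0.5 * sum(m * z**2 for m, z in zip(molalities, charges))

def log10_gamma_pm(z_plus, z_minus, I, A=0.509):
    """Debye-Hückel limiting law for water at 25 °C."""
    return -A * abs(z_plus * z_minus) * math.sqrt(I)

# 0.010 molal CaCl2: m(Ca2+) = 0.010, m(Cl-) = 0.020
I = ionic_strength([0.010, 0.020], [2, -1])     # = 0.030
print(I, 10 ** log10_gamma_pm(2, -1, I))        # γ± ≈ 0.66
```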
As seen before, activities are a way to account for deviation from ideal behavior while still keeping the formalism for the ideal case intact. For example, in an ideal solution we have:
$μ^{sln} = μ^* + RT \ln x_i \nonumber$
is replaced by
$μ^{sln} = μ^* + RT \ln a_i \nonumber$
The relationship between $a_i$ and $x_i$ is often written using an activity coefficient $γ$:
$a_i= γ_ix_i \nonumber$
Raoult versus Henry
Implicitly we have made use of Raoult's law here because we originally used
$x_i = \dfrac{P_i}{P^*_i} \nonumber$
In the case of a solvent this makes sense because Raoult's law is still valid in the limiting case, but for the solute it would make more sense to use Henry's law as a basis for the definition of activity:
$a_{solute,H} ≡ \dfrac{P_{solute}}{K_{x,H}} \nonumber$
This does mean that the $μ^*$ now becomes a $μ^{*Henry}$, because extrapolating Henry's law all the way to the other side of the diagram, where $x_{solute}=1$, points to a pressure that is not the equilibrium vapor pressure of this component. In fact it represents a virtual state of the system that cannot be realized. This, however, does not affect the usefulness of the convention.
Various concentration units
The subscript X was added to the K value because we are still using mole fractions. However Henry's law is often used with other concentration measures. The most important are:
• molarity
• molality
• mole fraction
Both the numerical values and the dimensions of K will differ depending on which concentration measure is used. In addition the pressure units can differ. For example for oxygen in water we have:
$K_{x,H} = 4.259 \times 10^{4}\ \text{atm}$
$K_{cp,H} = 1.3 \times 10^{-3}\ \text{mol L}^{-1}\,\text{atm}^{-1}$
$K_{pc,H} = 769.23\ \text{L atm mol}^{-1}$
As you can see, $K_{cp,H}$ is simply $1/K_{pc,H}$; both conventions are used.
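A short sketch showing that the two conventions give the same dissolved oxygen concentration. The oxygen partial pressure of 0.21 atm and the factor of about 55.5 mol of water per litre are assumptions made only for this illustration.

```python
# Dissolved O2 in water from the Henry constants quoted above.
P_O2 = 0.21                 # atm, roughly the partial pressure of O2 in air
K_x  = 4.259e4              # atm          (mole-fraction convention)
K_pc = 769.23               # L·atm/mol    (pressure/concentration convention)

x_O2 = P_O2 / K_x                    # mole fraction of dissolved O2
c_from_x  = x_O2 * 55.5              # convert to mol/L using ~55.5 mol H2O per litre
c_from_pc = P_O2 / K_pc              # directly in mol/L

print(f"{c_from_x:.2e}  {c_from_pc:.2e}  mol/L")   # both ≈ 2.7e-4 mol/L
```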
Note that in this case a choice based on Raoult is really not feasible. At room temperature we are far above the critical point of oxygen, which makes the equilibrium vapor pressure a non-existent entity. Returning to activities, we could use each of the versions of K as a basis for the activity definition. This means that when using activities it must be specified what scale we are using. Activities and Henry coefficients of dissolved gases in water (both fresh and salt) are quite important in geochemistry, environmental chemistry etc.
Non-volatile solutes
A special case arises if the vapor pressure of a solute is negligible, for example when we dissolve sucrose in water. In that case we can still use the Henry-based definition
$a_{solute,H} ≡ \dfrac{P_{solute}}{K_{x,H}} \nonumber$
Even though both $K$ and $P$ will be exceedingly small their ratio is still finite. However how do we determine either?
The answer lies in the solvent. Even if the vapor pressure of sucrose is immeasurably small, the water vapor pressure above the solution can be measured. The Gibbs-Duhem equation can then be used to translate one into the other. We can use Raoult Law to define the activity of the solvent:
$a_1 = \dfrac{P_1}{P^*_1} \nonumber$
We can measure the pressures as a function of the solute concentrations. At low concentrations
$\ln a_1 = \ln x_1 ≈ -x_2 \nonumber$
At higher concentrations we will get deviations, and we can write:
$\ln \dfrac{P_1}{P^*_1}=\ln a_1 ≈ -x_2φ \nonumber$
The 'fudge factor' $φ$ is known as the osmotic coefficient and can thus be determined as a function of the solute concentration from the pressure data. What we are really interested in is $a_2$, not $a_1$:
$a_2= γ_2x_2 \nonumber$
Using Gibbs-Duhem we can convert $φ$ into $γ_2$. Usually this is done in terms of molalities rather than mole fractions and it leads to this integral:
$\ln γ_{2,m} = φ - 1 + \int_{m'=0}^m \dfrac{φ - 1}{m'} dm' \nonumber$
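In practice this integral is evaluated numerically from tabulated $φ(m)$ data. The sketch below uses synthetic data of the form $φ = 1 + bm$ (a hypothetical, well-behaved nonelectrolyte), for which the integral can also be done analytically as a check: $\ln γ_2 = 2bm$.

```python
import numpy as np

# Synthetic osmotic-coefficient data (hypothetical): φ(m) = 1 + b·m
b = 0.15
m = np.linspace(1e-6, 0.5, 2001)
phi = 1.0 + b * m

integrand = (phi - 1.0) / m                  # (φ - 1)/m'
integral = np.trapz(integrand, m)            # numerical integral from ~0 to 0.5
ln_gamma2 = (phi[-1] - 1.0) + integral

print(ln_gamma2, 2 * b * m[-1])              # numerical vs analytic result (both ≈ 0.15)
```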
24.08: Activities are Calculated with Respect to Standard States
We need to define a new variable. The thermodynamic activity, $a$, is the effective concentration of a species in a mixture. It is a dimensionless quantity that is calculated with respect to a standard state. For a gas, this would be related to the fugacity and for a solution, to the concentration. The activity for a real gas is:
$a_i=\frac{f_i}{P^{\circ}}=\frac{\phi_iP_i}{P^{\circ}}=\frac{\phi_i(y_iP)}{P^{\circ}}$
For systems where we treat the gases as ideal:
$\phi_i=1$
$a_i=\frac{P_i}{P^{\circ}}=y_i\frac{P}{P^{\circ}}$
The activity for a solution:
$a_i=\gamma_i\,\sf\frac{[A]}{1\;\underline{M}}$
General chemistry and organic chemistry use ideal reactants where $\gamma_i=1$:
$a_i=\sf\frac{[A]}{1\;\underline{M}}$
The activity for a solid or liquid:
$a_i=1$
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(LibreTexts)/24%3A_Solutions_I_-_Volatile_Solutes/24.07%3A_Activities_of_Nonideal_Solutions.txt
|
Activity and activity coefficients
In the ideal case, we have seen that the thermodynamic potential for species $i$ can be written as:
$μ_i^{sln}=μ^o_i + RT \ln \,x_i=μ^o_i + RT \ln \left(\dfrac{P_i}{P^o}\right) \nonumber$
One approach to non-ideality is to simply redefine the problem and say:
$μ_i^{sln} \equiv μ^o_i + RT \ln\, a_i \nonumber$
The symbol ≡ indicates this is actually a definition. The newly defined variable $a_i$ is known as the activity. Alternatively we can define it as:
$[a_i] \equiv \dfrac{P_i}{P^o} \nonumber$
As at high enough values of the mole fraction we know that we can still apply Raoult law. So $a_i$ must approach $x_i$ in this limit, but for other concentrations this will no longer hold. Often this is expressed in terms of an activity coefficient $\gamma$:
$[a_i] = \gamma_i x_i \nonumber$
For high values of $x_i$, $\gamma_i$ will approach unity. If we model the non-ideality with a Margules function we see that:
$P_i = x_iP^of_{Mar}$
$[a_i] = \left[\frac{P_i}{P^0}\right] = \left[\frac{x_iP^0f_{Mar}}{P^0}\right] = [x_if_{Mar}]$
The activity coefficient and the Margules function are the same thing in this description.
Regular solutions
The simplest special case of a Margules function is the one where all Margules parameters except the first (here denoted $a$) can be neglected. Such a system is called a regular solution. In this case, we can write:
$a_1 = x_1e^{ax_2^2}$
We can use Gibbs-Duhem to show that this implies:
$a_2 = x_2e^{ax_1^2}$
Gibbs free energy of regular solutions
Consider the change in Gibbs free energy when we mix two components to form a regular solution:
$\Delta G_{mix} = n_1\mu_1^{sln}+n_2\mu_2^{sln} - (n_1\mu_1^* + n_2\mu_2^*)$
Dividing by the total number of moles, we get:
$\Delta_{mix}G = x_1\mu_1^{sln}+x_2\mu_2^{sln} - (x_1\mu_1^* + x_2\mu_2^*)$
Using:
$\mu_i^{sln} \equiv \mu_i^* + RT\ln{a_i}$
and:
$[a_i] = \gamma_i x_i$
We get:
$\frac{\Delta_{mix} G}{RT} = x_1\ln{x_1} + x_2\ln{x_2} + x_1\ln{\gamma_1} + x_2\ln{\gamma_2}$
For a regular solution:
$\ln{\gamma_1} = \ln{f_{Mar}} = ax_2^2$
$\ln{\gamma_2} = \ln{f_{Mar}} = ax_1^2$
This gives:
$\begin{split} \frac{\Delta_{mix} G}{RT} &= x_1\ln{x_1} + x_2\ln{x_2} + x_1ax_2^2 + x_2ax_1^2 \\ &= x_1\ln{x_1} + x_2\ln{x_2} + a[x_1+x_2]x_1x_2 \\ &= x_1\ln{x_1} + x_2\ln{x_2} + ax_1x_2 \end{split}$
Since:
$x_1 + x_2 = 1$
In this expression we see that we have an additional term beyond the entropy of mixing terms we had seen before. Its coefficient $a$ is dimensionless, but it represents the fact that the (strong!) interactions between the molecules differ depending on which species is the neighbor. In general, $a$ can be written as $W/RT$, where $W$ represents an energy (enthalpy) that takes the difference in interaction energies into account. $W$ does not depend strongly on temperature. We could look at $W$ as the difference in average interaction energies:
$W = 2U_{12} - U_{11} - U_{22}$
Rearranging we get:
$\frac{\Delta_{mix} G}{W} = \frac{RT}{W} [x_1\ln{x_1}+x_2\ln{x_2}]+x_1x_2$
The two terms will compete as a function of temperature. The mixing entropy will be more important at high temperatures, the interaction enthalpy at low temperatures. The entropy term has a minimum at $x_1=0.5$, the enthalpy term a maximum if $W$ is positive. So, one tends to favor mixing, the other segregation, and we will get a compromise between the two. Depending on the value of $RT/W$ (read: temperature), we can either get one or two minima. This means that at low temperatures there will be a solubility limit of 1 into 2 and vice versa. At higher temperatures the two components can mix completely. At the transition between these two regimes we will have a critical or consolute point.
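The competition can be made explicit numerically. The sketch below evaluates $\Delta_{mix}G/RT = x_1\ln x_1 + x_2 \ln x_2 + a x_1x_2$ on a grid and counts the interior minima; for the regular solution model the crossover from one minimum to two occurs at $a = W/RT = 2$, which is the consolute point.

```python
import numpy as np

def g_mix_over_RT(x1, a):
    """ΔG_mix/RT for a regular solution: x1 ln x1 + x2 ln x2 + a x1 x2."""
    x2 = 1.0 - x1
    return x1 * np.log(x1) + x2 * np.log(x2) + a * x1 * x2

x = np.linspace(1e-4, 1 - 1e-4, 2001)
for a in (1.5, 2.0, 2.5):                       # a = W/RT
    g = g_mix_over_RT(x, a)
    # count interior local minima to distinguish one-phase from two-phase behaviour
    minima = np.sum((g[1:-1] < g[:-2]) & (g[1:-1] < g[2:]))
    print(f"a = {a}: {minima} minimum/minima")  # 1 for a <= 2, 2 for a > 2
```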
Notice that even though we used the vapor pressures of the gas to develop our theory, they are conspicuously absent from the final result. The same thing we said about melting points holds true here. Because we are dealing with the miscibility behavior of two condensed phases, the outcome should not depend very strongly on the total pressure of our experiment.
Although in regular solutions the consolute point is predicted to be a maximum in temperature, we can find them as minima as well in practice. The nicotine-water system even has two consolute points, an upper and a lower one. When heating up a mixture of these we first observe mixing, then segregation and then mixing again. Obviously this behavior is far more complicated than we can describe with just one Margules parameter.
Partial molar volumes
What we said above about volumes simply being additive in the ideal case is no longer true here.
$\left(\frac{\partial\Delta G_{mix}}{\partial P}\right)_T = \Delta V_\text{regular}$
$\left(\frac{\partial\Delta H_{mix}+RT(n_1\ln{[x_1]}+n_2\ln{[x_2]})}{\partial P}\right)_T = \Delta V_\text{mix}$
$\left(\frac{\partial\Delta H_{mix}}{\partial P}\right)_T = \Delta V_\text{mix}$
In general the enthalpy of mixing does depend on pressure as it is related to the interactions between the molecules in solution. ($W$ depends on the distance between them). This means that partial molar volumes now become a function of composition and volume is no longer simply additive.
Real solutions
Notice that the curves are symmetrical around $x=0.5$. This implies that it is as easy (or not) to dissolve A into B as vice versa. In many cases this is not realistic. Many systems diverge more seriously from ideal behavior than the regular one. Up to a point we can model that by adding more terms to the Margules function. For example, adding a β-term undoes the symmetry (see example 24-7). However, many systems are so non-ideal that the Margules expression becomes unwieldy with too many parameters.
Boiling non-ideal solutions
Azeotropes
For ideal solutions we have seen that there is a lens-shaped two-phase region between the gas and the liquid phase. For non-ideal systems the two-phase region can attain different shapes. In many cases there is either a minimum or a maximum. At such a point the phase gap closes to a point that is known as an azeotrope. It represents a composition of the liquid that boils congruently, that is, the vapor and the liquid have the same composition. Azeotropes impose an important limitation on distillation: they represent the end point of a distillation beyond which we cannot purify by this method.
Eutectics
Another point to be made is that in the diagram with the consolute point we are assuming the pressure to be constant. If we lower the pressure this would affect the boiling points strongly: the whole gas-liquid gap would come down in temperature. The mixing behavior is only weakly affected. (The reason is that one involves the volume term of the gas, the other only of the liquid(s)). At lower pressures it is possible therefore that the consolute point is above the gas-liquid gap. In other words: the mixtures will boil before they get a chance to mix. The boiling points will be lower there than for the pure compounds. There will be a composition for which the boiling point is at a minimum and where the mixture boils congruently (i.e. to a vapor with the same (overall) composition).
The mutual solubility limits increase as temperature increases, just as happens in the critical mixing case, but due to the competition from the vapor phase this process comes to an end at the eutectic temperature. At this temperature one liquid always boils away completely, the other one only in part. At the eutectic composition they both boil away simultaneously.
24.E: Solutions I- Liquid-Liquid Solutions (Exercises)
TBA
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(LibreTexts)/24%3A_Solutions_I_-_Volatile_Solutes/24.09%3A_Gibbs_Energy_of_Mixing_of_Binary_Solutions_in_Terms_of_the_Activity_Coefficient.txt
|
• 25.1: Standard State of Nonvolatile Solutions
The standard state is a hypothetical solution of 1 mol/L in which the solute particles do not interact with each other.
• 25.2: The Activities of Nonvolatile Solutes
Electrolytic nonvolatile solutes have long-range ion-ion interactions, meaning they do not behave ideally even at low concentrations, and, unlike gases, liquids, and solids, there is not a simple substitution for the activity, \(a\).
• 25.3: Colligative Properties Depend only on Number Density
Colligative properties are properties that depend on the number of particles rather than their total mass. This implies that these properties can be used to measure molar mass. Colligative properties include: melting point depression boiling point elevation osmotic pressure.
• 25.4: Osmotic Pressure can Determine Molecular Masses
Osmometry is still of some practical usefulness in polymer science as it is able to measure large molecules up to about 8000 daltons. Many polymers, however, are bigger than that and their mass distribution is usually determined by different means.
• 25.5: Electrolytes Solutions are Nonideal at Low Concentrations
A solution with a strong electrolyte produces multiple charged solutes in solution. We need to consider the dissociation process and stoichiometry of the salt, as well as the electrostatic interactions between the solutes. The result is that electrolytes behave nonideally even at low concentrations.
• 25.6: The Debye-Hückel Theory
Debye and Hückel came up with a theoretical expression that makes it possible to predict mean ionic activity coefficients at sufficiently dilute concentrations. The theory considers the vicinity of each ion as an atmosphere-like cloud of charges of opposite sign that cancels out the charge of the central ion.
• 25.7: Extending Debye-Hückel Theory to Higher Concentrations
The Debye–Hückel theory deviates from real systems at high concentrations because the model is simple and does not take into account effects such as ion association, incomplete dissociation, ion shape and size, polarizability of the ions, the role of the solvent. Several approaches have been proposed to extend the validity of the Debye–Hückel theory, including the Extended Debye-Hückel equation, the Davies equation, the Pitzer equations and specific ion interaction theory.
• 25.8: Homework Problems
25: Solutions II - Nonvolatile Solutes
The activity is a relative measure as it measures equilibrium relative to a standard state. The standard state is defined by the International Union of Pure and Applied Chemistry (IUPAC) and followed systematically by chemists around the globe.︎ The standard state for a solution is defined in terms of the infinite-dilution behavior. This is in contrast to the standard state concentration of 1 mol/L. This can be reconciled by considering that the standard state is a hypothetical solution of 1 mol/L in which the solute has infinite-dilution properties, e.g. solute particles do not interact with each other. This means that the activity coefficient describes all non-ideal behavior when the value is not equal to 1.
25.02: The Activities of Nonvolatile Solutes
For non-ideal gases, we introduced in chapter 11 the concept of fugacity as an effective pressure that accounts for non-ideal behavior. If we extend this concept to non-ideal solution, we can introduce the activity of a liquid or a solid, $a$, as:
$\mu_{\text{non-ideal}} = \mu^{- {\ominus} } + RT \ln a, \label{14.1.1}$
where $\mu$ is the chemical potential of the substance or the mixture, and $\mu^{-\ominus}$ is the chemical potential at standard state. Comparing this definition to Equation 11.4.2, it is clear that the activity is equal to the fugacity for a non-ideal gas (which, in turn, is equal to the pressure for an ideal gas). However, for a liquid and a liquid mixture, it depends on the chemical potential at standard state. This means that the activity is not an absolute quantity, but rather a relative term describing how “active” a compound is compared to standard state conditions. The choice of the standard state is, in principle, arbitrary, but conventions are often chosen out of mathematical or experimental convenience. We already discussed the convention that standard state for a gas is at $P^{-\ominus}=1\;\text{bar}$, so the activity is equal to the fugacity. The standard state for a component in a solution is the pure component at the temperature and pressure of the solution. This definition is equivalent to setting the activity of a pure component, $i$, at $a_i=1$.
For a component in a solution we can use Equation 11.4.2 to write the chemical potential in the gas phase as:
$\mu_i^{\text{vapor}} = \mu_i^{- {\ominus} } + RT \ln \dfrac{P_i}{P^{-\ominus}}. \label{14.1.2}$
If the gas phase is in equilibrium with the liquid solution, then:
$\mu_i^{\text{solution}} = \mu_i^{\text{vapor}} = \mu_i^*, \label{14.1.3}$
where $\mu_i^*$ is the chemical potential of the pure element. Subtracting Equation \ref{14.1.3} from Equation \ref{14.1.2}, we obtain:
$\mu_i^{\text{solution}} = \mu_i^* + RT \ln \dfrac{P_i}{P^*_i}. \label{14.1.4}$
For an ideal solution, we can use Raoult’s law, Equation 13.1.1, to rewrite Equation \ref{14.1.4} as:
$\mu_i^{\text{solution}} = \mu_i^* + RT \ln x_i, \label{14.1.5}$
which relates the chemical potential of a component in an ideal solution to the chemical potential of the pure liquid and its mole fraction in the solution. For a non-ideal solution, the partial pressure in Equation \ref{14.1.4} is either larger (positive deviation) or smaller (negative deviation) than the pressure calculated using Raoult’s law. The chemical potential of a component in the mixture is then calculated using:
$\mu_i^{\text{solution}} = \mu_i^* + RT \ln \left(\gamma_i x_i\right), \label{14.1.6}$
where $\gamma_i$ is a positive coefficient that accounts for deviations from ideality. This coefficient is either larger than one (for positive deviations), or smaller than one (for negative deviations). The activity of component $i$ can be calculated as an effective mole fraction, using:
$a_i = \gamma_i x_i, \label{14.1.7}$
where $\gamma_i$ is defined as the activity coefficient. The partial pressure of the component can then be related to its vapor pressure, using:
$P_i = a_i P_i^*. \label{14.1.8}$
Comparing Equation \ref{14.1.8} with Raoult’s law, we can calculate the activity coefficient as:
$\gamma_i = \dfrac{P_i}{x_i P_i^*} = \dfrac{P_i}{P_i^{\text{R}}}, \label{14.1.9}$
where $P_i^{\text{R}}$ is the partial pressure calculated using Raoult’s law. This result also proves that for an ideal solution, $\gamma=1$. Equation \ref{14.1.9} can also be used experimentally to obtain the activity coefficient from the phase diagram of the non-ideal solution. This is achieved by measuring the value of the partial pressure of the vapor of a non-ideal solution. Examples of this procedure are reported for both positive and negative deviations in Figure $1$.
• As we already discussed in chapter 10, the activity is the most general quantity that we can use to define the equilibrium constant of a reaction (or the reaction quotient). The advantage of using the activity is that it’s defined for ideal and non-ideal gases and mixtures of gases, as well as for ideal and non-ideal solutions in both the liquid and the solid phase.$^1$
1. Notice that, since the activity is a relative measure, the equilibrium constant expressed in terms of the activities is also a relative concept. In other words, it measures equilibrium relative to a standard state. This fact, however, should not surprise us, since the equilibrium constant is also related to $\Delta_{\text{rxn}} G^{-\ominus}$ using Gibbs’ relation. This is why the definition of a universally agreed-upon standard state is such an essential concept in chemistry, and why it is defined by the International Union of Pure and Applied Chemistry (IUPAC) and followed systematically by chemists around the globe.︎
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(LibreTexts)/25%3A_Solutions_II_-_Nonvolatile_Solutes/25.01%3A_Raoult%27s_and_Henry%27s_Laws_Define_Standard_States.txt
|
Colligative properties are important properties of solutions as they describe how the properties of the solvent will change as solute (or solutes) is (are) added. Before discussing these important properties, let us first review some definitions.
• Solution – a homogeneous mixture.
• Solvent – The component of a solution with the largest mole fraction
• Solute – Any component of a solution that is not the solvent.
Solutions can exist in solid (alloys of metals are an example of solid-phase solutions), liquid, or gaseous (aerosols are examples of gas-phase solutions) forms. For the most part, this discussion will focus on liquid-phase solutions.
Freezing Point Depression
In general (and as will be discussed in Chapter 8 in more detail) a liquid will freeze when
$\mu_{solid} \le \mu_{liquid} \nonumber$
As such, the freezing point of the solvent in a solution will be affected by anything that changes the chemical potential of the solvent. As it turns out, the chemical potential of the solvent is reduced by the presence of a solute.
In a mixture, the chemical potential of component $A$ can be calculated by
$\mu_A=\mu_A^o + RT \ln \chi_A \label{chemp}$
And because $\chi_A$ is always less than (or equal to) 1, the chemical potential is always reduced by the addition of another component.
The condition under which the solvent will freeze is
$\mu_{A,solid} = \mu_{A,liquid} \nonumber$
where the chemical potential of the liquid is given by Equation \ref{chemp}. This rearranges to
$\dfrac{ \mu_A -\mu_A^o}{RT} = \ln \chi_A \nonumber$
To evaluate the temperature dependence of the chemical potential, it is useful to consider the temperature derivative at constant pressure.
$\left[ \dfrac{\partial}{\partial T} \left( \dfrac{\mu_A-\mu_A^o}{RT} \right) \right]_{p} = \left( \dfrac{\partial \ln \chi_A}{\partial T} \right)_p \nonumber$
$- \dfrac{\mu_A - \mu_A^o}{RT^2} + \dfrac{1}{RT} \left[ \left( \dfrac{\partial \mu_A}{\partial T} \right)_p -\left( \dfrac{\partial \mu_A^o}{\partial T} \right)_p \right] =\left( \dfrac{\partial \ln \chi_A}{\partial T} \right)_p \label{bigeq}$
Recalling that
$\mu = H - TS \nonumber$
and
$\left( \dfrac{\partial \mu}{\partial T} \right)_p =-S \nonumber$
Equation \ref{bigeq} becomes
$- \dfrac{(H_A -TS_A - H_A^o + TS^o_A)}{RT^2} + \dfrac{1}{RT} \left[ -S_A + S_A^o\right] =\left( \dfrac{\partial \ln \chi_A}{\partial T} \right)_p \label{bigeq2}$
And noting that, in the freezing condition above, $\mu_A$ refers to the solid solvent while $\mu_A^o$ refers to the pure liquid solvent, $H_A$ is the enthalpy of the solid solvent and $H_A^o$ is the enthalpy of the pure liquid solvent. So
$H_A^o - H_A = \Delta H_{fus} \nonumber$
Equation \ref{bigeq2} then becomes
$\dfrac{\Delta H_{fus}}{RT^2} - \cancel{ \dfrac{-S_A + S_A^o}{RT}} + \cancel{\dfrac{-S_A + S_A^o}{RT}}=\left( \dfrac{\partial \ln \chi_A}{\partial T} \right)_p \nonumber$
or
$\dfrac{\Delta H_{fus}}{RT^2} = \left( \dfrac{\partial \ln \chi_A}{\partial T} \right)_p \nonumber$
Separating the variables puts the equation into an integrable form.
$\int_{T^o}^T \dfrac{\Delta H_{fus}}{RT^2} dT = \int d \ln \chi_A \label{int1}$
where $T^{o}$ is the freezing point of the pure solvent and $T$ is the temperature at which the solvent will begin to solidify in the solution. After integration of Equation \ref{int1}:
$- \dfrac{\Delta H_{fus}}{R} \left( \dfrac{1}{T} - \dfrac{1}{T^{o}} \right) = \ln \chi_A \label{int3}$
This can be simplified further by noting that
$\dfrac{1}{T} - \dfrac{1}{T^o} = \dfrac{T^o - T}{TT^o} = \dfrac{\Delta T}{TT^o} \nonumber$
where $\Delta T$ is the difference between the freezing temperature of the pure solvent and that of the solvent in the solution. Also, for small deviations from the pure freezing point, $TT^o$ can be replaced by the approximate value $(T^o)^2$. So the Equation \ref{int3} becomes
$- \dfrac{\Delta H_{fus}}{R(T^o)^2} \Delta T = \ln \chi_A \label{int4}$
Further, for dilute solutions, for which $\chi_A$, the mole fraction of the solvent is very nearly 1, then
$\ln \chi_A \approx -(1 -\chi_A) = -\chi_B \nonumber$
where $\chi_B$ is the mole fraction of the solute. After a small bit of rearrangement, this results in an expression for freezing point depression of
$\Delta T = \left( \dfrac{R(T^o)^2}{\Delta H_{fus}} \right) \chi_B \nonumber$
The first factor can be replaced by $K_f$:
$\dfrac{R(T^o)^2}{\Delta H_{fus}} = K_f \nonumber$
which is the cryoscopic constant for the solvent.
$\Delta T$ gives the magnitude of the reduction of freezing point for the solution. Since $\Delta H_{fus}$ and $T^o$ are properties of the solvent, the freezing point depression property is independent of the solute and is a property based solely on the nature of the solvent. Further, since $\chi_B$ was introduced as $(1 - \chi_A)$, it represents the sum of the mole fractions of all solutes present in the solution.
It is important to keep in mind that for a real solution, freezing of the solvent changes the composition of the solution by decreasing the mole fraction of the solvent and increasing that of the solute. As such, the magnitude of $\Delta T$ will change as the freezing process continually removes solvent from the liquid phase of the solution.
Boiling Point Elevation
The derivation of an expression describing boiling point elevation is similar to that for freezing point depression. In short, the introduction of a solute into a liquid solvent lowers the chemical potential of the solvent, causing it to favor the liquid phase over the vapor phase. As such, the temperature must be increased to increase the chemical potential of the solvent in the liquid solution until it is equal to that of the vapor-phase solvent. The increase in the boiling point can be expressed as
$\Delta T = K_b \chi_B \nonumber$
where
$\dfrac{R(T^o)^2}{\Delta H_{vap}} = K_b \nonumber$
is called the ebullioscopic constant and, like the cryoscopic constant, is a property of the solvent that is independent of the solute or solutes. A very elegant derivation of the form of the models for freezing point depression and boiling point elevation has been shared by F. E. Schubert (Schubert, 1983).
Cryoscopic and ebullioscopic constants are generally tabulated using molality as the unit of solute concentration rather than mole fraction. In this form, the equation for calculating the magnitude of the freezing point decrease or the boiling point increase is
$\Delta T = K_f \,m \nonumber$
or
$\Delta T = K_b \,m \nonumber$
where $m$ is the concentration of the solute in moles per kg of solvent. Some values of $K_f$ and $K_b$ are shown in the table below.
Substance $K_f$ (°C kg mol-1) $T^o_f$ (°C) $K_b$ (°C kg mol-1) $T^o_b$ (°C)
Water 1.86 0.0 0.51 100.0
Benzene 5.12 5.5 2.53 80.1
Ethanol 1.99 -114.6 1.22 78.4
CCl4 29.8 -22.3 5.02 76.8
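As a consistency check on the table, the molal cryoscopic constant follows from the mole-fraction expression above after multiplying by the molar mass of the solvent, $K_f = R(T^o)^2 M_{solvent}/\Delta H_{fus}$. The sketch below does this for water, assuming $\Delta H_{fus} \approx 6.01$ kJ/mol and $M = 18.02$ g/mol.

```python
R = 8.314            # J/(mol·K)
T_f = 273.15         # K, normal freezing point of water
dH_fus = 6010.0      # J/mol, enthalpy of fusion of water (assumed value)
M = 0.01802          # kg/mol, molar mass of water

K_f_molal = R * T_f**2 * M / dH_fus   # K·kg/mol
print(round(K_f_molal, 2))            # ≈ 1.86, matching the table entry for water
```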
Example $1$:
A solution of 3.00 g of an unknown compound in 25.0 g of CCl4 raises the boiling point to 81.5 °C. What is the molar mass of the compound?
Solution
The approach here is to find the number of moles of solute in the solution. First, find the concentration of the solution:
$(81.5\, °C- 76.8\, °C) = \left( 5.02\, °C\,kg/mol \right) m \nonumber$
$m= 0.936\, mol/kg \nonumber$
Using the number of kg of solvent, one finds the number of moles of solute:
$\left( 0.936 \,mol/\cancel{kg} \right) (0.025\,\cancel{kg}) =0.0234 \, mol \nonumber$
The ratio of mass to moles yields the final answer:
$\dfrac{3.00 \,g}{0.0234\, mol} = 128\, g/mol \nonumber$
Vapor Pressure Lowering
For much the same reason as the lowering of freezing points and the elevation of boiling points for solvents into which a solute has been introduced, the vapor pressure of a volatile solvent will be decreased due to the introduction of a solute. The magnitude of this decrease can be quantified by examining the effect the solute has on the chemical potential of the solvent.
In order to establish equilibrium between the solvent in the solution and the solvent in the vapor phase above the solution, the chemical potentials of the two phases must be equal.
$\mu_{vapor} = \mu_{solvent} \nonumber$
If the solute is not volatile, the vapor will be pure, so (assuming ideal behavior)
$\mu_{vap}^o + RT \ln \dfrac{p'}{p^o} = \mu_A^o + RT \ln \chi_A \label{eq3}$
Where $p’$ is the vapor pressure of the solvent over the solution. Similarly, for the pure solvent in equilibrium with its vapor
$\mu_A^o = \mu_{vap}^o + RT \ln \dfrac{p_A}{p^o} \label{eq4}$
where $p^o$ is the standard pressure of 1 atm, and $p_A$ is the vapor pressure of the pure solvent. Substituting Equation \ref{eq4} into Equation \ref{eq3} yields
$\cancel{\mu_{vap}^o} + RT \ln \dfrac{p'}{p^o}= \left ( \cancel{\mu_{vap}^o} + RT \ln \dfrac{p_A}{p^o} \right) + RT \ln \chi_A \nonumber$
The terms for $\mu_{vap}^o$ cancel, leaving
$RT \ln \dfrac{p'}{p^o}= RT \ln \dfrac{p_A}{p^o} + RT \ln \chi_A \nonumber$
Subtracting $RT \ln(P_A/P^o)$ from both sides produces
$RT \ln \dfrac{p'}{p^o} - RT \ln \dfrac{p_A}{p^o} = RT \ln \chi_A \nonumber$
which rearranges to
$RT \ln \dfrac{p'}{p_A} = RT \ln \chi_A \nonumber$
Dividing both sides by $RT$ and then exponentiating yields
$\dfrac{p'}{p_A} = \chi_A \nonumber$
or
$p'=\chi_Ap_A \label{RL}$
This last result is Raoult’s Law. A more formal derivation would use the fugacities of the vapor phases, but would look essentially the same. Also, as in the case of freezing point depression and boiling point elevation, this derivation did not rely on the nature of the solute! However, unlike freezing point depression and boiling point elevation, this derivation did not rely on the solute being dilute, so the result should apply to the entire range of concentrations of the solution.
Example $2$:
Consider a mixture of two volatile liquids A and B. The vapor pressure of pure A is 150 Torr at some temperature, and that of pure B is 300 Torr at the same temperature. What is the total vapor pressure above a mixture of these compounds with the mole fraction of B of 0.600. What is the mole fraction of B in the vapor that is in equilibrium with the liquid mixture?
Solution
Using Raoult’s Law (Equation \ref{RL})
$p_A = (0.400)(150\, Torr) =60.0 \,Torr \nonumber$
$p_B = (0.600)(300\, Torr) =180.0 \,Torr \nonumber$
$p_{tot} = p_A + p_B = 240 \,Torr \nonumber$
To get the mole fractions in the gas phase, one can use Dalton’s Law of partial pressures.
$\chi_A = \dfrac{ p_A}{p_{tot}} = \dfrac{60.0 \,Torr}{240\,Torr} = 0.250 \nonumber$
$\chi_B = \dfrac{ p_B}{p_{tot}} = \dfrac{180.0 \,Torr}{240\,Torr} = 0.750 \nonumber$
And, of course, it is also useful to note that the sum of the mole fractions is 1 (as it must be!)
$\chi_A+\chi_B =1 \nonumber$
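The arithmetic of this example is easily scripted; the short sketch below simply repeats the Raoult's law and Dalton's law steps above.

```python
# Total pressure and vapor composition over an ideal binary mixture (Example 2).
P_A_star, P_B_star = 150.0, 300.0     # pure-component vapor pressures / Torr
x_B = 0.600
x_A = 1.0 - x_B

p_A = x_A * P_A_star                  # 60.0 Torr (Raoult's law)
p_B = x_B * P_B_star                  # 180.0 Torr
p_tot = p_A + p_B                     # 240 Torr

y_A = p_A / p_tot                     # 0.250 (Dalton's law)
y_B = p_B / p_tot                     # 0.750
print(p_tot, y_A, y_B)
```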
Osmotic Pressure
Osmosis is a process by which solvent can pass through a semi-permeable membrane (a membrane through which solvent can pass, but not solute) from an area of low solute concentration to a region of high solute concentration. The osmotic pressure is the pressure that when exerted on the region of high solute concentration will halt the process of osmosis.
The nature of osmosis and the magnitude of the osmotic pressure can be understood by examining the chemical potential of a pure solvent and that of the solvent in a solution. The chemical potential of the solvent in the solution (before any extra pressure is applied) is given by
$\mu_A = \mu_A^o + RT \ln \chi_A \nonumber$
And since $\chi_A < 1$, the chemical potential of the solvent in a solution is always lower than that of the pure solvent. So, to prevent osmosis from occurring, something needs to be done to raise the chemical potential of the solvent in the solution. This can be accomplished by applying pressure to the solution. Specifically, the process of osmosis will stop when the chemical potential of the solvent in the solution is increased to the point of being equal to that of the pure solvent. The criterion, therefore, for osmosis to cease is
$\mu_A^o(p) = \mu_A(\chi_B, p+\pi) \nonumber$
To determine the magnitude of $\pi$, the pressure dependence of the chemical potential is needed in addition to understanding the effect the solute has on lowering the chemical potential of the solvent in the solution. The magnitude, therefore, of the increase in chemical potential due to the application of excess pressure $\pi$ must be equal to the magnitude of the reduction of chemical potential by the reduced mole fraction of the solvent in the solution. We already know that the chemical potential of the solvent in the solution is reduced by an amount given by
$\mu^o_A - \mu_A = RT \ln \chi_A \nonumber$
And the increase in chemical potential due to the application of excess pressure is given by
$\mu(p+\pi) = \mu(p) + \int _{p}^{p+\pi} \left( \dfrac{\partial \mu}{\partial p} \right)_T dp \nonumber$
The integral on the right can be evaluated by recognizing
$\left( \dfrac{\partial \mu}{\partial p} \right)_T = V \nonumber$
where $V$ is the molar volume of the substance. Combining these expressions results in
$-RT \ln \chi_A = \int_{p}^{p+\pi} V\,dp \nonumber$
If the molar volume of the solvent is independent of pressure (has a very small value of $\kappa_T$ – which is the case for most liquids) the term on the right becomes.
$\int_{p}^{p+\pi} V\,dp = \left. V p \right |_{p}^{p+\pi} = V\pi \nonumber$
Also, for values of $\chi_A$ very close to 1
$\ln \chi_A \approx -(1- \chi_A) = - \chi_B \nonumber$
So, for dilute solutions
$\chi_B RT = V\pi \nonumber$
Or after rearrangement
$\pi = \dfrac{\chi_B RT}{V} \nonumber$
again, where $V$ is the molar volume of the solvent. And finally, since $\chi_B/V$ is approximately the concentration of the solute $B$ for cases where $n_B \ll n_A$, one can write a simplified version of the expression which can be used in the case of very dilute solutions
$\pi = [B]RT \nonumber$
When a pressure exceeding the osmotic pressure $\pi$ is applied to the solution, the chemical potential of the solvent in the solution can be made to exceed that of the pure solvent on the other side of the membrane, causing reverse osmosis to occur. This is a very effective method, for example, for recovering pure water from a mixture such as a salt/water solution.
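For a sense of scale, the sketch below estimates the osmotic pressure of seawater from $\pi = [B]RT$, treating it (as a rough assumption) as 0.6 mol/L NaCl that dissociates completely into ideal particles. The result, roughly 30 bar, is the order of magnitude of pressure that must be exceeded to drive reverse osmosis.

```python
R = 8.314                      # J/(mol·K)
T = 298.15                     # K
c_particles = 2 * 0.6 * 1000   # mol/m^3: Na+ and Cl- from ~0.6 mol/L NaCl (assumed)

Pi = c_particles * R * T       # osmotic pressure in Pa
print(Pi / 1e5)                # ≈ 30 bar
```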
Colligative properties are properties that depend on the number of particles rather than their total mass. This implies that these properties can be used to measure molar mass. Colligative properties include:
• melting point depression
• boiling point elevation
• osmotic pressure
Melting Point Depression
When we freeze a dilute solution the resulting frozen solvent is often quite a bit purer than the original solution. Let us consider this problem and make the following rather opposite assumptions about the solvent component:
1. In the liquid state it can be considered to follow Raoult's law over a sufficient concentration range
2. In the solid state, the solubility is nil
Under these two assumptions, we can consider the thermodynamic potentials of the solvent component (1) at the freezing point. They should be equal once equilibrium has been reached:
$μ_1^s = μ_1^{sln} \nonumber$
$μ_1^s = μ_1^{liq*} + RT\ln\, a_1 \nonumber$
$\dfrac{μ_1^s - μ_1^{liq*} }{RT}=\ln \, a_1 \nonumber$
$\dfrac{-Δμ_1}{RT}=\ln \,a_1 \nonumber$
We can now apply Gibbs-Helmholtz relations by differentiation with respect to temperature:
$\dfrac{\partial}{\partial\;T} \dfrac{Δμ_1}{T} = \dfrac{- Δ_{fus}H_{molar,1}}{T^2} \nonumber$
$\dfrac{\partial \ln a_1}{\partial T} = \dfrac{Δ_{fus}H_{molar,1}}{RT^2} \nonumber$
This means that we can actually use the quantity $Δ_{fus}H_{molar,1}/RT^2$ to determine activities by integration, but usually Raoult Law is assumed valid:
$\ln a_1 = \ln x_1 ≈ -x_2 \nonumber$
If we integrate $Δ_{fus}H_{molar,1}/RT^2$ from the melting point of the pure solvent $T_m^*$ to the actual melting point of the solution $T_m$ we get:
$-x_2 = \dfrac{Δ_{fus}H_{molar,1}}{R} \left( \dfrac{1}{T_m^*}- \dfrac{1}{T_m} \right) = \dfrac{Δ_{fus}H_{molar,1}}{R} \left( \dfrac{T_m- T_m^*}{T_m^* T_m} \right) ≈ \dfrac{Δ_{fus}H_{molar,1}}{R} \left(\dfrac{ -ΔT}{T_m^{*2}} \right) \nonumber$
This is often rewritten in terms of molality as:
$ΔT = K_f\, m \nonumber$
If $K_f$ is known for the solvent, we can add a known number of grams of an unknown compound to the solvent and measure the temperature depression; this tells us the molality. From the molality and the mass added we can then calculate the molar mass, as sketched below. Boiling point elevation is quite similar.
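A minimal sketch of that cryoscopic molar-mass determination; the input numbers are hypothetical and assume an ideal, dilute, nondissociating solute.

```python
def molar_mass_from_depression(grams_solute, kg_solvent, delta_T, K_f):
    """Molar mass (g/mol) from ΔT = K_f · m for a dilute, nondissociating solute."""
    molality = delta_T / K_f                  # mol solute per kg solvent
    moles = molality * kg_solvent
    return grams_solute / moles

# Hypothetical example: 2.00 g of unknown in 50.0 g water depresses freezing by 0.31 K
print(molar_mass_from_depression(2.00, 0.0500, 0.31, 1.86))   # ≈ 240 g/mol
```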
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(LibreTexts)/25%3A_Solutions_II_-_Nonvolatile_Solutes/25.03%3A_Colligative_Properties_Depend_only_on_Number_Density.txt
|
Some membrane materials are permeable for some molecules, but not for others. This is often a matter of the size of the molecules, but it can also be a question of solubility of the molecule in the barrier material. Many biological membranes have semipermeable properties and osmosis is therefore an important biological process. Figure 25.4.1 shows a simple osmotic cell. Both compartments contain water, but the one on the right also contains a solute whose molecules (represented by green circles) are too large to pass through the membrane. Many artificial and natural substances are capable of acting as semi-permeable membranes. For example, the walls of most plant and animal cells fall into this category.
If solvent molecules can pass through the membrane, but solute molecules (or ions) cannot, solvent molecules will spontaneously migrate across the membrane to increase the solution's volume and thus reduce its concentration. If the solution is ideal, this process is in many ways analogous to the spontaneous increase in volume of a gas allowed to expand against vacuum. Of course the volume of the 'solute-gas' is limited by the availability of solvent and, if done under gravity in a U-shaped tube, by the build up of hydrostatic pressure. This pressure is known as the osmotic pressure $\Pi$. At equilibrium we can write:
$μ^*(T,P) = μ^{sln}(T,P+ Π,a_1) \nonumber$
$μ^*(T,P) = μ^*(T,P+ Π)+ RT\ln a_1 \nonumber$
From Gibbs energy ($dG$) in its natural variables ($P,T$) we know that:
$\left(\dfrac{∂G}{∂P} \right)_{T,x} = V \nonumber$
For the chemical potential of the pure solvent (component 1) this gives:
$\left( \dfrac{∂μ^*}{∂P} \right)_{T,x_{j}} = \bar{V}^*_{1} \nonumber$
This means we can integrate over the molar volume to convert $μ^*(T,P+ Π)$ to a different pressure:
$μ^*(T,P+\Pi) = μ^*(T,P) + \int_P^{P+ \Pi} \bar{V}^*_{1} dP \nonumber$
Thus we get:
$μ^*(T,P) = μ^*(T,P+ \Pi)+ RT\ln a_1 \nonumber$
$μ^*(T,P) = μ^*(T,P)+ \Pi\bar{V}^*_{1}+ RT\ln a_1 \nonumber$
Once again using the ideal approximation:
$\ln a_1 = \ln x_1 ≈ -x_2 \nonumber$
we get:
$RT x_2 = \Pi\bar{V}^*_{1} \nonumber$
$x_2 = \dfrac{n_2}{n_1+n_2}≈\dfrac{n_2}{n_1} \nonumber$
The combination gives an expression involving the molarity:
$\Pi=RTc \nonumber$
where $c$ is the molar concentration. Osmosis can be used in reverse: if we apply about 30 bar to sea water we can obtain fresh water on the other side of a suitable membrane. This process is used in some places, but better membranes would be desirable, as they easily get clogged. The resulting water is not completely salt-free, which means that if it is used for agriculture the salt may accumulate on the field over time.
Determining Molar Masses
Both melting point depression and boiling point elevation only facilitate the determination of relatively small molar weights. The need for such measurements is no longer felt because we now have good techniques to determine the structure of most small to medium size molecules. For polymers this is a different matter. They usually have a molecular weight (mass) distribution and determining it is an important topic of polymer science.
Osmometry is still of some practical usefulness. It is also colligative and able to measure up to about 8000 daltons. Many polymers are much bigger than that. Their mass distribution is usually determined by different means. The polymer is dissolved and passed over a chromatographic column, usually based on size exclusion. The effluent is then probed as a function of the elution time by a combination of techniques:
1. UV absorption (determine the monomer concentration)
2. Low Angle Laser Light Scattering (LALLS) and/or Viscometry
The latter two provide information on the molar mass distribution but they give a different moment of that distribution. The combination of techniques gives an idea not only of how much material there is of a given molar mass but also of the linearity or degree of branching of the chains.
Purity analysis
Nevertheless, melting point depression is still used in a somewhat different application. When a slightly impure solid is melted, its melting point is depressed. Also, the melting process is not sudden but takes place over a whole trajectory, typically from a lower eutectic temperature up to the depressed melting point (the liquid line in the phase diagram). In organic synthesis the melting behavior is often used as a first convenient indication of purity. In a differential scanning calorimetry (DSC) experiment the melting peak becomes progressively skewed towards lower temperatures at higher impurity levels. The shape of the curve can be modeled with a modified version of the melting point depression expression. This yields a value for the total impurity level in the solid. This technique is used in the pharmaceutical industry for quality control purposes.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(LibreTexts)/25%3A_Solutions_II_-_Nonvolatile_Solutes/25.04%3A_Osmotic_Pressure_can_Determine_Molecular_Masses.txt
|
A solution with a strong electrolyte, such as NaCl in water, is perhaps one of the most obvious systems to consider but, unfortunately, is also one of the more difficult ones. The reason is that the electrolyte produces two charged solutes, Na+ and Cl- (both in hydrated form), in solution. We need to consider the dissociation process and stoichiometry as we are bringing more than one solute species into solution. We also need to consider electrostatic interactions between solutes. The charges introduce a strong interaction that falls off with $r^{-1}$, as opposed to $\sim r^{-6}$ if only neutral species are present. This causes a very serious divergence from ideality even at very low concentrations. Consider a salt going into solution:
$C_{ν_+}A_{ν_-} \rightarrow ν_+C^{z+} + ν_-A^{z-} \label{Eq1}$
where $ν_+$ and $ν_-$ are the stoichiometric coefficients and $z_+$ and $z_-$ are the formal charges of the cation and anion, respectively. As we shall see, the stoichiometric coefficients involved in the dissociation process are important for a proper description of the thermodynamics of strong electrolytes. Charge neutrality demands:
$ν_+z_+ + ν_- z_- = 0 \label{Eq2}$
Thermodynamic potentials versus the dissociation
For the salt, we can write:
$μ_2 =μ_2^o + RT \ln a_2 \label{Eq3}$
However, we need to take into account the dissociation of the salt. To do so, we write:
$μ_2 = ν_+μ_+ + ν_-μ_- \label{Eq4}$
This implies:
$μ_2^o = ν_+μ_+^o + ν_-μ_-^o \label{Eq5}$
where
$μ_+ =μ_+^o + RT\ln a_+ \label{Eq6}$
$μ_- =μ_-^o + RT\ln a_- \label{Eq7}$
Usually Henry's law is taken as standard state for both types of ions. However, we cannot measure the activities of the ions separately as it is impossible to add one without adding the other. Nevertheless, we can derive a useful formalism that takes into account the dissociation process. If we substitute the last two equations in the ones above we get:
$ν_+\ln a_+ + ν_- \ln a_-=\ln a_2 \label{Eq8}$
Taking the exponent of either side of Equation $\ref{Eq8}$, we get:
$a_2 =a_+^{ν_+}a_-^{ ν-} \label{Eq9}$
Notice that the stoichiometric coefficients (Equation $\ref{Eq1}$) are exponents in Equation $\ref{Eq9}$. We now introduce the sum of the stoichiometric coefficients:
$ν_+ + ν_- = ν \label{Eq10}$
and define the mean ionic activity $a_±$ as:
$a_±^ ν ≡ a_2 =a_+^{ν+}a_-^{ ν-} \nonumber$
Note
The mean ionic activity $a_\pm$ and the activity of the salt are closely related but the relationship involves exponents due to stoichiometric coefficients involved in the dissociation process. For example:
• For NaCl: $ν = 1+1 = 2$; $a_{\pm}^2 = a_{\ce{NaCl}}$
• For $\ce{Al2(SO4)3}$: $ν = 2+3 = 5$; $a_{\pm}^5 = a_{\ce{Al2(SO4)3}}$
Activity coefficients
All this remains a formality unless we find a way to relate it back to the concentration of the salt. Usually molality is used as a convenient concentration measure rather than molarity because we are dealing with pretty strong deviations from ideal behavior and that implies that volume may not be an additive quantity. Molality does not involve volume in contrast to molarity. Working with molalities, we can define activity coefficients for both ions, even though we have no hope to determine them separately:
$a_+ =γ_+ m_+ \label{Eq11}$
$a_- =γ_- m_- \label{Eq12}$
Stoichiometry dictates the molalities of the individual ions must be related to the molality of the salt $m$ by:
$m_-=ν_-m \label{Eq13}$
$m_+=ν_+m \label{Eq14}$
Note
We cannot measure the activities of the ions separately because it is impossible to add one without adding the other
Analogous to the mean ionic activity, we can define a mean ionic molality as:
$m_{\pm}^ν ≡ m_+^{ν+}m_-^{ ν-} \label{Eq15}$
We can do the same for the mean ionic activity coefficient:
$γ_{\pm}^ν = γ_+^{ν+}γ_-^{ν-} \label{Eq16}$
Using this definitions we can rewrite:
$a_2=a_{\pm}^ν=a_+^{ν+}a_-^{ν-} \label{Eq17}$
as:
$a_2=a_{\pm}^ ν =γ_{\pm}^ ν m_{\pm}^ ν \label{Eq18}$
Note
Note that when preparing a salt solution of molality $m$, we should substitute:
$m_-=ν_-m \nonumber$
$m_+=ν_+m \nonumber$
into:
$m_±^ν ≡ m_+^{ν+}m_-^{ ν-} \nonumber$
Example 25.5.1: Aluminum Sulfate
For $\ce{Al2(SO4)3}$ we get:
• $ν = 2+3 = 5$
• $a_{\pm}^5 = a_{\ce{Al2(SO4)3}}$
• $m_- = 3m$
• $m_+ = 2m$
So:
$m_{\pm}^{ν} = m_+^{ν_+} m_-^{ν_-} = (2m)^2(3m)^3 = 108\, m^5 \nonumber$
$a_{\ce{Al2(SO4)3}} = a_{\pm}^5 = 108\, m^5 γ_{\pm}^5 \nonumber$
As you can see, the stoichiometry enters both into the exponents and into the calculation of the molality. Notice that the activity of the salt now goes as the fifth power of its overall molality (on top of the dependence of $γ_{\pm}$ on $\sqrt{m}$ shown below).
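The bookkeeping of the previous example is easy to generalize. The helper below returns $m_{\pm}^{ν} = (ν_+m)^{ν_+}(ν_-m)^{ν_-}$ for any salt and reproduces the factor $108\,m^5$ for $\ce{Al2(SO4)3}$.

```python
def mean_ionic_molality_power(m, nu_plus, nu_minus):
    """Return m_±^ν = (ν+ m)^ν+ · (ν- m)^ν- for a salt of overall molality m."""
    return (nu_plus * m) ** nu_plus * (nu_minus * m) ** nu_minus

m = 0.01
# Al2(SO4)3: ν+ = 2, ν- = 3, so ν = 5 and the prefactor is 2^2 · 3^3 = 108
print(mean_ionic_molality_power(m, 2, 3), 108 * m**5)   # identical values
```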
Measuring mean ionic activity coefficients
In contrast to the individual coefficients, the mean ionic activity coefficient $γ_{\pm}$ is a quantity that can be determined. In fact we can use the same Gibbs-Duhem trick we did for the sucrose problem to do so. We simply measure the water vapor pressure above the salt solution and use:
$\ln γ_{\pm} = φ -1 + \int_{m'=0}^m \dfrac{φ -1}{m'} \,dm' \nonumber$
The fact that the salt itself has a negligible vapor pressure does not matter. Particularly for ions with high charges, the deviations from ideality are very strong even at tiny concentrations. Admittedly, doing these vapor pressure measurements is pretty tedious; there are some other procedures involving electrochemical potentials, but they too are tedious.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(LibreTexts)/25%3A_Solutions_II_-_Nonvolatile_Solutes/25.05%3A_Electrolytes_Solutions_are_Nonideal_at_Low_Concentrations.txt
|
As ionic solutions are very common in chemistry, having to measure all activity coefficients ($γ_{\pm}$) for all possible solute-solvent combinations is a pretty daunting task, even though in times past extensive tabulation has taken place. We should be grateful for the rich legacy that our predecessors have left us in this respect (it would be hard to get any funding to do such tedious work today). Of course, it would be very desirable to be able to calculate $γ_{\pm}$ values from first principles or if that fails by semi-empirical means. Fortunately, considerable progress has been made on this front as well. We can only scratch the surface of that topic in this course and will briefly discuss the simplest approach due to Debye and Hückel
Debye and Hückel came up with a theoretical expression that makes it possible to predict mean ionic activity coefficients at sufficiently dilute concentrations. The theory considers the vicinity of each ion as an atmosphere-like cloud of charges of opposite sign that cancels out the charge of the central ion (Figure $1$). From a distance the cloud looks neutral. The quantity $1/κ$ is a measure for the size of this cloud and is known as the Debye length. Its size depends on the concentration of all other ions.
Ionic Strength
To take the effect from all other ions into account, it is useful to define the ionic strength ($I$) as:
$I =\dfrac{1}{2} \sum m_iz_i^2 \nonumber$
where $m_i$ is the molality of ion $i$ and $z_i$ is its charge coefficient. Note that highly charged ions (e.g. $z=3+$) contribute strongly (nine times more than +1 ions), but the formula is linear in the molality. Using the ionic strength the Debye-length becomes:
$κ^2 = constant \, I \nonumber$
The constant contains $kT$ and $ε_rε_o$ in the denominator and Avogadro's number $N_A$ and the square of the electron charge $e$ in the numerator:
$constant= 2000 \dfrac{e^2N_A}{ε_rε_okT} \nonumber$
The quantity $κ$ and the logarithm of the mean ionic activity coefficient are proportional:
$\ln γ_{\pm} \propto κ \nonumber$
Again there are a number of factors in the proportionality constant:
$\ln γ_± = -|q_+q_-| \dfrac{κ}{8πε_rε_okT} \nonumber$
Note
The factors $ε_r$ and $ε_o$ are the relative permittivity of the medium and the permittivity of vacuum, respectively. Note that the factor $8πε_rε_o$ is specific to the SI system of units. In cgs units the expression would look different, because the permittivities are defined differently in that system
If there is only one salt being dissolved, the ionic strength depends linearly on its concentration; $κ$ and $\ln γ_±$ therefore go as the square root of the concentration (usually molality):
$\ln \gamma_{\pm} \propto \sqrt{m} \nonumber$
If there are other ions present the ionic strength involves all of them. This fact is sometimes used to keep ionic strength constant while changing the concentration of one particular ion. Say we wish to lower the concentration of Cu2+ in a redox reaction but we want to keep activity coefficients the same as much as possible. We could then replace it by an ion of the same charge, say Zn2+, that does not partake in the reaction. A good way to do that is to dilute the copper solution with a zinc solution of the same concentration instead of with just solvent. In terms of base-10 logarithms, and with an ion-size correction in the denominator, the mean activity coefficient can be written as:
$\log _{10}\gamma _{\pm }=-Az_{j}^{2}{\frac {\sqrt {I}}{1+Ba_{0}{\sqrt {I}}}} \label{DH}$
with:
$A={\frac {e^{2}B}{2.303\times 8\pi \epsilon _{0}\epsilon _{r}k_{\rm {B}}T}}$
$B=\left({\frac {2e^{2}N}{\epsilon _{0}\epsilon _{r}k_{\rm {B}}T}}\right)^{1/2}$
where $I$ is the ionic strength and $a_0$ is a parameter that represents the distance of closest approach of ions. For aqueous solutions at 25 °C, $A = 0.51\ \text{mol}^{-1/2}\,\text{dm}^{3/2}$ and $B = 3.29\ \text{nm}^{-1}\,\text{mol}^{-1/2}\,\text{dm}^{3/2}$.
Unfortunately this theory only works at very low concentrations and is therefore also known as the Debye limiting law (Figure $2$). There are a number of refinements that aim at extending the range of validity of the theory to be able to work at somewhat higher concentrations. These are discussed in the next section.
The most significant aspect of Equation \ref{DH} is the prediction that the mean activity coefficient is a function of ionic strength rather than the electrolyte concentration. For very low values of the ionic strength the value of the denominator in the expression above becomes nearly equal to one. In this situation the mean activity coefficient is proportional to the square root of the ionic strength.
Importance for Colloids
When a solid is formed by a reaction from solution it is sometimes possible that it remains dispersed as very small particles in the solvent. The sizes typically range in the nanometers. This is why it has become fashionable to call them nanoparticles, although they had been known as colloidal particles since the mid nineteenth century. They are smaller than the wavelength of visible light. This causes liquids that contain them to remain clear, although they can at times be beautifully colored. A good example is the reduction of AuCl4- with citrate to metallic gold. This produces clear wine red solutions, even at tiny gold concentrations.
$\ce{2n AuCl4(aq)^{-} + n \,citrate^{3-}(aq) + 2n\, H_2O(l) \rightarrow} \\[4pt] \ce{2n\, Au(colloid) + 3n\, CH2O(aq) + 3n\, CO2(g) + 8n \,Cl^{-}(aq) + 3n\, H^{+}(aq)} \nonumber$
The reason the gold does not precipitate completely is typically that the nanoparticles (AuNP) formed during the reaction are charged by the attachment of some of the ionic species in solution to their surface. This results in a charged particle with an atmosphere of a certain Debye length around it (Figure $3$). This charged cloud prevents the particle from coalescing with other particles by electrostatic repulsion.
Such a system is called a colloid. Of course these systems are metastable. Often they have a pretty small threshold to crashing to a real precipitate under influence of the strong van der Waals interactions that the particles experience once they manage to get in close contact. Under the right conditions colloids can survive for a long time. Some gold colloids prepared by Faraday in the 1850's are still stable today.
It will be clear from the above that addition of a salt, particularly one containing highly charged ions like 3+ or 3-, may destabilize the colloid because the ionic strength will change drastically and this will affect the Debye length.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(LibreTexts)/25%3A_Solutions_II_-_Nonvolatile_Solutes/25.06%3A_The_Debye-Huckel_Theory.txt
|
The equation for $\log \gamma _{\pm }$ predicted from Debye–Hückel limiting law is:
$\log _{10}\gamma _{\pm }=-Az_{j}^{2}{\frac {\sqrt {I}}{1+Ba_{0}{\sqrt {I}}}} \label{DH}$
It gives satisfactory agreement with experimental measurements for low electrolyte concentrations, typically less than $10^{−3} mol/L$. Deviations from the theory occur at higher concentrations and with electrolytes that produce ions of higher charges, particularly asymmetrical electrolytes. These deviations occur because the model is oversimplified, so there is little to be gained by making small adjustments to the model. Instead, we must challenge the individual assumptions of the model:
• Ions do not interact with each other. Ion association may take place, particularly with ions of higher charge. This was followed up in detail by Niels Bjerrum. The Bjerrum length is the separation at which the electrostatic interaction between two ions is comparable in magnitude to $kT$.
• Complete dissociation. A weak electrolyte is one that is not fully dissociated. As such it has a dissociation constant. The dissociation constant can be used to calculate the extent of dissociation and hence, make the necessary correction needed to calculate activity coefficients.
• Ions are spherical, non-polarizable point charges. Ions, like all other atoms and molecules, have a finite size. Many ions, such as the nitrate ion $\ce{NO3^{−}}$, are not spherical. Polyatomic ions are polarizable.
• The solvent composition does not matter. The solvent is not a structureless medium but is made up of molecules. The water molecules in aqueous solution are both dipolar and polarizable. Both cations and anions have a strong primary solvation shell and a weaker secondary solvation shell. Ion–solvent interactions are ignored in Debye–Hückel theory.
• Ionic radius is negligible. At higher concentrations, the ionic radius becomes comparable to the radius of the ionic atmosphere.
Most extensions to the Debye–Hückel theory are empirical in nature. They usually allow the Debye–Hückel equation to be followed at low concentration and add further terms in some power of the ionic strength to fit experimental observations. Several approaches have been proposed to extend the validity of the Debye–Hückel theory.
Extended Debye-Hückel Equation
One such approach is the Extended Debye-Hückel Equation:
$- \log(\gamma) = \dfrac{A|z_+z_-|\sqrt{I}}{1 + Ba\sqrt{I}}$
where $\gamma$ is the activity coefficient, $z$ is the integer charge of the ion, $I$ is the ionic strength of the aqueous solution, and $a$ is the size or effective diameter of the ion in angstroms (Å). The effective hydrated radius of the ion, $a$, is the radius of the ion and its closely bound water molecules. Large ions and less highly charged ions bind water less tightly and have smaller hydrated radii than smaller, more highly charged ions. Typical values are 3 Å for ions such as $\ce{H^{+}}$, $\ce{Cl^{-}}$, $\ce{CN^{-}}$, and $\ce{HCOO^{-}}$. The effective diameter for the hydronium ion is 9 Å. $A$ and $B$ are constants with values of 0.5085 and 0.3281, respectively, at 25 °C in water.
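As a rough numerical sketch (not part of the original text), the snippet below evaluates the Extended Debye-Hückel expression in Python; the constants 0.5085 and 0.3281 and the 3 Å ion size come from the paragraph above, while the function name and the sample ionic strength are our own choices.

```python
import math

def extended_debye_huckel(z_plus, z_minus, ionic_strength, a_angstrom,
                          A=0.5085, B=0.3281):
    """Mean activity coefficient from the extended Debye-Hueckel equation.

    z_plus, z_minus : integer ionic charges
    ionic_strength  : mol/L
    a_angstrom      : effective hydrated ion size in angstroms
    A, B            : empirical constants for water at 25 C
    """
    sqrt_I = math.sqrt(ionic_strength)
    log10_gamma = -A * abs(z_plus * z_minus) * sqrt_I / (1 + B * a_angstrom * sqrt_I)
    return 10 ** log10_gamma

# Example: a 1:1 electrolyte with a ~3 A hydrated ion size at I = 0.01 mol/L
print(extended_debye_huckel(1, -1, 0.01, 3.0))   # gamma close to, but below, 1
```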
Other approaches include the Davies equation, Pitzer equations and specific ion interaction theory.
Davies Equation
The Davies equation is an empirical extension of Debye–Hückel theory which can be used to calculate activity coefficients of electrolyte solutions at relatively high concentrations at 25 °C. The equation, originally published in 1938, was refined by fitting to experimental data. The final form of the equation gives the mean molal activity coefficient f± of an electrolyte that dissociates into ions having charges z1 and z2 as a function of ionic strength I:
$-\log f_{\pm }=0.5z_{1}z_{2}\left({\frac {\sqrt {I}}{1+{\sqrt {I}}}}-0.30I\right).$
The second term, 0.30 $I$, goes to zero as the ionic strength goes to zero, so the equation reduces to the Debye–Hückel equation at low concentration. However, as concentration increases, the second term becomes increasingly important, so the Davies equation can be used for solutions too concentrated to allow the use of the Debye–Hückel equation. For 1:1 electrolytes the difference between measured values and those calculated with this equation is about 2% of the value for 0.1 M solutions. The calculations become less precise for electrolytes that dissociate into ions with higher charges. Further discrepancies will arise if there is association between the ions, with the formation of ion pairs, such as $\ce{Mg^{2+}SO4^{2-}}$.
Plot of activity coefficients calculated using the Davies equation.
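A comparable sketch for the Davies equation (our own illustration; the 1:1 electrolyte and the ionic strength are arbitrary choices) reads:

```python
import math

def davies_log_f(z1, z2, ionic_strength):
    """-log10 of the mean activity coefficient from the Davies equation."""
    sqrt_I = math.sqrt(ionic_strength)
    return 0.5 * abs(z1 * z2) * (sqrt_I / (1 + sqrt_I) - 0.30 * ionic_strength)

# Mean activity coefficient of a 1:1 electrolyte at I = 0.1 mol/kg
print(10 ** (-davies_log_f(1, -1, 0.1)))
```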
Pitzer Equations
Pitzer equations are important for the understanding of the behaviour of ions dissolved in natural waters such as rivers, lakes and sea-water. They were first described by physical chemist Kenneth Pitzer. The parameters of the Pitzer equations are linear combinations of parameters, of a virial expansion of the excess Gibbs free energy, which characterize interactions amongst ions and solvent. The derivation is thermodynamically rigorous at a given level of expansion. The parameters may be derived from various experimental data such as the osmotic coefficient, mixed ion activity coefficients, and salt solubility. They can be used to calculate mixed ion activity coefficients and water activities in solutions of high ionic strength for which the Debye–Hückel theory is no longer adequate.
An expression is obtained for the mean activity coefficient.
$\ln \gamma _{\pm }= \dfrac {p\ln \gamma _{M}+q\ln \gamma _{X}}{p+q}$
$\ln \gamma _{\pm }=|z^{+}z^{-}|f^{\gamma }+m\left({\frac {2pq}{p+q}}\right)B_{MX}^{\gamma }+m^{2}\left[2{\frac {(pq)^{3/2}}{p+q}}\right]C_{MX}^{\gamma }$
These equations were applied to an extensive range of experimental data at 25 °C with excellent agreement to about 6 mol kg−1 for various types of electrolyte. The treatment can be extended to mixed electrolytes and to include association equilibria. Values for the parameters β(0), β(1) and C for inorganic and organic acids, bases and salts have been tabulated. Temperature and pressure variation is also discussed.
Specific ion interaction theory
Specific ion Interaction Theory (SIT theory) is a theory used to estimate single-ion activity coefficients in electrolyte solutions at relatively high concentrations. It does so by taking into consideration interaction coefficients between the various ions present in solution. Interaction coefficients are determined from equilibrium constant values obtained with solutions at various ionic strengths. The determination of SIT interaction coefficients also yields the value of the equilibrium constant at infinite dilution.
The activity coefficient of the jth ion in solution is written as $γ_j$ when concentrations are on the molal concentration scale and as $y_j$ when concentrations are on the molar concentration scale. (The molality scale is preferred in thermodynamics because molal concentrations are independent of temperature.) The basic idea of SIT theory is that the activity coefficient can be expressed as
$\log \gamma _{j}=-z_{j}^{2}{\frac {0.51{\sqrt {I}}}{1+1.5{\sqrt {I}}}}+\sum _{k}\epsilon _{jk}m_{k}$
where $z$ is the electrical charge on the ion, $I$ is the ionic strength, $\epsilon_{jk}$ are interaction coefficients, and $m_k$ are molal concentrations. The summation extends over the other ions present in solution, which includes the ions produced by the background electrolyte. The first term in this expression comes from Debye-Hückel theory. The second term shows how the contributions from "interaction" depend on concentration. Thus, the interaction coefficients are used as corrections to Debye-Hückel theory when concentrations are higher than the region of validity of that theory.
25.08: Homework Problems
In the mid 1920's the German physicist Werner Heisenberg showed that if we try to locate an electron within a region $Δx$; e.g. by scattering light from it, some momentum is transferred to the electron, and it is not possible to determine exactly how much momentum is transferred, even in principle. Heisenberg showed that consequently there is a relationship between the uncertainty in position $Δx$ and the uncertainty in momentum $Δp$.
$\Delta p \Delta x \ge \frac {\hbar}{2} \label {5-22}$
You can see from Equation $\ref{5-22}$ that as $Δp$ approaches 0, $Δx$ must approach ∞, which is the case of the free particle discussed previously.
This uncertainty principle, which also is discussed in Chapter 4, is a consequence of the wave property of matter. A wave has some finite extent in space and generally is not localized at a point. Consequently there usually is significant uncertainty in the position of a quantum particle in space. Activity 1 at the end of this chapter illustrates that a reduction in the spatial extent of a wavefunction to reduce the uncertainty in the position of a particle increases the uncertainty in the momentum of the particle. This illustration is based on the ideas described in the next section.
Exercise $1$
Compare the minimum uncertainty in the positions of a baseball (mass = 140 g) and an electron, each with a speed of 91.3 miles per hour (characteristic of a reasonable fastball), if the standard deviation in the measurement of the speed is 0.1 mile per hour. Also compare the wavelengths associated with these two particles. Identify the insights that you gain from these comparisons.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(LibreTexts)/25%3A_Solutions_II_-_Nonvolatile_Solutes/25.07%3A_Extending_Debye-Huckel_Theory_to_Higher_Concentrations.txt
|
Many important chemical reactions, if not most, are performed in solution rather than between solids or gases. Solid-state reactions are often very slow, and not all chemical species can be put into the vapor phase because they decompose before evaporating.
Often we are not concerned with the temporal aspects of a reaction. These can be technologically very important, but they are the domain of kinetics, a different branch of physical chemistry, rather than of classical thermodynamics. The latter is more concerned with the endpoint. Thermodynamically speaking, this is the (stable) equilibrium, but chemically it can represent either a completed reaction or a chemical equilibrium.
Unfortunately, of the three main aggregation states: gas – liquid – solid, the structure of liquids is least understood and one of the most complex liquids is also one of the most extensively used ones: water. It is vital to many branches of chemistry varying from geochemistry to environmental chemistry to biochemistry. We shall make just a small inroad into its complexity.
Extent of reaction
To describe the progress of a reaction we define the extent of reaction. It is usually denoted by the Greek letter $ξ$.
Consider a generic reaction:
$v_AA + v_BB \rightleftharpoons v_YY + v_ZZ \nonumber$
Using stoichiometry we can define the extent by considering how the number of moles (or molar amounts) of each species changes during the reaction:
reactants
• $n_A= n_{A,0} - v_A ξ$
• $n_B= n_{B,0} - v_B ξ$
products
• $n_Y= n_{Y,0} + v_Y ξ$
• $n_Z= n_{Z,0} + v_Z ξ$
The dimension of $ξ$ is [mol] because the stoichiometric coefficients $v_i$ are dimensionless integers. If the reaction goes to completion, the molar amount of the limiting reactant, $n_{limiting}$, will go to zero. If we start with $n_{limiting} = v_{limiting}$ moles, the value of $ξ$ starts at 0 (no products) and goes to 1 at completion (limiting reactant depleted). When approaching an equilibrium, $ξ$ will not go beyond $ξ_{eq}$.
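A minimal sketch of this bookkeeping in Python (our own illustration; the species names, coefficients and starting amounts are made up):

```python
def amounts(xi, n0, nu_reactants, nu_products):
    """Molar amounts of all species at extent of reaction xi.

    n0            : dict of initial moles, e.g. {"A": 1.0, "B": 2.0, "Y": 0.0}
    nu_reactants  : dict of stoichiometric coefficients for reactants
    nu_products   : dict of stoichiometric coefficients for products
    """
    n = dict(n0)
    for species, nu in nu_reactants.items():
        n[species] = n0[species] - nu * xi          # reactants are consumed
    for species, nu in nu_products.items():
        n[species] = n0[species] + nu * xi          # products are formed
    return n

# A + 2B -> Y, starting from 1 mol A and 2 mol B, at xi = 0.25 mol
print(amounts(0.25, {"A": 1.0, "B": 2.0, "Y": 0.0}, {"A": 1, "B": 2}, {"Y": 1}))
```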
Measuring ξ
The extent of reaction is the central subject of reaction kinetics. Its value is typically measured as a function of time, indirectly, by measuring a quantity $q$ that depends linearly on $ξ(t)$:
$q( ξ ) = aξ +b \nonumber$
Consider the situation at the extremes $ξ=0$ and $ξ=1$:
$q_0= a \cdot 0 + b = b \nonumber$
$q_1= a \cdot 1 + b = a + b \nonumber$
$q_1-q_0= a \nonumber$
Thus, $ξ$ can be found from
$ξ = \dfrac{q(t)-q_0}{q_1-q_0}=\dfrac{q(t)-b}{a} \nonumber$
The nature of $q$ can vary widely from UV/Vis absorption, conductivity, gravimetric to caloric data.
In practice, $q_0$ at $ξ=0$ is often hard to observe because it takes time to mix the reactants, particularly in solutions, and $q_1$ at $ξ=1$ may never be reached if the reaction goes to equilibrium. Nevertheless, the values of $a$ and $b$ can often be found from the available data by fitting techniques.
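As a small numerical illustration of this inversion (all numbers below are invented), once estimates of $q_0$ and $q_1$ are in hand the measured trace $q(t)$ can be converted to $ξ(t)$:

```python
import numpy as np

# Hypothetical signal trace q(t); the numbers are invented
t = np.array([0.0, 1.0, 2.0, 4.0, 8.0, 16.0])
q = np.array([0.12, 0.35, 0.52, 0.74, 0.88, 0.95])

# Estimates of q at xi = 0 and xi = 1 (e.g. from extrapolation or fitting)
q0, q1 = 0.10, 1.00
a, b = q1 - q0, q0           # q = a*xi + b

xi = (q - b) / a             # extent of reaction at each time point
print(np.round(xi, 3))
```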
In (equilibrium, static) thermodynamics we are only concerned with the endpoint:
• $ξ=1$: the reaction runs to completion
• $ξ=ξ_{eq}$: the reaction goes to a state of chemical equilibrium
Thermodynamic Potentials
As we have seen we can write any change in the Gibbs free energy due to changes in the molar amounts of the species involved in the reaction (at $T$, $P$ constant) as:
$dG =\sum \dfrac{∂G}{∂n_i} dn_i = \sum μ_idn_i \nonumber$
where $μ$ is the thermodynamic potential, often called chemical potential when dealing with reactions. From the definition of ξ we can see by differentiation that
• $d n_A=- v_Adξ$
• $d n_B=- v_Bdξ$
• $d n_Y= v_Ydξ$
• $d n_Z= v_Zdξ$
This allows us to unify the changes in the molar amount of all the species into one single variable $dξ$. We get:
$dG = \left[ -\sum v_{i,\text{reactants}}\, μ_{i,\text{reactants}} + \sum v_{j,\text{products}}\, μ_{j,\text{products}} \right]dξ \nonumber$
or
$\left (\dfrac{∂G}{∂ξ} \right)_{T,P} = -\sum v_{i,r}\,μ_{i,r}+ \sum v_{j,p}\,μ_{j,p} \nonumber$
This quantity is also written as:
$\left( \dfrac{∂G}{∂ξ} \right)_{T,P} =Δ_rG \nonumber$
This quantity gives the change in Gibbs free energy for the reaction (as written!) for $Δξ = 1$ mole; its units are therefore [J/mol]. It is the change in Gibbs energy (the slope of $G$ versus $\xi$) when the extent of reaction changes by one mole at fixed composition. Equilibrium results when the Gibbs energy is at a minimum with respect to the extent of reaction.
Gas Reactions
Let us assume that our reaction is entirely between gas species and that the gas is sufficiently dilute that we can use the ideal gas law. Then we can write for each species:
$μ_i= μ_i^o+RT \ln \dfrac{P_i}{P_i^o} \nonumber$
We can then split up the ΔrG expression in two parts:
$Δ_rG = Δ_rG^o + RT\ln Q \nonumber$
The standard potentials:
$Δ_rG^o = -\sum v_{i,r}\, μ^o_{i,r} + \sum v_{j,p}\, μ^o_{j,p} \nonumber$
and the logarithmic terms:
$RT \ln Q= - v_A RT \ln \left( \dfrac{P_A}{P_A^o} \right)- v_B RT\ln \left( \dfrac{P_B}{P_B^o} \right) + v_YRT \ln \left( \dfrac{P_Y}{P_Y^o} \right) + v_ZRT \ln \left( \dfrac{P_Z}{P_Z^o} \right) \nonumber$
We can combine all the logarithmic terms into Q, called the reaction quotient. The stoichiometric coefficients become exponents and the reactants' factors will be 'upside down' compared to the products, because of the properties of logarithms:
$a \ln x = \ln x^a \nonumber$
$- a \ln x = \ln \left( \dfrac{1}{x^a} \right) \nonumber$
We have kept the standard pressures $P_i^o$ in the expression, but often they are omitted. They are usually all 1 bar, but in principle we could choose 1 bar for A, 1 Torr for B, and 1 psi for the products. That would still create a valid (though ridiculous) definition of what the superscript $^o$ stands for. (Of course the value of $Δ_rG^o$ does depend on that choice!)
We could write
$RT \ln Q = RT \ln \dfrac{Q_P}{Q^o} \nonumber$
$Q^o$ is typically unity in magnitude, but it cancels the dimensions of $Q_P$. That means that $Q$ and $Q_P$ are equal in magnitude, and we can get $Q$ from $Q_P$ by simply dropping the dimensions. $Q$ is dimensionless, but $Q_P$ usually is not. Often this fine distinction is simply not made; omitting $Q^o$, we get:
$Δ_rG = Δ_rG^o + RT\ln \dfrac{P_Y^{v_Y}P_Z^{v_Z} }{P_A^{v_A}P_B^{v_B}} \nonumber$
Notice the difference between $Δ_rG$, which refers to the actual conditions (e.g., pressures) of your reaction, and $Δ_rG^o$, which refers to standard conditions.
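A short, hedged sketch of evaluating this expression numerically (the reaction, its coefficients, the pressures and the $Δ_rG^o$ value are all invented; pressures are assumed to be in bar so that the 1 bar standard pressures drop out):

```python
import math

R = 8.314  # J K^-1 mol^-1

def reaction_gibbs(delta_rG0, T, pressures, nu):
    """Delta_r G = Delta_r G^o + RT ln Q for an ideal-gas reaction.

    pressures : dict of partial pressures in bar (standard state 1 bar)
    nu        : dict of signed stoichiometric coefficients
                (negative for reactants, positive for products)
    """
    lnQ = sum(coeff * math.log(pressures[sp]) for sp, coeff in nu.items())
    return delta_rG0 + R * T * lnQ

# Hypothetical reaction A + 2B <=> Y with Delta_rG^o = -10 kJ/mol at 298 K
print(reaction_gibbs(-10e3, 298.15,
                     {"A": 0.5, "B": 0.2, "Y": 1.5},
                     {"A": -1, "B": -2, "Y": +1}))
```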
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(LibreTexts)/26%3A_Chemical_Equilibrium/26.01%3A_Equilibrium_Results_when_Gibbs_Energy_is_Minimized.txt
|
Consider a generic reaction:
$v_AA + v_BB \rightleftharpoons v_YY + v_ZZ \nonumber$
When the reaction reaches equilibrium, the Gibbs energy of reaction, $Δ_rG$, goes to zero. In this case, the reaction quotient, $Q$, is usually rewritten as the equilibrium constant, $K$, and we get:
$\Delta_rG = \Delta_rG^o + RT\ln K=0 \nonumber$
$\Delta_rG^o = - RT\ln K \nonumber$
where
$K=\dfrac{P_{eq,Y}^{v_Y}P_{eq,Z}^{v_Z}}{P_{eq,A}^{v_A}P_{eq,B}^{v_B}} \nonumber$
Note
As you see $Δ_rG^o$ is not zero, because the standard state does not represent an equilibrium state (typically).
We can calculate the temperature dependence of $K$. Rather than look at:
$\left(\frac{\partial K}{\partial T}\right)_P \nonumber$
We will look at:
$\left(\frac{\partial\ln{K}}{\partial T}\right)_P \nonumber$
Starting with Gibbs energy:
$\Delta_rG^{\circ}=-RT\ln{K} \nonumber$
$\therefore\ln{K}=-\frac{\Delta_rG^{\circ}}{RT} \nonumber$
$\left(\frac{\partial\ln{K}}{\partial T}\right)_P=-\frac{1}{R}\left[\frac{\partial(\Delta_rG^{\circ}/T)}{\partial T}\right]_P \nonumber$
We need to take the Gibbs-Helmholtz equation:
$\Delta \bar{H}=-T^2\left[\frac{\partial (\Delta\bar{G}/T)}{\partial T}\right]_P \nonumber$
We can also write it as:
$\left[\frac{\partial\left(\Delta\bar{G}/T\right)}{\partial T}\right]_P=-\frac{\Delta \bar{H}}{T^2} \nonumber$
Plugging the Gibbs-Helmholtz equation into our earlier equation, we obtain the Van't Hoff equation:
$\left(\frac{\partial\ln{K}}{\partial T}\right)_P=\frac{\Delta_rH^{\circ}}{RT^2} \nonumber$
The temperature dependence of the equilibrium constant depends on enthalpy of reaction. Rearranging:
$d\left(\ln{K}\right)=\frac{\Delta_rH^{\circ}(T)}{RT^2}dT \nonumber$
This is the Van't Hoff equation. We can use it to find $K$ at other temperatures. Assuming enthalpy is independent of temperature, the integrated form becomes:
$\ln{\frac{K(T_2)}{K(T_1)}}=-\frac{\Delta_rH^{\circ}}{R}\left(\frac{1}{T_2}-\frac{1}{T_1}\right) \nonumber$
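A minimal numerical sketch of this integrated form (our own illustration with invented numbers), valid under the stated assumption that $\Delta_rH^{\circ}$ is constant over the temperature interval:

```python
import math

R = 8.314  # J K^-1 mol^-1

def K_at_T2(K1, T1, T2, delta_rH0):
    """Integrated van 't Hoff equation with temperature-independent Delta_r H^o."""
    return K1 * math.exp(-delta_rH0 / R * (1.0 / T2 - 1.0 / T1))

# Exothermic reaction (Delta_rH^o = -50 kJ/mol): K should fall as T rises
print(K_at_T2(K1=1.0e3, T1=298.15, T2=350.0, delta_rH0=-50e3))
```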
Le Chatelier's Principle
In the ideal gas reaction case, $K$ depends only on the temperature, $T$ (just as $U$ does for an ideal gas), not on the total pressure, $P$. This leads to the well-known principle of Le Chatelier. Consider a gas reaction like:
$A \rightleftharpoons B + C \nonumber \nonumber$
e.g.
$PCl_5 \rightleftharpoons PCl_3 + Cl_2 \nonumber$
In pressures, the equilibrium constant becomes:
$K = \dfrac{P_BP_C}{P_A} \nonumber$
If initially $n_A = 1$, we have at extent $ξ$:
$n_A = 1- ξ$
$n_B = ξ$
$n_C = ξ$
$n_{Total} = 1+ ξ$
The partial pressures are given by Dalton's law:
$P_A = \dfrac{1- ξ}{1+ ξ}\, P$
$P_B = \dfrac{ξ}{1+ ξ}\, P$
$P_C = \dfrac{ξ}{1+ ξ}\, P$
The equilibrium constant becomes:
$K = P\dfrac{ξ_{eq}^2}{1-ξ_{eq}^2} \nonumber$
Even though the total pressure, $P$, does occur in this equation, $K$ is not dependent on $P$. If the total pressure is changed (e.g., by compression of the gas), the value of $ξ_{eq}$ will change (the equilibrium shifts) in response: it will go to the side with the fewer molecules. This fact is known as Le Chatelier's principle.
If the system is not ideal we will also get a Le Chatelier shift, but the value of $K$ may change a little because the activity coefficients (or fugacities) depend slightly on pressure too. In solution the same thing holds. In ideal solutions $K$ is only $T$ dependent, but as we saw these systems are rare. Particularly in ionic solutions, equilibrium constants will be affected by more than just temperature, e.g. changes in the ionic strength, and we need to find the activity coefficients to make any predictions.
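For the $A \rightleftharpoons B + C$ case above, the expression $K = P\,ξ_{eq}^2/(1-ξ_{eq}^2)$ can be inverted to $ξ_{eq} = \sqrt{K/(K+P)}$ (our own rearrangement). The short sketch below (with an invented $K$) shows the Le Chatelier shift directly: raising $P$ lowers $ξ_{eq}$.

```python
import math

def xi_eq(K, P):
    """Equilibrium extent for A <=> B + C with K = P*xi^2/(1 - xi^2)."""
    return math.sqrt(K / (K + P))

K = 0.5  # hypothetical equilibrium constant (pressures relative to 1 bar)
for P in (0.5, 1.0, 2.0, 5.0):
    print(P, xi_eq(K, P))   # xi_eq decreases as P increases: shift to fewer molecules
```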
Example
Consider the reaction of ammonia decomposition:
$\sf 2NH_3\it\left(g\right)\sf\leftrightharpoons N_2\it\left(g\right)\sf+3H_2\it\left(g\right) \nonumber$
$\Delta_fH^{\circ}(\text{NH}_3, g)=-45.90\;\sf kJ/mol \nonumber$
The reaction is initially at equilibrium. For each of the stresses, use your General Chemistry knowledge to decide if equilibrium will be unaffected, or if it will shift towards reactants or products.
Process Shifts towards
Isothermal removal of $\sf H_2$ gas
Isothermal addition of $\sf N_2$ gas
Isothermal decrease of container volume
Isothermal and isochoric addition of argon gas
Isobaric increase of temperature
Write the equilibrium constant expression for the above chemical reaction in terms of gas-phase mole fractions.
Concentration
The gas law contains a hidden definition of concentration:
$PV= nRT \nonumber$
$P= \left(\dfrac{n}{V} \right) RT \nonumber$
$P= c RT \nonumber$
$c= \dfrac{P}{RT} \label{Conc}$
Here c stands for the molar amount per unit volume or molarity. For gaseous mixtures we do not use this fact much, but it provides the link to the more important liquid solution as a reaction medium. We can rewrite the equilibrium constant as
$K= \dfrac{c_{eq,Y}^{v_Y} c_{eq,Z}^{v_Z} }{c_{eq,A}^{v_A} c_{eq,B}^{v_B}} \label{K}$
However, if $c$ (Equation $\ref{Conc}$) is substituted into $K$ (Equation $\ref{K}$), then not all the factors of $RT$ cancel. The left-over term, $g \ln(RT)$, depends on the stoichiometric coefficients:
$g=v_Y+v_Z-v_A-v_B \nonumber$
This term is generally incorporated into $Δ_rG^o$, so that the latter now refers to a new standard state of 1 mole per liter of each species rather than 1 bar of each.
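A hedged sketch of this bookkeeping (our own illustration; the value of $g$, the temperature and $K_P$ are invented), using $K_P = K_c\,(RT)^{g}$ with $R$ in L bar K$^{-1}$ mol$^{-1}$ so that pressures in bar and concentrations in mol/L are consistent:

```python
def Kc_from_Kp(Kp, T, g, R=0.083145):
    """Convert K expressed in pressures (bar) to K expressed in mol/L.

    Uses K_P = K_c (RT)^g, with g = sum(nu_products) - sum(nu_reactants)
    and R in L bar K^-1 mol^-1 so the units match bar and mol/L.
    """
    return Kp / (R * T) ** g

# Hypothetical reaction with g = +1 (one more mole of gas on the product side)
print(Kc_from_Kp(Kp=2.0e-2, T=500.0, g=1))
```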
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(LibreTexts)/26%3A_Chemical_Equilibrium/26.02%3A_An_Equilibrium_Constant_is_a_Function_of_Temperature_Only.txt
|
The relation $K=e^{-\Delta_rG^\circ/RT}$ gives us a way to evaluate the thermodynamic equilibrium constant $K$ of a reaction at a given temperature from the value of the standard molar reaction Gibbs energy $\Delta_rG^\circ$ at that temperature. If we know the value of $\Delta_rG^\circ$, we can calculate the value of $K$. One method is to calculate $\Delta_rG^\circ$ from values of the standard molar Gibbs energy of formation $\Delta_fG^\circ$ of each reactant and product.
These values are the standard molar reaction Gibbs energies for the formation reactions of the substances. To relate $\Delta_fG^\circ$ to measurable quantities, we make the substitution $\mu_i = H_i - TS_i$ (Eq. 9.2.46) in $\Delta_rG = \sum_i\nu_i\mu_i$ to give $\Delta_rG = \sum_i\nu_i H_i - T \sum_i\nu_i S_i$, or

$\Delta_rG = \Delta_rH - T\Delta_rS \tag{11.8.20}$

When we apply this equation to a reaction with each reactant and product in its standard state, it becomes

$\Delta_rG^\circ = \Delta_rH^\circ - T\Delta_rS^\circ \tag{11.8.21}$

where the standard molar reaction entropy is given by

$\Delta_rS^\circ = \sum_i\nu_i S_i^\circ \tag{11.8.22}$

If the reaction is the formation reaction of a substance, we have

$\Delta_fG^\circ = \Delta_fH^\circ - T\sum_i\nu_i S_i^\circ \tag{11.8.23}$

where the sum over $i$ is for the reactants and product of the formation reaction. We can evaluate the standard molar Gibbs energy of formation of a substance, then, from its standard molar enthalpy of formation and the standard molar entropies of the reactants and product.

Extensive tables are available of values of $\Delta_fG^\circ$ for substances and ions; an abbreviated version at the single temperature $298.15\ \text{K}$ is given in Appendix H. For a reaction of interest, the tabulated values enable us to evaluate $\Delta_rG^\circ$, and then $K$, from the expression (analogous to Hess's law)

$\Delta_rG^\circ = \sum_i\nu_i\,\Delta_fG^\circ(i) \tag{11.8.24}$

The sum over $i$ is for the reactants and products of the reaction of interest. Recall that the standard molar enthalpies of formation needed in Eq. 11.8.23 can be evaluated by calorimetric methods (Sec. 11.3.2). The absolute molar entropy values $S_i^\circ$ come from heat capacity data or statistical mechanical theory by methods discussed in Sec. 6.2. Thus, it is entirely feasible to use nothing but calorimetry to evaluate an equilibrium constant, a goal sought by thermodynamicists during the first half of the 20th century. (Another method, for a reaction that can be carried out reversibly in a galvanic cell, is described in Sec. 14.3.3.)

For ions in aqueous solution, the values of $S_m^\circ$ and $\Delta_fG^\circ$ found in Appendix H are based on the reference values $S_m^\circ=0$ and $\Delta_fG^\circ = 0$ for H$^+$(aq) at all temperatures, similar to the convention for $\Delta_fH^\circ$ values discussed in Sec. 11.3.2. For a reaction with aqueous ions as reactants or products, these values correctly give $\Delta_rS^\circ$ using Eq. 11.8.22, or $\Delta_rG^\circ$ using Eq. 11.8.24. Note that the values of $S_m^\circ$ in Appendix H for some ions, unlike the values for substances, are negative; this simply means that the standard molar entropies of these ions are less than that of H$^+$(aq). The relation of Eq. 11.8.23 does not apply to an ion, because we cannot write a formation reaction for a single ion. Instead, the relation between $\Delta_fG^\circ$, $\Delta_fH^\circ$ and $S_m^\circ$ is more complicated.
Consider first a hypothetical reaction in which hydrogen ions and one or more elements form H$_2$ and a cation M$^{z_+}$ with charge number $z_+$:

$z_+\,\text{H}^+\text{(aq)} + \text{elements} \rightarrow (z_+/2)\,\text{H}_2\text{(g)} + \text{M}^{z_+}\text{(aq)} \nonumber$

For this reaction, using the convention that $\Delta_fH^\circ$, $S_m^\circ$, and $\Delta_fG^\circ$ are zero for the aqueous H$^+$ ion and the fact that $\Delta_fH^\circ$ and $\Delta_fG^\circ$ are zero for the elements, we can write the following expressions for standard molar reaction quantities:

$\Delta_rH^\circ = \Delta_fH^\circ(\text{M}^{z_+}) \tag{11.8.25}$

$\Delta_rS^\circ = (z_+/2)\,S_m^\circ(\text{H}_2) + S_m^\circ(\text{M}^{z_+}) - \sum_{\text{elements}} S_i^\circ \tag{11.8.26}$

$\Delta_rG^\circ = \Delta_fG^\circ(\text{M}^{z_+}) \tag{11.8.27}$

Then, from $\Delta_rG^\circ=\Delta_rH^\circ-T\Delta_rS^\circ$, we find

$\Delta_fG^\circ(\text{M}^{z_+}) = \Delta_fH^\circ(\text{M}^{z_+}) - T\left[ S_m^\circ(\text{M}^{z_+}) - \sum_{\text{elements}} S_i^\circ + (z_+/2)\,S_m^\circ(\text{H}_2) \right] \tag{11.8.28}$

For example, the standard molar Gibbs energy of the aqueous mercury(I) ion is found from

$\Delta_fG^\circ(\text{Hg}_2^{2+}) = \Delta_fH^\circ(\text{Hg}_2^{2+}) - TS_m^\circ(\text{Hg}_2^{2+}) + 2TS_m^\circ(\text{Hg}) - \tfrac{2}{2}TS_m^\circ(\text{H}_2) \tag{11.8.29}$
For an anion X$^{z_-}$ with negative charge number $z_-$, using the hypothetical reaction

$|z_-/2|\,\text{H}_2\text{(g)} + \text{elements} \rightarrow |z_-|\,\text{H}^+\text{(aq)} + \text{X}^{z_-}\text{(aq)} \nonumber$

we find by the same method

$\Delta_fG^\circ(\text{X}^{z_-}) = \Delta_fH^\circ(\text{X}^{z_-}) - T\left[ S_m^\circ(\text{X}^{z_-}) - \sum_{\text{elements}} S_i^\circ - |z_-/2|\,S_m^\circ(\text{H}_2) \right] \tag{11.8.30}$

For example, the calculation for the nitrate ion is

$\Delta_fG^\circ(\text{NO}_3^-) = \Delta_fH^\circ(\text{NO}_3^-) - TS_m^\circ(\text{NO}_3^-) + \tfrac{1}{2}TS_m^\circ(\text{N}_2) + \tfrac{3}{2}TS_m^\circ(\text{O}_2) + \tfrac{1}{2}TS_m^\circ(\text{H}_2) \tag{11.8.31}$
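A small numerical sketch of using Eq. 11.8.24 together with $K = e^{-\Delta_rG^\circ/RT}$ (our own illustration; the formation Gibbs energies below are placeholders, not tabulated values):

```python
import math

R = 8.314  # J K^-1 mol^-1

def K_from_formation(delta_fG0, nu, T=298.15):
    """Delta_r G^o = sum_i nu_i Delta_f G^o(i); K = exp(-Delta_r G^o / RT).

    delta_fG0 : dict of standard Gibbs energies of formation in J/mol
    nu        : dict of signed stoichiometric coefficients
    """
    delta_rG0 = sum(nu[sp] * delta_fG0[sp] for sp in nu)
    return math.exp(-delta_rG0 / (R * T)), delta_rG0

# Placeholder values for a hypothetical reaction A + B <=> C
K, dG0 = K_from_formation({"A": -50e3, "B": 0.0, "C": -80e3},
                          {"A": -1, "B": -1, "C": +1})
print(dG0, K)
```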
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(LibreTexts)/26%3A_Chemical_Equilibrium/26.03%3A_Standard_Gibbs_Energies_of_Formation_Can_Be_Used_to_Calculate_Equilibrium_Constants.txt
|
As discussed earlier in this chapter, we defined the extent of the reaction ($\xi$) as a quantitative measure of how far along the reaction has evolved. For simple reactions like $\ce{A <=> B}$, the extent of reaction is easy to define and is simply the number of moles of $\ce{A}$ that has been converted to $\ce{B}$. Since $d\xi$ has the same value as $dn$, we can write that
\begin{align*} dG &= \mu_A dn_A + \mu_B dn_B \[4pt] &= -\mu_A d\xi + \mu_B d\xi. \end{align*} \nonumber
The minus sign comes from the fact that when the reaction goes in the left to right direction, the amount of $\ce{A}$ is decreasing, while the amount of $\ce{B}$ is increasing. Looking at these equations, it is reasonable to suggest that:
$\left(\frac{\partial G}{\partial \xi}\right)_{P T}=\mu_{B}-\mu_{A} \nonumber$
This is the slope of the free energy with respect to the extent of the reaction. The slope is negative in one region, zero at a single point, and positive in another region (Figure $1$). If we look at a plot of $G$ as a function of $\xi$, we can see that this point of zero slope is the minimum of the curve:
$\left(\frac{\partial G}{\partial \xi}\right)_{PT}=0 \nonumber$
This is the point where $\mu_A$ and $\mu_B$ are the same. On one side of the minimum, the slope is negative and on the other side, the slope is positive. In both cases, the reaction is spontaneous (i.e, $dG < 0$) as long as the reaction is evolving towards the minimum, which will be called the equilibrium position. Hence, all reactions spontaneously move towards equilibrium; the immediate question is what value of $\xi$ corresponds to the equilibrium position of a reaction?
Terminology
• A reaction for which $Δ_rG < 0$ is called exergonic.
• A reaction for which $Δ_r G > 0$ is called endergonic.
• $Δ_r G < 0$, the forward reaction is spontaneous.
• $Δ_r G > 0$, the reverse reaction is spontaneous.
• $Δ_r G = 0$, the reaction is at equilibrium.
Ideal Gas Equilibrium
To understand how we can find the minimum and what the Gibbs free energy of a reaction depends on, let's first start with a reaction that converts one ideal gas into another.
$\ce{A<=>B} \nonumber$
Let's assume the reaction enthalpy is zero and hence the only thing that determines what the ratio of product to reactant should be is the entropy term (the mixing term), which is the most favorable when the mixture is half and half.
$\Delta G_{\text {mix }}=n R T\left(\chi_{A} \ln \chi_{A}+ \chi_{B} \ln \chi_{B}\right) \label{Smixing}$
Remember that the Gibbs free energy of mixing is not a molar quantity and depends on $n$ (unlike the reaction Gibbs free energy). Also, the Gibbs free energy of mixing is defined relative to pure $\ce{A}$ and $\ce{B}$. The free energy of mixing in Equation \ref{Smixing} is at a minimum when the amounts of $\ce{A}$ equal $\ce{B}$.
Thus, if we have a reaction $\ce{A<=>B}$ and there is no enthalpy term (and no change in the inherent entropy of A vs. B), we would expect the system to have the minimum Gibbs free energy when the mole fraction of $\ce{A}$ and $\ce{B}$ are each 0.5.
Think about it
If $\ce{A}$ can form $\ce{B}$ and $\ce{B}$ can form $\ce{A}$, and there are no other factors involved (no heat/enthalpy) that would favor one over the other, probability dictates that eventually you will have statistically the same amount of both A and B present; this is the lowest free energy state. (This, of course, will not be true if the enthalpy of the reaction is not zero or if $\ce{A}$ and $\ce{B}$ have different inherent entropies.) Usually there are additional terms for the reaction.
Lets work through this for an ideal gas reaction:
\begin{align*} \Delta_{r} G &=\mu_{B}-\mu_{A} \[4pt] &=\mu_{B}^{o}+R T \ln \dfrac{P_{B}}{P^{\theta}}-\mu_{A}^{o}-R T \ln \dfrac{P_{A}}{P^{\theta}} \[4pt] &=\mu_{B}^{o}-\mu_{A}^{o}+R T \ln \dfrac{P_{B}}{P_{A}} \end{align*} \nonumber
What we normally do at this point is to give the first two terms a special name. Since it is the difference between the chemical potentials at standard conditions, we refer to it as the Gibbs free energy of reaction at standard conditions or the standard Gibbs free energy of reaction.
\begin{align} \Delta_{r} G^{o} &= \mu_B^{o}-\mu_A^{o} \nonumber \[4pt] \Delta_r G &= \Delta_{r} G^{o} + R T \ln \dfrac{P_{B}}{P_{A}} \label{eq30} \end{align}
There are two parts to Equation \ref{eq30}. The first term ($\Delta_{r} G^{o}$) is the Gibbs free energy for converting one mole of $\ce{A}$ to $\ce{B}$ under standard conditions (1 bar of both $\ce{A}$ and $\ce{B}$). The second term is the mixing term that is minimized when the amounts of A and B are equal to one another (Figure $2$). The first term is looked up in a reference book and is specific to the reaction; the second term is calculated from the partial pressures of $\ce{A}$ and $\ce{B}$ in the gas.
Now the minimum of the Gibbs free energy will occur at the bottom of the curve, where the slope is zero. Thus, the lowest free energy will occur when the reaction free energy (i.e., the slope of the curve in Figure $1$) is equal to zero, which is where the chemical potentials of $\ce{A}$ and $\ce{B}$ are equal:
$\mu_A = \mu_B \nonumber$
At this point, the reaction will neither go forwards or backwards and we call this equilibrium. Hence, at equilibrium:
\begin{align*} \Delta_{r} G &=\Delta_{r} G^{o}+R T \ln \dfrac{P_{B}}{P_{A}} \[4pt] &=0 \end{align*} \nonumber
and the specific ratio of $P_A$ and $P_B$ necessary to ensure $\Delta_{r} G=0$ is characteristic of the reaction and is called the equilibrium constant for that reaction.
$K_{e q}=\dfrac{P_{B}}{P_{A}} \nonumber$
We can now relate thermodynamic quantities to concentrations of molecules at equilibrium. We can also see that at equilibrium:
\begin{aligned} &0=\Delta_{r}G^{o}+R T \ln K_{e q} \[4pt] &\Delta_{r} G^{o}=-R T \ln K_{e q} \[4pt] &K_{eq}=e^{-\dfrac{\Delta_{r} G^{o}}{R T}} \end{aligned} \nonumber
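To make the minimum concrete, the sketch below (our own illustration; the value of $\mu_B^o - \mu_A^o$ is invented, the total pressure is taken as 1 bar, and we start from 1 mol of pure A) evaluates $G(\xi)$ for $\ce{A <=> B}$ by combining the standard-state term with the mixing term of Equation \ref{Smixing}, and checks that the numerical minimum sits at $\xi = K/(1+K)$:

```python
import math

R, T = 8.314, 298.15
dG0 = -2000.0          # hypothetical mu_B^o - mu_A^o in J/mol

def G(xi):
    """Gibbs energy (relative to pure A) for A <=> B at 1 bar total pressure."""
    mix = R * T * ((1 - xi) * math.log(1 - xi) + xi * math.log(xi))
    return xi * dG0 + mix

# Crude numerical minimisation on a grid over 0 < xi < 1
xis = [i / 10000 for i in range(1, 10000)]
xi_min = min(xis, key=G)

K = math.exp(-dG0 / (R * T))
print(xi_min, K / (1 + K))   # the minimum sits at xi = K/(1+K)
```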
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(LibreTexts)/26%3A_Chemical_Equilibrium/26.04%3A_A_Plot_of_the_Gibbs_Energy_of_a_Reaction_Mixture_Against_the_Extent_of_Reaction_Is_a_Minimum_at_Equilibri.txt
|
The Gibbs free energy function was constructed to be able to predict which changes could occur spontaneously. If we start with a set of initial concentrations we can write them in a reaction quotient
$Δ_rG = Δ_rG^o + RT \ln Q \nonumber$
if we subtract the equilibrium version of this expression:
$0= Δ_rG^o + RT\ln K \nonumber$
we get
$Δ_rG = RT \ln \left(\dfrac{Q}{K} \right) \nonumber$
That gives us the sign of $Δ_rG$. If it is negative, the reaction will spontaneously proceed from left to right as written; if positive, it will run in reverse. In both cases the value of $Q$ will change until $Q=K$ and equilibrium has been reached.
The main difference between $K$ and $Q$ is that $K$ describes a reaction that is at equilibrium, whereas $Q$ describes a reaction that is not at equilibrium. To determine $Q$, the concentrations of the reactants and products must be known. For a given general chemical equation:
$aA + bB \rightleftharpoons cC + dD \label{1}$
the $Q$ equation is written by multiplying the activities for the species of the products and dividing by the activities of the reactants. If any component in the reaction has a coefficient, indicated above with lower case letters, the concentration is raised to the power of the coefficient. $Q$ for the above equation is therefore:
$Q = \dfrac{[C]^c[D]^d}{[A]^a[B]^b} \label{2}$
Note
A comparison of $Q$ with $K$ indicates which way the reaction shifts and which side of the reaction is favored:
• If $Q>K$, then the reaction favors the reactants. This means that in the $Q$ equation, the ratio of the numerator (the concentration or pressure of the products) to the denominator (the concentration or pressure of the reactants) is larger than that for $K$, indicating that more products are present than there would be at equilibrium. Because reactions always tend toward equilibrium (Le Châtelier's Principle), the reaction produces more reactants from the excess products, therefore causing the system to shift to the LEFT. This allows the system to reach equilibrium.
• If $Q<K$, then the reaction favors the products. The ratio of products to reactants is less than that for the system at equilibrium: the concentration or pressure of the reactants is greater than that of the products. Because the reaction tends toward equilibrium, the system shifts to the RIGHT to make more products.
• If $Q=K$, then the reaction is already at equilibrium. There is no tendency to form more reactants or more products at this point. No side is favored and no shift occurs.
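The comparison in this box is easy to encode (our own sketch; the activities and the value of $K$ are invented):

```python
def reaction_direction(Q, K):
    """Return which way the reaction shifts, based on Q relative to K."""
    if Q > K:
        return "shifts left (toward reactants)"
    if Q < K:
        return "shifts right (toward products)"
    return "at equilibrium"

# aA + bB <=> cC + dD with invented activities (a = b = c = d = 1) and K
Q = (0.5 * 0.1) / (1.0 * 2.0)
print(Q, reaction_direction(Q, K=0.20))
```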
26.06: The Sign of G and not G Determines the Direction of Reaction Spontaneity
It is important to distinguish between the Gibbs energy of reaction, $\Delta_rG$ and the standard state Gibbs energy of reaction, $\Delta_rG^\circ$. The $^\circ$ refers to standard state conditions. That is, each reactant and product has a partial pressure of 1 bar if a gas, a concentration of 1 M if a solution, and they are all unmixed from each other. Such idealized conditions, while convenient for serving as a reference state, do not actually represent real reaction conditions. Consider the reaction of nitrogen with hydrogen to form ammonia:
$\sf N_2\it\left(g\right)\sf+3 H_2\it\left(g\right)\sf\rightarrow 2 NH_3\it\left(g\right) \nonumber$
$\Delta_rG^\circ=-32.9 \;\text{kJ/mol} \nonumber$
If the reaction were run under standard state conditions (1 bar partial pressure of each gas), the reaction would shift towards the products since $\Delta_rG^\circ<0$. That is, the partial pressures of N2 and H2 will decrease and the partial pressure of NH3 will increase until equilibrium is reached. The Gibbs energy of reaction is dependent on the composition:
$\Delta_rG=\Delta_rG^\circ+RT\ln{Q}=\Delta_rG^\circ+RT\ln{\left(\frac{P_{\text{NH}_3}^2}{P_{\text{N}_2}P_{\text{H}_2}^3}\right)} \nonumber$
At equilibrium, the minimum Gibbs energy of reaction will be reached:
$\Delta_rG=0 \;\text{kJ/mol} \nonumber$
And the reaction quotient will equal the equilibrium constant:
$Q=K \nonumber$
26.07: The Van 't Hoff Equation
We can use Gibbs-Helmholtz to get the temperature dependence of $K$
$\left( \dfrac{∂[ΔG^o/T]}{∂T} \right)_P = \dfrac{-ΔH^o}{T^2} \nonumber$
At equilibrium, we can equate $ΔG^o$ to $-RT\ln K$ so we get:
$\left( \dfrac{∂[\ln K]}{∂T} \right)_P = \dfrac{ΔH^o}{RT^2} \nonumber$
We see that whether $K$ increases or decreases with temperature is linked to whether the reaction enthalpy is positive or negative. If the temperature is changed little enough that $ΔH^o$ can be considered constant, we can translate a $K$ value at one temperature into another by integrating the above expression; the derivation is similar to that for the melting point depression:
$\ln \dfrac{K(T_2)}{K(T_1)} = \dfrac{-ΔH^o}{R} \left( \dfrac{1}{T_2} - \dfrac{1}{T_1} \right) \nonumber$
If more precision is required we could correct for the temperature changes of $ΔH^o$ by using heat capacity data.
How $K$ increases or decreases with temperature is linked to whether the reaction enthalpy is positive or negative.
The equilibrium constant $K$ is a rather sensitive function of temperature, given its exponential dependence on $-\Delta_r{G^o}/RT$. One way to see this sensitive temperature dependence is to recall that
$K=e^{−\Delta_r{G^o}/RT}\label{18}$
However, since under constant pressure and temperature
$\Delta{G^o}= \Delta{H^o}−T\Delta{S^o} \nonumber$
Equation $\ref{18}$ becomes
$K=e^{-\Delta{H^o}/RT} e^{\Delta {S^o}/R}\label{19}$
Taking the natural log of both sides, we obtain a linear relation between $\ln K$ and the standard enthalpies and entropies:
$\ln K = - \dfrac{\Delta {H^o}}{R} \dfrac{1}{T} + \dfrac{\Delta{S^o}}{R}\label{20}$
which is known as the van ’t Hoff equation. It shows that a plot of $\ln K$ vs. $1/T$ should be a line with slope $-\Delta_r{H^o}/R$ and intercept $\Delta_r{S^o}/R$.
Hence, these quantities can be determined from the $\ln K$ vs. $1/T$ data without doing calorimetry. Of course, the main assumption here is that $\Delta_r{H^o}$ and $\Delta_r{S^o}$ are only very weakly dependent on $T$, which is usually valid.
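A hedged sketch of such a determination (the $(T, K)$ data below are invented) using a linear least-squares fit of $\ln K$ against $1/T$:

```python
import numpy as np

R = 8.314  # J K^-1 mol^-1

# Hypothetical (T, K) data
T = np.array([280.0, 300.0, 320.0, 340.0])
K = np.array([4.2e2, 1.5e2, 6.3e1, 3.0e1])

slope, intercept = np.polyfit(1.0 / T, np.log(K), 1)   # ln K = -(dH/R)(1/T) + dS/R
delta_H = -slope * R       # J/mol
delta_S = intercept * R    # J/(mol K)
print(delta_H, delta_S)
```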
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(LibreTexts)/26%3A_Chemical_Equilibrium/26.05%3A_Reaction_Quotient_and_Equilibrium_Constant_Ratio_Determines_Reaction_Direction.txt
|
Consider the general gas phase chemical reaction represented by
${\ce {\nu_{A}A + \nu_{B}B <=>\nu_{C}C + \nu_{D}D}} \label{eq1}$
where A, B, C and D are the reactants and products of the reaction, and $\nu _{A}$ is the stoichiometric coefficient of chemical A, $\nu _{B}$ is the stoichiometric coefficient of chemical B, and so on. Each of the gases involved in the reaction will eventually reach an equilibrium concentration when the forward and reverse reaction rates become equal. The distribution of reactants to products at the equilibrium point is represented by the equilibrium constant ($K_{c}$):
$K_{c}=\dfrac {[C]^{\nu _{C}}[D]^{\nu _{D}}}{[A]^{\nu _{A}}[B]^{\nu _{B}}}\nonumber$
If the system is not at equilibrium, a shift in the number of reactants and products will occur to lower the overall energy of the system. The difference in the energy of the system at this non-equilibrated point and the energy of the system at equilibrium for any particular species is termed chemical potential. When both temperature and volume are constant for both points aforementioned, the chemical potential $\mu$ of species $i$ is expressed by the equation
$\mu _{i}={\left({\dfrac {\partial A}{\partial N_{i}}}\right)}_{T,V,N_{j\neq 1}}\label{chempot}$
where $A$ is the Helmholtz energy, and $N_{i}$ is the number of molecules of species i. The Helmholtz energy can be determined as a function of the total partition function, $Q$:
$A=-k_{B}T\ln Q\nonumber$
where $k_{B}$ is the Boltzmann constant and $T$ is the temperature of the system. The total partition function is given by
$Q={\dfrac {q_{i}(V,T)^{N_i}}{N_{i}!}}\nonumber$
where $q_{i}$ is the molecular partition function of chemical species $i$. Substituting the molecular partition function into the equation for Helmholtz energy yields:
$A=-k_{B}T\ln \left({\dfrac {q_{i}(V,T)^{N_i}}{N_{i}!}}\right) \nonumber$
and then further substituting this equation into the definition of chemical potential (Equation \ref{chempot}) yields:
$\mu _{i}={\left({\dfrac {\partial -k_{B}T\ln \left({\dfrac {q_{i}(V,T)^{N_i}}{N_{i}!}}\right)}{\partial N_{i}}}\right)}_{T,V,N_{j\neq 1}}\nonumber$
rearranging this equation the following derivative can be set-up:
$\mu _{i}=-k_{B}T{\left[{\left({\dfrac {\partial {N_i}\ln \left({q_{i}(V,T)}\right)}{\partial N_{i}}}\right)}-{\left({\dfrac {\partial \ln \left({N_{i}!}\right)}{\partial N_{i}}}\right)}\right]}_{T,V,N_{j\neq 1}}\nonumber$
From this point Stirling's approximation
$\ln {N!}=N\ln N-N\nonumber$
can be substituted into the derivative to yield:
$\mu _{i}=-k_{B}T{\left[{\left({\dfrac {\partial {N_i}\ln \left({q_{i}(V,T)}\right)}{\partial N_{i}}}\right)}-{\left({\dfrac {\partial \left({N_{i}\ln N_{i}-N_{i}}\right)}{\partial N_{i}}}\right)}\right]}_{T,V,N_{j\neq 1}}\nonumber$
From here the derivative can be rearranged and solved:
\begin{align} \mu_{i} &=-k_{B} T\left[\ln \left(q_{i}(V, T)\right) \dfrac{\partial N_{i}}{\partial N_{i}}+N_{i} \dfrac{\partial \ln \left(q_{i}(V, T)\right)}{\partial N_{i}}-\ln \left(N_{i}\right) \dfrac{\partial N_{i}}{\partial N_{i}}-N_{i} \dfrac{\partial \ln \left(N_{i}\right)}{\partial N_{i}}+\dfrac{\partial N_{i}}{\partial N_{i}}\right]_{T, V, N_{j \neq 1}} \nonumber \[4pt] &=-k_{B}T{\left({\ln {\left({q_{i}(V,T)}\right)}}+{0}-{\ln \left({N_{i}}\right)}-{N_{i}{\dfrac {1}{N_{i}}}}+{1}\right)} \nonumber\[4pt] &=-k_{B}T{\left({\ln {\left({q_{i}(V,T)}\right)}}-{\ln \left({N_{i}}\right)}\right)} \nonumber \[4pt] &=-k_{B}T\ln \left({\dfrac {q_{i}(V,T)}{N_{i}}}\right) \label{chempot2}\end{align}
A variable, $\lambda$, is then defined such that $dN_{j}=\nu _{j}d\lambda$, where $j$ = A, B, C or D and $\nu _{j}$ is taken to be positive for products and negative for reactants. A change in $\lambda$ therefore corresponds to a change in the concentrations of the reactants and products. Thus, at equilibrium,
$\left({\dfrac {\partial A}{\partial \lambda }}\right)_{T,V}=0\nonumber$
Helmholtz Energy with respect to Equilibrium Shifts
From Classical thermodynamics, the total differential of $A$ is:
$dA=-SdT-pdV+\sum _{j}\mu _{j}dN_{j}\nonumber$
For a reaction at a fixed volume and temperature (such as in the canonical ensemble), $dT$ and $dV$ equal 0. Therefore,
\begin{align*} dA &=\sum _{j}\mu _{j}dN_{j} \[4pt] &=\sum _{j}\mu _{j}\nu _{j}d\lambda \[4pt] &=d\lambda \sum _{j}\mu _{j}\nu _{j}\end{align*} \nonumber
with
$\sum _{j}\mu _{j}\nu _{j}=0\nonumber$
Substituting the expanded form of chemical potential (Equation \ref{chempot2}):
\begin{align*} -k_{B}T\sum _{j}\ln \left({\dfrac {q_{i}}{N_{i}}}\right)\nu _{j} &=0 \[4pt] \sum _{j}\ln \left({\dfrac {q_{i}}{N_{i}}}\right)\nu _{j} &=0 \[4pt] \sum _{j}\nu _{j}[\ln(q_{j})-\ln(N_{j})]&=0\end{align*} \nonumber
For the reaction in Equation \ref{eq1}:
$[\nu _{C}\ln(q_{C})-\nu _{C}\ln(N_{C})]+[\nu _{D}\ln(q_{D})-\nu _{D}\ln(N_{D})]-[\nu _{A}\ln(q_{A})-\nu _{A}\ln(N_{A})]-[\nu _{B}\ln(q_{B})-\nu _{B}\ln(N_{B})]=0\nonumber$
This equation simplifies to
${\dfrac {(q_{C})^{\nu _{C}}(q_{D})^{\nu _{D}}}{(q_{A})^{\nu _{A}}(q_{B})^{\nu _{B}}}}={\dfrac {(N_{C})^{\nu _{C}}(N_{D})^{\nu _{D}}}{(N_{A})^{\nu _{A}}(N_{B})^{\nu _{B}}}}\nonumber$
By dividing all terms by volume, and noting the relationship $\dfrac {N_{A}}{V}=\dfrac {\text{number of molecules}}{\text{volume}}=\rho _{A}=[A]$, where $\rho$ is referred to as a number density, the following equation is obtained:
$K_{c}={\dfrac {\rho _{C}^{\nu _{C}}\rho _{D}^{\nu _{D}}}{\rho _{A}^{\nu _{A}}\rho _{B}^{\nu _{B}}}}={\dfrac {(q_{C}/V)^{\nu _{C}}(q_{D}/V)^{\nu _{D}}}{(q_{A}/V)^{\nu _{A}}(q_{B}/V)^{\nu _{B}}}}\label{Kc}$
Hence, knowledge of the molecular partition functions of all species in the reaction allows the equilibrium constant to be calculated.
Example $1$: Reacting Diatomic Molecules
Calculate the equilibrium constant ($K_c$) for the reaction of $\ce {H2 (g)}$ and $\ce {Cl2 (g)}$ at 650 K.
$\ce {H2 (g) + Cl2 (g) <=> 2HCl (g)}\nonumber$
Solution
We will use Equation \ref{Kc} that uses Molecular Partition Functions to evaluate $K_c$.
$K_{c}(T)=\dfrac{\left(\dfrac{q_{\mathrm{HCl}}}{V}\right)^{2}}{\left(\dfrac{q_{\mathrm{H}_{2}}}{V}\right)\left(\dfrac{q_{\mathrm{Cl}_{2}}}{V}\right)} \label{eqA}$
This equation is expanded into the rotational, vibrational, translational and electronic molecular partition functions of each species.
$K_c(T) = \dfrac{\left(\dfrac{q_{\text {trans } \mathrm{HCl}} q_{rot\mathrm{HCl}} q_{vib \mathrm{HCl}} q_{elec \mathrm{HCl}}}{V}\right)^{2}}{\left(\dfrac{q_{\text {trans} \mathrm{H}_{2}} q_{\text {rot } \mathrm{H}_{2}} q_{vib \mathrm{H}_{2}} q_{elec \mathrm{H}_{2}}}{V}\right) \left(\dfrac{q_{\text {trans} \mathrm{Cl}_{2}} q_{\text {rot } \mathrm{Cl}_{2}} q_{vib \mathrm{Cl}_{2}} q_{elec \mathrm{Cl}_{2}}}{V}\right)} \nonumber$
A simple problem-solving strategy for finding equilibrium constants via statistical mechanics is to separate the equation into the molecular partition functions of each of the reactant and product species, solve for each one, and recombine them to arrive at a final answer (e.g., for $\ce{HCl}$):
$\dfrac{q_{\mathrm{HCl}}}{V} =\underbrace{\left(\dfrac{2 \pi m k_{B} T}{h^{2}}\right)^{3 / 2}}_{\text{translation}} \times \underbrace{\dfrac{2 k_{B} T \mu r_{e}^{2}}{\sigma \hbar^{2}}}_{\text{rotation}} \times \underbrace{\dfrac{1}{1-\exp \left(\dfrac{-h \nu}{k_{B} T}\right)}}_{\text{vibration}} \times \underbrace{g_{1} \exp \left(\dfrac{D_{0}}{k_{B} T}\right)}_{\text{electronic}} \nonumber$
To simplify the calculations of molecular partition functions, the characteristic temperature of rotation ($\Theta _{r}$) and vibration ($\Theta _{\nu }$) are used.
$\frac{q_{\mathrm{HCl}}}{V} =\left(\frac{2 \pi m k_{B} T}{h^{2}}\right)^{3 / 2} \times \frac{T}{\sigma \Theta_{r}} \times \frac{1}{1-\exp \left(\frac{-\Theta_{\nu}}{T}\right)} \times g_{1} \exp \left(\dfrac{D_{0}}{R T} \right)\label{eqB}$
These values are constants that incorporate the physical constants found in the rotational and vibrational partition functions of the molecules. Values of $\Theta _{r}$ and $\Theta _{\nu }$ for the molecules in this example are tabulated below.
Species $\Theta_{\nu}$ (K) $\Theta_{r}$ (K) $D_{0}$ (kJ mol-1)
$\ce {Cl2 (g)}$ 6125 0.351 239.0
$\ce {H2 (g)}$ 808 87.6 431.9
$\ce {HCl (g)}$ 4303 15.2 427.7
We need to evaluate Equation \ref{eqB} for each species then then evaluate Equation \ref{eqA} directly.
For $\ce{HCl}$:
$\frac{q_{\mathrm{HCl}}}{V}=\left(\frac{2 \pi\left(2.1957 \times 10^{-24} \mathrm{~kg}\right)\left(1.38065 \times 10^{-23} \mathrm{J K}^{-1}\right)(650 K)}{\left(6.62607 \times 10^{-34} \mathrm{Js}\right)^{2}}\right)^{3 / 2} \times \frac{650 \mathrm{~K}}{(1)(15.2 \mathrm{~K})} \times \frac{1}{1-\exp \left(\dfrac{-4303 \mathrm{~K}}{650 \mathrm{~K}}\right)} \times(1) \exp \left(\dfrac{427,700 \mathrm{ J mol}^{-1}}{\left(8.3145\, \mathrm{J\,K\,mol}{}^{-1}\right)(650 \mathrm{~K})}\right)\nonumber$
$\frac{q_{\mathrm{HCl}}}{V}=\left(1.4975 \times 10^{35} \mathrm{~m}^{-3}\right)(42.76)(1.0013)\left(2.3419 \times 10^{34}\right)\nonumber$
For $\ce{H2}$:
$\frac{q_{\mathrm{H}_{2}}}{V} =\left(\frac{2 \pi\left(1.2140 \times 10^{-25} \mathrm{~kg}\right)\left(1.38065 \times 10^{-23} \mathrm{JK}^{-1}\right)(650 \mathrm{~K})}{\left(6.62607 \times 10^{-34} \mathrm{Js}\right)^{2}}\right)^{3 / 2} \times \dfrac{650 \mathrm{~K}}{(2)(87.6 \mathrm{~K})} \times \frac{1}{1-\exp \left(\dfrac{-808 \mathrm{~K}}{650 \mathrm{~K}}\right)} \times(1) \exp \left(\dfrac{431,900 \mathrm{J mol}^{-1}}{\left(8.3145 \mathrm{~J ~K mol}^{-1}\right)(650 \mathrm{~K})}\right)\nonumber$
$\frac{q_{\mathrm{H}_{2}}}{V}=\left(1.9468 \times 10^{33} \mathrm{~m}^{-3}\right)(3.71)(1.41)\left(5.0942 \times 10^{34}\right) \nonumber$
For $\ce{Cl2}$:
$\dfrac{q_{\mathrm{Cl}_{2}}}{V} =\left(\frac{2 \pi\left(4.2700 \times 10^{-24} \mathrm{~kg}\right)\left(1.38065 \times 10^{-23} \mathrm{JK}^{-1}\right)(650 K)}{\left(6.62607 \times 10^{-34} \mathrm{Js}^{2}\right.}\right)^{3 / 2} \times \frac{650 \mathrm{~K}}{(2)(0.351 \mathrm{~K})} \times \frac{1}{1-\exp \left(\dfrac{-6125 \mathrm{~K}}{650 \mathrm{~K}}\right)} \times(1) \exp \left(\dfrac{239,000 \mathrm{Jmol}^{-1}}{\left(8.3145 \mathrm{~J~K~mol}^{-1}\right)(650 \mathrm{~K})}\right)\nonumber$
$\frac{q_{\mathrm{Cl}_{2}}}{V}=\left(5.4839 \times 10^{35} \mathrm{~m}^{-3}\right)(925.9)(1.00)\left(1.606 \times 10^{19}\right) \nonumber$
Combining the terms from each species, the following expression is obtained:
\begin{align*} K_{c} &=\frac{\left(1.9468 \times 10^{33} \mathrm{~m}^{-3}\right)^{2}}{\left(1.9468 \times 10^{33} \mathrm{~m}^{-3}\right)\left(5.4839 \times 10^{35} \mathrm{~m}^{-3}\right)} \times \frac{(42.76)^{2}}{(3.71)(925.9)} \times \frac{(1.0013)^{2}}{(1.41)(1.00)} \times \frac{\left(2.3419 \times 10^{34}\right)^{2}}{\left(1.606 \times 10^{19}\right)\left(5.0942 \times 10^{34}\right)} \[4pt] &=(0.003550)(0.1333)(0.711)\left(6.7037 \times 10^{14}\right) \[4pt] & =\left(2.26 \times 10^{11}\right) \end{align*} \nonumber
At 650 K, the reaction between $\ce {H2 (g)}$ and $\ce{Cl2(g)}$ proceeds spontaneously towards the products. From a statistical mechanics point of view, the product $\ce{HCl(g)}$ molecule has more states accessible to it than the reactant species. The spontaneity of this reaction is largely due to the electronic partition function: two very strong H—Cl bonds are formed at the expense of a very strong H—H bond and a relatively weak Cl—Cl bond.
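The arithmetic in this example is mechanical enough to script. The sketch below (our own illustration, not part of the original text) evaluates $q/V$ for a diatomic ideal gas from its mass, characteristic temperatures, symmetry number, ground-state degeneracy and $D_0$, following the factorized form of Equation \ref{eqB}, and combines the results into $K_c$ as in Equation \ref{Kc}; it shows the bookkeeping only and does not reproduce the numbers above.

```python
import math

kB = 1.380649e-23     # J/K
h = 6.62607015e-34    # J s
R = 8.314             # J K^-1 mol^-1

def q_over_V(mass_kg, theta_rot, theta_vib, sigma, D0_J_per_mol, T, g1=1):
    """q/V for a diatomic: translational * rotational * vibrational * electronic."""
    trans = (2 * math.pi * mass_kg * kB * T / h**2) ** 1.5
    rot = T / (sigma * theta_rot)
    vib = 1.0 / (1.0 - math.exp(-theta_vib / T))
    elec = g1 * math.exp(D0_J_per_mol / (R * T))
    return trans * rot * vib * elec

def Kc(qV_products, qV_reactants):
    """K_c as a ratio of (q/V)^nu factors, products over reactants.

    Each argument is a list of (q/V, nu) pairs.
    """
    num = math.prod(qv**nu for qv, nu in qV_products)
    den = math.prod(qv**nu for qv, nu in qV_reactants)
    return num / den

# Usage pattern for H2 + Cl2 <=> 2 HCl at temperature T:
# K = Kc([(q_over_V(m_HCl, ...), 2)],
#        [(q_over_V(m_H2, ...), 1), (q_over_V(m_Cl2, ...), 1)])
```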
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(LibreTexts)/26%3A_Chemical_Equilibrium/26.08%3A_Equilibrium_Constants_in_Terms_of_Partition_Functions.txt
|
Real molecules can be significantly more complicated than the simple models we often use to approximate them. To obtain a high degree of accuracy, calculations must include complex corrections that can significantly decrease their efficiency. For this reason, partition functions and other thermodynamic data are extensively tabulated; placing these data in tables makes them readily accessible. The tables combine experimental data with theoretical calculations to give a collection of thermodynamic properties of substances.
The tabulated thermochemical properties of a substance are often given in JANAF (Joint Army-Navy-Air Force) tables. NIST maintains a large database of JANAF tables for many substances. Consider the JANAF table for methane ($\ce{CH4}$). The table contains the constant-pressure heat capacity ($C_P$), standard state entropy ($S^\circ$), standard state Gibbs energy ($G^\circ$), standard state enthalpy ($H^\circ$), standard state enthalpy of formation ($\Delta_fH^\circ$), standard state Gibbs energy of formation ($\Delta_fG^\circ$), and the equilibrium constant of formation expressed as a log value ($\log K_f$). $G^\circ$ and $H^\circ$ are expressed relative to the standard molar enthalpy at 298.15 K, $H^\circ(298.15\;\text{K})$.
26.10: Real Gases Are Expressed in Terms of Partial Fugacities
The relationship for chemical potential
$\mu = \mu^o + RT \ln \left( \dfrac{p}{p^o} \right) \nonumber$
was derived assuming ideal gas behavior. But for real gases that deviate widely from ideal behavior, the expression has only limited applicability. In order to use the simple expression for real gases, a “fudge” factor called fugacity is introduced. Using fugacity instead of pressure, the chemical potential expression becomes
$\mu = \mu^o + RT \ln \left( \dfrac{f}{f^o} \right) \nonumber$
where $f$ is the fugacity. Fugacity is related to pressure, but contains all of the deviations from ideality within it. To see how it is related to pressure, consider that a change in chemical potential for a single component system can be expressed as
$d\mu = Vdp - SdT \nonumber$
and so
$\left(\dfrac{\partial \mu}{\partial p} \right)_T = V \label{eq3}$
Differentiating the expression for chemical potential above with respect to pressure at constant temperature results in
$\left(\dfrac{\partial \mu}{\partial p} \right)_T = \left \{ \dfrac{\partial}{\partial p} \left[ \mu^o + RT \ln \left( \dfrac{f}{f^o} \right) \right] \right \} \nonumber$
which simplifies to
$\left(\dfrac{\partial \mu}{\partial p} \right)_T = RT \left[ \dfrac{\partial \ln (f)}{\partial p} \right]_T = V \nonumber$
Multiplying both sides by $p/RT$ gives
$p\left[ \dfrac{\partial \ln (f)}{\partial p} \right]_T = \dfrac{pV}{RT} =Z \nonumber$
where $Z$ is the compression factor as discussed previously. Now, we can use the expression above to obtain the fugacity coefficient $\gamma$, as defined by
$f= \gamma p \nonumber$
Taking the natural logarithm of both sides yields
$\ln f= \ln \gamma + \ln p \nonumber$
or
$\ln \gamma = \ln f - \ln p \nonumber$
Using some calculus and substitutions from above,
$\int \left(\dfrac{\partial \ln \gamma}{\partial p} \right)_T dp = \int \left(\dfrac{\partial \ln f}{\partial p} - \dfrac{\partial \ln p }{\partial p} \right)_T dp \nonumber$
$= \int \left(\dfrac{Z}{p} - \dfrac{1}{p} \right)_T dp \nonumber$
Finally, integrating from $0$ to $p$ yields
$\ln \gamma = \int_0^{p} \left( \dfrac{ Z-1}{p}\right)_T dp \nonumber$
If the gas behaves ideally, $\gamma = 1$. In general, this will be the limiting value as $p \rightarrow 0$ since all gases behave ideal as the pressure approaches 0. The advantage to using the fugacity in this manner is that it allows one to use the expression
$\mu = \mu^o + RT \ln \left( \dfrac{f}{f^o}\right) \nonumber$
to calculate the chemical potential, ensuring that Equation \ref{eq3} holds even for gases that deviate from ideal behavior!
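As a concrete illustration, the sketch below evaluates the fugacity coefficient numerically from $\ln \gamma = \int_0^{p} (Z-1)/p'\,dp'$. The compressibility factor is modeled here with a truncated virial expansion, $Z = 1 + Bp/RT$, and the value of $B$ is an assumed illustrative second virial coefficient, not data from this section.

```python
import numpy as np

R = 8.3145       # J mol^-1 K^-1
T = 298.15       # K
B = -1.6e-4      # m^3 mol^-1, assumed illustrative second virial coefficient

def Z(p):
    """Compressibility factor from a truncated, pressure-explicit virial expansion."""
    return 1.0 + B * p / (R * T)

def ln_gamma(p, n=2000):
    """Composite-trapezoid evaluation of ln(gamma) = integral_0^p (Z-1)/p' dp'.
    The integrand stays finite as p' -> 0 because Z - 1 vanishes linearly in p'."""
    ps = np.linspace(1e-6, p, n)
    integrand = (Z(ps) - 1.0) / ps
    dp = ps[1] - ps[0]
    return float(np.sum(0.5 * (integrand[:-1] + integrand[1:]) * dp))

p = 10.0e5  # 10 bar expressed in Pa
gamma = np.exp(ln_gamma(p))
print(f"fugacity coefficient at 10 bar: {gamma:.4f}")
print(f"fugacity f = gamma * p = {gamma * p / 1e5:.3f} bar")
```

For this simple model the integral reduces to $Bp/RT$, so the numerical result can be checked by hand; the same routine works unchanged for any other $Z(p)$.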
26.11: Thermodynamic Equilibrium Constants Are Expressed in Terms of Activities
The above is a general principle that can be extended to other concentration units and to liquid solutions, ideal or not. In non-ideal systems we can replace
$\mu_i = \mu_i^o + RT\ln \dfrac{P_i}{P_i^o} \nonumber$
by:
$\mu_i = \mu_i^o + RT\ln a_i \nonumber$
and follow the same procedure as above. Instead of an expression for $K$ involving pressures or concentrations, it now reads in terms of activities:
$K = \dfrac{a_{eq,Y}^{\nu_Y}\, a_{eq,Z}^{\nu_Z}}{a_{eq,A}^{\nu_A}\, a_{eq,B}^{\nu_B}} \nonumber$
For each species we can write the activity as:
$a_i = \gamma_i \dfrac{c_i}{c_i^o} \nonumber$
Here $c_i^o$ is unity in whatever concentration measure we choose; its function is to cancel the dimension of $c_i$.
With this split into three factors we can write $K$ as three factors as well:
$K = \dfrac{K_\gamma K_c}{K_{c^o}} \nonumber$
$K_\gamma = \dfrac{\gamma_{eq,Y}^{\nu_Y}\, \gamma_{eq,Z}^{\nu_Z}}{\gamma_{eq,A}^{\nu_A}\, \gamma_{eq,B}^{\nu_B}} \nonumber$
$K_c = \dfrac{c_{eq,Y}^{\nu_Y}\, c_{eq,Z}^{\nu_Z}}{c_{eq,A}^{\nu_A}\, c_{eq,B}^{\nu_B}} \nonumber$
$K_{c^o} = \dfrac{(c^o)^{\nu_Y}\,(c^o)^{\nu_Z}}{(c^o)^{\nu_A}\,(c^o)^{\nu_B}} \nonumber$
The last factor is unity; it cancels the dimensions of $K_c$ and is often omitted. The factor $K_\gamma$ is unity if the solution is ideal. Obviously, for ionic solutions that is seldom the case.
Activities of pure condensed phases
Sometimes one of the reactants or products is a pure solid (a precipitate) or a pure liquid (e.g., more solvent). What activity should we assign in such a case?
We start by choosing a suitable standard state, say the pure compound at 1 bar and the temperature of interest. We then have:
$\mu = \mu^o \nonumber$
but also:
$\mu = \mu^o + RT\ln a \nonumber$
So $a=1$ at standard conditions.
Any change can be written as
$d\mu = RT\, d\ln a \nonumber$
We can study the pressure dependence by considering:
$\left(\dfrac{\partial \mu}{\partial P}\right)_T = \bar{V} \nonumber$
where $\bar{V}$ is the partial molar volume. For a solid or liquid, $\bar{V}$ is relatively small and nearly constant. Thus we can write:
$d\mu = \bar{V}\, dP \nonumber$
$RT\, d\ln a = \bar{V}\, dP \nonumber$
$d\ln a = \dfrac{\bar{V}\, dP}{RT} \nonumber$
Upon integration from 1 bar to a different pressure $P'$ we find
$\ln a' = \dfrac{(P'-1)\,\bar{V}}{RT} \nonumber$
Example 26-12 (S&McQ, p. 1083) shows that for graphite the activity is only 1.01 at 100 bar, so the activity is not very pressure dependent. If pure condensed compounds are involved in reactions, their activity can usually be taken as unity.
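A minimal numerical check of this weak pressure dependence is sketched below. The molar volume of graphite used (about 5.3 cm³/mol, from M = 12.01 g/mol and an assumed density of ~2.26 g/cm³) is an illustrative value, so the output only approximately reproduces the 1.01 quoted above.

```python
import math

R, T = 8.3145, 298.15              # J K^-1 mol^-1, K
Vbar = 12.011e-3 / 2260.0          # molar volume in m^3/mol (assumed density 2260 kg/m^3)

def activity(P_bar):
    """Activity of the pure solid at P_bar (in bar), relative to the 1 bar standard state."""
    dP = (P_bar - 1.0) * 1.0e5     # pressure difference converted to Pa
    return math.exp(dP * Vbar / (R * T))

for P in (10, 100, 1000):
    print(f"P = {P:5d} bar  ->  a = {activity(P):.3f}")
```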
This is also in line with what we said previously about the solvent following Raoult's law. In the limit of the solution approaching pure solvent, its vapor pressure $P$ goes to $P^*$. As the activity is defined as $P/P^*$, this converges to unity. If a reaction produces more solvent molecules, we can usually take their activity equal to one to very good approximation for dilute solutions, even if they are already non-ideal.
The fact that $a=1$ for pure condensed phases has an important consequence for reactions (in general: processes) that only involve such phases. If all activities are unity, $Q=1$ and $\ln Q=0$, which means that $\Delta_rG = \Delta_rG^o + 0$. Thus $\Delta_rG$ can only be zero (i.e., an equilibrium achieved) if $\Delta_rG^o$ happens to be zero, which is generally not the case. In fact, there can only be an equilibrium at one specific temperature:
$\Delta_rG^o = \Delta_rH^o - T\Delta_rS^o = 0 \nonumber$
$T_{equilibrium} = \dfrac{\Delta_rH^o}{\Delta_rS^o} \nonumber$
If the process is the transformation of a solid into a liquid, this temperature is the well-known melting point. At temperatures other than 0 °C, only one phase can exist: either ice or water. If the other is present, that is an unstable condition and it will transform entirely into the stable form. In other words, the process will go to completion, not equilibrium. Only at 0 °C can the two coexist in equilibrium. This holds for all melting points, but it also holds, e.g., for a solid-solid chemical reaction that only produces another solid.
Another way of expressing the above is to say that in order to have equilibrium at a series of temperatures, one needs at least one species involved for which activity depends on composition, e.g. a dilute solute or a gas.
26.12: Activities are Important for Ionic Species
Weak electrolytes
We have seen that strong electrolytes are non-ideal even at tiny concentrations and that even the Debye-Hückel theory only allowed us to work at very small concentrations of strong electrolytes. The same problems are encountered with weak electrolytes but they are compounded by the equilibrium that is inherent to their solutions. Take acetic acid as in household vinegar:
$HAc + H_2O \rightleftharpoons Ac^- + H_3O^+ \nonumber$
We can write the equilibrium constant as:
$K = \dfrac{a_{Ac^-} a_{H_3O^+}}{a_{HAc}a_{H_2O}} \nonumber$
As
$a_{H2O}=1 \nonumber$
$K = \dfrac{a_{Ac^-} a_{H_3O^+}}{a_{HAc}} \nonumber$
At an initial concentration of say 0.1 mol/l the activity coefficient for HAc (being a neutral species) is essentially one, but for the other species we should write:
$a_{Ac^-}\, a_{H_3O^+} = [Ac^-][H_3O^+]\,\gamma_\pm^2 \nonumber$
The value of $\gamma_\pm^2$ is not unity, and this will affect the equilibrium. As a first approximation we will ignore that fact and write:
$K = \dfrac{ [Ac^-][H_3O^+]}{[HAc]} = 1.74 \times 10^{-5} \nonumber$
(We can either drop the dimensions of mol/L or include them, depending on whether we are talking about $K$ or $K_c$.)
At equilibrium we would get:
$K = \dfrac{x^2}{0.1-x} =1.74 \times 10^{-5} \nonumber$
which yields
$x=1.31 \times 10^{-3}\ \text{mol/L} \nonumber$
We can now use Debye-Hückel theory to estimate the mean ionic activity coefficient, but the concentrations are already too high for that. Instead, we will use one of its extensions:
$\ln \gamma_\pm = - 1.173\,|z_+z_-|\, \dfrac{\sqrt{I}}{1+\sqrt{I}} \nonumber$
The ionic strength is $I = \frac{1}{2}\left([Ac^-]+[H_3O^+]\right) = x$.
Although we do not know $x$ precisely, the value we have is at least a starting point. We can use it to calculate a first approximation for $\gamma_\pm^2$; we find a value of 0.921. We then divide the value of $K$ by this value and recalculate $x$: it changes from $x = 1.31 \times 10^{-3}$ mol/L to $x = 1.365 \times 10^{-3}$ mol/L. We can repeat this process until the value of $x$ no longer changes appreciably (converges). This process is called iteration. The final value is about $x = 1.37 \times 10^{-3}$ mol/L.
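The iteration just described is easy to automate. The sketch below is a minimal Python version using the $K$, initial concentration, and extended Debye-Hückel expression from this section; the convergence tolerance and iteration limit are arbitrary choices. The same pattern adapts directly to the solubility-product example that follows.

```python
import math

K = 1.74e-5      # thermodynamic equilibrium constant for acetic acid
c0 = 0.1         # initial HAc concentration, mol/L

def gamma_pm(I):
    """Extended Debye-Hueckel estimate of the mean ionic activity coefficient
    for a 1:1 electrolyte in water at 25 C."""
    return math.exp(-1.173 * math.sqrt(I) / (1.0 + math.sqrt(I)))

def solve_x(K_eff):
    """Positive root of x^2/(c0 - x) = K_eff."""
    return (-K_eff + math.sqrt(K_eff**2 + 4.0 * K_eff * c0)) / 2.0

x = solve_x(K)                   # ideal-solution starting point (~1.31e-3 mol/L)
for _ in range(50):
    g2 = gamma_pm(x) ** 2        # ionic strength I = x for this 1:1 electrolyte
    x_new = solve_x(K / g2)      # concentration quotient must equal K / gamma^2
    if abs(x_new - x) < 1e-10:
        x = x_new
        break
    x = x_new

print(f"gamma_pm^2 = {g2:.3f},  x = [H3O+] = {x:.3e} mol/L")
```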
As can be seen, the non-ideality does change the values from the ones you would have calculated before entering this course, but the difference is not staggering, at least if no other solutes are present.
Question: does the equilibrium change if we add 0.5 mol/l NaCl to the solution?
Solubility products
For solubility products the differences can be more important. Take $\ce{BaF2}$ in water, with $K = 1.7 \times 10^{-6}$.
$K = \gamma_\pm^3\, [Ba^{2+}][F^-]^2 \nonumber$
Let's start by assuming ideality, setting $\gamma_\pm^3 = 1$, and letting $[Ba^{2+}] = x$, so that
$[F^-]=2x \nonumber$
Thus $K = x(4x^2)$, so $x = 7.52 \times 10^{-3}$ mol/L.
The ionic strength is
$I = \dfrac{1}{2}\left[(+2)^2x+(-1)^2(2x)\right] = 3x \nonumber$
The extended Debye-Hückel theory gives $\gamma_\pm = 0.736$; this raises $x$ to 0.0102. Repeating the process a few times, we find $x = 0.011$. This means an increase of about 45% due to non-ideal behavior for this sparingly soluble salt. Again, the presence of other solutes may induce larger effects because they add to the ionic strength.
Exercise
Even pure water contains $\ce{OH^-}$ and $\ce{H3O^+}$ ions. The $K$ for this equilibrium is $10^{-14}$. How does the pH change if we add 0.5 mol/L of a strong electrolyte $\ce{M^{2+}X^{2-}}$?
26.13: Homework Problems
In the mid-1920s the German physicist Werner Heisenberg showed that if we try to locate an electron within a region $\Delta x$, e.g., by scattering light from it, some momentum is transferred to the electron, and it is not possible to determine exactly how much momentum is transferred, even in principle. Heisenberg showed that consequently there is a relationship between the uncertainty in position $\Delta x$ and the uncertainty in momentum $\Delta p$.
$\Delta p \Delta x \ge \frac {\hbar}{2} \label {5-22}$
You can see from Equation $\ref{5-22}$ that as $Δp$ approaches 0, $Δx$ must approach ∞, which is the case of the free particle discussed previously.
This uncertainty principle, which also is discussed in Chapter 4, is a consequence of the wave property of matter. A wave has some finite extent in space and generally is not localized at a point. Consequently there usually is significant uncertainty in the position of a quantum particle in space. Activity 1 at the end of this chapter illustrates that a reduction in the spatial extent of a wavefunction to reduce the uncertainty in the position of a particle increases the uncertainty in the momentum of the particle. This illustration is based on the ideas described in the next section.
Exercise $1$
Compare the minimum uncertainty in the positions of a baseball (mass = 140 g) and an electron, each with a speed of 91.3 miles per hour, which is characteristic of a reasonable fastball, if the standard deviation in the measurement of the speed is 0.1 mile per hour. Also compare the wavelengths associated with these two particles. Identify the insights that you gain from these comparisons.
• 27.1: The Average Translational Kinetic Energy of a Gas
The gas laws were derived from empirical observations. Connecting these laws to fundamental properties of gas particles is subject of great interest. The kinetic molecular theory is one such approach. In its modern form, the kinetic molecular theory of gases is based on five basic postulates. This section will describe the development of the ideal gas law using these five postulates.
• 27.2: The Gaussian Distribution of One Component of the Molecular Velocity
The distribution function for one component of the molecular velocity is a Gaussian curve. Because the molecule can move in both a positive or negative direction along an axis, the range of the distribution function is negative infinity to positive infinity.
• 27.3: The Distribution of Molecular Speeds is Given by the Maxwell-Boltzmann Distribution
If we were to plot the number of molecules whose velocities fall within a series of narrow ranges, we would obtain a slightly asymmetric curve known as a velocity distribution. The peak of this curve would correspond to the most probable velocity. This velocity distribution curve is known as the Maxwell-Boltzmann distribution, but is frequently referred to only by Boltzmann's name.
• 27.4: The Frequency of Collisions with a Wall
The frequency with which a gas collides with a wall is dependent upon the number density (the number of molecules per volume) and the average molecular speed.
• 27.5: The Maxwell-Boltzmann Distribution Has Been Verified Experimentally
The Maxwell-Boltzmann distribution has been verified experimentally by a device called a velocity selector, which is essentially a series of spinning wheels with a hole through which the gas is effused. This ensures that only gas particles with a certain velocity will pass through all the holes as the wheels are spun at various rates. Thus it is possible to count the number of particles with various velocities and show that, indeed, they do satisfy the Maxwell-Boltzmann distribution.
• 27.6: Mean Free Path
The mean free path is the distance a particle will travel, on average, before experiencing a collision event. This is defined as the product of the speed of a particle and the time between collisions.
• 27.7: Rates of Gas-Phase Chemical Reactions
The rate law of a gas-phase reaction can be derived using collision theory. The rate of the reaction depends on the rate of collisions with sufficient kinetic energy that exceeds the activation energy.
• 27.E: The Kinetic Theory of Gases (Exercises)
27: The Kinetic Theory of Gases
The laws that describe the behavior of gases were well established long before anyone had developed a coherent model of the properties of gases. In this section, we introduce a theory that describes why gases behave the way they do. The theory we introduce can also be used to derive laws such as the ideal gas law from fundamental principles and the properties of individual particles.
One key property of the individual particles is their velocity. However, in a sample of many gas particles, the particles will likely have various velocities. Rather than list the velocity of each individual gas molecule, we can combine these individual velocities in several ways to obtain "collective" velocities that describe the sample as a whole.
Table 27.1.1: Kinetic Properties of a Thermalized Ensemble (i.e., follows Maxwell-Boltzmann Distribution)
| Property | Speed | Kinetic Energy |
|---|---|---|
| Most probable | $\sqrt{\dfrac{2k_BT}{m}}$ | $k_BT$ |
| Average | $\sqrt{\dfrac{8k_BT}{\pi m}}$ | $\dfrac{4k_BT}{\pi}$ |
| Root-mean-square | $\sqrt{\dfrac{3k_BT}{m}}$ | $\dfrac{3}{2} k_BT$ |
In the following example, these three collective velocities are defined and calculated for a sample of gas consisting of only eight molecules.
Example 27.1.1 : A Gas Sample with Few Molecules
The speeds of eight molecules were found to be 1.0, 4.0, 4.0, 6.0, 6.0, 6.0, 8.0, and 10.0 m/s. Calculate their average speed ($v_{\rm avg}$), root-mean-square speed ($v_{\rm rms}$), and most probable speed ($v_{\rm mp}$).
Solution
Start with definitions:
• average speed ($v_{\rm avg}$) = the sum of all the speeds divided by the number of molecules
• root-mean square speed ($v_{\rm rms}$) = the square root of the sum of the squared speeds divided by the number of molecules
• most probable speed ($v_{\rm mp}$) = the speed at which the greatest number of molecules is moving
The average speed:
\begin{align*} v_{\rm avg} &=\rm\dfrac{(1.0+4.0+4.0+6.0+6.0+6.0+8.0+10.0)\;m/s}{8} \\[4pt] &=5.6\;m/s \end{align*} \nonumber
The root-mean square speed:
\begin{align*}v_{\rm rms} &=\rm\sqrt{\dfrac{(1.0^2+4.0^2+4.0^2+6.0^2+6.0^2+6.0^2+8.0^2+10.0^2)\;m^2/s^2}{8}} \\[4pt] &=6.2\;m/s\end{align*} \nonumber
The most probable speed:
Of the eight molecules, three have speeds of 6.0 m/s, two have speeds of 4.0 m/s, and the other three molecules have different speeds. Hence
$v_{\rm mp}=6.0\, m/s. \nonumber$
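The same three quantities can be computed directly in a few lines. The sketch below (Python, with NumPy assumed available) simply re-does the arithmetic of this example.

```python
import numpy as np

speeds = np.array([1.0, 4.0, 4.0, 6.0, 6.0, 6.0, 8.0, 10.0])  # m/s

v_avg = speeds.mean()                       # average speed
v_rms = np.sqrt((speeds**2).mean())         # root-mean-square speed
vals, counts = np.unique(speeds, return_counts=True)
v_mp = vals[counts.argmax()]                # most probable (most frequent) speed

print(f"v_avg = {v_avg:.1f} m/s, v_rms = {v_rms:.1f} m/s, v_mp = {v_mp:.1f} m/s")
# -> v_avg = 5.6 m/s, v_rms = 6.2 m/s, v_mp = 6.0 m/s
```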
Using expressions for $v_{mp}$, $v_{ave}$, or $v_{rms}$, it is fairly simple to derive expressions for kinetic energy from the expression
$E_{kin} = \dfrac{1}{2} mv^2 \nonumber$
It is important to remember that there will be a full distribution of molecular speeds in a thermalized sample of gas. Some molecules will be traveling faster and some more slowly. It is also important to recognize that the most probable, average, and RMS kinetic energy terms that can be derived from the kinetic molecular theory do not depend on the mass of the molecules (Table 27.1.1). As such, it can be concluded that the average kinetic energy of the molecules in a thermalized sample of gas depends only on the temperature. However, the average speed depends on the molecular mass. So, for a given temperature, light molecules will travel faster on average than heavier molecules.
The calculations carried out in Example 27.1.1 become cumbersome as the number of molecules in the sample of gas increases. Thus, a more efficient way to determine the various collective velocities for a gas sample containing a large number of molecules is required.
A Molecular Description of Pressure and Molecular Speed
The kinetic molecular theory of gases explains the laws that describe the behavior of gases. Developed during the mid-19th century by several physicists, including the Austrian Ludwig Boltzmann (1844–1906), the German Rudolf Clausius (1822–1888), and the Scotsman James Clerk Maxwell (1831–1879), this theory is based on the properties of individual particles as defined for an ideal gas and the fundamental concepts of physics. Thus the kinetic molecular theory of gases provides a molecular explanation for observations that led to the development of the ideal gas law. The kinetic molecular theory of gases is based on the following five postulates:
1. A gas is composed of a large number of particles called molecules (whether monatomic or polyatomic) that are in constant random motion.
2. Because the distance between gas molecules is much greater than the size of the molecules, the volume of the molecules is negligible.
3. Intermolecular interactions, whether repulsive or attractive, are so weak that they are also negligible.
4. Gas molecules collide with one another and with the walls of the container, but these collisions are perfectly elastic; that is, they do not change the average kinetic energy of the molecules.
5. The average kinetic energy of the molecules of any gas depends on only the temperature, and at a given temperature, all gaseous molecules have exactly the same average kinetic energy.
Figure 27.1.1 : Visualizing molecular motion. Molecules of a gas are in constant motion and collide with one another and with the container wall.
Although the molecules of real gases have nonzero volumes and exert both attractive and repulsive forces on one another, for the moment we will focus on how the kinetic molecular theory of gases relates to the properties of gases we have been discussing. In Topic 1C, we explain how this theory must be modified to account for the behavior of real gases.
Postulates 1 and 4 state that gas molecules are in constant motion and collide frequently with the walls of their containers. The collision of molecules with their container walls results in a momentum transfer (impulse) from molecules to the walls (Figure 27.1.2 ).
Figure 27.1.2 : Note: In this figure, the symbol $u$ is used to represent velocity. In the rest of this text, velocity will be represented with the symbol $v$. Momentum transfer (Impulse) from a molecule to the container wall as it bounces off the wall. Momentum transfer ($\Delta \rho_x$) for an elastic collision is equal to m$\Delta v_x$, where m is the mass of the molecule and $\Delta v_x$ is the change in the $x$ component of the molecular velocity ($v_{x_{final}}-v_{x_{initial}})$.The wall is perpendicular to $x$ axis. Since the collisions are elastic, the molecule bounces back with the same velocity in the opposite direction, and $\Delta v_x$ equals $2v_x$.
The momentum transfer to the wall perpendicular to $x$ axis as a molecule with an initial velocity $v_x$ in the $x$ direction hits is expressed as:
$\rm momentum\; transfer_x\;= \Delta \rho_x = m\Delta v_x = 2mv_x \label{1.2.1}$
The collision frequency, a number of collisions of the molecules to the wall per unit area and per second, increases with the molecular speed and the number of molecules per unit volume.
$f\propto (v_x) \times \Big(\dfrac{N}{V}\Big) \label{1.2.2}$
The pressure the gas exerts on the wall is expressed as the product of impulse and the collision frequency.
$P\propto (2mv_x)\times(v_x)\times\Big(\dfrac{N}{V}\Big)\propto \Big(\dfrac{N}{V}\Big)mv_x^2 \label{1.2.3}$
At any instant, however, the molecules in a gas sample are traveling at different speeds. Therefore, we must replace $v_x^2$ in the expression above with the average value of $v_x^2$, which is denoted by $\bar{v_x^2}$. The overbar designates the average value over all molecules.
The exact expression for pressure is given as :
$P=\dfrac{N}{V}m\bar{v_x^2} \label{1.2.4}$
Finally, we must consider that there is nothing special about $x$ direction. We should expect that $\bar{v_x^2}= \bar{v_y^2}=\bar{v_z^2}=\dfrac{1}{3}\bar{v^2}$. Here the quantity $\bar{v^2}$ is called the mean-square speed defined as the average value of square-speed ($v^2$) over all molecules. Since $v^2=v_x^2+v_y^2+v_z^2$ for each molecule, $\bar{v^2}=\bar{v_x^2}+\bar{v_y^2}+\bar{v_z^2}$. By substituting $\dfrac{1}{3}\bar{v^2}$ for $\bar{v_x^2}$ in the expression above, we can get the final expression for the pressure:
$P=\dfrac{1}{3}\dfrac{N}{V}m\bar{v^2} \label{1.2.5}$
Because volumes and intermolecular interactions are negligible, postulates 2 and 3 state that all gaseous particles behave identically, regardless of the chemical nature of their component molecules. This is the essence of the ideal gas law, which treats all gases as collections of particles that are identical in all respects except mass. Postulate 2 also explains why it is relatively easy to compress a gas; you simply decrease the distance between the gas molecules.
Postulate 5 provides a molecular explanation for the temperature of a gas. Postulate 5 refers to the average translational kinetic energy of the molecules of a gas,
$\epsilon = m\bar{v^2}/2\label{1.2.6}$
By rearranging equation $\ref{1.2.5}$ and substituting in equation $\ref{1.2.6}$, we obtain
$PV = \dfrac{1}{3} N m \bar{v^2} = \dfrac{2}{3} N \epsilon \label{1.2.7}$
The 2/3 factor in the proportionality reflects the fact that the velocity component in each of the three directions contributes $\frac{1}{2}kT$ to the kinetic energy of the particle. The average translational kinetic energy is directly proportional to temperature:
$\epsilon = \dfrac{3}{2} kT \label{1.2.8}$
in which the proportionality constant $k$ is known as the Boltzmann constant. Substituting Equation $\ref{1.2.8}$ into Equation $\ref{1.2.7}$ yields
$PV = \left( \dfrac{2}{3}N \right) \left( \dfrac{3}{2}kT \right) =NkT \label{1.2.9}$
The Boltzmann constant $k$ is just the gas constant per molecule, so if N is chosen as Avogadro's number, $N_A$, then $N_Ak$ is R, the gas constant per mole. Thus, for n moles of particles, the Equation $\ref{1.2.9}$ becomes
$PV = nRT \label{1.2.10}$
which is the ideal gas law.
As noted in Example 27.1.1 , the root-mean square speed ($v_{\rm rms}$) is the square root of the sum of the squared speeds divided by the number of particles:
$v_{\rm rms}=\sqrt{\bar{v^2}}=\sqrt{\dfrac{v_1^2+v_2^2+\cdots v_N^2}{N}} \label{1.2.11}$
where $N$ is the number of particles and $v_i$ is the speed of particle $i$.
The $v_{\rm rms}$ for a sample containing a large number of molecules can be obtained by combining equations $\ref{1.2.7}$ and $\ref{1.2.8}$ in a slightly different fashion than that used to obtain equation $\ref{1.2.10}$:
$PV=\dfrac{1}{3} N m \bar{v^2} = \dfrac{2}{3} N \epsilon \tag{27.1.10}$
$\epsilon = \dfrac{3}{2} kT \tag{27.1.11}$
$\dfrac{1}{3} N m \bar{v^2}= \left(\dfrac{2}{3}\right)\left(\dfrac{3}{2}\right) NkT \label{1.2.12}$
$\dfrac{1}{3} N m \bar{v^2}= NkT \label{1.2.13}$
$N m \bar{v^2}= 3NkT \label{1.2.14}$
If N is chosen to be Avogadro's number, $N_A$, then $N_Am = M$, the molar mass, and $N_Ak = R$, the gas constant per mole,
$\bar{v^2} = \dfrac{3RT}{M} \label{1.2.15}$
$v_{\rm rms}=\sqrt{\bar{v^2}}=\sqrt{\dfrac{3RT}{M}} \label{1.2.16}$
In Equation $\ref{1.2.16}$, $v_{\rm rms}$ has units of meters per second; consequently, the units of molar mass $M$ are kilograms per mole, temperature $T$ is expressed in kelvins, and the ideal gas constant $R$ has the value 8.3145 J/(K·mol). Equation $\ref{1.2.16}$ shows that $v_{\rm rms}$ of a gas is proportional to the square root of its Kelvin temperature and inversely proportional to the square root of its molar mass. The root-mean-square speed of a gas increases with increasing temperature. At a given temperature, heavier gas molecules have slower speeds than do lighter ones.
Example 27.1.2 :
What is the root mean-square speed for $\rm O_2$ molecules at 25ºC?
Given: Temperature in ºC, type of molecules, ideal gas constant
Asked for: $v_{\rm rms}$, the root mean-square speed
Strategy:
Convert temperature to kelvins:
$\rm T\; (in\; kelvin) = (25\;ºC + 273.15\;ºC)\dfrac{1\; K}{1\; ºC} = 298.15\; K$
Convert molar mass of $\rm O_2$ molecules to kg per mole:
$\rm M\; (in\; \dfrac{kg}{mole}) = 32.00\dfrac{g}{mole}\times\dfrac{1\; kg}{1000\; g}=0.03200\dfrac{kg}{mole}$
Use equation $\ref{1.2.16}$ to calculate the rms speed.
Solution
$v_{\rm rms_{\rm O_2}}= \sqrt{\dfrac{3\rm (8.3145\dfrac{J}{K·mole})(298.15\; K)}{\rm 0.03200\dfrac{kg}{mole}}}=482\dfrac{m}{s}$
Exercise 27.1.2
What is the root mean-square speed for $\rm Cl_2$ molecules at 25ºC?
$v_{\rm rms_{\rm Cl_2}}= 324\dfrac{m}{s}$
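A minimal Python sketch of Equation 1.2.16 applied to the example and exercise above; the molar masses used are the usual values for O₂ and Cl₂.

```python
import math

R = 8.3145   # J K^-1 mol^-1
T = 298.15   # K

def v_rms(M):
    """Root-mean-square speed for molar mass M in kg/mol."""
    return math.sqrt(3 * R * T / M)

for name, M in [("O2", 0.03200), ("Cl2", 0.07090)]:
    print(f"{name}: v_rms = {v_rms(M):.0f} m/s")
# O2 -> ~482 m/s, Cl2 -> ~324 m/s
```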
Many molecules, many velocities
At temperatures above absolute zero, all molecules are in motion. In the case of a gas, this motion consists of straight-line jumps whose lengths are quite great compared to the dimensions of the molecule. Although we can never predict the velocity of a particular individual molecule, the fact that we are usually dealing with a huge number of them allows us to know what fraction of the molecules have kinetic energies (and hence velocities) that lie within any given range.
The trajectory of an individual gas molecule consists of a series of straight-line paths interrupted by collisions. What happens when two molecules collide depends on their relative kinetic energies; in general, a faster or heavier molecule will impart some of its kinetic energy to a slower or lighter one. Two molecules having identical masses and moving in opposite directions at the same speed will momentarily come to rest at the moment of collision.
If we could measure the instantaneous velocities of all the molecules in a sample of a gas at some fixed temperature, we would obtain a wide range of values. A few would be zero, and a few would be very high velocities, but the majority would fall into a more or less well-defined range. We might be tempted to define an average velocity for a collection of molecules, but here we would need to be careful: molecules moving in opposite directions have velocities of opposite signs. Because the molecules in a gas are in random thermal motion, there will be just about as many molecules moving in one direction as in the opposite direction, so the velocity vectors of opposite signs would all cancel and the average velocity would come out to zero. Since this answer is not very useful, we need to do our averaging in a slightly different way.
The proper treatment is to average the squares of the velocities, and then take the square root of this value to obtain the root-mean-square velocity ($v_{\rm rms}$), which is what we developed above. This velocity describes the gas sample as a whole, but it does not tell us about the range of velocities possible, nor does it tell us the distribution of velocities. To obtain a more complete description of how many gas molecules are likely to be traveling at a given velocity range we need to make use of the Maxwell-Boltzmann distribution law.
Contributors and Attributions
• Tom Neils, Grand Rapids Community College (editing)
27.02: The Distribution of the Components of Molecular Speeds Are Described by a Gaussian Distribution
As was shown in section 27.1, the pressure of an ideal gas is given as the total force exerted per unit area
$P = \dfrac{F_{tot}}{A} = N_{tot} \left( \dfrac{mv_x^2}{V} \right) = \dfrac{N_{tot}m}{V} v_x^2 \nonumber$
The question then becomes how to deal with the velocity term. Initially, it was assumed that all of the molecules had the same velocity, and so the magnitude of the velocity in the x-direction was merely a function of the trajectory. However, real samples of gases comprise molecules with an entire distribution of molecular speeds and trajectories. To deal with this distribution of values, we replace $v_x^2$ with its average over all molecules, $\langle v_x^2 \rangle$.
$P = \dfrac{N_{tot}m}{V} \langle v_x^2 \rangle \nonumber$
The distribution function for velocities in the x direction, known as the Maxwell-Boltzmann distribution, is given by:
$f(v_x) = \underbrace{\sqrt{ \dfrac{m}{2\pi k_BT} }}_{\text{normalization term}} \underbrace{\exp \left(\dfrac{-mv_x^2}{2k_BT} \right)}_{\text{exponential term}}\label{27.2.1}$
This function has two parts: a normalization constant and an exponential term. The normalization constant is derived by noting that
$\int _{-\infty}^{\infty} f(v_x) dv_x =1 \label{27.2.2}$
Normalizing the Maxwell-Boltzmann Distribution
The Maxwell-Boltzmann distribution has to be normalized because it is a continuous probability distribution. As such, the sum of the probabilities for all possible values of vx must be unity. And since $v_x$ can take any value between -∞ and ∞, then Equation \ref{27.2.2} must be true. So if the form of $f(v_x)$ is assumed to be
$f(v_x) = N \exp \left(\dfrac{-mv_x^2}{2k_BT} \right) \nonumber$
The normalization constant $N$ can be found from
$\int_{-\infty}^{\infty} f(v_x) dv_x = \int_{-\infty}^{\infty} N \exp \left(\dfrac{-mv_x^2}{2k_BT} \right) dv_x =1 \nonumber$
The expression can be simplified by letting $\alpha = m/2k_BT$:
$N \int_{-\infty}^{\infty} \exp \left(-\alpha v_x^2 \right) dv_x =1 \nonumber$
A table of definite integrals says that
$\int_{-\infty}^{\infty} e^{- a x^2} dx = \sqrt{\dfrac{\pi}{a}} \nonumber$
So
$N \sqrt{\dfrac{\pi}{\alpha}} = 1 \nonumber$
Thus,
$N = \sqrt{\dfrac{\alpha}{\pi}} = \left( \dfrac{m}{2\pi k_BT} \right) ^{1/2} \nonumber$
And thus the normalized distribution function is given by
$f(v_x) = \left( \dfrac{m}{2\pi k_BT} \right) ^{1/2} \exp \left( -\dfrac{m v_x^2}{2 k_BT} \right) \label{MB}$
Calculating an Average from a Probability Distribution
Calculating an average for a finite set of data is fairly easy. The average is calculated by
$\bar{x} = \dfrac{1}{N} \sum_{i=1}^N x_i \nonumber$
But how does one proceed when the set of data is infinite? Or how does one proceed when all one knows are the probabilities for each possible measured outcome? It turns out that that is fairly simple too!
$\bar{x} = \sum_{i=1}^N x_i P_i \nonumber$
where $P_i$ is the probability of measuring the value $x_i$. This can also be extended to problems where the measurable properties are not discrete (like the numbers that result from rolling a pair of dice) but rather come from a continuous parent population. In this case, if the probability is of measuring a specific outcome, the average value can then be determined by
$\bar{x} = \int x P(x) dx \nonumber$
where $P(x)$ is the function describing the probability distribution, and with the integration taking place across all possible values that x can take.
Calculating the average velocity in the x direction
A value that is useful (and will be used in further developments) is the average velocity in the x direction. This can be derived using the probability distribution, as shown in the mathematical development box above. The average value of $v_x$ is given by
$\langle v_x \rangle = \int _{-\infty}^{\infty} v_x f(v_x) dv_x \nonumber$
This integral will, by necessity, be zero. This must be the case as the distribution is symmetric, so that half of the molecules are traveling in the +x direction, and half in the –x direction. These motions will have to cancel. So, a more satisfying result will be given by considering the magnitude of $v_x$, which gives the speed in the x direction. Since this cannot be negative, and given the symmetry of the distribution, the problem becomes
$\langle |v_x |\rangle = 2 \int _{0}^{\infty} v_x f(v_x) dv_x \nonumber$
In other words, we will consider only half of the distribution, and then double the result to account for the half we ignored.
For simplicity, we will write the distribution function as
$f(v_x) = N \exp(-\alpha v_x^2) \nonumber$
where
$N= \left( \dfrac{m}{2\pi k_BT} \right) ^{1/2} \nonumber$
and
$\alpha = \dfrac{m}{2k_BT}. \nonumber$
A table of definite integrals shows
$\int_{0}^{\infty} x e^{- a x^2} dx = \dfrac{1}{2a} \nonumber$
so
$\langle v_x \rangle = 2N \left( \dfrac{1}{2\alpha}\right) = \dfrac{N}{\alpha}\nonumber$
Substituting our definitions for $N$ and $\alpha$ produces
$\langle v_x \rangle = \left( \dfrac{m}{2\pi k_BT} \right)^{1/2} \left( \dfrac{2 k_BT}{m} \right) = \left( \dfrac{2 k_BT}{ \pi m} \right)^{1/2} \nonumber$
This expression indicates the average speed for motion in one direction.
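As a numerical check, the sketch below integrates the normalized one-dimensional distribution to confirm the normalization and the value of $\langle |v_x| \rangle$. Nitrogen at 300 K is an assumed example gas, and SciPy's quad routine is used for the integration.

```python
import numpy as np
from scipy.integrate import quad

kB = 1.380649e-23           # J/K
T = 300.0                   # K
m = 0.0280134 / 6.02214e23  # mass of one N2 molecule, kg (assumed example)

def f(vx):
    """Normalized one-dimensional velocity distribution (Equation 27.2.1)."""
    return np.sqrt(m / (2 * np.pi * kB * T)) * np.exp(-m * vx**2 / (2 * kB * T))

norm = quad(f, -np.inf, np.inf)[0]
avg_abs_vx = 2 * quad(lambda vx: vx * f(vx), 0, np.inf)[0]

print(f"normalization integral = {norm:.6f}")                        # ~ 1
print(f"<|v_x|> numerical = {avg_abs_vx:.1f} m/s")
print(f"<|v_x|> analytic  = {np.sqrt(2 * kB * T / (np.pi * m)):.1f} m/s")
```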
It is important to note that equation $\ref{27.2.1}$ describes the distribution function for one component of the molecular velocity. Because a molecule is able to move in a positive or a negative direction, the range of one component of the molecular velocity ($v_x$ in this case) is $-\infty$ to $\infty$. This distribution of velocities is a Gaussian distribution of velocities, as shown in Figure 27.2.1 .
Figure 27.2.1 : Distribution of the x component of the velocity of a nitrogen molecule at 300 K and 1000 K. (CC BY-NC; Ümit Kaya via LibreTexts)
We will find in section 27.3 that the distribution of molecular speeds is not a Gaussian distribution.
Contributors and Attributions
• Patrick E. Fleming (Department of Chemistry and Biochemistry; California State University, East Bay)
• Tom Neils (Grand Rapids Community College, editing)
27.03: The Distribution of Molecular Speeds is Given by the Maxwell-Boltzmann Distribution
The Boltzmann Distribution
If we were to plot the number of molecules whose velocities fall within a series of narrow ranges, we would obtain a slightly asymmetric curve known as a velocity distribution. The peak of this curve would correspond to the most probable velocity. This velocity distribution curve is known as the Maxwell-Boltzmann distribution, but is frequently referred to only by Boltzmann's name. The Maxwell-Boltzmann distribution law was first worked out around 1860 by the great Scottish physicist, James Clerk Maxwell (1831-1879), who is better known for discovering the laws of electromagnetic radiation. Later, the Austrian physicist Ludwig Boltzmann (1844-1906) put the relation on a sounder theoretical basis and simplified the mathematics somewhat. Boltzmann pioneered the application of statistics to the physics and thermodynamics of matter and was an ardent supporter of the atomic theory of matter at a time when it was still not accepted by many of his contemporaries.
In section 27.2 we saw that the distribution function for molecular speeds in the x direction is given by:
$f(v_x) = \underbrace{\sqrt{ \dfrac{m}{2\pi k_BT} }}_{\text{normalization term}} \underbrace{\exp \left(\dfrac{-mv_x^2}{2k_BT} \right)}_{\text{exponential term}} \nonumber$
However, real gas samples have molecules not only with a distribution of molecular speeds but also a random distribution of directions. Using normal vector magnitude properties (or simply the Pythagorean theorem), it can be seen that
$\langle v^2 \rangle = \langle v_x^2 \rangle + \langle v_y^2 \rangle + \langle v_z^2 \rangle \nonumber$
Since the direction of travel is random, the velocity can have any component in the x, y, or z directions with equal probability. As such, the average of the square of each of the x, y, and z components of velocity should be the same. And so
$\langle v^2 \rangle = 3 \langle v_x^2 \rangle \nonumber$
Substituting this into the expression for pressure yields
$p =\dfrac{ N_{tot}m}{3V} \langle v^2 \rangle \nonumber$
All that remains is to determine the form of the distribution of velocity magnitudes the gas molecules can take. In his 1860 paper (Illustrations of the dynamical theory of gases. Part 1. On the motions and collisions of perfectly elastic spheres, 1860), Maxwell proposed a form for this distribution of speeds which proved to be consistent with observed properties of gases (such as their viscosities). He derived this expression based on a transformation of coordinate system from Cartesian coordinates ($x$, $y$, $z$) to spherical polar coordinates ($v$, $\theta$, $\phi$). In this new coordinate system, $v$ represents the magnitude of the velocity (or the speed) and all of the directional data is carried in the angles $\theta$ and $\phi$. The infinitesimal volume unit becomes
$dx\,dy\,dz\, = v^2 \sin( \theta) \,dv\,d\theta \,d\phi \nonumber$
Applying this transformation of coordinates Maxwell’s distribution took the following form
$f(v) = 4\pi v^2 \left(\dfrac{m}{2\pi k_BT} \right)^{3/2} \exp \left(\dfrac{-mv^2}{2k_BT} \right) \label{27.3.1}$
The Distribution of Speed over all Directions
The Distribution of Kinetic Energy in Three Dimensions
As noted above, the distribution function of molecular energies for one dimension is
$f(v_x) = \sqrt{ \dfrac{m}{2\pi k_BT} } \exp \left(\dfrac{-mv_x^2}{2k_BT} \right) \nonumber$
To obtain a three-dimensional probability distribution, you multiply the distribution function for each of the three dimensions so that
$f(v_x, v_y, v_z) = \left(\dfrac{m}{2\pi k_BT} \right)^{3/2} \exp \left(\dfrac{-mv^2}{2k_BT} \right) \nonumber$
given $v^2 = v_x^2 + v_y^2 + v_z^2 \nonumber$
The Conversion of Energy Distribution to Speed Distribution
To convert the three-dimensional energy distribution to a speed distribution over all space, the energy distribution must be summed over all directions. This sum is usually described by imagining a "velocity space" in spherical polar coordinates. As noted above, in this new coordinate system, $v$ represents the magnitude of the velocity (or the speed) and all of the directional data is carried in the angles $\theta$ and $\phi$. The infinitesimal volume unit becomes
$dx\,dy\,dz\, = v^2 \sin( \theta) \,dv\,d\theta \,d\phi \nonumber$
You integrate over $\theta$ and $\phi$ to sum over all space, thus
$f(v) = \left(\dfrac{m}{2\pi k_BT} \right)^{3/2} \exp \left(\dfrac{-mv^2}{2k_BT} \right) \underbrace {\int_0^{\pi} \int_0^{2\pi} v^2 \sin \theta d \phi d \theta}_{=4\pi v^2} \nonumber$
This equation is rearranged to give
$f(v) = 4\pi v^2 \left(\dfrac{m}{2\pi k_BT} \right)^{3/2} \exp \left(\dfrac{-mv^2}{2k_BT} \right) \nonumber$
This function can be thought of as having three basic parts to it: a normalization constant ($N$), a velocity dependence ($v^2$), and an exponential term that contains the kinetic energy ($½ mv^2$).
Because the function represents the fraction of molecules with the speed $v$, the sum of the fractions for all possible velocities must be unity. This sum can be calculated as an integral. The normalization constant ensures that
$\int_0^{\infty} f(v) dv = 1 \nonumber$
Thus the normalization constant is
$N =4\pi \sqrt{\left( \dfrac{m}{2\pi k_BT} \right) ^3 } \nonumber$
Velocity distributions depend on temperature and mass
Higher temperatures allow a larger fraction of molecules to acquire greater amounts of kinetic energy, causing the Boltzmann plots to spread out. Figure 27.3.2 shows how the Maxwell-Boltzmann distribution is affected by temperature. At lower temperatures, the molecules have less energy. Therefore, the speeds of the molecules are lower and the distribution has a smaller range. As the temperature of the molecules increases, the distribution flattens out. Because the molecules have greater energy at higher temperature, the molecules are moving faster.
Notice how the left ends of the plots are anchored at zero velocity (there will always be a few molecules that happen to be at rest.) As a consequence, the curves flatten out as the higher temperatures make additional higher-velocity states of motion more accessible. The area under each plot is the same for a constant number of molecules.
Calculating the Average Speed
Using the Maxwell distribution as a distribution of probabilities, the average molecular speed in a sample of gas molecules can be determined.
\begin{align} \langle v \rangle & = \int _{0}^{\infty} v \,f(v) dv \nonumber \\ & = \int _{0}^{\infty} v\, 4\pi \sqrt{\left( \dfrac{m}{2\pi k_BT} \right) ^3 } v^2 \exp \left( \dfrac{-m v^2}{2 k_BT} \right)\ dv \nonumber \\ & = 4\pi \sqrt{\left( \dfrac{m}{2\pi k_BT} \right) ^3 } \int _{0}^{\infty} v^3 \exp \left( \dfrac{-m v^2}{2 k_BT} \right)\ dv \nonumber \end{align} \nonumber
The following can be found in a table of integrals:
$\int_0^{\infty} x^{2n+1} e^{-ax^2} dx = \dfrac{n!}{2a^{n+1}} \nonumber$
So
$\langle v \rangle = 4\pi \sqrt{\left( \dfrac{m}{2\pi k_BT} \right) ^3 } \left[ \dfrac{1}{2 \left( \dfrac{m}{2 k_B T} \right) ^2 } \right] \nonumber$
Which simplifies to
$\langle v \rangle = \left( \dfrac{8 k_BT}{\pi m} \right) ^{1/2} \nonumber$
Note: the value of $\langle v \rangle$ is twice that of $\langle v_x \rangle$ which was derived in an earlier example!
$\langle v \rangle = 2\langle v_x \rangle \nonumber$
Example 27.3.1 :
What is the average value of the squared speed according to the Maxwell distribution law?
Solution:
\begin{align} \langle v^2 \rangle & = \int _{0}^{\infty} v^2 \,f(v) dv \nonumber \\ & = \int _{0}^{\infty} v^2\, 4\pi \sqrt{\left( \dfrac{m}{2\pi k_BT} \right) ^3 } v^2 \exp \left( \dfrac{-m v^2}{2 k_BT} \right)\ dv \nonumber \\ & = 4\pi \sqrt{\left( \dfrac{m}{2\pi k_BT} \right) ^3 } \int _{0}^{\infty} v^4 \exp \left( \dfrac{-m v^2}{2 k_BT} \right)\ dv \nonumber \end{align} \nonumber
A table of integrals indicates that
$\int_0^{\infty} x^{2n} e^{-ax^2} dx = \dfrac{1 \cdot 3 \cdot 5 \cdots (2n-1)}{2^{n+1}a^n} \sqrt{\dfrac{\pi}{a}} \nonumber$
Substitution (noting that $n = 2$) yields
$\langle v^2 \rangle = 4\pi \sqrt{\left( \dfrac{m}{2\pi k_BT} \right) ^3 } \left[ \dfrac{1 \cdot 3}{2^3 \left( \dfrac{m}{2 k_BT} \right) ^2 } \sqrt{\dfrac{\pi}{\left( \dfrac{m}{2 k_BT} \right)}} \right] \nonumber$
which simplifies to
$\langle v^2 \rangle = \dfrac{3 k_BT}{ m} \nonumber$
Note: The square root of this average squared speed is called the root mean square (RMS) speed, and has the value
$v_{rms} = \sqrt{ \langle v^2 \rangle } = \left( \dfrac{3 k_BT}{ m} \right)^{1/2} \nonumber$
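These averages can also be checked numerically. The sketch below integrates the Maxwell speed distribution for an assumed example gas (argon at 298 K) and compares the results to the closed-form expressions for $\langle v \rangle$ and $v_{rms}$.

```python
import numpy as np
from scipy.integrate import quad

kB = 1.380649e-23          # J/K
T = 298.0                  # K
m = 0.039948 / 6.02214e23  # mass of one Ar atom, kg (assumed example gas)

def f(v):
    """Maxwell-Boltzmann speed distribution, Equation 27.3.1."""
    return 4 * np.pi * v**2 * (m / (2 * np.pi * kB * T))**1.5 * np.exp(-m * v**2 / (2 * kB * T))

v_avg = quad(lambda v: v * f(v), 0, np.inf)[0]
v_sq = quad(lambda v: v**2 * f(v), 0, np.inf)[0]

print(f"<v>:   numeric {v_avg:.1f} m/s,  analytic {np.sqrt(8 * kB * T / (np.pi * m)):.1f} m/s")
print(f"v_rms: numeric {np.sqrt(v_sq):.1f} m/s,  analytic {np.sqrt(3 * kB * T / m):.1f} m/s")
```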
All molecules have the same kinetic energy ($½ mv^2$) at the same temperature, so the fraction of molecules with higher velocities will increase as m, and thus the molecular weight, decreases. Figure 27.3.3 shows the dependence of the Maxwell-Boltzmann distribution on molecule mass. On average, heavier molecules move more slowly than lighter molecules. Therefore, heavier molecules will have a smaller speed distribution, while lighter molecules will have a speed distribution that is more spread out.
Related Speed Expressions
Usually, we are more interested in the speeds of molecules rather than their component velocities. The Maxwell–Boltzmann distribution for the speed follows immediately from the distribution of the velocity vector, above. Note that the speed of an individual gas particle is:
$v =\sqrt{v_x^2+ v_y^2 + v_z^2} \nonumber$
Three speed expressions can be derived from the Maxwell-Boltzmann distribution:
• the most probable speed,
• the average speed, and
• the root-mean-square speed.
The most probable speed is the maximum value on the distribution plot (Figure 27.3.4 ). This is established by finding the velocity when the derivative of Equation $\ref{27.3.1}$ is zero
$\dfrac{df(v)}{dv} = 0 \nonumber$
which is
$\color{red} v_{mp}=\sqrt {\dfrac {2RT}{M}} \label{3a}$
Figure 27.3.4 : The Maxwell-Boltzmann distribution is shifted to higher speeds and is broadened at higher temperatures. from OpenStax. The speed at the top of the curve is called the most probable speed because the largest number of molecules have that speed.
The average speed is the sum of the speeds of all the molecules divided by the number of molecules.
$\color{red} v_{avg}= \bar{v} = \int_0^{\infty} v f(v) dv = \sqrt {\dfrac{8RT}{\pi M}} \nonumber$
The root-mean-square speed is square root of the average speed-squared.
$\color{red} v_{rms}= \sqrt{\bar{v^2}} = \sqrt {\dfrac {3RT}{M}} \nonumber$
where
• $R$ is the gas constant,
• $T$ is the absolute temperature and
• $M$ is the molar mass of the gas.
It always follows that for gases that follow the Maxwell-Boltzmann distribution:
$v_{mp}< v_{avg}< v_{rms} \nonumber$
Problems
1. Using the Maxwell-Boltzman function, calculate the fraction of argon gas molecules with a speed of 305 m/s at 500 K.
2. If the system in problem 1 has 0.46 moles of argon gas, how many molecules have the speed of 305 m/s?
3. Calculate the values of $C_{mp}$, $C_{avg}$, and $C_{rms}$ for xenon gas at 298 K.
4. From the values calculated above, label the Boltzmann distribution plot with the approximate locations of $C_{mp}$, $C_{avg}$, and $C_{rms}$.
5. What will have a larger speed distribution, helium at 500 K or argon at 300 K? Helium at 300 K or argon at 500 K? Argon at 400 K or argon at 1000 K?
Answers
1. 0.00141
2. $3.92 \times 10^{20}$ argon molecules
3. $C_{mp}$ = 194.27 m/s, $C_{avg}$ = 219.21 m/s, $C_{rms}$ = 237.93 m/s
4. As stated above, $C_{mp}$ is the most probable speed, thus it will be at the top of the distribution curve. To the right of the most probable speed will be the average speed, followed by the root-mean-square speed.
5. Hint: Use the related speed expressions to determine the distribution of the gas molecules: helium at 500 K; helium at 300 K; argon at 1000 K.
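For reference, a minimal Python sketch that reproduces the numbers in answer 3 from the three speed expressions; the molar mass of xenon is taken as 131.29 g/mol.

```python
import math

R, T, M = 8.3145, 298.0, 0.13129   # SI units; M in kg/mol

v_mp  = math.sqrt(2 * R * T / M)            # most probable speed
v_avg = math.sqrt(8 * R * T / (math.pi * M))  # average speed
v_rms = math.sqrt(3 * R * T / M)            # root-mean-square speed

print(f"v_mp = {v_mp:.1f} m/s, v_avg = {v_avg:.1f} m/s, v_rms = {v_rms:.1f} m/s")
# ~194, ~219, ~238 m/s, consistent with the answers listed above
```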
Sources
1. Dunbar, R.C. Deriving the Maxwell Distribution J. Chem. Ed. 1982, 59, 22-23.
2. Peckham, G.D.; McNaught, I.J.; Applications of the Maxwell-Boltzmann Distribution J. Chem. Ed. 1992, 69, 554-558.
3. Chang, R. Physical Chemistry for the Biosciences, 25-27.
4. HyperPhysics: http://hyperphysics.phy-astr.gsu.edu...maxspe.html#c3
Contributors and Attributions
• Stephen Lower, Professor Emeritus (Simon Fraser U.) Chem1 Virtual Textbook
• Adam Maley (Hope College)
• Patrick E. Fleming (Department of Chemistry and Biochemistry; California State University, East Bay)
• Stephanie Schaertel (Grand Valley State University)
• Tom Neils (Grand Rapids Community College, editing)
27.04: The Frequency of Collisions
In the derivation of an expression for the pressure of a gas, it is useful to consider the frequency with which gas molecules collide with the walls of the container. To derive this expression, consider the expression for the "collision volume".
$V_{col} = v_x \Delta t\ \cdot A\nonumber$
in which the product of the velocity $v_x$ and a time interval $\Delta t$ is multiplied by $A$, the area of the wall with which the molecules collide.
All of the molecules within this volume that are moving toward the wall (on average, half of them, with positive $v_x$) will collide with it. The number of collisions is therefore given by
$N_{col} = \dfrac{N}{V} \dfrac{\langle v_x \rangle \Delta t \cdot A}{2}\nonumber$
and the frequency of collisions with the wall per unit area per unit time is given by
$z_w = \dfrac{N}{V} \dfrac{\langle v_x \rangle}{2}\nonumber$
In order to expand this model into a more useful form, one must consider motion in all three dimensions. From the one-dimensional velocity distribution (Section 27.2), the average magnitude of the x component of the velocity is
$\langle v_x \rangle = \left(\dfrac{2k_BT}{\pi m}\right)^{1/2}\nonumber$
while the Maxwell-Boltzmann speed distribution (Section 27.3) gives the average speed as
$\langle v \rangle = \left(\dfrac{8k_BT}{\pi m}\right)^{1/2}\nonumber$
It follows that
$\langle v \rangle = 2 \langle v_x \rangle\nonumber$
or
$\langle v_x \rangle = \dfrac{1}{2} \langle v \rangle\nonumber$
and so
$z_w = \dfrac{1}{4} \dfrac{N}{V} \langle v \rangle\nonumber$
A different approach to determining $z_w$ is to consider a collision cylinder that will enclose all of the molecules that will strike an area of the wall at an angle $\theta$ and with a speed $v$ in the time interval $dt$. The volume of this collision cylinder is the product of its base area ($A$) times its vertical height ($v\cos\theta\, dt$), as shown in Figure 27.4.1.
The number of molecules in this cylinder is $\rho·A·v·\text{cos}\theta dt$, where $\rho$ is the number density $\dfrac{N}{V}$. The fraction of molecules that are traveling at a speed between $v$ and $v + dv$ is $F(v)dv$. The fraction of molecules traveling within the solid angle bounded by $\theta$ and $\theta + d\theta$ and between $\phi$ and $\phi + d\phi$ is $\dfrac{\text{sin}\theta d\theta d\phi}{4\pi}$. Multiplying these three terms together results in the number of molecules colliding with the area $A$ from the specified direction during the time interval $dt$
$dN_w = \rho·A·v·\text{cos}\theta \, dt \, · \, F(v)dv \, · \, \dfrac{\text{sin}\theta d\theta d\phi}{4\pi}\nonumber$
This equation can be rearranged to obtain
$\dfrac{1}{A}\dfrac{dN_w}{dt} = \dfrac{\rho}{4\pi} vF(v)dv · \text{cos}\theta \, \text{sin}\theta \, d\theta d\phi = dz_w \nonumber$
Integrating this equation over all possible speeds and directions (on the front side of the wall only), we get
$z_w = \dfrac{\rho}{4\pi} \int_0^{\infty} vF(v)dv · \int_0^{\pi/2}\text{cos}\theta \, \text{sin}\theta \, d\theta \int_0^{2\pi} d\phi \nonumber$
The result is that
$z_w = \dfrac{1}{A}\dfrac{dN_w}{dt} = \dfrac{1}{4} \dfrac{N}{V} \langle v \rangle = \rho\dfrac{\langle v \rangle}{4}\label{27.4.1}$
Example 27.4.1
Calculate the collision frequency per unit area ($Z_w$) for oxygen at 25.0°C and 1.00 bar using equation $\ref{27.4.1}$:
$z_w = \dfrac{1}{4} \dfrac{N}{V} \langle v \rangle \nonumber$
Solution
The number of molecules is $N = N_A \cdot n$, so that
$\dfrac{N}{V} = \dfrac{(N_A) \cdot n}{V} = \dfrac{(N_A) \cdot P}{R \cdot T} \nonumber$
$\dfrac{(6.022 \times 10^{23} \, mol^{-1})(1.00 \, bar)}{(0.08314 \, L \cdot bar \cdot mol^{-1} \cdot K^{-1})(298 \, K)} = 2.43 \times 10^{22} \, L^{-1} = 2.43 \times 10^{25} \, m^{-3} \nonumber$
and
$\langle v \rangle = \left({\dfrac{8RT}{\pi M}} \right)^{\dfrac {1}{2}} = \left({\dfrac{8(8.314 \, J \cdot K^{-1} \cdot mol^{-1})(298\, K)}{\pi \cdot (0.031999 \, kg \cdot mol^{-1})}} \right)^{\dfrac {1}{2}} = 444 \, m\cdot s^{-1} \nonumber$
Thus
$z_w = \dfrac{1}{4} (2.43 \times 10^{25} m^{-3})(444 \, m\cdot s^{-1}) \left({\dfrac{1 \, m}{100 \, cm}} \right)^2 = 2.70\times 10^{23} s^{-1} \cdot cm^{-2} \nonumber$
The factor of N/V is often referred to as the “number density” as it gives the number of molecules per unit volume. At 1 atm pressure and 298 K, the number density for an ideal gas is approximately $2.43 \times 10^{19}$ molecules/cm³. (This value is easily calculated using the ideal gas law.) By comparison, the number density of a typical region of interstellar space is only about 1 molecule/cm³.
Exercise 27.4.1
Calculate the collision frequency per unit area ($Z_w$) for hydrogen at 25.0°C and 1.00 bar using equation $\ref{27.4.1}$:
$z_w = \dfrac{1}{4} \dfrac{N}{V} \langle v \rangle \nonumber$
Answer
$\langle v \rangle = 1770 \, m\cdot s^{-1} \nonumber$ and $Z_w = 1.08\times 10^{24} s^{-1} \cdot cm^{-2} \nonumber$
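Both results can be reproduced with a short calculation. Below is a minimal Python sketch of Equation 27.4.1, $z_w = \frac{1}{4}(N/V)\langle v \rangle$, for O₂ and H₂ at 298 K and 1.00 bar.

```python
import math

R, NA = 8.3145, 6.02214e23   # J K^-1 mol^-1, mol^-1
T, P = 298.0, 1.00e5         # K, Pa

def z_w(M):
    """Wall collision frequency per m^2 per s for molar mass M (kg/mol)."""
    n_density = NA * P / (R * T)                    # molecules per m^3
    v_avg = math.sqrt(8 * R * T / (math.pi * M))    # average speed, m/s
    return 0.25 * n_density * v_avg

for name, M in [("O2", 0.03200), ("H2", 0.002016)]:
    print(f"{name}: z_w = {z_w(M) / 1e4:.2e} collisions cm^-2 s^-1")
# O2 -> ~2.7e23, H2 -> ~1.1e24 cm^-2 s^-1
```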
Contributors and Attributions
• Patrick E. Fleming (Department of Chemistry and Biochemistry; California State University, East Bay)
• Tom Neils, Grand Rapids Community College
27.05: The Maxwell-Boltzmann Distribution Has Been Verified Experimentally
The Maxwell-Boltzmann distribution has been verified experimentally by a device called a velocity selector, which is essentially a series of spinning wheels with a hole through which the gas is effused. This ensures that only gas particles with a certain velocity will pass through all the holes as the wheels are spun at various rates. Thus it is possible to count the number of particles with various velocities and show that, indeed, they do satisfy the Maxwell-Boltzmann distribution.
27.06: Mean Free Path
Collision energy
Consider two particles $A$ and $B$ in a system. The kinetic energy of these two particles is
$K_{AB} = \dfrac{\textbf{p}_A^2}{2m_A} + \dfrac{\textbf{p}_B^2}{2m_B} \label{27.6.1}$
We can describe kinetic energy in terms of center-of-mass $\left( \textbf{P} \right)$ and relative momentum $\left( \textbf{p} \right)$, which are given by
$\textbf{P} = \textbf{p}_A + \textbf{p}_B$
and
\begin{align*} \textbf{p} &= \text{relative velocity}\times \mu \\[4pt] &= (\textbf{v}_A -\textbf{v}_B) \times \left(\dfrac{m_A m_B}{m_A + m_B}\right) \\[4pt] &= \dfrac{m_B \textbf{p}_A - m_A \textbf{p}_B}{M}\end{align*} \nonumber
where
$M = m_A + m_B$
is the total mass of the two particles, and
$\mu = \dfrac{m_A m_B}{M} \nonumber$
is the reduced mass of the two particles.
Substituting these terms into equation $\ref{27.6.1}$, we find
$K_{AB} = \dfrac{\textbf{p}_A^2}{2m_A} + \dfrac{\textbf{p}_B^2}{2m_B} = \dfrac{\textbf{P}^2}{2M} + \dfrac{\textbf{p}^2}{2 \mu} \nonumber$
Note that the kinetic energy separates into a sum of a center-of-mass term and a relative momentum term.
Now the relative position of the two particles is $\textbf{r} = \textbf{r}_A - \textbf{r}_B$ so that the relative velocity is $\dot{\textbf{r}} = \dot{\textbf{r}}_A - \dot{\textbf{r}}_B$ or $\textbf{v} = \textbf{v}_A - \textbf{v}_B$. Thus, if the two particles are approaching each other such that $\textbf{v}_A = - \textbf{v}_B$, then $\textbf{v} = 2 \textbf{v}_A$. However, by equipartitioning the relative kinetic energy, which is mass independent, we get
$\left< \dfrac{\textbf{p}^2}{2 \mu} \right> = \dfrac{3}{2} k_B T \nonumber$
which is called the collision energy
Collision cross-section
Consider two molecules in a system. The probability that they will collide increases with the effective “size” of each particle. However, the size measure that is relevant is the apparent cross-section area of each particle. For simplicity, suppose the particles are spherical, which is not a bad approximation for small molecules. If we are looking at a sphere, what we perceive as the size of the sphere is the cross section area of a great circle. Recall that each spherical particle has an associated “collision sphere” that just encloses two particles at closest contact, i.e., at the moment of a collision, and that this sphere has a radius $d$, where $d$ is the diameter of each spherical particle. The cross-section of this collision sphere represents an effective cross section for each particle inside which a collision is imminent. The cross-section of the collision sphere is the area of a great circle, which is $\pi d^2$. We denote this apparent cross section area $\sigma$. Thus, for spherical particles $A$ and $B$ with diameters $d_A$ and $d_B$, the individual cross sections are
$\sigma_A = \pi d_A^2, \: \: \: \sigma_B = \pi d_B^2 \nonumber$
The collision cross section, $\sigma_{AB}$, is determined by an effective diameter $d_{AB}$ characteristic of both particles. The collision probability increases if both particles have large diameters and decreases if one of them has a smaller diameter than the other. Hence, a simple measure sensitive to this is the arithmetic average
$d_{AB} = \dfrac{1}{2} \left( d_A + d_B \right) \nonumber$
and the resulting collision cross section becomes
\begin{align} \sigma_{AB} &= \pi d_{AB}^2 \nonumber\ &= \pi \left( \dfrac{d_A + d_B}{2} \right)^2 \nonumber\ &= \dfrac{\pi}{4} \left( d_A^2 + 2d_A d_B + d_B^2 \right) \nonumber \ &= \dfrac{1}{4} \left( \sigma_A + 2 \sqrt{\sigma_A \sigma_B} + \sigma_B \right) \nonumber\ &= \dfrac{1}{2} \left[ \left( \dfrac{\sigma_A + \sigma_B}{2} \right) + \sqrt{\sigma_A \sigma_B} \right] \nonumber \end{align} \nonumber
which, interestingly, is an average of the two types of averages of the two individual cross sections, the arithmetic and geometric averages!
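This identity is easy to verify numerically. The short sketch below (Python, with purely illustrative hard-sphere diameters that are not taken from the text) compares $\sigma_{AB}$ computed from the effective diameter $d_{AB}$ with the average-of-averages expression above.

```python
import math

# Illustrative hard-sphere diameters (assumed values, not from the text), in meters
d_A = 2.9e-10
d_B = 3.6e-10

sigma_A = math.pi * d_A**2      # individual cross section of A
sigma_B = math.pi * d_B**2      # individual cross section of B

# Collision cross section from the effective diameter d_AB = (d_A + d_B)/2
sigma_AB_direct = math.pi * ((d_A + d_B) / 2)**2

# The same quantity written as the average of the arithmetic and geometric
# means of the two individual cross sections
sigma_AB_average = 0.5 * ((sigma_A + sigma_B) / 2 + math.sqrt(sigma_A * sigma_B))

print(sigma_AB_direct, sigma_AB_average)   # the two numbers agree
```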
Average collision Frequency
Consider a system of particles with individual cross sections $\sigma$. A particle of cross section $\sigma$ that moves a distance $l$ in a time $\Delta t$ will sweep out a cylindrical volume (ignoring the spherical caps) of volume $\sigma l$ (Figure 27.6.1 ). If the system has a number density $\rho$, then the number of collisions that will occur is
$N_{\text{coll}} = \rho \sigma l \nonumber$
We define the collision frequency for a single molecule, $z_A$, also known as the average collision rate as $N_{\text{coll}}/ \Delta t$, i.e.,
$z_A = \dfrac{N_{\text{coll}}}{\Delta t} = \dfrac{\rho \sigma l}{\Delta t} = \rho \sigma \langle v \rangle \label{27.6.2}$
where $\langle v \rangle$ is the average speed of a particle
$\langle v \rangle = \sqrt{\dfrac{8 k_B T}{\pi m_A}} \nonumber$
Equation $\ref{27.6.2}$ is not quite correct because it is based on the assumption that only the molecule of interest is moving. If we take into account the fact that all of the particles are moving relative to one another, and assume that all of the particles are of the same type (say, type $A$), then performing the average over a Maxwell-Boltzmann speed distribution gives
$\langle v_r \rangle = \sqrt{\dfrac{8 k_B T}{\pi \mu}} \nonumber$
where $\mu = m_A/2$ is the reduced mass.
Thus,
$\langle v_r \rangle = \sqrt{2} \langle v \rangle \nonumber$
and
$z_A = \sqrt{2} \rho \sigma \langle v \rangle = \rho \sigma \langle v_r \rangle \nonumber$
The reciprocal of $z_A$ is a measure of the average time between collisions for a single molecule.
Mean Free Path
The mean free path is the distance a particle will travel, on average, before experiencing a collision event. This is defined as the product of the average speed of a particle and the time between collisions. The former is $\langle v \rangle$, while the latter is $1/z_A$. Hence, we have
$\lambda = \dfrac{\langle v\rangle}{\sqrt{2} \rho \sigma \langle v \rangle} = \dfrac{1}{\sqrt{2} \rho \sigma} \nonumber$
The mean free path can also be described using terms from the ideal gas law, because $\rho = \dfrac{P \cdot N_A}{R \cdot T}$:
$\lambda = \dfrac {R \cdot T}{\sqrt{2} \cdot N_A \cdot \sigma \cdot P} \nonumber$
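As a numerical illustration, the sketch below (a Python calculation using an assumed collision cross section for N2; the value of $\sigma$ is not from the text) evaluates the number density, mean free path, and single-molecule collision frequency at 1 bar and 298 K.

```python
import math

R = 8.314          # J mol^-1 K^-1
N_A = 6.022e23     # mol^-1
k_B = R / N_A      # J K^-1

T = 298.0          # K
P = 1.0e5          # Pa (1 bar)
sigma = 4.3e-19    # m^2, assumed collision cross section for N2
m = 0.028 / N_A    # kg, mass of one N2 molecule

rho = P * N_A / (R * T)                           # number density, m^-3
v_avg = math.sqrt(8 * k_B * T / (math.pi * m))    # average speed, m s^-1

lam = 1 / (math.sqrt(2) * rho * sigma)            # mean free path, m
z_A = math.sqrt(2) * rho * sigma * v_avg          # collisions per second for one molecule

print(f"rho    = {rho:.2e} m^-3")
print(f"lambda = {lam:.2e} m")      # tens of nanometers at 1 bar
print(f"z_A    = {z_A:.2e} s^-1")   # of order 10^9 to 10^10 s^-1
```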
Random Walks
In any system, a particle undergoing frequent collisions will have the direction of its motion changed with each collision and will trace out a path that appears to be random. If we treat the process as statistical, then we are treating each collision event as a random event, and the particle will change its direction at random times in random ways! Such a path might appear as shown in Figure $2$. Such a path is often referred to as a random walk path.
In order to analyze such paths, let us consider a random walk in one dimension. We’ll assume that the particle moves a mean-free path length $\lambda$ between collisions and that each collision changes the direction of the particle's motion, which in one dimension, means that the particle moves either to the right or to the left after each event. This can be mapped onto a metaphoric “coin toss” that can come up heads “H” or tails “T”, with “H” causing motion to the right, and “T” causing motion to the left. Let there be $N$ such coin tosses, let $i$ be the number of times “H” comes up and $j$ denote the number of times “T” comes up. Thus, the progress of the particle, which we define as net motion to the right, is given by $(i - j) \lambda$. Letting $k = i - j$, this is just $k \lambda$. Thus, we need to know what the probability is for obtaining a particular value of $k$ in a very large number $N$ of coin tosses. Denote this $P(k)$.
In $N$ coin tosses, the total number of possible sequences of “H” and “T” is $2^N$. However, the number of ways we can obtain $i$ heads and $j$ tails, with $i + j = N$, is a binomial coefficient $N!/i!j!$. Now
$j = N - i = N - (j + k) = N - j - k \nonumber$
so that $j = (N - k )/2$. Similarly,
$i = N - j = N - (i - k) = N - i + k \nonumber$
so that $i = (N + k)/2$. Thus, the probability $P(k)$ is
$P(k) = \dfrac{N!}{2^N i! j!} = \dfrac{1}{2^N} \dfrac{N!}{\left(\dfrac{N + k}{2} \right) ! \left( \dfrac{N - k}{2} \right) !} \nonumber$
We now take the logarithm of both sides:
$\text{ln} \: P(k) = \: \text{ln} \: N! - \: \text{ln} \: 2^N - \: \text{ln} \: \left( \dfrac{N + k}{2} \right) ! - \: \text{ln} \: \left( \dfrac{N - k}{2} \right) ! \nonumber$
and use Stirling’s approximation:
$\text{ln} \: N! \approx N \: \text{ln} \: N - N \nonumber$
and write $\text{ln} \: P(k)$ as
\begin{align} \text{ln} \: P(k) &\approx N \: \text{ln} \: N - N - N \: \text{ln} \: 2 - \dfrac{1}{2} (N + k) \: \text{ln} \: \dfrac{1}{2} (N + k) + \dfrac{1}{2} (N + k) - \dfrac{1}{2} (N - k) \: \text{ln} \: \dfrac{1}{2} (N - k) + \dfrac{1}{2} (N - k) \nonumber\ &= N \: \text{ln} \: N - N \: \text{ln} \: 2 - \dfrac{1}{2} (N + k) \: \text{ln} \: \dfrac{1}{2} - \dfrac{1}{2} (N + k) \: \text{ln} \: (N + k) - \dfrac{1}{2} (N - k) \: \text{ln} \: \dfrac{1}{2} - \dfrac{1}{2} (N - k) \: \text{ln} \: (N - k) \nonumber\ &= N \: \text{ln} \: N - N \: \text{ln} \: 2 + \dfrac{1}{2} (N + k) \: \text{ln} \: 2 - \dfrac{1}{2} (N + k) \: \text{ln} \: (N + k) + \dfrac{1}{2} (N - k) \: \text{ln} \: 2 - \dfrac{1}{2} (N - k) \: \text{ln} \: (N - k) \nonumber\ &= N \: \text{ln} \: N - \dfrac{1}{2} \left[ (N + k) \: \text{ln} \: (N + k) + (N - k) \: \text{ln} \: (N - k) \right] \nonumber\end{align} \nonumber
Now, write
$\text{ln} \: (N + k) = \: \text{ln} \: N \left( 1 + \dfrac{k}{N} \right) = \: \text{ln} \: N + \: \text{ln} \: \left( 1 + \dfrac{k}{N} \right) \nonumber$
and
$\text{ln} \: (N - k) = \: \text{ln} \: N \left( 1 - \dfrac{k}{N} \right) = \: \text{ln} \: N + \: \text{ln} \: \left( 1 - \dfrac{k}{N} \right) \nonumber$
We now use the expansions
\begin{align} \text{ln} \: \left( 1 + \dfrac{k}{N} \right) &= \left( \dfrac{k}{N} \right) - \dfrac{1}{2} \left( \dfrac{k}{N} \right)^2 + \cdots \nonumber \ \text{ln} \: \left( 1 - \dfrac{k}{N} \right) &= - \left( \dfrac{k}{N} \right) - \dfrac{1}{2} \left( \dfrac{k}{N} \right)^2 + \cdots \nonumber\end{align} \nonumber
If we stop at the second-order term, then
\begin{align} \text{ln} \: P(k) &= N \: \text{ln} \: N - \dfrac{1}{2} (N + k) \left[ \: \text{ln} \: N + \left( \dfrac{k}{N} \right) - \dfrac{1}{2} \left( \dfrac{k}{N} \right)^2 \right] - \dfrac{1}{2} (N - k) \left[ \: \text{ln} \: N - \left( \dfrac{k}{N} \right) - \dfrac{1}{2} \left( \dfrac{k}{N} \right)^2 \right] \nonumber \ &= -\dfrac{1}{2} (N + k) \left[ \left( \dfrac{k}{N} \right) - \dfrac{1}{2} \left( \dfrac{k}{N} \right)^2 \right] + \dfrac{1}{2} (N - k) \left[ \left( \dfrac{k}{N} \right) + \dfrac{1}{2} \left( \dfrac{k}{N} \right)^2 \right] \nonumber \ &= \dfrac{1}{2} N \left( \dfrac{k}{N} \right)^2 - k \left( \dfrac{k}{N} \right) \nonumber\ & = \dfrac{k^2}{2N} - \dfrac{k^2}{N} = - \dfrac{k^2}{2N} \nonumber\end{align} \nonumber
so that
$P(k) = e^{-k^2/2N} \nonumber$
Now, if we let $x = k \lambda$ and $L = \sqrt{N} \lambda$, and if we let $x$ be a continuous random variable, then the corresponding probability distribution $P(x)$ becomes
$P(x) = \dfrac{1}{L \sqrt{2 \pi}} \: e^{-x^2/2L^2} = \dfrac{1}{\sqrt{2 \pi N \lambda^2}} \: e^{-x^2/2N \lambda^2} \label{27.6.3}$
which is a simple Gaussian distribution. Now, $N$ is the number of collisions, which is given by $z_At$, so we can write the probability distribution for the particle to diffuse a distance $x$ in time $t$ as
$P(x, t) = \dfrac{1}{\sqrt{2 \pi z_At \lambda^2}} \: e^{-x^2/2z_At \lambda^2} \nonumber$
Define $D = z_A \lambda^2/2$ as the diffusion constant, which has units of (length)$^{\text{2}}$/time. The distribution then becomes
$P(x, t) = \dfrac{1}{\sqrt{4 \pi D t }} \: e^{-x^2/4D t} \nonumber$
Note that this distribution satisfies the following equation:
$\dfrac{\partial}{\partial t} P(x, t) = D \dfrac{\partial^2}{\partial x^2} P(x, t) \nonumber$
which is called the diffusion equation. The diffusion equation is, in fact, more general than the Gaussian distribution in Equation $\ref{27.6.3}$. It is capable of predicting the distribution in any one-dimensional geometry subject to any initial distribution $P(x, 0)$ and any imposed boundary conditions.
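The Gaussian limit derived above can be checked with a direct simulation. The sketch below (arbitrary values of $\lambda$ and $N$, chosen only for illustration) generates many independent one-dimensional random walks and compares the sample mean-square displacement with the prediction $\langle x^2 \rangle = N \lambda^2 = 2Dt$.

```python
import numpy as np

rng = np.random.default_rng(0)

lam = 1.0         # mean free path (arbitrary units)
N = 500           # number of collisions per walk
n_walks = 20_000  # number of independent walkers

# Each step is +lam or -lam with equal probability ("H" or "T")
steps = rng.choice([-lam, lam], size=(n_walks, N))
x = steps.sum(axis=1)                     # net displacement of each walker

print("sample <x^2>     :", np.mean(x**2))   # ~ N * lam^2
print("predicted N*lam^2:", N * lam**2)      # = 2*D*t with D = z*lam^2/2, t = N/z
```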
In three dimensions, we consider the three spatial directions to be independent, hence, the probability distribution for a particle to diffuse to a location $\textbf{r} = (x, y, z)$ is just a product of the three one-dimensional distributions:
$\mathcal{P}(\textbf{r}) = P(x) \: P(y) \: P(z) = \dfrac{1}{(4 \pi D t) ^{3/2}} \: e^{-\left( x^2 + y^2 + z^2 \right)/4Dt} \nonumber$
and if we are only interested in diffusion over a distance $r$, we can introduce spherical coordinates, integrate over the angles, and we find that
$P(r, t) = \dfrac{4 \pi r^2}{(4 \pi D t)^{3/2}} \: e^{-r^2/4Dt} \nonumber$
Total collision frequency per unit volume
In Equation $\ref{27.6.2}$, $z_A$ represents the collision frequency for one specific molecule in a gas sample. If we wish to calculate the total collision frequency per unit volume, the number density of the molecules, $\rho$, must be taken into account. The total collision frequency in a sample that contains only A molecules, $Z_{AA}$, is
$Z_{AA} = \dfrac{1}{2}\rho z_A = \dfrac{1}{2} \sigma \langle v_r \rangle \rho^2 = \dfrac{\sigma \langle v\rangle \rho^2}{\sqrt{2}}\nonumber$
The factor of $\dfrac {1}{2}$ must be included to avoid double counting collisions between similar molecules. (This is identical reasoning to the fact that there is only one way to roll double 3 with two dice.)
If you have a gas sample that contains A molecules and B molecules, then
$Z_{AB} = \sigma_{AB} \langle v_r \rangle \rho_A \rho_B \label{27.6.4}$
where
$\sigma_{AB} = \pi \left (\dfrac{d_A + d_B}{2} \right)^2$ and $\langle v_r \rangle = \sqrt{\dfrac{8 k_B T}{\pi \mu}} \nonumber$
Example $1$
Calculate the frequency of hydrogen-hydrogen collisions in a 1.00 cubic centimeter container at 1.00 bar and 298 K.
Solution
The collision frequency requires knowledge of (1) the number density and (2) the average speed of the molecules.
The value of $\sigma_{H_2}$ is $2.30 \times 10^{-19} \text{ m}^2$.
The number density:
$\rho = \dfrac{N_AP_{H_2}}{RT} = \dfrac{(6.022 \times 10^{23}\ \text{mol}^{-1})(1.00 ~\text{bar})}{(0.08314\ \text{L·bar·mol}^{-1}\text{·K}^{-1})(298\ \text{K})} = 2.43 \times 10^{22}\ \text{L}^{-1} = 2.43 \times 10^{25}\ \text{m}^{-3}$
The average speed:
$\langle v\rangle = \sqrt{\dfrac{8RT}{\pi M}} = \sqrt{\dfrac{8(8.314\ \text{J·mol}^{-1}\text{·K}^{-1})(298\ \text{K})}{\pi(0.002016\ \text{kg·mol}^{-1})}} = 1770\ \dfrac{\text{m}}{\text{s}}$
These values are substituted into the expression for $Z_{AA}$ above to get the collision frequency:
\begin{align*} Z_{H_2,H_2} &= \dfrac{(2.30 \times 10^{-19}\ \text{m}^2)(1770\ \text{m·s}^{-1})(2.43 \times 10^{25}\ \text{m}^{-3})^2}{\sqrt{2}} \[4pt] &= 1.7 \times 10^{35}\ \text{s}^{-1}\text{m}^{-3} \[4pt] &= 1.7 \times 10^{29}\ \text{s}^{-1}\text{cm}^{-3} \end{align*}
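The arithmetic in this example is easy to reproduce in a few lines of Python; the sketch below uses the same conditions and the cross section quoted above.

```python
import math

N_A = 6.022e23      # mol^-1
R = 8.314           # J mol^-1 K^-1

T = 298.0           # K
P = 1.0e5           # Pa (1.00 bar)
sigma = 2.30e-19    # m^2, H2-H2 collision cross section (value used in the example)
M = 2.016e-3        # kg mol^-1, molar mass of H2

rho = P * N_A / (R * T)                        # number density, m^-3
v_avg = math.sqrt(8 * R * T / (math.pi * M))   # average speed, m s^-1

Z_AA = sigma * v_avg * rho**2 / math.sqrt(2)   # total collision frequency per unit volume

print(f"rho  = {rho:.2e} m^-3")                # ~2.4e25 m^-3
print(f"<v>  = {v_avg:.0f} m/s")               # ~1770 m/s
print(f"Z_AA = {Z_AA:.2e} s^-1 m^-3")          # ~1.7e35 s^-1 m^-3
```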
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(LibreTexts)/27%3A_The_Kinetic_Theory_of_Gases/27.06%3A_The_Mean_Free_Path.txt
|
Now that we have a description of how often gas molecules will collide with one another, we can make an initial attempt to describe how the collision of gas molecules can lead to reactions between the molecules. This topic will be covered in much more detail in Chapter 30.
Collision Frequency, Collision Energy, and Effective Collisions
In Example 27.6.1, we calculated that the number of collisions between H2 molecules at one bar and 25 ºC is around $10^8 \dfrac{\text{moles}}{\text{dm}^3\text{·s}}$. Consider the elementary reaction
$\text{A} + \text{B} \longrightarrow \text{C}\nonumber$
If all of the collisions between A and B resulted in a reaction, then the rate of the reaction would be about $10^8 \dfrac{\text{moles}}{\text{dm}^3\text{·s}}$. We know from experiment that most chemical reactions do not occur this quickly. It must be true, then, that not all collisions result in a reaction. It is intuitive that molecules traveling at faster speeds should be more likely to react because they have sufficient energy to overcome electronic repulsions and to break existing bonds.
One way to approach this estimate is to use a modified version of the equation for the collision frequency with a wall. The reason for starting with this equation is that it is reasonable to assume that the faster a molecule is traveling, the more likely it is to hit the wall. If this is so, then the faster a molecule is traveling, the more likely it is to collide with other molecules so as to react.
Recall the expression for the frequency of collisions with a wall:
$z_w = \dfrac{1}{4} \dfrac{N}{V} \langle v \rangle \nonumber$
which can be rewritten for the molecular level by substituting $\rho$ for $\dfrac{N}{V}$
$z_w = \dfrac{1}{4} \rho \langle v \rangle \nonumber$
This equation was obtained by carrying out the integration
$z_w = \dfrac{\rho}{4\pi} \int_0^{\infty} vF(v)dv \int_0^{\pi/2} \cos \theta \sin \theta d \theta \int_0^{2\pi} d \phi \label{27.7.1}$
which takes into account the fact that molecules will only hit the wall from one direction.
Recall from equation 27.3.1, the Maxwell-Boltzmann distribution of speeds
$f(v) = 4\pi v^2 \left(\dfrac{m}{2\pi k_BT} \right)^{3/2} \exp \left(\dfrac{-mv^2}{2k_BT} \right) \nonumber$
that $f(v)$ has a factor of $v^2$. Thus, equation $\ref{27.7.1}$ has a factor of $v^3$, meaning that the molecules colliding with the wall are traveling faster, on average, than the molecules in the bulk of the sample. The assumption is that the faster molecules are more likely to hit the wall in a given amount of time.
We must modify equation $\ref{27.7.1}$ to take into account the collision of molecules with each other, rather than with the wall. This is done by replacing the mass of a single molecule $m$ with the reduced mass of the two colliding molecules $\mu$. The resulting speed is the relative average speed $v_r$. The result of these assumptions is that the collision frequency of molecules A and B per unit volume in which the molecules collide with a relative speed between $v_r$ and $v_r + dv_r$ is
$dZ_{AB} = Av_r^3e^{-\mu v_r^2/2k_BT} dv_r \label{27.7.2}$
where A is a proportionality constant.
If we require the integral of this equation over all relative speeds to be equal to $Z_{AB}$, then
$Z_{AB} = \sigma_{AB}\rho_A\rho_B \left(\dfrac{8k_BT}{\pi \mu} \right)^{1/2} = A \int_0^{\infty} v_r^3 e^{-\mu v_r^2/2k_BT} dv_r \nonumber$
$Z_{AB} = \sigma_{AB}\rho_A\rho_B \left(\dfrac{8k_BT}{\pi \mu} \right)^{1/2} = 2A \left(\dfrac{k_BT}{\mu} \right)^{2} \nonumber \nonumber$
Rearranging to solve for A gives
$A = \sigma_{AB}\rho_A\rho_B \left(\dfrac{\mu}{k_BT} \right)^{3/2} \left(\dfrac{2}{\pi} \right)^{1/2} \label{27.7.3}$
Substituting equation $\ref{27.7.3}$ into equation $\ref{27.7.2}$ gives
$dZ_{AB} = \sigma_{AB}\rho_A\rho_B \left(\dfrac{\mu}{k_BT} \right)^{3/2} \left(\dfrac{2}{\pi} \right)^{1/2} v_r^3e^{-\mu v_r^2/2k_BT} dv_r \nonumber$
With this equation, we can describe the collision frequency per unit volume between A molecules and B molecules with relative speeds in the range of $v_r$ and $v_r + dv_r$. In this equation, the portion
$\left(\dfrac{\mu}{k_BT} \right)^{3/2} \left(\dfrac{2}{\pi} \right)^{1/2} v_r^3e^{-\mu v_r^2/2k_BT} dv_r \nonumber$
which is $v_rf(v_r)dv_r$, weights the probability $f(v_r)dv_r$ that the relative speed of the molecules falls between $v_r$ and $v_r + dv_r$ by the relative speed itself.
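One way to convince yourself that this factor really does integrate back to $Z_{AB}$ is to compare a numerical quadrature of $v_r f(v_r)$ with the closed form $\langle v_r \rangle$. The sketch below (Python with SciPy; the reduced mass of an H2-H2 pair is used only as an illustrative choice) does exactly that.

```python
import math
import numpy as np
from scipy.integrate import quad

k_B = 1.381e-23                  # J K^-1
T = 298.0                        # K
mu = 0.5 * 2.016e-3 / 6.022e23   # kg, reduced mass of an H2-H2 pair (illustrative)

def vr_times_f(v):
    """The weight v_r * f(v_r) appearing in dZ_AB / (sigma_AB * rho_A * rho_B)."""
    pref = (mu / (k_B * T))**1.5 * math.sqrt(2.0 / math.pi)
    return pref * v**3 * math.exp(-mu * v**2 / (2.0 * k_B * T))

numerical, _ = quad(vr_times_f, 0.0, np.inf)
closed_form = math.sqrt(8.0 * k_B * T / (math.pi * mu))

print(numerical, closed_form)    # both ~2500 m/s: integrating dZ_AB recovers Z_AB
```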
Contributors and Attributions
• Tom Neils, Grand Rapids Community College
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(LibreTexts)/27%3A_The_Kinetic_Theory_of_Gases/27.07%3A_The_Rate_of_a_Gas-Phase_Chemical_Reactions.txt
|
• 28.1: The Time Dependence of a Chemical Reaction is Described by a Rate Law
The rate of a chemical reaction (or the reaction rate) can be defined by the time needed for a change in concentration to occur. But there is a problem in that this allows for the definition to be made based on concentration changes for either the reactants or the products. In addition, due to stoichiometry, the rates at which the concentrations of the different species change are generally different!
• 28.2: Rate Laws Must Be Determined Experimentally
There are several methods that can be used to measure chemical reaction rates. A common method is to use spectrophotometry to monitor the concentration of a species that will absorb light. If it is possible, it is preferable to measure the appearance of a product rather than the disappearance of a reactant, due to the low background interference of the measurement.
• 28.3: First-Order Reactions Show an Exponential Decay of Reactant Concentration with Time
If the reaction follows a first order rate law, it can be expressed in terms of the time-rate of change of [A]. The solution of the differential equation suggests that a plot of log concentration as a function of time will produce a straight line.
• 28.4: Different Rate Laws Predict Different Kinetics
It is possible to determine the reaction order using data from a single experiment by plotting the concentration of the reactant as a function of time. Because of the characteristic shapes of such lines for zero-order, first-order, and second-order reactions, the graphs can be used to determine the reaction order of an unknown reaction.
• 28.5: Reactions can also be Reversible
Many chemical reactions are reversible, in that the products formed during the process react to re-form the original reactants. These reversible reactions eventually reach a state of dynamic equilibrium, in which the rate of the overall forward process is equal to the rate of the overall reverse process.
• 28.6: The Rate Constants of a Reversible Reaction Can Be Determined Using Relaxation Techniques
Many chemical reactions are complete in less than a few seconds, which makes the rate of reaction difficult to determine. In these cases, the relaxation methods can be used to determine the rate of the reaction.
• 28.7: Rate Constants Are Usually Strongly Temperature Dependent
In general, increases in temperature increase the rates of chemical reactions. It is easy to see why, since most chemical reactions depend on molecular collisions. And as we discussed in Chapter 27, the frequency with which molecules collide increases with increased temperature. But also, the kinetic energy of the molecules increases, which should increase the probability that a collision event will lead to a reaction. An empirical model was proposed by Arrhenius to account for this phenomenon.
• 28.8: Transition-State Theory Can Be Used to Estimate Reaction Rate Constants
Transition state theory was proposed in 1935 by Henry Eyring, and further developed by Meredith G. Evans and Michael Polanyi (Laidler & King, 1983), as another means of accounting for chemical reaction rates. It is based on the idea that a molecular collision that leads to reaction must pass through an intermediate state known as the transition state.
• 28.E: Chemical Kinetics I - Rate Laws (Exercises)
28: Chemical Kinetics I - Rate Laws
The Reaction Rate
The rate of a chemical reaction (or the reaction rate) can be defined by the time needed for a change in concentration to occur. But there is a problem in that this allows for the definition to be made based on concentration changes for either the reactants or the products. In addition, due to stoichiometry, the rates at which the concentrations of the different species change are generally different! Toward this end, the following convention is used.
For a general reaction
$a A + b B \rightarrow c C + d D \nonumber$
the reaction rate can be defined by any of the ratios
$\text{rate} = - \dfrac{1}{a} \dfrac{\Delta [A]}{\Delta t} = - \dfrac{1}{b} \dfrac{\Delta[B]}{\Delta t} = + \dfrac{1}{c} \dfrac{\Delta [C]}{\Delta t} = + \dfrac{1}{d} \dfrac{ \Delta [D]}{\Delta t} \nonumber$
Or for infinitesimal time intervals
$\text{rate} = - \dfrac{1}{a} \dfrac{d[A]}{dt} = - \dfrac{1}{b} \dfrac{d[B]}{dt} = + \dfrac{1}{c} \dfrac{d[C]}{dt} = + \dfrac{1}{d} \dfrac{d[D]}{dt} \nonumber$
Example $1$:
Under a certain set of conditions, the rate of the reaction
$\ce{N_2 + 3 H_2 \rightarrow 2 NH_3} \nonumber$
is $6.0 \times 10^{-4}\, M/s$. Calculate the time-rate of change for the concentrations of N2, H2, and NH3.
Solution:
Due to the stoichiometry of the reaction,
$\text{rate} = - \dfrac{d[N_2]}{dt} = - \dfrac{1}{3} \dfrac{d[H_2]}{dt} = + \dfrac{1}{2} \dfrac{d[NH_3]}{dt} \nonumber$
so
\begin{align*} \dfrac{d[N_2]}{dt} &= -6.0 \times 10^{-4} \,M/s \[4pt] \dfrac{d[H_2]}{dt} &= -1.8 \times 10^{-3} \,M/s \[4pt] \dfrac{d[NH_3]}{dt} &= +1.2 \times 10^{-3} \,M/s \end{align*} \nonumber
Note: The time derivatives for the reactants are negative because the reactant concentrations are decreasing, and those of products are positive since the concentrations of products increase as the reaction progresses.
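The bookkeeping in this example can be expressed compactly in code. The helper below is a hypothetical convenience function (not part of the text) that converts a reaction rate into the time derivatives of the individual concentrations using signed stoichiometric coefficients.

```python
def species_rates(reaction_rate, stoichiometry):
    """Return d[X]/dt for each species given the reaction rate.

    stoichiometry maps species name -> signed coefficient
    (negative for reactants, positive for products).
    """
    return {species: nu * reaction_rate for species, nu in stoichiometry.items()}

# N2 + 3 H2 -> 2 NH3 with a reaction rate of 6.0e-4 M/s
print(species_rates(6.0e-4, {"N2": -1, "H2": -3, "NH3": +2}))
# {'N2': -0.0006, 'H2': -0.0018, 'NH3': 0.0012}
```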
The Rate Law
As shown above, the rate of the reaction can be followed experimentally by measuring the rate of the loss of a reactant or the rate of the production of a product. The rate of the reaction is often related to the concentration of some or all of the chemical species present at a given time. An equation called the rate law is used to show this relationship. The rate law cannot be predicted by looking at the balanced chemical reaction but must be determined by experiment. For example, the rate law for the reaction
$\ce{Cl2 (g) + CO (g) → Cl2CO (g) } \nonumber$
was experimentally determined to be
$\text{rate} = k[Cl_2]^{3/2}[CO] \nonumber$
In this equation, $k$ is the rate constant, and $[\ce{Cl_2}]$ and $[\ce{CO}]$ are the molar concentrations of Cl2 and of CO. Each exponent is called the order of the given species. Thus, the rate law is three-halves order in Cl2 and first order in CO. The sum of the individual reactant orders is called the reaction order. This reaction has a reaction order of two and a half.
In the next section, we will discuss methods to experimentally determine the rate law.
Contributors and Attributions
• Patrick E. Fleming (Department of Chemistry and Biochemistry; California State University, East Bay)
• Tom Neils (Grand Rapids Community College)
28.02: Rate Laws Must Be Determined Experimentally
There are several methods that can be used to measure chemical reaction rates. A common method is to use spectrophotometry to monitor the concentration of a species that will absorb light. If it is possible, it is preferable to measure the appearance of a product rather than the disappearance of a reactant, due to the low background interference of the measurement. However, high-quality kinetic data can be obtained either way.
The Stopped-Flow Method
The stopped-flow method involves using flow control (which can be provided by syringes or other valves) to control the flow of reactants into a mixing chamber where the reaction takes place. The reaction mixture can then be probed spectrophotometrically. Stopped-flow methods are commonly used in physical chemistry laboratory courses (Progodich, 2014).
Some methods depend on measuring the initial rate of a reaction, which can be subject to a great deal of experimental uncertainty due to fluctuations in instrumentation or conditions. Other methods require a broad range of time and concentration data. These methods tend to produce more reliable results as they can make use of the broad range of data to smooth over random fluctuations that may affect measurements. Both approaches (initial rates and full concentration profile data methods) will be discussed below.
28.03: First-Order Reactions Show an Exponential Decay of Reactant Concentration with Time
A first order rate law would take the form
$-\dfrac{d[A]}{dt} = k[A] \nonumber$
Again, separating the variables by placing all of the concentration terms on the left and all of the time terms on the right yields
$\dfrac{d[A]}{[A]} =-k\,dt \nonumber$
This expression is also easily integrated as before
$\int_{[A]_o}^{[A]} \dfrac{d[A]}{[A]} =-k \int_{t=0}^{t}\,dt \nonumber$
Noting that
$\dfrac{dx}{x} = d (\ln x) \nonumber$
The form of the integrated rate law becomes
$\ln [A] - \ln [A]_o = -kt \nonumber$
or
$\ln [A] = \ln [A]_o - kt \label{In1}$
This form implies that a plot of the natural logarithm of the concentration is a linear function of the time. And so a plot of ln[A] as a function of time should produce a linear plot, the slope of which is -k, and the intercept of which is ln[A]0.
Example $1$:
Consider the following kinetic data. Use a graph to demonstrate that the data are consistent with first order kinetics. Also, if the data are first order, determine the value of the rate constant for the reaction.
Time (s) 0 10 20 50 100 150 200 250 300
[A] (M) 0.873 0.752 0.648 0.414 0.196 0.093 0.044 0.021 0.010
Solution
A plot of $\ln [A]$ versus time for these data is linear, consistent with first-order kinetics. From the slope of this plot, it can be seen that the rate constant is 0.0149 s$^{-1}$. The concentration at time $t = 0$ can also be inferred from the intercept.
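Where the plot itself is not reproduced, the fit is easy to carry out numerically. A minimal sketch using numpy (with the data from the table above) gives the slope and intercept of $\ln [A]$ versus $t$:

```python
import numpy as np

t = np.array([0, 10, 20, 50, 100, 150, 200, 250, 300], dtype=float)            # s
A = np.array([0.873, 0.752, 0.648, 0.414, 0.196, 0.093, 0.044, 0.021, 0.010])  # M

slope, intercept = np.polyfit(t, np.log(A), 1)   # least-squares fit of ln[A] vs t

print(f"k     = {-slope:.4f} s^-1")          # ~0.0149 s^-1
print(f"[A]_0 = {np.exp(intercept):.3f} M")  # ~0.87 M, from the intercept
```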
It should also be noted that the integrated rate law (Equation \ref{In1}) can be expressed in exponential form:
$[A] = [A]_o e^{-kt} \nonumber$
Because of this functional form, 1st order kinetics are sometimes referred to as exponential decay kinetics. Many processes, including radioactive decay of nuclides follow this type of rate law.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(LibreTexts)/28%3A_Chemical_Kinetics_I_-_Rate_Laws/28.01%3A_The_Time_Dependence_of_a_Chemical_Reaction_Is_Described_by_a_Rate_Law.txt
|
Different rate laws predict different kinetics
Zero-order Kinetics
If the reaction follows a zeroth order rate law, it can be expressed in terms of the time-rate of change of [A] (which will be negative since A is a reactant):
$-\dfrac{d[A]}{dt} = k \nonumber$
In this case, it is straightforward to separate the variables. Placing time variables on the right and [A] on the left
$d[A] = - k \,dt \nonumber$
In this form, it is easy to integrate. If the concentration of A is [A]0 at time t = 0, and the concentration of A is [A] at some arbitrary time later, the form of the integral is
$\int _{[A]_o}^{[A]} d[A] = - k \int _{t_o}^{t}\,dt \nonumber$
which yields
$[A] - [A]_o = -kt \nonumber$
or
$[A] = [A]_o -kt \nonumber$
This suggests that a plot of concentration as a function of time will produce a straight line, the slope of which is –k, and the intercept of which is [A]0. If such a plot is linear, then the data are consistent with 0th order kinetics. If they are not, other possibilities must be considered.
Second Order Kinetics
If the reaction follows a second order rate law, the same methodology can be employed. The rate can be written as
$-\dfrac{d[A]}{dt} = k [A]^2 \label{eq1A}$
The separation of concentration and time terms (this time keeping the negative sign on the left for convenience) yields
$-\dfrac{d[A]}{[A]^2} = k \,dt \nonumber$
The integration then becomes
$- \int_{[A]_o}^{[A]} \dfrac{d[A]}{[A]^2} = \int_{t=0}^{t}k \,dt \label{eq1}$
And noting that
$- \dfrac{dx}{x^2} = d \left(\dfrac{1}{x} \right) \nonumber$
the result of integration Equation \ref{eq1} is
$\dfrac{1}{[A]} -\dfrac{1}{[A]_o} = kt \nonumber$
or
$\dfrac{1}{[A]} = \dfrac{1}{[A]_o} + kt \nonumber$
And so a plot of $1/[A]$ as a function of time should produce a linear plot, the slope of which is $k$, and the intercept of which is $1/[A]_0$.
Other 2nd order rate laws are a little bit trickier to integrate, as the integration depends on the actual stoichiometry of the reaction being investigated. For example, for a reaction of the type
$A + B \rightarrow P \nonumber$
That has rate laws given by
$-\dfrac{d[A]}{dt} = k [A][B] \nonumber$
and
$-\dfrac{d[B]}{dt} = k [A][B] \nonumber$
the integration will depend on the decrease of [A] and [B] (which will be related by the stoichiometry), which can be expressed in terms of the concentration of the product [P].
$[A] = [A]_o – [P] \label{eqr1}$
and
$[B] = [B]_o – [P]\label{eqr2}$
The concentration dependence on $A$ and $B$ can then be eliminated if the rate law is expressed in terms of the production of the product.
$\dfrac{d[P]}{dt} = k [A][B] \label{rate2}$
Substituting the relationships for $[A]$ and $[B]$ (Equations \ref{eqr1} and \ref{eqr2}) into the rate law expression (Equation \ref{rate2}) yields
$\dfrac{d[P]}{dt} = k ( [A]_o - [P]) ([B]_o - [P]) \label{rate3}$
Separation of concentration and time variables results in
$\dfrac{d[P]}{( [A]_o - [P]) ([B]_o - [P])} = k\,dt \nonumber$
Noting that at time $t = 0$, $[P] = 0$, the integrated form of the rate law can be generated by solving the integral
$\int_{0}^{[P]} \dfrac{d[P]}{( [A]_o - [P]) ([B]_o - [P])} = \int_{t=0}^{t} k\,dt \nonumber$
Consulting a table of integrals reveals that for $a \neq b$[1],
$\int \dfrac{dx}{(a-x)(b-x)} = \dfrac{1}{b-a} \ln \left(\dfrac{b-x}{a-x} \right) \nonumber$
Applying the definite integral (as long as $[A]_0 \neq [B]_0$) results in
$\left. \dfrac{1}{[B]_0-[A]_0} \ln \left( \dfrac{[B]_0-[P]}{[A]_0-[P]} \right) \right |_0^{[P]} = \left. k\, t \right|_0^t \nonumber$
$\dfrac{1}{[B]_0-[A]_0} \ln \left( \dfrac{[B]_0-[P]}{[A]_0-[P]} \right) -\dfrac{1}{[B]_0-[A]_0} \ln \left( \dfrac{[B]_0}{[A]_0} \right) =k\, t \label{finalint}$
Substituting Equations \ref{eqr1} and \ref{eqr2} into Equation \ref{finalint} and simplifying (combining the natural logarithm terms) yields
$\dfrac{1}{[B]_0-[A]_0} \ln \left( \dfrac{[B][A]_o}{[A][B]_o} \right) = kt \nonumber$
For this rate law, a plot of $\ln([B]/[A])$ as a function of time will produce a straight line, the slope of which is
$m = ([B]_0 – [A]_0)k. \nonumber$
In the limit where $[A]_0 = [B]_0$, $[A] = [B]$ at all times, due to the stoichiometry of the reaction. As such, the rate law becomes
$\text{rate} = k [A]^2 \nonumber$
which can be integrated directly, as in Equation \ref{eq1A}, and the integrated rate law is (as before)
$\dfrac{1}{[A]} = \dfrac{1}{[A]_o} + kt \nonumber$
Example $2$: Confirming Second Order Kinetics
Consider the following kinetic data. Use a graph to demonstrate that the data are consistent with second order kinetics. Also, if the data are second order, determine the value of the rate constant for the reaction.
time (s) 0 10 30 60 100 150 200
[A] (M) 0.238 0.161 0.098 0.062 0.041 0.029 0.023
Solution
A plot of $1/[A]$ versus time for these data is linear, consistent with second-order kinetics. From the slope of this plot, it can be seen that the rate constant is approximately 0.198 M$^{-1}$ s$^{-1}$. The concentration at time $t = 0$ can also be inferred from the intercept.
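As in the first-order example, the fit can be performed numerically. A short sketch using numpy with the tabulated data:

```python
import numpy as np

t = np.array([0, 10, 30, 60, 100, 150, 200], dtype=float)          # s
A = np.array([0.238, 0.161, 0.098, 0.062, 0.041, 0.029, 0.023])    # M

slope, intercept = np.polyfit(t, 1.0 / A, 1)   # least-squares fit of 1/[A] vs t

print(f"k     = {slope:.3f} M^-1 s^-1")    # ~0.198 M^-1 s^-1
print(f"[A]_0 = {1.0/intercept:.3f} M")    # ~0.23 M, from the intercept
```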
[1] This integral form can be generated by using the method of partial fractions. See (House, 2007) for a full derivation.
Example $3$
Dinitrogen pentoxide (N2O5) decomposes to NO2 and O2 at relatively low temperatures in the following reaction:
$2N_{2}O_{5}\left ( soln \right ) \rightarrow 4NO_{2}\left ( soln \right )+O_{2}\left ( g \right )$
This reaction is carried out in a CCl4 solution at 45°C. The concentrations of N2O5 as a function of time are listed in the following table, together with the natural logarithms and reciprocal N2O5 concentrations. Plot a graph of the concentration versus t, ln concentration versus t, and 1/concentration versus t and then determine the rate law and calculate the rate constant.
Time (s) [N2O5] (M) ln[N2O5] 1/[N2O5] (M−1)
0 0.0365 −3.310 27.4
600 0.0274 −3.597 36.5
1200 0.0206 −3.882 48.5
1800 0.0157 −4.154 63.7
2400 0.0117 −4.448 85.5
3000 0.00860 −4.756 116
3600 0.00640 −5.051 156
Given: balanced chemical equation, reaction times, and concentrations
Asked for: graph of data, rate law, and rate constant
Strategy:
A Use the data in the table to separately plot concentration, the natural logarithm of the concentration, and the reciprocal of the concentration (the vertical axis) versus time (the horizontal axis). Compare the graphs with the characteristic shapes for zero-order, first-order, and second-order kinetics to determine the reaction order.
B Write the rate law for the reaction. Using the appropriate data from the table and the linear graph corresponding to the rate law for the reaction, calculate the slope of the plotted line to obtain the rate constant for the reaction.
Solution:
A Plots of [N2O5] versus t, ln[N2O5] versus t, and 1/[N2O5] versus t can be constructed from the data in the table.
The plot of ln[N2O5] versus t gives a straight line, whereas the plots of [N2O5] versus t and 1/[N2O5] versus t do not. This means that the decomposition of N2O5 is first order in [N2O5].
B The rate law for the reaction is therefore
$rate = k \left [N_{2}O_{5} \right ]$
Calculating the rate constant is straightforward because we know that the slope of the plot of ln[A] versus t for a first-order reaction is −k. We can calculate the slope using any two points that lie on the line in the plot of ln[N2O5] versus t. Using the points for t = 0 and 3000 s,
$slope= \dfrac{\ln\left [N_{2}O_{5} \right ]_{3000}-\ln\left [N_{2}O_{5} \right ]_{0}}{3000\;s-0\;s} = \dfrac{\left (-4.756 \right )-\left (-3.310 \right )}{3000\;s} =-4.820\times 10^{-4}\;s^{-1}$
Thus k = 4.820 × 10−4 s−1.
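The two-point slope used above, and a full least-squares fit over all seven points, can both be obtained with a short numpy sketch:

```python
import numpy as np

t = np.arange(0, 3601, 600, dtype=float)                                    # s
lnA = np.array([-3.310, -3.597, -3.882, -4.154, -4.448, -4.756, -5.051])    # ln[N2O5]

# Two-point estimate between t = 0 and t = 3000 s, as in the worked solution
k_two_point = -(lnA[5] - lnA[0]) / (t[5] - t[0])

# Least-squares fit over all seven points
slope, intercept = np.polyfit(t, lnA, 1)

print(f"two-point k     = {k_two_point:.3e} s^-1")   # ~4.82e-4 s^-1
print(f"least-squares k = {-slope:.3e} s^-1")        # essentially the same value
```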
Contributors
• Anonymous
Modified by Joshua Halpern (Howard University), Scott Sinex, and Scott Johnson (PGCC)
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(LibreTexts)/28%3A_Chemical_Kinetics_I_-_Rate_Laws/28.04%3A_Different_Rate_Laws_Predict_Different_Kinetics.txt
|
Reversible and irreversible reactions are prevalent in nature and are responsible for reactions such as the breakdown of ammonia.
Introduction
It was believed that all chemical reactions were irreversible until 1803, when French chemist Claude Louis Berthollet introduced the concept of reversible reactions. Initially he observed that sodium carbonate and calcium chloride react to yield calcium carbonate and sodium chloride; however, after observing sodium carbonate formation around the edges of salt lakes, he realized that the large amount of salt in the evaporating water reacted with calcium carbonate to form sodium carbonate, indicating that the reverse reaction was occurring.
Chemical reactions are represented by chemical equations. These equations typically have a unidirectional arrow ($\rightarrow$) to represent irreversible reactions. Other chemical equations may have bidirectional harpoons ($\rightleftharpoons$) that represent reversible reactions (not to be confused with the double arrows $\leftrightarrow$ used to indicate resonance structures).
Irreversible Reactions
A fundamental concept of chemistry is that chemical reactions occur when reactants react with each other to form products. These unidirectional reactions are known as irreversible reactions, reactions in which the reactants convert to products and where the products cannot convert back to the reactants. These reactions are essentially like baking. The ingredients, acting as the reactants, are mixed and baked together to form a cake, which acts as the product. This cake cannot be converted back to the reactants (the eggs, flour, etc.), just as the products in an irreversible reaction cannot convert back into the reactants.
An example of an irreversible reaction is combustion. Combustion involves burning an organic compound—such as a hydrocarbon—and oxygen to produce carbon dioxide and water. Because water and carbon dioxide are stable, they do not react with each other to form the reactants. Combustion reactions take the following form:
$C_xH_y + O_2 \rightarrow CO_2 + H_2O$
Reversible Reactions
In reversible reactions, the reactants and products are never fully consumed; they are each constantly reacting and being produced. A reversible reaction can take the following summarized form:
$A + B \underset{k_{-1}} {\overset{k_1} {\rightleftharpoons}} C + D$
This reversible reaction can be broken into two reactions.
Reaction 1: $A + B \xrightarrow{k_1}C+D$
Reaction 2: $C + D \xrightarrow{k_{-1}}A+B$
These two reactions are occurring simultaneously, which means that the reactants are reacting to yield the products just as the products are reacting to produce the reactants. Collisions of the reacting molecules cause chemical reactions in a closed system. After products are formed, the bonds between these products are broken when the molecules collide with each other with sufficient energy to break the bonds of the product and reactant molecules.
As an example, consider the reversible reaction $\text{N}_2\text{O}_4 \rightleftharpoons 2\,\text{NO}_2$, which can be broken down into a forward reaction (Reaction 1) and a reverse reaction (Reaction 2). Because the system is closed, Reaction 1 and Reaction 2 happen at the same time.
Imagine a ballroom. Let reactant A be 10 girls and reactant B be 10 boys. As each girl and boy goes to the dance floor, they pair up to become a product. Once five girls and five boys are on the dance floor, one of the five pairs breaks up and moves to the sidelines, becoming reactants again. As this pair leaves the dance floor, another boy and girl on the sidelines pair up to form a product once more. This process continues over and over again, representing a reversible reaction.
Unlike irreversible reactions, reversible reactions lead to equilibrium: in reversible reactions, the reaction proceeds in both directions whereas in irreversible reactions the reaction proceeds in only one direction.
If the reactants are formed at the same rate as the products, a dynamic equilibrium exists. For example, if a water tank is being filled with water at the same rate as water is leaving the tank (through a hypothetical hole), the amount of water remaining in the tank remains consistent.
Connection to Biology
There are four binding sites on a hemoglobin protein. Hemoglobin molecules can bind either carbon dioxide or oxygen. As blood travels through the alveoli of the lungs, hemoglobin molecules pick up and bind oxygen. As the hemoglobin travels through the rest of the body, it releases oxygen at the capillaries for the organ systems to use. After releasing the oxygen, it picks up carbon dioxide. Because this process is constantly carried out throughout the body, there are always some hemoglobin molecules picking up or releasing oxygen and other hemoglobin molecules picking up or releasing carbon dioxide. Therefore, the hemoglobin molecules, oxygen, and carbon dioxide are reactants, while the hemoglobin molecules with oxygen or carbon dioxide bound to them are the products. In this closed system, some reactants convert into products as some products change back into reactants, making it similar to a reversible reaction.
Contributors and Attributions
• Heather Yee (UCD), Mandeep Sohal (UCD)
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(LibreTexts)/28%3A_Chemical_Kinetics_I_-_Rate_Laws/28.05%3A_Reactions_Can_Also_Be_Reversible.txt
|
The traditional experimental methods described above all assume the possibility of following the reaction after its components have combined into a homogeneous mixture of known concentrations. But what can be done if the time required to complete the mixing process is comparable to or greater than the time needed for the reaction to run to completion?
Flow methods
Flow instruments are rapid mixing devices used to study the chemical kinetics of fast reactions in solution. There are different flavors that can be implemented depending on the nature of the reaction, as discussed below.
Continuous Flow Approach
For reactions that take place in milliseconds, the standard approach since the 1950s has been to employ a flow technique of some kind. An early example was used to study fast gas-phase reactions in which one of the reactants is a free radical such as OH that can be produced by an intense microwave discharge acting on a suitable source gas mixture. This gas, along with the other reactant being investigated, is made to flow through a narrow tube at a known velocity.
If the distance between the point at which the reaction is initiated and the product detector is known, then the time interval can be found from the flow rate. By varying this distance, the time required to obtain the maximum yield can then be determined. Although this method is very simple in principle, it can be complicated in practice.
Stopped Flow Approach
Owing to the rather large volumes required, the continuous flow method is more practical for the study of gas-phase reactions than for solutions, for which the stopped-flow method described below is generally preferred. Stopped-flow methods are by far the most common means of studying fast solution-phase reactions over time intervals down to a fraction of a millisecond. The use of reasonably simple devices is now practical even in student laboratory experiments. These techniques make it possible to follow not only changes in the concentrations of reactants and products, but also the buildup and decay of reaction intermediates.
The basic stopped-flow apparatus consists of two or more coupled syringes that rapidly inject the reactants into a small mixing chamber and then through an observation cell that can be coupled to instruments that measure absorption, fluorescence, light scattering, or other optical or electrical properties of the solution. As the solution flows through the cell, it empties into a stopping syringe that, when filled, strikes a backstop that abruptly stops the flow. The volume that the stopping syringe can accept is adjusted so that the mixture in the cell has just become uniform and has reached a steady state; at this point, recording of the cell measurement begins and its change is followed.
Quenched Flow Approach
In a quenched-flow instrument, the reaction is stopped after a certain amount of time has passed after mixing. The stopping of the reaction is called quenching and it can be achieved by various means, for example by mixing with another solution, which stops the reaction (chemical quenching), quickly lowering the temperature (freeze quenching) or even by exposing the sample to light of a certain wavelength (optical quenching).
Of course, there are many reactions that cannot be followed by changes in light absorption or other physical properties that are conveniently monitored. In such cases, it is often practical to quench (stop) the reaction after a desired interval by adding an appropriate quenching agent. For example, an enzyme-catalyzed reaction can be stopped by adding an acid, base, or salt solution that denatures (destroys the activity of) the protein enzyme. Once the reaction has been stopped, the mixture is withdrawn and analyzed in an appropriate manner.
The quenched-flow technique works something like the stopped-flow method described above, with a slightly altered plumbing arrangement. The reactants A and B are mixed and fed directly through the diverter valve to the measuring cell. After a set interval that can vary from a few milliseconds to 200 sec or more, the controller activates the quenching syringe and diverter valve, flooding the cell with the quenching solution.
Relaxation Methods
To investigate reactions that are complete in less than a millisecond, one can start with a pre-mixed sample in which one of active reactants is generated in situ. Alternatively, a rapid change in pressure or temperature can alter the composition of a reaction that has already achieved equilibrium.
Flash Photolysis
Many reactions are known which do not take place without light of wavelength sufficiently short to supply the activation energy needed to break a bond, often leading to the creation of a highly reactive radical. A good example is the combination of gaseous Cl2 with H2, which proceeds explosively when the system is illuminated with visible light. In flash photolysis, a short pulse of light is used to initiate a reaction whose progress can be observed by optical or other means.
Photolysis refers to the use of light to decompose a molecule into simpler units, often ions or free radicals. In contrast to thermolysis (decomposition induced by high temperature), photolysis is able to inject energy into a molecule almost instantaneously and can be much "cleaner," meaning that there are fewer side reactions that often lead to complex mixtures of products. Photolysis can also be highly specific; the wavelength of the light that triggers the reaction can often be adjusted to activate one particular kind of molecule without affecting others that might be present.
Norrish and Porter
All this had been known for a very long time, but until the mid-1940's there was no practical way of studying the kinetics of the reactions involving the highly reactive species produced by photolysis. In 1945, Ronald Norrish of Cambridge University and his graduate student George Porter conceived the idea of using a short-duration flash lamp to generate gas-phase CH2 radicals, and then following the progress of the reaction of these radicals with other species by means of absorption spectroscopy.
In a flash photolysis experiment, recording of the absorbance of the sample cell contents is timed to follow the flash by an interval that can be varied in order to capture the effects produced by the product or intermediate as it is formed or decays. Norrish and Porter shared the 1967 Nobel Prize in Chemistry for this work.
Many reactions, especially those that take place in solution, occur too rapidly to follow by flow techniques, and can therefore only be observed when they are already at equilibrium. The classical examples of such reactions are two of the fastest ones ever observed, the dissociation of water
$2 H_2O \rightarrow H_3O^+ + OH^-$
and the formation of the triiodide ion in aqueous solution
$I^– + I_2 \rightarrow I_3^–$
Reactions of these kinds could not be studied until the mid-1950s when techniques were developed to shift the equilibrium by imposing an abrupt physical change on the system.
Temperature Jumps
The rate constants of reversible reactions can be measured using a relaxation method. In this method, the concentrations of reactants and products are allowed to achieve equilibrium at a specific temperature. Once equilibrium has been achieved, the temperature is rapidly changed, and then the time needed to achieve the new equilibrium concentrations of reactants and products is measured. For example, if the reaction
$\text{A} \overset{k_1}{\underset{k_{-1}}{\rightleftharpoons}} \text{B} \label{relax}$
is endothermic, then according to the Le Chatelier principle, subjecting the system to a rapid jump in temperature will shift the equilibrium state to one in which the product B has a higher concentration. The composition of the system will then begin to shift toward the new equilibrium composition at a rate determined by the kinetics of the process.
For the general case, the quantity $x$ being followed is a measurable quantity such as light absorption or electrical conductivity that varies linearly with the composition of the system. In a first-order process, $x$ will vary with time according to
$x_t = x_o e^{-kt}$
After the abrupt perturbation at time $t_0$, the relaxation time $t^*$ is defined as the time required for the deviation from equilibrium, $\Delta x$, to fall to $1/e$ (that is, $1/2.718$) of its initial value. The derivation of $t^*$ can be found in most standard kinetics textbooks. Of the possible perturbations, temperature jumps are likely the most commonly used.
The rate law for the reversible reaction in Equation $\ref{relax}$ can be written as
$\dfrac{d \left[ \text{B} \right]}{dt} = k_1 \left[ \text{A} \right] - k_{-1} \left[ \text{B} \right] \label{19.33}$
Consider a system comprising $\text{A}$ and $\text{B}$ that is allowed to achieve equilibrium concentrations at a temperature, $T_1$. After equilibrium is achieved, the temperature of the system is instantaneously lowered to $T_2$ and the system is allowed to achieve new equilibrium concentrations of $\text{A}$ and $\text{B}$, $\left[ \text{A} \right]_{\text{eq},2}$ and $\left[ \text{B} \right]_{\text{eq}, 2}$. During the transition time from the first equilibrium state to the second equilibrium state, the total concentration $C_0 = \left[ \text{A} \right] + \left[ \text{B} \right] = \left[ \text{A} \right]_{\text{eq},1} + \left[ \text{B} \right]_{\text{eq},1}$ is constant, so we can write the instantaneous concentration of $\text{A}$ as
$\left[ \text{A} \right] = C_0 - \left[ \text{B} \right] \label{19.34}$
The rate of change of species $\text{B}$ can then be written as
$\dfrac{d \left[ \text{B} \right]}{dt} = k_1 \left( C_0 - \left[ \text{B} \right] \right) - k_{-1} \left[ \text{B} \right] = k_1 C_0 - \left( k_1 + k_{-1} \right) \left[ \text{B} \right] \label{19.35}$
At equilibrium, $d \left[ \text{B} \right]/dt = 0$ and $\left[ \text{B} \right] = \left[ \text{B} \right]_{\text{eq}, 2}$, allowing us to write
$k_1 C_0 = \left( k_1 + k_{-1} \right) \left[ \text{B} \right]_{\text{eq}, 2} \label{19.36}$
Using the above equation, we can rewrite the rate equation as
$\dfrac{ d \left[ \text{B} \right]}{\left( \left[ \text{B} \right]_{\text{eq}, 2} - \left[ \text{B} \right] \right)} = \left( k_1 + k_{-1} \right) dt \label{19.37}$
Integrating yields
$-\text{ln} \left( \left[ \text{B} \right] - \left[ \text{B} \right]_{\text{eq}, 2} \right) = -\left( k_1 + k_{-1} \right) t + C \label{19.38}$
We can rearrange the above equation in terms of $\text{B}$
$\left[ \text{B} \right] = Ce^{- \left( k_1 + k_{-1} \right) t} + \left[ \text{B} \right]_{\text{eq}, 2} \label{19.39}$
At $t = 0$, $\left[ \text{B} \right] = \left[ \text{B} \right]_{\text{eq}, 1}$, so $C = \left[ \text{B} \right]_{\text{eq}, 1} - \left[ \text{B} \right]_{\text{eq}, 2}$. Plugging in the value of $C$, we arrive at
$\left[ \text{B} \right] - \left[ \text{B} \right]_{\text{eq}, 2} = \left( \left[ \text{B} \right]_{\text{eq}, 1} - \left[ \text{B} \right]_{\text{eq}, 2} \right) e^{-\left( k_1 + k_{-1} \right) t} \label{19.40}$
which can also be expressed as
$\Delta \left[ \text{B} \right] = \Delta \left[ \text{B} \right]_0 e^{-\left( k_1 + k_{-1} \right) t} = \Delta \left[ \text{B} \right]_0 e^{-t/\tau} \label{19.41}$
where $\Delta \left[ \text{B} \right]$ is the difference in the concentration of $\text{B}$ from the final equilibrium concentration after the perturbation, and $\tau$ is the relaxation time. A plot of $\text{ln} \left( \Delta \left[ \text{B} \right]/\Delta \left[ \text{B} \right]_0 \right)$ versus $t$ will be linear with a slope of $-\left( k_1 + k_{-1} \right)$, where $k_1$ and $k_{-1}$ are the rate constants at temperature, $T_2$.
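A small numerical sketch can make the procedure concrete. The rate constants and equilibrium concentrations below are purely illustrative assumptions (chosen so that $K = k_1/k_{-1} = 3$ for a total concentration of 1 M); the script generates the relaxation of $\left[ \text{B} \right]$ predicted by Equation \ref{19.40} and then recovers $k_1 + k_{-1}$ from the slope of $\text{ln} \left( \Delta \left[ \text{B} \right] / \Delta \left[ \text{B} \right]_0 \right)$ versus $t$.

```python
import numpy as np

# Illustrative (assumed) rate constants at the post-jump temperature T2
k1, k_minus1 = 3.0, 1.0       # s^-1, so k1 + k_minus1 = 4.0 s^-1
B_eq1 = 0.40                  # M, equilibrium [B] before the temperature jump
B_eq2 = 0.75                  # M, equilibrium [B] after the jump (K = 3, C0 = 1 M)

t = np.linspace(0.0, 2.0, 50)                                 # s
B = B_eq2 + (B_eq1 - B_eq2) * np.exp(-(k1 + k_minus1) * t)    # Equation 19.40

# Slope of ln(d[B]/d[B]_0) versus t gives -(k1 + k_minus1)
delta_B = B - B_eq2
slope, _ = np.polyfit(t, np.log(np.abs(delta_B / delta_B[0])), 1)

print(f"recovered k1 + k-1 = {-slope:.2f} s^-1")   # ~4.0 s^-1
```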
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(LibreTexts)/28%3A_Chemical_Kinetics_I_-_Rate_Laws/28.06%3A_The_Rate_Constants_of_a_Reversible_Reaction_Can_Be_Determined_Using_Relaxation_Techniques.txt
|
In general, increases in temperature increase the rates of chemical reactions. It is easy to see why, since most chemical reactions depend on molecular collisions. And as we discussed in Chapter 27, the frequency with which molecules collide increases with increased temperature. But also, the kinetic energy of the molecules increases, which should increase the probability that a collision event will lead to a reaction. An empirical model was proposed by Arrhenius to account for this phenomenon. The Arrhenius model (Arrhenius, 1889) can be expressed as
$k = A e^{-E_a/RT} \nonumber$
Although the model is empirical, some of the parameters can be interpreted in terms of the energy profile of the reaction. $E_a$, for example, is the activation energy, which represents the energy barrier that must be overcome in a collision to lead to a reaction.
If the rate constant for a reaction is measured at two temperatures, the activation energy can be determined by taking the ratio of the two rate constants. This leads to the following expression for the Arrhenius model:
$\ln \left( \dfrac{k_2}{k_1} \right) = - \dfrac{E_a}{R} \left( \dfrac{1}{T_2} - \dfrac{1}{T_1} \right) \label{Arrhenius}$
Example $1$:
For a given reaction, the rate constant doubles when the temperature is increased form 25 °C to 35 °C. What is the Arrhenius activation energy for this reaction?
Solution
The energy of activation can be calculated from the Arrhenius Equation (Equation \ref{Arrhenius}).
$\ln \left( \dfrac{2k_1}{k_1} \right) = - \dfrac{E_a}{8.314 \, \dfrac{J}{mol\, K}} \left( \dfrac{1}{308\,K} - \dfrac{1}{298\,K} \right) \nonumber$
Solving for the activation energy:
$E_a = 52.9\, kJ/mol \nonumber$
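The arithmetic in this example is easily checked in a couple of lines; the sketch below simply rearranges Equation \ref{Arrhenius} for $E_a$.

```python
import math

R = 8.314               # J mol^-1 K^-1
T1, T2 = 298.0, 308.0   # K
k2_over_k1 = 2.0        # the rate constant doubles between T1 and T2

Ea = -R * math.log(k2_over_k1) / (1.0/T2 - 1.0/T1)   # rearranged Arrhenius expression

print(f"Ea = {Ea/1000:.1f} kJ/mol")   # ~52.9 kJ/mol
```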
Preferably, however, the rate constant is measured at several temperatures, and then the activation energy can be determined using all of the measurements, by fitting them to the expression
$\ln (k) = - \dfrac{E_a}{RT} + \ln (A) \nonumber$
This can be done graphically by plotting the natural logarithm of the rate constant as a function of $1/T$ (with the temperature measured in K). The result should be a straight line (for a well-behaved reaction!) with a slope of $–E_a/R$.
There are some theoretical models (such as collision theory and transition state theory) which suggest the form of the Arrhenius model, but the model itself is purely empirical. A general feature, however, of the theoretical approaches is to interpret the activation energy as an energy barrier which a reaction must overcome in order to lead to a chemical reaction.
28.08: Transition-State Theory Can Be Used to Estimate Reaction Rate Constants
Transition State Theory
In Figure 28.8.1 , the point at which we evaluate or measure $E_a$ serves as a dividing line (also called a dividing surface) between reactants and products. At this point, we do not have $\text{A} + \text{B}$, and we do not have $\text{C}$. Rather, what we have is an activated complex of some kind called a transition state between reactants and products. The value of the reaction coordinate at the transition state is denoted $q^\ddagger$. Recall our notation $\text{x}$ for the complete set of coordinates and momenta of all of the atoms in the system. Generally, the reaction coordinate $q$ is a function $q \left( \text{x} \right)$ of all of the coordinates and momenta, although typically, $q \left( \text{x} \right)$ is a function of a subset of the coordinates and, possibly, the momenta.
As an example, let us consider two atoms $\text{A}$ and $\text{B}$ undergoing a collision. An appropriate reaction coordinate could simply be the distance $r$ between $\text{A}$ and $\text{B}$. This distance is a function of the positions $\textbf{r}_\text{A}$ and $\textbf{r}_\text{B}$ of the two atoms, in that
$q = r = \left| \textbf{r}_\text{A} - \textbf{r}_\text{B} \right| \label{20.24}$
When $\text{A}$ and $\text{B}$ are molecules, such as proteins, $q \left( \text{x} \right)$ is a much more complicated function of $\text{x}$.
Now, recall that the mechanical energy $\mathcal{E} \left( \text{x} \right)$ is given by
$\mathcal{E} \left( \text{x} \right) = \sum_{i=1}^N \frac{\textbf{p}_i^2}{2m_i} + U \left( \textbf{r}_1, \ldots, \textbf{r}_N \right) \label{20.25}$
and is a sum of kinetic and potential energies. Transition state theory assumes the following:
1. The system is classical, and the time dependence of the coordinates and momenta is determined by Newton’s laws of motion $m_i \ddot{\textbf{r}}_i = \textbf{F}_i \label{20.26}$
2. We start a trajectory obeying this equation of motion with an initial condition $\text{x}$ that makes $q \left( \text{x} \right) = q^\ddagger$ and such that $\dot{q} \left( \text{x} \right) > 0$ so that the reaction coordinate proceeds initially to the right, i.e., toward products.
3. We follow the motion $\text{x}_t$ of the coordinates and momenta in time starting from this initial condition $\text{x}$, which gives us a unique function $\text{x}_t \left( \text{x} \right)$.
4. If $q \left( \text{x}_t \left( \text{x} \right) \right) > q^\ddagger$ at time $t$, then the trajectory is designated as “reactive” and contributes to the reaction rate.
Define a function $\theta \left( y \right)$, which is $1$ if $y \geq 0$ and $0$ if $y < 0$. The function $\theta \left( y \right)$ is known as a step function.
We now define a flux of reactive trajectories $k \left( t \right)$ using statistical mechanics
$k \left( t \right) = \frac{1}{h Q_r} \int_{q \left( \text{x} \right) = q^\ddagger} d \text{x} \: e^{-\beta \mathcal{E} \left( \text{x} \right)} \left| \dot{q} \left( \text{x} \right) \right| \theta \left( q \left( \text{x}_t \left( \text{x} \right) \right) - q^\ddagger \right) \label{20.27}$
where $h$ is Planck’s constant. Here $Q_r$ is the partition function of the reactants
$Q_r = \int d \text{x} \: e^{-\beta \mathcal{E} \left( \text{x} \right)} \theta \left( q^\ddagger - q \left( \text{x} \right) \right) \label{20.28}$
The meaning of Equation $\ref{20.27}$ is an ensemble average over a canonical ensemble of the product $\left| \dot{q} \left( \text{x} \right) \right|$ and $\theta \left( q \left( \text{x}_t \left( \text{x} \right) \right) - q^\ddagger \right)$. The first factor in this product $\left| \dot{q} \left( \text{x} \right) \right|$ forces the initial velocity of the reaction coordinate to be positive, i.e., toward products, and the step function $\theta \left( q \left( \text{x}_t \left( \text{x} \right) \right) - q^\ddagger \right)$ requires that the trajectory of $q \left( \text{x}_t \left( \text{x} \right) \right)$ be reactive, otherwise, the step function will give no contribution to the flux. The function $k \left( t \right)$ in Equation $\ref{20.27}$ is known as the reactive flux. In the definition of $Q_r$, the step function $\theta \left( q^\ddagger - q \left( \text{x} \right) \right)$ counts only the microscopic states on the reactant side of the dividing surface.
A plot of some examples of reactive flux functions $k \left( t \right)$ is shown in Figure 28.8.2 . These functions are discussed in greater detail in J. Chem. Phys. 95, 5809 (1991). These examples all show that $k \left( t \right)$ decays at first but then finally reaches a plateau value. This plateau value is taken to be the true rate of the reaction under the assumption that eventually, all trajectories that will become reactive will have done so after a sufficiently long time. Thus,
$k = \underset{t \rightarrow \infty}{\text{lim}} k \left( t \right) \label{20.29}$
gives the true rate constant. On the other hand, a common approximation is to take the value $k \left( 0 \right)$ as an estimate of the rate constant, and this is known as the transition state theory approximation to $k$, i.e.,
\begin{align} k^\text{(TST)} &= k \left( 0 \right) \ &= \frac{1}{h Q_r} \int_{q \left( \text{x} \right) = q^\ddagger} d \text{x} \: e^{-\beta \mathcal{E} \left( \text{x} \right)} \left| \dot{q} \left( \text{x} \right) \right| \theta \left( q \left( \text{x} \right) - q^\ddagger \right) \end{align} \label{20.30}
However, note that since we require $\dot{q} \left( \text{x} \right)$ to initially be toward products, then by definition, at $t = 0$, $q \left( \text{x} \right) \geq q^\ddagger$, and the step function in the above expression is redundant. In addition, if $\dot{q} \left( \text{x} \right)$ only depends on momenta (or velocities) and not actually on coordinates, which will be true if $q \left( \text{x} \right)$ is not curvilinear (and is true for some curvilinear coordinates $q \left( \text{x} \right)$), and if $q \left( \text{x} \right)$ only depends on coordinates, then Equation $\ref{20.30}$ reduces to
$k^\text{(TST)} = \frac{1}{h Q_r} \int d \text{x}_\textbf{p} e^{-\beta \sum_{i=1}^N \textbf{p}_i^2/2m_i} \left| \dot{q} \left( \textbf{p}_1, \ldots, \textbf{p}_N \right) \right| \int_{q \left( \textbf{r}_1, \ldots, \textbf{r}_N \right) = q^\ddagger} d \text{x}_\textbf{r} e^{-\beta U \left( \textbf{r}_1, \ldots, \textbf{r}_N \right)} \label{20.31}$
The integral
$Z^\ddagger = \int_{q \left( \textbf{r}_1, \ldots, \textbf{r}_N \right) = q^\ddagger} d \text{x}_\textbf{r} e^{-\beta U \left( \textbf{r}_1, \ldots, \textbf{r}_N \right)} \nonumber$
counts the number of microscopic states consistent with the condition $q \left( \textbf{r}_1, \ldots, \textbf{r}_N \right) = q^\ddagger$ and is, therefore, a kind of (configurational) partition function for the transition state. Because it is a partition function, we can derive a free energy $F^\ddagger$ from it
$F^\ddagger \propto -k_B T \: \text{ln} \: Z^\ddagger \label{20.32}$
Similarly, if we divide $Q_r$ into its ideal-gas and configurational contributions
$Q_r = Q_r^\text{(ideal)} Z_r \label{20.33}$
then we can take
$Z_r = e^{-\beta F_r} \label{20.34}$
where $F_r$ is the free energy of the reactants. Finally, setting $\dot{q} = p/\mu$, where $\mu$ is the associated mass, and $p$ is the corresponding momentum of the reaction coordinate, then, canceling most of the momentum integrals between the numerator and $Q_r^\text{(ideal)}$, the momentum integral we need is
$\int_0^\infty e^{-\beta p^2/2 \mu} \frac{p}{\mu} \: dp = k_B T \label{20.35}$
which gives the final expression for the transition state theory rate constant
$k^\text{(TST)} = \frac{k_B T}{h} e^{-\beta \left( F^\ddagger - F_r \right)} = \frac{k_B T}{h} e^{-\beta \Delta F^\ddagger} \label{20.36}$
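To get a feel for the magnitudes this expression predicts, the short sketch below evaluates Equation $\ref{20.36}$ for an assumed free-energy barrier; the 60 kJ/mol barrier is an arbitrary illustrative choice, not a property of any particular reaction.

```python
import numpy as np

kB = 1.380649e-23    # Boltzmann constant, J/K
h  = 6.62607015e-34  # Planck constant, J s
NA = 6.02214076e23   # Avogadro's number, mol^-1

T = 298.15            # temperature, K
dF = 60e3 / NA        # assumed barrier of 60 kJ/mol, converted to J per molecule

# Transition state theory estimate: k_TST = (kB*T/h) * exp(-dF/(kB*T))
k_tst = (kB * T / h) * np.exp(-dF / (kB * T))
print(f"k_TST ~ {k_tst:.3e} s^-1")
```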
Figure 28.8.2 actually shows $k \left( t \right)/ k^\text{(TST)}$, which must start at $1$. As the figure also shows, for $t > 0$, $k \left( t \right) < k^\text{(TST)}$. Hence, $k^\text{(TST)}$ is always an upper bound to the true rate constant. Transition state theory assumes that any trajectory that initially moves toward products will be a reactive trajectory. For this reason, it overestimates the reaction rate. In reality, trajectories can cross the dividing surface several or many times before eventually proceeding either toward products or back toward reactants.
Figure 28.8.3 shows that one can obtain trajectories of both types. Here, the dividing surface lies at $q = 0$. Left, toward $q = -1$ is the reactant side, and right, toward $q = 1$ is the product side. Because some trajectories return to reactants and never become products, the true rate is always less than $k^\text{(TST)}$, and we can write
$k = \kappa k^\text{(TST)} \label{20.37}$
where the factor $\kappa < 1$ is known as the transmission factor. This factor accounts for multiple recrossings of the dividing surface and the fact that some trajectories do not become reactive ones.
• 29.1: A Mechanism is a Sequence of Elementary Reactions
The mechanism of a reaction is a series of steps leading from the starting materials to the products. After each step, an intermediate is formed. The intermediate is short-lived, because it quickly undergoes another step to form the next intermediate. These simple steps are called elementary reactions. Because an overall reaction is composed of a series of elementary reactions, the overall rate of the reaction depends on the rates of those elementary steps.
• 29.2: The Principle of Detailed Balance
The principle of detailed balance is formulated for kinetic systems which are decomposed into elementary processes (collisions, or steps, or elementary reactions): At equilibrium, each elementary process should be equilibrated by its reverse process.
• 29.3: Multiple Mechanisms are often Indistinguishable
The great value of chemical kinetics is that it can give us insights into the actual reaction pathways (mechanisms) that reactants take to form the products of reactions. Analyzing a reaction mechanism to determine the type of rate law that is consistent (or not consistent) with the specific mechanism can give us significant insight.
• 29.4: The Steady-State Approximation
When a reaction mechanism has several steps of comparable rates, the rate-determining step is often not obvious. However, there is an intermediate in some of the steps. An intermediate is a species that is neither one of the reactants, nor one of the products. The steady-state approximation is a method used to derive a rate law. The method is based on the assumption that one intermediate in the reaction mechanism is consumed as quickly as it is generated, so that its concentration remains essentially constant during most of the reaction.
• 29.5: Rate Laws Do Not Imply Unique Mechanism
It is standard practice to develop a multi-step reaction mechanism from the experimentally derived rate law. The fact that a mechanism consistent with the rate law has been proposed does not preclude the possibility that the reaction proceeds according to a different mechanism. Thus, it is usual practice to carry out further experiments, such as trapping reaction intermediates, to gain enough evidence to accept one of the possible valid mechanisms.
• 29.6: The Lindemann Mechanism
The Lindemann mechanism was one of the first attempts to understand unimolecular reactions. Lindemann mechanisms have been used to model gas phase decomposition reactions. Although the net formula for a decomposition may appear to be first-order (unimolecular) in the reactant, a Lindemann mechanism may show that the reaction is actually second-order (bimolecular).
• 29.7: Some Reaction Mechanisms Involve Chain Reactions
Chain reactions usually consist of many repeating elementary steps, each of which has a chain carrier. Once started, chain reactions continue until the reactants are exhausted. Fire and explosions are some of the phenomena associated with chain reactions. The chain carriers are some intermediates that appear in the repeating elementary steps. These are usually free radicals.
• 29.8: A Catalyst Affects the Mechanism and Activation Energy
Homogeneous catalysis refers to reactions in which the catalyst is in solution with at least one of the reactants whereas heterogeneous catalysis refers to reactions in which the catalyst is present in a different phase, usually as a solid, than the reactants.
• 29.9: The Michaelis-Menten Mechanism for Enzyme Catalysis
Leonor Michaelis and Maude Menten proposed the following reaction mechanism for enzymatic reactions. In the first step, the substrate binds to the active site of the enzyme. In the second step, the substrate is converted into the product and released from the enzyme.
• 29.E: Chemical Kinetics II- Reaction Mechanisms (Exercises)
29: Chemical Kinetics II- Reaction Mechanisms
The mechanism of a reaction is a series of steps leading from the starting materials to the products. After each step, an intermediate is formed. The intermediate is short-lived, because it quickly undergoes another step to form the next intermediate. These simple steps are called elementary reactions. Because an overall reaction is composed of a series of elementary reactions, the overall rate of the reaction depends in some way on the rates of those smaller reactions. But how are the two related? Let's look at two cases. We'll keep it simple, and both cases will be two-step reactions.
The principle of detailed balance is formulated for kinetic systems which are decomposed into elementary processes (collisions, or steps, or elementary reactions): At equilibrium, each elementary process should be equilibrated by its reverse process. Lewis put forward this general principle in 1925:
Corresponding to every individual process there is a reverse process, and in a state of equilibrium the average rate of every process is equal to the average rate of its reverse process.[1]
According to Ter Haar, [2] the essence of the detailed balance is:
...at equilibrium the number of processes destroying situation $A$ and creating situation $B$ will be equal to the number of processes producing $A$ and destroying $B$
The principle of detailed balance was explicitly introduced for collisions by Ludwig Boltzmann. In 1872, he proved his H-theorem using this principle. The arguments in favor of this property are founded upon microscopic reversibility. In 1901, R. Wegscheider introduced the principle of detailed balance for chemical kinetics. In particular, he demonstrated that the irreversible cycles
$A \rightarrow B \rightarrow C \rightarrow \cdots \rightarrow Y \rightarrow Z \rightarrow A \label{1}$
are impossible and found explicitly the relations between kinetic constants that follow from the principle of detailed balance. Such a system is more accurately described with every elementary step reversible:
$A \rightleftharpoons B \rightleftharpoons C \rightleftharpoons \cdots \rightleftharpoons Y \rightleftharpoons Z \rightleftharpoons A \label{2}$
29.03: Multiple Mechanisms are Often Indistinguishable
The great value of chemical kinetics is that it can give us insights into the actual reaction pathways (mechanisms) that reactants take to form the products of reactions. Analyzing a reaction mechanism to determine the type of rate law that is consistent (or not consistent) with the specific mechanism can give us significant insight. For example, the reaction
$A+ B \rightarrow C \nonumber$
might be proposed to follow one of two mechanistic pathways:
$\underbrace{A + A \xrightarrow{k_1} A_2}_{\text{step 1}} \nonumber$
$\underbrace{ A_2 + B \xrightarrow{k_2} C}_{\text{step 2}} \nonumber$
or
$\underbrace{A \xrightarrow{k_1} A^*}_{\text{step 1}} \nonumber$
$\underbrace{ A^* + B \xrightarrow{k_2} C}_{\text{step 2}} \nonumber$
The first mechanism predicts that the reaction should be second order in $A$, whereas the second mechanism predicts that it should be first order in $A$ (in the limit that the steady state approximation, discussed in the following sections, can be applied to $A_2$ and $A^*$). Based on whether the observed rate law is first or second order in $A$, one can rule out one of the mechanisms. Unfortunately, this kind of analysis cannot confirm a specific mechanism. Other evidence is needed to draw such conclusions, such as the spectroscopic observation of a particular reaction intermediate that can only be formed by a specific mechanism.
In order to analyze mechanisms and predict rate laws, we need to build a toolbox of methods and techniques that are useful in certain limits. The next few sections will discuss this kind of analysis, specifically focusing on
• the Rate Determining Step approximation,
• the Steady State approximation, and
• the Equilibrium approximation.
Each type of approximation is important in certain limits, and they are oftentimes used in conjunction with one another to predict the final forms of rate laws.
29.04: The Steady-State Approximation
One of the most commonly used and most attractive approximations is the steady state approximation. This approximation can be applied to the rate of change of concentration of a highly reactive (short-lived) intermediate whose concentration remains small and nearly constant over most of the reaction. The advantage here is that for such an intermediate ($I$),
$\dfrac{d[I]}{dt} = 0 \nonumber$
So long as one can write an expression for the rate of change of the concentration of the intermediate $I$, the steady state approximation allows one to solve for its constant concentration. For example, if the reaction
$A +B \rightarrow C \label{total}$
is proposed to follow the mechanism
\begin{align} A + A &\xrightarrow{k_1} A_2 \[4pt] A_2 + B &\xrightarrow{k_2} C + A \end{align} \nonumber
The time-rate of change of the concentration of the intermediate $A_2$ can be written as
$\dfrac{d[A_2]}{dt} = k_1[A]^2 - k_2[A_2][B] \nonumber$
In the limit that the steady state approximation can be applied to $A_2$
$\dfrac{d[A_2]}{dt} = k_1[A]^2 - k_2[A_2][B] \approx 0 \nonumber$
or
$[A_2] \approx \dfrac{k_1[A]^2}{k_2[B]} \nonumber$
So if the rate of the overall reaction is expressed as the rate of formation of the product $C$,
$\dfrac{d[C]}{dt} = k_2[A_2][B] \nonumber$
the above expression for $[A_2]$ can be substituted
$\dfrac{d[C]}{dt} = k_2 \left ( \dfrac{k_1[A]^2}{k_2[B]} \right) [B] \nonumber$
or
$\dfrac{d[C]}{dt} = k_1[A]^2 \nonumber$
and the reaction is predicted to be second order in $[A]$.
Alternatively, if the mechanism for Equation \ref{total} is proposed to be
\begin{align} A &\xrightarrow{k_1} A^* \[4pt] A^* + B &\xrightarrow{k_2} C \end{align} \nonumber
then the rate of change of the concentration of $A^*$ is
$\dfrac{d[A^*]}{dt} = k_1[A] - k_2[A^*][B] \nonumber$
And if the steady state approximation holds, then
$[A^*] \approx \dfrac{k_1[A]}{k_2[B]} \nonumber$
So the rate of production of $C$ is
\begin{align} \dfrac{d[C]}{dt} &= k_2[A^*][B] \[4pt] &= \bcancel{k_2} \left( \dfrac{k_1[A]}{\bcancel{k_2} \cancel{[B]}} \right) \cancel{[B]} \end{align} \nonumber
or
$\dfrac{d[C]}{dt} = k_1[A] \nonumber$
and the rate law is predicted to be first order in $A$. In this manner, the plausibility of either of the two reaction mechanisms is easily deduced by comparing the predicted rate law to that which is observed. If the prediction cannot be reconciled with observation, then the scientific method eliminates that mechanism from consideration.
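Steady-state algebra of this kind is easy to check symbolically. The following minimal SymPy sketch imposes the steady-state condition for $A^*$ in the second mechanism and recovers the first-order rate law derived above; the symbol names simply mirror the text.

```python
import sympy as sp

k1, k2, A, B, Astar = sp.symbols('k1 k2 A B Astar', positive=True)

# Steady-state condition for the intermediate A*:  d[A*]/dt = k1[A] - k2[A*][B] = 0
Astar_ss = sp.solve(sp.Eq(k1*A - k2*Astar*B, 0), Astar)[0]

# Rate of product formation: d[C]/dt = k2 [A*][B]
rate_C = sp.simplify(k2 * Astar_ss * B)
print(rate_C)   # -> k1*A, i.e. first order in [A]
```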
Because a proposed mechanism can only be valid if it is consistent with the rate law found experimentally, the rate law plays a central role in the investigation of chemical reaction mechanisms. The discussion above introduces the problems and methods associated with collecting rate data and with finding an empirical rate law that fits experimental concentration-versus-time data. We turn now to finding the rate laws that are consistent with a particular proposed mechanism. For simplicity, we consider reactions in closed constant-volume systems.
In principle, numerical integration can be used to predict the concentration at any time of each of the species in any proposed reaction mechanism. This prediction can be compared to experimental observations to see whether they are consistent with the proposed mechanism. To do the numerical integration, it is necessary to know the initial concentrations of all of the chemical species and to know, or assume, values of all of the rate constants. The initial concentrations are known from the procedure used to initiate the reaction. However, the rate constants must be determined by some iterative procedure in which initial estimates of the rate constants are used to predict concentration-versus-time data that can be compared to the experimental results to produce refined estimates.
In practice, we tailor our choice of reaction conditions so that we can use various approximations to test whether a proposed mechanism can explain the data. We now consider the most generally useful of these approximations.
In this discussion, we assume that the overall reaction goes to completion; that is, at equilibrium the concentration of the reactant whose concentration is limiting has become essentially zero. If the overall reaction involves more than one elementary step, then an intermediate compound is involved. A valid mechanism must include this intermediate, and more than one differential equation may be needed to characterize the time rate of change of all of the species involved in the reaction. We focus on conditions and approximations under which the rate of appearance of the final products in a multi-step reaction mechanism can be described by a single differential equation, the rate law.
We examine the application of these approximations to a particular reaction mechanism. When we understand the application of these approximations to this mechanism, the ways in which they can be used in other situations are clear.
Consider the following sequence of elementary steps
$\ce{ A + B <=>[k_1][k_2] C ->[k_3] D} \nonumber$
whose kinetics are described by the following simultaneous differential equations:
\begin{align*} \frac{d[A]}{dt}=\frac{d[B]}{dt}&={-k}_1[A][B]+k_2\left[C\right] \[4pt] \frac{d\left[C\right]}{dt} &=k_1[A][B]-k_2\left[C\right]-k_3\left[C\right] \[4pt] \frac{d\left[D\right]}{dt} &=k_3\left[C\right] \end{align*}
The general analytical solution for this system of coupled differential equations can be obtained, but it is rather complex, because $\left[C\right]$ increases early in the reaction, passes through a maximum, and then decreases at long times. In principle, experimental data could be fit to these equations. The numerical approach requires that we select values for $k_1$, $k_2$, $k_3$, ${[A]}_0$, ${[B]}_0$, ${\left[C\right]}_0$, and ${\left[D\right]}_0$, and then numerically integrate to get $[A]$, $[B]$, $\left[C\right]$, and $\left[D\right]$ as functions of time. In principle, we could refine our estimates of $k_1$, $k_2$, and $k_3$ by comparing the calculated values of one or more concentrations to the experimental ones. In practice, the approximate treatments we consider next are more expedient.
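As a concrete illustration of the numerical approach just described, the sketch below integrates the three coupled rate equations with SciPy; the rate constants and initial concentrations are arbitrary illustrative values, not fitted parameters.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative (assumed) rate constants and initial concentrations
k1, k2, k3 = 5.0, 1.0, 2.0            # L mol^-1 s^-1, s^-1, s^-1
A0, B0, C0, D0 = 0.01, 1.0, 0.0, 0.0   # mol/L, with [B]0 >> [A]0 (flooding)

def rhs(t, y):
    A, B, C, D = y
    r1 = k1 * A * B   # A + B -> C
    r2 = k2 * C       # C -> A + B
    r3 = k3 * C       # C -> D
    return [-r1 + r2, -r1 + r2, r1 - r2 - r3, r3]

sol = solve_ivp(rhs, (0.0, 2.0), [A0, B0, C0, D0], dense_output=True)

# Concentrations at selected times can then be compared with experiment
for t in (0.0, 0.5, 1.0, 2.0):
    A, B, C, D = sol.sol(t)
    print(f"t = {t:4.1f} s  [A] = {A:.4f}  [C] = {C:.5f}  [D] = {D:.4f}")
```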
When we begin a kinetic study, we normally have a working hypothesis about the reaction mechanism, and we design our experiments to simplify the differential equations that apply to it. For the present example, we will assume that we always arrange the experiment so that ${\left[C\right]}_0=0$ and ${\left[D\right]}_0=0$. In consequence, at all times:
${[A]}_0=[A]+\left[C\right]+\left[D\right]. \nonumber$
Also, we restrict our considerations to experiments in which ${[B]}_0\gg {[A]}_0$. This exemplifies the use of flooding. The practical effect is that the concentration of $B$ remains effectively constant at its initial value throughout the entire reaction, which simplifies the differential equations significantly. In the present instance, setting ${[B]}_0\gg {[A]}_0$ means that the rate-law term $k_1[A][B]$ can be replaced, to a good approximation, by $k_{obs}[A]$, where $k_{obs}=k_1{[B]}_0$.
Once we have decided upon the reaction conditions we are going to use, whether the resulting concentration-versus-time data can be described by a single differential equation depends on the relative magnitudes of the rate constants in the several steps of the overall reaction. Particular combinations of relationships that lead to simplifications are often referred to by particular names; we talk about a combination that has a rate-determining step, or one that involves a prior equilibrium, or one in which a steady-state approximation is applicable. To see what we mean by these terms, let us consider some particular relationships that can exist among the rate constants in the mechanism above.
Case I
Suppose that $k_1[A][B]\gg k_2\left[C\right]$ and $k_3\gg k_2$. We often describe this situation by saying, rather imprecisely, that the reaction to convert $C$ to $D$ is very fast and that the reaction to convert $C$ back to $A$ and $B$ is very slow—compared to the reaction that forms $C$ from $A$ and $B$. When $C$ is produced in these circumstances, it is converted to $D$ so rapidly that we never observe a significant concentration of $C$ in the reaction mixture. The formation of a molecule of $C$ is tantamount to the formation of a molecule of $D$, and the reaction produces $D$ at essentially the same rate that it consumes $A$ or $B$. We say that the first step, $A+B\to C$, is the rate-determining step in the reaction. We have
$-\frac{d[A]}{dt}=\ -\frac{d[B]}{dt}\approx \frac{d\left[D\right]}{dt} \nonumber$
The assumption that $k_1[A][B]\gg k_2\left[C\right]$ means that we can neglect the smaller term in the equation for ${d[A]}/{dt}$, giving the approximation
$\frac{d[A]}{dt}=\ \frac{d[B]}{dt}=\ -\frac{d\left[D\right]}{dt}=\ -k_1[A][B] \nonumber$
Letting $\left[D\right]=x$ and recognizing that our assumptions make $\left[C\right]\approx 0$, the mass-balance condition, ${[A]}_0=[A]+\left[C\right]+\left[D\right]$, becomes $[A]={[A]}_0-x$. Choosing ${[B]}_0\gg {[A]}_0$ means that $k_1[B]\approx k_1{[B]}_0=k_{I,obs}$. The rate equation becomes first-order:
$\frac{dx}{dt}=k_{I,obs}\left(\ {[A]}_0-x\right) \nonumber$
Because $k_{I,obs}$ contains the (nearly constant) concentration ${[B]}_0$, it is a pseudo-first-order rate constant. The disappearance of $A$ is said to follow a pseudo-first-order rate equation.
The concept of a rate-determining step is an approximation. In general, the consequence we have in mind when we invoke this approximation is that no intermediate species can accumulate to a significant concentration if it is produced by the rate-determining step or by a step that occurs after the rate-determining step. We do not intend to exclude the accumulation of a species that is at equilibrium with another product. Thus, in the mechanism
$\ce{ A ->[k] B <=> C} \nonumber$
we suppose that the conversion of $A$ to $B$ is rate-determining and that the interconversion of $B$ and $C$ is so rapid that their concentrations always satisfy the equilibrium relationship
$K=\dfrac{[C]}{[B]}. \nonumber$
For the purpose at hand, we do not consider $B$ to be an intermediate; $B$ is a product that happens to be at equilibrium with the co-product, $C$.
Case II
Suppose that $k_1[A][B]\gg k_3\left[C\right]$. In this case $A+B\to C$ is fast compared to the rate at which $C$ is converted to $D$, and we say that $C\to D$ is the rate-determining step. We can now distinguish three sub-cases depending upon the way $\left[C\right]$ behaves during the course of the reaction.
Case IIa: Suppose that $k_1[A][B]\gg k_3\left[C\right]$ and $k_3\gg k_2$. Then $A+B\to C$ is rapid and essentially quantitative. That is, within a short time of initiating the reaction, all of the stoichiometrically limiting reactant is converted to $C$. Letting $\left[D\right]=x$ and recognizing that our assumptions make $[A]\approx 0$, the mass-balance condition,
${[A]}_0=[A]+\left[C\right]+\left[D\right] \nonumber$
becomes
$\left[C\right]={[A]}_0-x. \nonumber$
After a short time, the rate at which $D$ is formed becomes
$\frac{d\left[D\right]}{dt}=k_3\left[C\right] \nonumber$ or $\frac{dx}{dt}=k_3\left(\ {[A]}_0-x\right) \nonumber$
The disappearance of $C$ and the formation of $D$ follow a first-order rate law.
Case IIb: If the forward and reverse reactions in the first elementary process are rapid, then this process may be effectively at equilibrium during the entire time that $D$ is being formed. (This is the case that $k_1[A][B]\gg k_3\left[C\right]$ and $k_2\gg k_3$.) Then, throughout the course of the reaction, we have
$K_{eq}={\left[C\right]}/{[A][B]} \nonumber$
Letting $\left[D\right]=x$ and making the further assumption that $[A]\gg \left[C\right]\approx 0$ throughout the reaction, the mass-balance condition, ${[A]}_0=[A]+\left[C\right]+\left[D\right]$, becomes $[A]={[A]}_0-x$. Substituting into the equilibrium-constant expression, we find
$\left[C\right]=K_{eq}{[B]}_0\ \left({[A]}_0-x\right) \nonumber$
Substituting into ${d\left[D\right]}/{dt}=k_3\left[C\right]$ we have
$\frac{dx}{dt}=k_3K_{eq}{[B]}_0\ \left({[A]}_0-x\right)=k_{IIb,obs}\left({[A]}_0-x\right) \nonumber$
where $k_{IIb,obs}=k_3K_{eq}{[B]}_0$. The disappearance of $A$ and the formation of $D$ follow a pseudo-first-order rate equation. The pseudo-first-order rate constant is a composite quantity that is directly proportional to ${[B]}_0$.
Case IIc: If we suppose that the first step is effectively at equilibrium during the entire time that $D$ is being produced (as in case IIb) but that $\left[C\right]$ is not negligibly small compared to $[A]$, we again have $K_{eq}={\left[C\right]}/{[A][B]}$. With $\left[D\right]=x$, the mass-balance condition becomes $[A]={[A]}_0-\left[C\right]-x$. Eliminating $[A]$ between the mass-balance and equilibrium-constant equations gives
$\left[C\right]=\frac{K_{eq}{[B]}_0\left({[A]}_0-x\right)}{1+K_{eq}{[B]}_0} \nonumber$
so that ${d\left[D\right]}/{dt}=k_3\left[C\right]$ becomes
$\frac{dx}{dt}=\left(\frac{{k_3K}_{eq}{[B]}_0}{1+K_{eq}{[B]}_0}\right)\left({[A]}_0-x\right)=k_{IIc,obs}\left({[A]}_0-x\right) \nonumber$
The formation of $D$ follows a pseudo-first-order rate equation. (The disappearance of $A$ is also pseudo-first-order, but the pseudo-first-order rate constant is different.) As in Case IIb, the pseudo-first-order rate constant, $k_{IIc,obs}$, is a composite quantity, but now its dependence on ${[B]}_0$ is more complex. The result for Case IIc reduces to that for Case IIb if $K_{eq}{[B]}_0\ll 1$.
Case III
In the cases above, we have assumed that one or more reactions are intrinsically much slower than others are. The differential equations for this mechanism can also become much simpler if all three reactions proceed at similar rates, but do so in such a way that the concentration of the intermediate is always very small, $\left[C\right]\approx 0$. If the concentration of $C$ is always very small, then we expect the graph of $\left[C\right]$ versus time to have a slope, ${d\left[C\right]}/{dt}$, that is approximately zero. In this case, we have
$\frac{d\left[C\right]}{dt}=k_1[A][B]-k_2\left[C\right]-k_3\left[C\right]\approx 0 \nonumber$
so that $\left[C\right]=\frac{k_1[A][B]}{k_2+k_3} \nonumber$
With $\left[D\right]=x$, ${d\left[D\right]}/{dt}=k_3\left[C\right]$ becomes
$\frac{dx}{dt}=\left(\frac{k_1k_3{[B]}_0}{k_2+k_3}\right)\left({[A]}_0-x\right)=k_{III,obs}\left({[A]}_0-x\right) \nonumber$
As in the previous cases, the disappearance of $A$ and the formation of $D$ follow a pseudo-first-order rate equation. The pseudo-first-order rate constant is again a composite quantity, which depends on ${[B]}_0$ and the values of all of the rate constants.
Case III illustrates the steady-state approximation, in which we assume that the concentration of an intermediate species is much smaller than the concentrations of other species that affect the reaction rate. Under these circumstances, we can be confident that the time-derivative of the intermediate’s concentration is negligible compared to the reaction rate, so that it is a good approximation to set it equal to zero. The idea is simply that, if the concentration is always small, its time-derivative must also be small. If the graph of the intermediate’s concentration versus time is always much lower than that of other participating species, then its slope will be much less.
Equating the time derivative of the steady-state intermediate’s concentration to zero produces an algebraic expression that involves the intermediate’s concentration. Solving this expression for the concentration of the steady-state intermediate makes it possible to greatly simplify the set of simultaneous differential equations that is predicted by the mechanism. When there are multiple intermediates to which the approximation is applicable, remarkable simplifications can result. This often happens when the mechanism involves free-radical intermediates.
The name “steady-state approximation” is traditional. When we use it, we do so on the understanding that the “state” which is approximately “steady” is the concentration of the intermediate, not the state of the system. Since a net reaction is occurring, the state of the system is distinctly not constant.
The Lindemann mechanism, sometimes called the Lindemann-Hinshelwood mechanism, is a schematic reaction mechanism. Frederick Lindemann proposed the concept in 1921, and Cyril Hinshelwood developed it. It breaks a stepwise reaction down into two or more elementary steps and assigns a rate constant to each elementary step. The rate law and rate equation for the entire reaction can be derived from this information. Lindemann mechanisms have been used to model gas phase decomposition reactions. Although the net formula for a decomposition may appear to be first-order (unimolecular) in the reactant, a Lindemann mechanism may show that the reaction is actually second-order (bimolecular).
A Lindemann mechanism typically includes an activated reaction intermediate, labeled A* (where A can be any element or compound). The activated intermediate is produced from the reactants only after a sufficient activation energy is applied. It then either deactivates from A* back to A, or reacts with another (dis)similar reagent to produce yet another reaction intermediate or the final product.
General Mechanism
The schematic reaction $A + M \rightarrow P$ is assumed to consist of two elementary steps:
STEP 1: Bimolecular activation of $A$
$A + M \rightleftharpoons A^* + M \label{1}$
with
• forward activation rate constant: $k_1$
• reverse deactivation rate constant: $k_{-1}$
STEP 2: Unimolecular reaction of $A^*$
$A^* \overset{k_2}{\rightarrow} P \label{2}$
with
• forward reaction rate constant: $k_2$
Assuming that the concentration of intermediate $A^*$ is held constant according to the quasi steady-state approximation, what is the rate of formation of product $P$?
Solution
First, find the rates of production and consumption of the intermediate $A^*$. $A^*$ is produced in the forward direction of the first elementary step (Equation $\ref{1}$) and consumed both in the reverse of the first step and in the forward second step. The net rate of change of $[A^*]$ is:
$\dfrac{d[A^*]}{dt} = \underset{\text{(forward first step)}}{k_1 [A] [M]} - \underset{\text{(reverse first step)}}{k_{-1} [A^*] [M]} - \underset{\text{(forward second step)}}{k_2 [A^*]} \label{3}$
According to the steady-state approximation,
$\dfrac{d[A^*]}{dt} \approx 0 \label{4}$
Therefore the rate of production of $A^*$ (first term in Equation $\ref{3}$) equals the rate of consumption (second and third terms in Equation $\ref{3}$):
$k_1 [A] [M] = k_{-1} [A^*] [M] + k_2 [A^*] \label{6}$
Solving for $[A^*]$, it is found that
$[A^*] = \dfrac{k_1 [A] [M]}{k_{-1} [M] + k_2} \label{7}$
The overall reaction rate is (Equation $\ref{2}$)
$\dfrac{d[P]}{dt} = k_2 [A^*] \label{8}$
Now, by substituting the calculated value for $[A^*]$ (Equation $\ref{7}$) into Equation $\ref{8}$, the overall reaction rate can be expressed in terms of the original reactants $A$ and $M$ as follows:
$\dfrac{d[P]}{dt} = \dfrac{k_1k_2 [A] [M]}{k_{-1} [M] + k_2} \label{9}$
The rate law for the Lindemann mechanism is not a simple first or second order reaction. However, under certain conditions (discussed below), Equation $\ref{9}$ can be simplified.
Three Principles of the Lindemann Mechanism
1. Energy is transferred by collision (forward reaction of Equation $\ref{1}$)
2. There is a time delay $\Delta t$ between collision and reaction (Equation $\ref{2}$)
3. Molecules may be de-activated by another collision during $\Delta t$ (reverse reaction of Equation $\ref{1}$)
Example 29.6.1 : Dissociation of $N_2O_5$
The decomposition of dinitrogen pentoxide to nitrogen dioxide and nitrogen trioxide
$N_2O_5 \rightarrow NO_2 + NO_3 \nonumber$
is postulated to take place via two elementary steps, which are similar in form to the schematic example given above:
1. $N_2O_5 + N_2O_5 \rightleftharpoons N_2O_5^* + N_2O_5$
2. $N_2O_5^* \rightarrow NO_2 + NO_3$
Using the quasi steady-state approximation solution (Equation $\ref{9}$) with $[M]=[N_2O_5]$, the rate equation is:
$\text{Rate} = k_2 [N_2O_5^*] = \dfrac{k_1k_2 [N_2O_5]^2}{k_{-1}[N_2O_5] + k_2} \nonumber$
Experimentally, the reaction is observed to be first order in the original concentration of $N_2O_5$ under some conditions, and second order under others.
• If $k_2 \gg k_{-1}[N_2O_5]$, then the rate equation may be simplified by neglecting the $k_{-1}[N_2O_5]$ term in the denominator. Then the rate equation is $\text{Rate} = k_1[N_2O_5]^2 \nonumber$ which is second order.
• If $k_2 \ll k_{-1}[N_2O_5]$, then the rate equation may be simplified by neglecting $k_2$ in the denominator. Then the rate equation is $\text{Rate} = \dfrac{k_1k_2[N_2O_5] }{k_{-1}} = k_{obs} [N_2O_5] \nonumber$ which is first order with $k_{obs} = \dfrac{k_1k_2}{k_{-1}}. \nonumber$
Exercise 29.6.1
The following first order rate constants for the gas phase decomposition of $N_2O_5$ have been obtained as a function of number density at 298 K.
$k_{obs} (s^{-1})$ $7.81 \times 10^{-3}$ $12.5 \times 10^{-3}$ $15.6 \times 10^{-3}$
$[N_2O_5] \: (\text{mol m}^{-3})$ 10 25 50
Confirm that these data are consistent with the Lindemann mechanism and derive a rate constant and a ratio of two rate constants for elementary reactions in the mechanism. What are the units of the two quantities?
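One possible starting point for this exercise (a sketch only, with hypothetical variable names) is to linearize Equation $\ref{9}$ with $[M] = [N_2O_5]$, giving $1/k_{obs} = k_{-1}/(k_1 k_2) + 1/(k_1[M])$, so that a plot of $1/k_{obs}$ against $1/[M]$ should be linear:

```python
import numpy as np

# Data from the exercise (units as given in the table)
N = np.array([10.0, 25.0, 50.0])                  # [N2O5] / mol m^-3
kobs = np.array([7.81e-3, 12.5e-3, 15.6e-3])      # s^-1

# Linearized Lindemann form: 1/kobs = k_{-1}/(k1*k2) + (1/k1)*(1/[M])
slope, intercept = np.polyfit(1.0 / N, 1.0 / kobs, 1)

k1 = 1.0 / slope          # m^3 mol^-1 s^-1
ratio = intercept * k1    # k_{-1}/k2, in m^3 mol^-1
print(f"k1 ~ {k1:.2e} m^3 mol^-1 s^-1,  k_-1/k2 ~ {ratio:.2e} m^3 mol^-1")
```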
Lindemann Mechanism
Consider the isomerization of methylisonitrile gas, $CH_3 NC$, to acetonitrile gas, $CH_3 CN$:
$CH_3 NC \overset{k}{\longrightarrow} CH_3 CN \nonumber$
If the isomerization is a unimolecular elementary reaction, we should expect to see $1^{st}$ order rate kinetics. Experimentally, however, $1^{st}$ order rate kinetics are only observed at high pressures. At low pressures, the reaction kinetics follow a $2^{nd}$ order rate law:
$\dfrac{d \left[ CH_3 NC \right]}{dt} = -k \left[ CH_3 NC \right]^2 \label{21.30}$
To explain this observation, J.A. Christiansen and F.A. Lindemann proposed that gas molecules first need to be energized via intermolecular collisions before undergoing an isomerization reaction. The reaction mechanism can be expressed as the following two elementary reactions
\begin{align} \text{A} + \text{M} &\overset{k_1}{\underset{k_{-1}}{\rightleftharpoons}} \text{A}^* + \text{M} \ \text{A}^* &\overset{k_2}{\rightarrow} \text{B} \end{align} \nonumber
where $\text{M}$ can be a reactant molecule, a product molecule or another inert molecule present in the reactor. Assuming that the concentration of $\text{A}^*$ is small and quickly reaches a steady value, we can use a steady-state approximation to solve for the concentration profile of species $\text{B}$ with time:
$\dfrac{d \left[ \text{A}^* \right]}{dt} = k_1 \left[ \text{A} \right] \left[ \text{M} \right] - k_{-1} \left[ \text{A}^* \right]_{ss} \left[ \text{M} \right] - k_2 \left[ \text{A}^* \right]_{ss} \approx 0 \label{21.31}$
Solving for $\left[ \text{A}^* \right]$,
$\left[ \text{A}^* \right] = \dfrac{k_1 \left[ \text{M} \right] \left[ \text{A} \right]}{k_2 + k_{-1} \left[ \text{M} \right]} \label{21.32}$
The reaction rates of species $\text{A}$ and $\text{B}$ can be written as
$-\dfrac{d \left[ \text{A} \right]}{dt} = \dfrac{d \left[ \text{B} \right]}{dt} = k_2 \left[ \text{A}^* \right] = \dfrac{k_1 k_2 \left[ \text{M} \right] \left[ \text{A} \right]}{k_2 + k_{-1} \left[ \text{M} \right]} = k_\text{obs} \left[ \text{A} \right] \label{21.33}$
where
$k_\text{obs} = \dfrac{k_1 k_2 \left[ \text{M} \right]}{k_2 + k_{-1} \left[ \text{M} \right]} \label{21.34}$
At high pressures, we can expect collisions to occur frequently, such that $k_{-1} \left[ \text{M} \right] \gg k_2$. Equation $\ref{21.33}$ then becomes
$-\dfrac{d \left[ \text{A} \right]}{dt} = \dfrac{k_1 k_2}{k_{-1}} \left[ \text{A} \right] \label{21.35}$
which follows $1^{st}$ order rate kinetics.
At low pressures, we can expect collisions to occur infrequently, such that $k_{-1} \left[ \text{M} \right] \ll k_2$. In this scenario, Equation $\ref{21.33}$ becomes
$-\dfrac{d \left[ \text{A} \right]}{dt} = k_1 \left[ \text{A} \right] \left[ \text{M} \right] \label{21.36}$
which follows second order rate kinetics, consistent with experimental observations.
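The gradual changeover between these two limits (the falloff region) can be visualized by evaluating $k_\text{obs}$ from Equation $\ref{21.34}$ over a range of collider concentrations; the rate constants in the sketch below are arbitrary illustrative values.

```python
import numpy as np

# Illustrative (assumed) elementary rate constants
k1, km1, k2 = 1.0e-2, 1.0e-2, 1.0e2   # activation, deactivation, reaction

M = np.logspace(0, 8, 9)              # range of collider concentrations [M]

# Lindemann-Hinshelwood effective first-order rate constant
k_obs = k1 * k2 * M / (k2 + km1 * M)

for Mi, ki in zip(M, k_obs):
    print(f"[M] = {Mi:9.2e}   k_obs = {ki:9.3e}")

# At low [M], k_obs ~ k1*[M] (second order overall);
# at high [M], k_obs plateaus at k1*k2/k_-1 (first order overall).
```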
29.07: Some Reaction Mechanisms Involve Chain Reactions
A large number of reactions proceed through a series of steps that can collectively be classified as a chain reaction. The reactions contain steps that can be classified as
• initiation step – a step that creates the intermediates from stable species
• propagation step – a step that consumes an intermediate, but creates a new one
• termination step – a step that consumes intermediates without creating new ones
These types of reactions are very common when the intermediates involved are radicals. An example is the reaction
$H_2 + Br_2 \rightarrow 2HBr \nonumber$
The observed rate law for this reaction is
$\text{rate} = \dfrac{k [H_2][Br_2]^{3/2}}{[Br_2] + k'[HBr]} \label{exp}$
A proposed mechanism is
$Br_2 \ce{<=>[k_1][k_{-1}]} 2Br^\cdot \label{step1}$
$Br^\cdot + H_2 \ce{<=>[k_2][k_{-2}]} HBr + H^\cdot \label{step2}$
$H^\cdot + Br_2 \xrightarrow{k_3} HBr + Br^\cdot \label{step3}$
Based on this mechanism, the rate of change of concentrations for the intermediates ($H^\cdot$ and $Br^\cdot$) can be written, and the steady state approximation applied.
$\dfrac{d[H^\cdot]}{dt} = k_2[Br^\cdot][H_2] - k_{-2}[HBr][H^\cdot] - k_3[H^\cdot][Br_2] =0 \nonumber$
$\dfrac{d[Br^\cdot]}{dt} = 2k_1[Br_2] - 2k_{-1}[Br^\cdot]^2 - k_2[Br^\cdot][H_2] + k_{-2}[HBr][H^\cdot] + k_3[H^\cdot][Br_2] =0 \nonumber$
Adding these two expressions cancels the terms involving $k_2$, $k_{-2}$, and $k_3$. The result is
$2 k_1 [Br_2] - 2k_{-1} [Br^\cdot]^2 = 0 \nonumber$
Solving for $[Br^\cdot]$
$[Br^\cdot] = \sqrt{\dfrac{k_1[Br_2]}{k_{-1}}} \nonumber$
This can be substituted into an expression for the $H^\cdot$ that is generated by solving the steady state expression for $d[H^\cdot]/dt$.
$[H^\cdot] = \dfrac{k_2 [Br^\cdot] [H_2]}{k_{-2}[HBr] + k_3[Br_2]} \nonumber$
so
$[H^\cdot] = \dfrac{k_2 \sqrt{\dfrac{k_1[Br_2]}{k_{-1}}} [H_2]}{k_{-2}[HBr] + k_3[Br_2]} \nonumber$
Now, armed with expressions for $H^\cdot$ and $Br^\cdot$, we can substitute them into an expression for the rate of production of the product $HBr$:
$\dfrac{d[HBr]}{dt} = k_2[Br^\cdot] [H_2] + k_3 [H^\cdot] [Br_2] - k_{-2}[H^\cdot] [HBr] \nonumber$
After substitution and simplification, the result is
$\dfrac{d[HBr]}{dt} = \dfrac{2 k_2 \left( \dfrac{k_1}{k_{-1}}\right)^{1/2} [H_2][Br_2]^{1/2}}{1+ \dfrac{k_{-2}}{k_3} \dfrac{[HBr]}{[Br_2]} } \nonumber$
Multiplying the top and bottom expressions on the right by $[Br_2]$ produces
$\dfrac{d[HBr]}{dt} = \dfrac{2 k_2 \left( \dfrac{k_1}{k_{-1}}\right)^{1/2} [H_2][Br_2]^{3/2}}{[Br_2] + \dfrac{k_{-2}}{k_3} [HBr] } \nonumber$
which matches the form of the rate law found experimentally (Equation \ref{exp})! In this case,
$k = 2k_2 \sqrt{ \dfrac{k_1}{k_{-1}}} \nonumber$
and
$k'= \dfrac{k_{-2}}{k_3} \nonumber$
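Because the steady-state algebra for chain mechanisms is easy to get wrong, it is worth checking symbolically. The SymPy sketch below solves the two steady-state conditions and assembles $d[HBr]/dt$; it should simplify to a form equivalent to the rate law above (the variable names are chosen to mirror the text).

```python
import sympy as sp

k1, km1, k2, km2, k3 = sp.symbols('k1 k_m1 k2 k_m2 k3', positive=True)
H2, Br2, HBr = sp.symbols('H2 Br2 HBr', positive=True)
H = sp.symbols('H', positive=True)

# Sum of the two steady-state equations gives 2 k1 [Br2] - 2 k_-1 [Br.]^2 = 0
Br_ss = sp.sqrt(k1 * Br2 / km1)

# Steady state for [H.]:  k2 [Br.][H2] = k_-2 [HBr][H.] + k3 [H.][Br2]
H_ss = sp.solve(sp.Eq(k2 * Br_ss * H2, km2 * HBr * H + k3 * H * Br2), H)[0]

# Rate of HBr production: d[HBr]/dt = k2[Br.][H2] + k3[H.][Br2] - k_-2[H.][HBr]
rate = k2 * Br_ss * H2 + k3 * H_ss * Br2 - km2 * H_ss * HBr
print(sp.simplify(rate))   # equivalent to 2 k2 (k1/k_-1)^1/2 [H2][Br2]^3/2 / ([Br2] + (k_-2/k3)[HBr])
```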
As can be seen from the Arrhenius equation, the magnitude of the activation energy, $E_a$, determines the value of the rate constant, $k$, at a given temperature and thus the overall reaction rate. Catalysts provide a means of reducing $E_a$ and increasing the reaction rate. Catalysts are defined as substances that participate in a chemical reaction but are not changed or consumed. Instead they provide a new mechanism for a reaction to occur which has a lower activation energy than that of the reaction without the catalyst.
Homogeneous catalysis refers to reactions in which the catalyst is in solution with at least one of the reactants whereas heterogeneous catalysis refers to reactions in which the catalyst is present in a different phase, usually as a solid, than the reactants. Figure 29.8.1 shows a comparison of energy profiles of a reaction in the absence and presence of a catalyst.
Consider a non-catalyzed elementary reaction
$\text{A} \overset{k}{\longrightarrow} \text{P} \nonumber$
which proceeds with rate constant $k$ at a certain temperature. The reaction rate can be expressed as
$\dfrac{d \left[ \text{A} \right]}{dt} = -k \left[ \text{A} \right] \nonumber$
In the presence of a catalyst $\text{C}$, we can write the reaction as
$\text{A} + \text{C} \overset{k_\text{cat}}{\longrightarrow} \text{P} + \text{C} \nonumber$
and the reaction rate as
$\dfrac{d \left[ \text{A} \right]}{dt} = -k \left[ \text{A} \right] - k_\text{cat} \left[ \text{A} \right] \left[ \text{C} \right] \nonumber$
where the first term represents the uncatalyzed reaction and the second term represents the catalyzed reaction. Because the reaction rate of the catalyzed reaction is often magnitudes larger than that of the uncatalyzed reaction (i.e. $k_\text{cat} \gg k$), the first term can often be ignored.
Example of Homogenous Catalysis: Acid Catalysis
Common examples of homogeneous catalysts are acids and bases. Consider, for example, an overall reaction $\text{S} \rightarrow \text{P}$. If $k$ is the rate constant, then
$\dfrac{d \left[ \text{P} \right]}{dt} = k \left[ \text{S} \right] \nonumber$
The purpose of the catalyst is to enhance the rate of production of the product $\text{P}$. The equations of the acid-catalyzed reaction are
\begin{align} \text{S} + \text{A}H &\overset{k_1}{\underset{k_{-1}}{\rightleftharpoons}} \text{S}H^+ + \text{A}^- \ \text{S}H^+ + H_2 O &\overset{k_2}{\rightarrow} \text{P} + H_3 O^+ \ H_3 O^+ + \text{A}^- &\overset{k_3}{\underset{k_{-3}}{\rightleftharpoons}} \text{A}H + H_2 O \end{align} \nonumber
The full set of kinetic equations is
\begin{align} \dfrac{d \left[ \text{S} \right]}{dt} &= -k_1 \left[ \text{S} \right] \left[ \text{A} H \right] + k_{-1} \left[ \text{S} H^+ \right] \left[ \text{A}^- \right] \ \dfrac{ d \left[ \text{A} H \right]}{dt} &= -k_1 \left[ \text{S} \right] \left[ \text{A} H \right] + k_{-1} \left[ \text{S} H^+ \right] \left[ \text{A}^- \right] - k_{-3} \left[ \text{A} H \right] + k_3 \left[ H_3 O^+ \right] \left[ \text{A}^- \right] \ \dfrac{d \left[ \text{S} H^+ \right]}{dt} &= k_1 \left[ \text{S} \right] \left[ \text{A} H \right] - k_{-1} \left[ \text{S} H^+ \right] \left[ \text{A}^- \right] - k_2 \left[ \text{S} H^+ \right] \ \dfrac{d \left[ \text{A}^- \right]}{dt} &= k_1 \left[ \text{S} \right] \left[ \text{A} H \right] - k_{-1} \left[ \text{S} H^+ \right] \left[ \text{A}^- \right] -k_2 \left[ \text{A}^- \right] \left[ H_3 O^+ \right] + k_{-3} \left[ \text{A} H \right] \ \dfrac{d \left[ \text{P} \right]}{dt} &= k_2 \left[ \text{S} H^+ \right] \ \dfrac{d \left[ H_3 O^+ \right]}{dt} &= -k_2 \left[ \text{S} H^+ \right] - k_3 \left[ H_3 O^+ \right] \left[ \text{A}^- \right] + k_{-3} \left[ \text{A} H \right] \end{align} \nonumber
We cannot easily solve these, as they are nonlinear. However, let us consider two cases $k_2 \gg k_{-1} \left[ \text{A}^- \right]$ and $k_2 \ll k_{-1} \left[ \text{A}^- \right]$. In both cases, $\text{S} H^+$ is consumed quickly, and we can apply a steady-state approximation:
$\dfrac{d \left[ \text{S} H^+ \right]}{dt} = k_1 \left[ \text{S} \right] \left[ \text{A} H \right] - k_{-1} \left[ \text{A}^- \right] \left[ \text{S} H^+ \right] - k_2 \left[ \text{S} H^+ \right] = 0 \nonumber$
Rearranging in terms of $\text{S} H^+$ yields
$\left[ \text{S} H^+ \right] = \dfrac{k_1 \left[ \text{S} \right] \left[ \text{A} H \right]}{k_{-1} \left[ \text{A}^- \right] + k_2} \nonumber$
and the rate of production of $\text{P}$ can be written as
$\dfrac{d \left[ \text{P} \right]}{dt} = k_2 \left[ \text{S} H^+ \right] = \dfrac{k_1 k_2 \left[ \text{S} \right] \left[ \text{A} H \right]}{k_{-1} \left[ \text{A}^- \right] + k_2} \nonumber$
In the case where $k_2 \gg k_{-1} \left[ \text{A}^- \right]$, the expression above can be written as
$\dfrac{d \left[ \text{P} \right]}{dt} = k_1 \left[ \text{S} \right] \left[ \text{A} H \right] \nonumber$
which is known as a general acid-catalyzed reaction. On the other hand, if $k_2 \ll k_{-1} \left[ \text{A}^- \right]$, we can use an equilibrium approximation to write the rate of production of $\text{P}$ as
$\dfrac{d \left[ \text{P} \right]}{dt} = \dfrac{k_1 k_2 \left[ \text{S} \right] \left[ \text{A} H \right]}{k_{-1} \left[ \text{A}^- \right]} = \dfrac{k_1 k_2}{k_{-1} K} \left[ \text{S} \right] \left[ H^+ \right] \nonumber$
where $K$ is the acid dissociation constant:
$K = \dfrac{ \left[ \text{A}^- \right] \left[ H^+ \right]}{\left[ \text{A} H \right]} \nonumber$
In this case, the reaction is hydrogen ion-catalyzed.
Example of Heterogeneous Catalysis: Surface Catalysis of Gas-Phase Reactions
Many gas-phase reactions are catalyzed on a solid surface. For a first-order, unimolecular reaction, the reaction mechanism can be written as
$\text{A} \left( g \right) + \text{S} \left( s \right) \overset{k_1}{\underset{k_{-1}}{\rightleftharpoons}} \text{AS} \left( s \right) \nonumber$
$\text{AS} \left( s \right) \overset{k_2}{\rightarrow} \text{P} \left( g \right) + \text{S} \left( s \right) \nonumber$
where the first step is reversible adsorption of the gas molecule, $\text{A}$, onto active sites on the catalyst surface, $\text{S}$, to form an adsorbed complex, $\text{AS}$, and the second step is the conversion of adsorbed $\text{A}$ molecules to species $\text{P}$. Applying the steady-state approximation to species $\text{AS}$, we can write
$\dfrac{d \left[ \text{AS} \right]}{dt} = k_1 \left[ \text{A} \right] \left[ \text{S} \right] - k_{-1} \left[ \text{AS} \right]_{ss} - k_2 \left[ \text{AS} \right]_{ss} = 0 \nonumber$
Because the concentration of total active sites on the catalyst surface is fixed at $\left[ \text{S} \right]_0$, the concentration of adsorbed species on the catalyst surface, $\left[ \text{AS} \right]$ can be written as
$\left[ \text{AS} \right] = \theta \left[ \text{S} \right]_0 \nonumber$
and $\left[ \text{S} \right]$ can be written as
$\left[ \text{S} \right] = \left( 1 - \theta \right) \left[ \text{S} \right]_0 \nonumber$
where $\theta$ is the fractional surface coverage of species $\text{A}$ on the catalyst surface. We can now write the steady-state condition above as
$k_1 \left[ \text{A} \right] \left( 1 - \theta \right) \left[ \text{S} \right]_0 - \left( k_{-1} + k_2 \right) \theta \left[ \text{S} \right]_0 = 0 \nonumber$
Rearranging the above equation in terms of $\theta$ yields
$\theta = \dfrac{k_1 \left[ \text{A} \right]}{k_1 \left[ \text{A} \right] + k_{-1} + k_2} \nonumber$
The rate of production of $\text{P}$ can be written as
$\dfrac{d \left[ \text{P} \right]}{dt} = k_2 \left[ \text{AS} \right]_{ss} = k_2 \theta \left[ \text{S} \right]_0 = \dfrac{k_1 k_2}{k_1 \left[ \text{A} \right] + k_{-1} + k_2} \left[ \text{A} \right] \left[ \text{S} \right]_0 \nonumber$
From the above equation, we can observe the importance of having high surface areas for catalytic reactions.
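The saturation behavior implied by this rate law can be made concrete by evaluating the coverage and rate over a range of gas concentrations; the parameter values in the sketch below are invented for illustration only.

```python
import numpy as np

# Illustrative (assumed) parameters
k1, km1, k2 = 10.0, 1.0, 0.5     # adsorption, desorption, surface reaction
S0 = 1.0e-3                      # total concentration of active sites

A = np.logspace(-3, 2, 6)        # gas-phase concentration of A

theta = k1 * A / (k1 * A + km1 + k2)   # fractional coverage
rate = k2 * theta * S0                 # d[P]/dt

for Ai, th, r in zip(A, theta, rate):
    print(f"[A] = {Ai:8.3e}   theta = {th:.3f}   rate = {r:.3e}")

# At low [A] the rate is first order in [A]; at high [A] it saturates at k2*[S]0.
```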
For bimolecular gas-phase reactions, two generally-used mechanisms to explain reactions kinetics are the Langmuir-Hinshelwood and Eley-Rideal mechanisms, shown in Figure 29.8.2 . In the Langmuir-Hinshelwood mechanism, $\text{A}$ and $\text{B}$ both adsorb onto the catalyst surface, at which they react to form a product. The reaction mechanism is
$\text{A} \left( g \right) + \text{S} \left( s \right) \overset{k_1}{\underset{k_{-1}}{\rightleftharpoons}} \text{AS} \left( s \right) \nonumber$
$\text{B} \left( g \right) + \text{S} \left( s \right) \overset{k_2}{\underset{k_{-2}}{\rightleftharpoons}} \text{BS} \left( s \right) \nonumber$
$\text{AS} \left( s \right) + \text{BS} \left( s \right) \overset{k_3}{\rightarrow} \text{P} \nonumber$
The rate law for the Langmuir-Hinshelwood mechanism can be derived in a similar manner to that for unimolecular catalytic reactions by assuming that the total number of active sites on the catalyst surface is fixed. In the Eley-Rideal mechanism, only one species adsorbs onto the catalyst surface. An example of such a reaction is the partial oxidation of ethylene into ethylene oxide, as shown in Figure 29.8.3 . In this reaction, diatomic oxygen adsorbs onto the catalytic surface where it reacts with ethylene molecules in the gas phase.
The reactions for the Eley-Rideal mechanism can be written as
$\text{A} \left( g \right) + \text{S} \left( s \right) \overset{k_1}{\underset{k_{-1}}{\rightleftharpoons}} \text{AS} \left( s \right) \nonumber$
$\text{AS} \left( s \right) + \text{B} \left( g \right) \overset{k_2}{\rightarrow} \text{P} \left( g \right) + \text{S} \left( s \right) \nonumber$
Assuming that the concentration of the adsorbed species $\text{AS}$ is small and quickly reaches a steady value, we can apply a steady-state approximation to species $\text{AS}$:
$\dfrac{d \left[ \text{AS} \right]}{dt} = 0 = k_1 \left[ \text{A} \right] \left[ \text{S} \right] - k_{-1} \left[ \text{AS} \right]_{ss} - k_2 \left[ \text{AS} \right]_{ss} \left[ \text{B} \right] \nonumber$
As in the case of unimolecular catalyzed reactions, we can express the concentrations of $\text{AS}$ and $\text{S}$ in terms of a fraction of the total number of active sites, $\text{S}_0$ and rewrite the above equation as
$0 = k_1 \left[ \text{A} \right] \left( 1 - \theta \right) \left[ \text{S} \right]_0 - k_{-1} \theta \left[ \text{S} \right]_0 - k_2 \theta \left[ \text{S} \right]_0 \left[ \text{B} \right] \nonumber$
Solving for $\theta$ yields
$\theta = \dfrac{k_1 \left[ \text{A} \right]}{k_1 \left[ \text{A} \right] + k_{-1} + k_2 \left[ \text{B} \right]} \nonumber$
Furthermore, if $k_2 \left[ \text{B} \right] \ll k_1 \left[ \text{A} \right] + k_{-1}$, we can simplify $\theta$ to
$\theta = \dfrac{k_1 \left[ \text{A} \right]}{k_1 \left[ \text{A} \right] + k_{-1}} \nonumber$
The rate of production of $\text{P}$ can be expressed as
$\dfrac{d \left[ \text{P} \right]}{dt} = k_2 \left[ \text{AS} \right]_{ss} \left[ \text{B} \right] = k_2 \theta \left[ \text{S} \right]_0 \left[ \text{B} \right] = \dfrac{k_1 k_2 \left[ \text{A} \right]}{k_1 \left[ \text{A} \right] + k_{-1}} \left[ \text{S} \right]_0 \left[ \text{B} \right] \nonumber$
We can also write the above expression in terms of the equilibrium constant, $K$, which is equal to $k_1/k_{-1}$:
$\dfrac{d \left[ \text{P} \right]}{dt} = k_2 \left[ \text{S} \right]_0 \left[ \text{B} \right] \dfrac{K \left[ \text{A} \right]}{K \left[ \text{A} \right] + 1} \nonumber$
Enzymes are biological catalysts and functional proteins. An enzyme's specificity arises from its protein structure, which is tailored to its specialized function. Many enzymes contain more than one subunit, and they are critical to sustaining life. Enzymes accelerate chemical reactions in living cells; they are not consumed in the reaction, and their main function is to bring the substrates together so that they can react faster than they otherwise would.
The first enzymes were recognized during the nineteenth century through studies of fermentation in milk and alcohol, and the term enzyme was later introduced to replace the term ferment. Some scientists believed that ferments must contain living cells, while others thought ferments could act apart from living cells. Finally, in the 1920s, Sumner purified an enzyme in crystalline form, and the properties of enzymes then became more clearly understood. Today, enzymes remain a popular field of research.
Michaelis-Menten Kinetics
In biological systems, enzymes act as catalysts and play a critical role in accelerating reactions, anywhere from $10^3$ to $10^{17}$ times faster than the reaction would normally proceed. Enzymes are high-molecular weight proteins that act on a substrate, or reactant molecule, to form one or more products. In 1913, Leonor Michaelis and Maude Menten proposed the following reaction mechanism for enzymatic reactions:
$\text{E} + \text{S} \overset{k_1}{\underset{k_{-1}}{\rightleftharpoons}} \text{ES} \overset{k_2}{\rightarrow} \text{E} + \text{P} \nonumber$
where $\text{E}$ is the enzyme, $\text{ES}$ is the enzyme-substrate complex, and $\text{P}$ is the product. In the first step, the substrate binds to the active site of the enzyme. In the second step, the substrate is converted into the product and released from the enzyme. For this mechanism, we can assume that the concentration of the enzyme-substrate complex, $\text{ES}$, is small and employ a steady-state approximation:
$\dfrac{d \left[ \text{ES} \right]}{dt} = k_1 \left[ \text{E} \right] \left[ \text{S} \right] - k_{-1} \left[ \text{ES} \right]_{ss} - k_2 \left[ \text{ES} \right]_{ss} \approx 0 \label{Eq22}$
Furthermore, because the enzyme is unchanged throughout the reaction, we express the total enzyme concentration as a sum of enzyme and enzyme-substrate complex:
$\left[ \text{E} \right]_0 = \left[ \text{ES} \right] + \left[ \text{E} \right] \label{Eq23}$
Plugging Equation $\ref{Eq23}$ into Equation $\ref{Eq22}$, we obtain
$0 = k_1 \left( \left[ \text{E} \right]_0 - \left[ \text{ES} \right]_{ss} \right) \left[ \text{S} \right] - k_{-1} \left[ \text{ES} \right]_{ss} - k_2 \left[ \text{ES} \right]_{ss} \label{Eq24}$
Solving for $\left[ \text{ES} \right]_{ss}$
$\left[ \text{ES} \right]_{ss} = \dfrac{k_1 \left[ \text{E} \right]_0 \left[ \text{S} \right]}{k_1 \left[ \text{S} \right] + k_{-1} + k_2} = \dfrac{\left[ \text{E} \right]_0 \left[ \text{S} \right]}{\left[ \text{S} \right] + \dfrac{k_{-1} + k_2}{k_1}} \label{Eq25}$
We can then write the reaction rate of the product as
$\dfrac{d \left[ \text{P} \right]}{dt} = k_2 \left[ \text{ES} \right]_{ss} = \dfrac{k_2 \left[ \text{E} \right]_0 \left[ \text{S} \right]}{\left[ \text{S} \right] + \dfrac{k_{-1} + k_2}{k_1}} = \dfrac{k_2 \left[ \text{E} \right]_0 \left[ \text{S} \right]}{\left[ \text{S} \right] + K_M} \label{Eq26}$
where $K_M$ is the Michaelis constant. Equation $\ref{Eq26}$ is known as the Michaelis-Menten equation. The result for Michaelis-Menten kinetics is equivalent to that for a unimolecular gas phase reaction catalyzed on a solid surface. In the limit where there is a large amount of substrate present $\left( \left[ \text{S} \right] \gg K_M \right)$, Equation $\ref{Eq26}$ reduces to
$\dfrac{d \left[ \text{P} \right]}{dt} = r_\text{max} = k_2 \left[ \text{E} \right]_0 \label{Eq27}$
which is a $0^{th}$ order reaction, since $\left[ \text{E} \right]_0$ is a constant. The value $k_2 \left[ \text{E} \right]_0$ represents the maximum rate, $r_\text{max}$, at which the enzymatic reaction can proceed. The rate constant, $k_2$, is also known as the turnover number, which is the number of substrate molecules converted to product in a given time when all the active sites on the enzyme are occupied. Figure 29.9.4 displays the dependence of the reaction rate on the substrate concentration, $\left[ \text{S} \right]$. This plot is known as the Michaelis-Menten plot. Examining the figure, we can see that the reaction rate reaches a maximum value of $k_2 \left[ \text{E} \right]_0$ at large values of $\left[ \text{S} \right]$.
Another commonly-used plot in examining enzyme kinetics is the Lineweaver-Burk plot, in which the inverse of the reaction rate, $1/r$, is plotted against the inverse of the substrate concentration, $1/\left[ \text{S} \right]$. Rearranging Equation $\ref{Eq26}$,
$\dfrac{1}{r} = \dfrac{K_M + \left[ \text{S} \right]}{k_2 \left[ \text{E} \right]_0 \left[ \text{S} \right]} = \dfrac{K_M}{k_2 \left[ \text{E} \right]_0} \dfrac{1}{\left[ \text{S} \right]} + \dfrac{1}{k_2 \left[ \text{E} \right]_0} \label{Eq28}$
The Lineweaver-Burk plot results in a straight line with the slope equal to $K_M/k_2 \left[ \text{E} \right]_0$ and $y$-intercept equal to $1/k_2 \left[ \text{E} \right]_0$.
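The Michaelis-Menten and Lineweaver-Burk expressions are straightforward to evaluate numerically. The following Python sketch uses purely illustrative (assumed) values for $k_1$, $k_{-1}$, $k_2$, and $\left[ \text{E} \right]_0$; it is not data for any particular enzyme. It computes $K_M$, the rate from Equation $\ref{Eq26}$, and the Lineweaver-Burk slope and intercept from Equation $\ref{Eq28}$.

```python
import numpy as np

# Assumed (illustrative) kinetic parameters -- not data for a real enzyme
k1 = 1.0e7          # M^-1 s^-1, substrate binding
k_minus1 = 1.0e3    # s^-1, dissociation of ES back to E + S
k2 = 1.0e2          # s^-1, turnover number
E0 = 1.0e-6         # M, total enzyme concentration

K_M = (k_minus1 + k2) / k1    # Michaelis constant, (k_-1 + k_2)/k_1
r_max = k2 * E0               # maximum rate, Eq. 27

S = np.logspace(-6, -2, 5)    # substrate concentrations, M
rate = r_max * S / (S + K_M)  # Michaelis-Menten rate, Eq. 26

# Lineweaver-Burk form: 1/r = (K_M/r_max)(1/[S]) + 1/r_max  (Eq. 28)
slope = K_M / r_max
intercept = 1.0 / r_max

for s, r in zip(S, rate):
    print(f"[S] = {s:.1e} M    rate = {r:.3e} M/s")
print(f"K_M = {K_M:.2e} M,  r_max = {r_max:.2e} M/s")
print(f"Lineweaver-Burk slope = {slope:.2e} s, intercept = {intercept:.2e} s/M")
```

At substrate concentrations well above the assumed $K_M$, the printed rate approaches $r_\text{max}$, as Equation $\ref{Eq27}$ predicts.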
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(LibreTexts)/29%3A_Chemical_Kinetics_II-_Reaction_Mechanisms/29.09%3A_The_Michaelis-Menten_Mechanism_for_Enzyme_Catalysis.txt
|
Collision Frequency using the Hard Sphere Model
For the bimolecular gas-phase reaction
$\ce{Q(g) \, + \, B(g) \rightarrow products} \nonumber$
the reaction rate is
$\text{rate} = -\dfrac{d[Q]}{dt} = k[Q][B] \nonumber$
If it is assumed that every collision between Q and B particles results in products, the rate at which molecules collide is equal to the frequency of collisions per unit volume, $Z_{QB}$ (Equation 27.6.4)
$\text{rate} = Z_{QB} = \sigma_{QB} \langle v_r \rangle \rho_Q \rho_B \label{30.1.1}$
$Z_{QB}$, known as the collision frequency, has units of collisions per unit volume per unit time (collisions·m$^{-3}$·s$^{-1}$). It is possible to obtain a rough estimate of the value of the rate constant, $k$, from the collision frequency.
Equation $\ref{30.1.1}$ shows the rate on a molecular scale. Generally, $Z_{QB}$ is divided by Avogadro's number, $N_0$, to obtain the collision frequency on a molar scale
$\text{rate} = -\dfrac{d[Q]}{dt} = \dfrac{Z_{QB}}{N_0} = \dfrac{\sigma_{QB} \langle v_r \rangle}{N_0} \rho_Q \rho_B \nonumber$
To convert the number densities to molar concentrations, we need to realize that
$\dfrac{\rho_Q}{N_0} = [Q] \nonumber$
then
$-\dfrac{d[Q]}{dt} = \dfrac{Z_{QB}}{N_0} = \dfrac{\sigma_{QB} \langle v_r \rangle}{N_0} (N_0[Q])(N_0[B]) \nonumber$
$-\dfrac{d[Q]}{dt} = \dfrac{Z_{QB}}{N_0} = N_0\sigma_{QB} \langle v_r \rangle [Q][B] \nonumber$
$-\dfrac{d[Q]}{dt} = \dfrac{Z_{QB}}{N_0} = N_0 \sigma_{QB} \sqrt{\dfrac{8k_{B}T}{\pi\mu_{QB}}} [Q][B] \nonumber$
Thus
$Z_{QB} = N_0^2\sigma_{QB} \sqrt{\frac{8k_BT}{\pi\mu_{QB}}} [Q][B] \nonumber$
and
$k = N_0\sigma_{QB}\sqrt{\dfrac{8k_{B}T}{\pi\mu_{QB}}} \nonumber$
where:
• $\langle v_r \rangle$ is the mean relative speed of molecules, which is equal to $\sqrt{\dfrac{8k_{B}T}{\pi\mu_{QB}}}$
• $\rho_Q$ and $\rho_B$ are the number densities of Q molecules and B molecules
• $N_Q$ and $N_B$ are the numbers of Q molecules and B molecules
• $\sigma_{QB}$ is the averaged sum of the collision cross sections of molecules Q and B, $\sigma_{QB} = \pi \left (\dfrac{d_Q + d_B}{2} \right)^2$. The collision cross section represents the collision region presented by one molecule to another. $\sigma_{QB}$ is often written as $\pi d_{QB}^2$.
The units of $\sigma$ are $m^2$, the units of $N_0$ are $\text{mole}^{-1}$, and the units of $\langle v_r \rangle$ are $\dfrac{m}{s}$. Thus, the units of k are $\dfrac{m^3}{\text{mole}·s}$
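As a numerical illustration, the hard-sphere estimate $k = N_0\sigma_{QB}\sqrt{8k_BT/\pi\mu_{QB}}$ can be evaluated with a few lines of Python. The diameters and molar masses below are assumed, round-number values chosen only to show the order of magnitude of the result; they do not come from the text.

```python
import numpy as np

k_B = 1.380649e-23           # J/K
N_0 = 6.02214076e23          # mol^-1
T = 298.0                    # K

# Assumed (illustrative) hard-sphere parameters for Q and B
d_Q = 3.0e-10                # m, diameter of Q
d_B = 4.0e-10                # m, diameter of B
m_Q = 28.0e-3 / N_0          # kg per molecule (28 g/mol assumed)
m_B = 32.0e-3 / N_0          # kg per molecule (32 g/mol assumed)

sigma_QB = np.pi * ((d_Q + d_B) / 2.0) ** 2        # collision cross section, m^2
mu = m_Q * m_B / (m_Q + m_B)                       # reduced mass, kg
v_rel = np.sqrt(8.0 * k_B * T / (np.pi * mu))      # mean relative speed, m/s

k = N_0 * sigma_QB * v_rel                         # rate constant, m^3 mol^-1 s^-1
print(f"<v_r> = {v_rel:.0f} m/s")
print(f"k = {k:.2e} m^3 mol^-1 s^-1  ({k * 1000:.2e} L mol^-1 s^-1)")
```

For these assumed values the estimate is of order $10^{11}$ L·mol$^{-1}$·s$^{-1}$, the typical collision-limited upper bound for a gas-phase bimolecular rate constant.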
As noted earlier, using this hard-sphere collision theory value for the rate constant is a rough estimate, mainly because the temperature dependence of k is improperly represented in the hard-sphere collision theory prediction. The next sections will describe an initial attempt to correct that error.
Successful Collisions
For a successful collision to occur, the reactant molecules must collide with enough kinetic energy to overcome the repulsions of the electron clouds and to break the existing bonds. To take the energy dependence of a successful collision into account, we will introduce a new reaction cross-section, $\sigma_r (v_r)$, which takes into account the speed of the reactants. Thus
$k(v_r) = v_r\sigma_r(v_r) \nonumber$
The rate constant can be calculated by averaging over a distribution of all speeds, $f(v_r)$
$k=\int_0^\infty k(v_r)f(v_r)dv_r = \int_0^\infty v_r\sigma_r(v_r)f(v_r)dv_r \label{30.1.2}$
From equation 27.7.4, we know that $v_rf(v_r)dv_r$ is
$v_rf(v_r)dv_r = \left(\dfrac{\mu}{k_BT} \right)^{3/2} \left(\dfrac{2}{\pi} \right)^{1/2} v_r^3e^{-\mu v_r^2/2k_BT} dv_r \label{30.1.3}$
Equations $\ref{30.1.2}$ and $\ref{30.1.3}$ together express the rate constant in terms of the relative speed. If we want to compare this version of $k$ with the common, Arrhenius form of $k$, the variable of integration must be changed from relative speed to relative kinetic energy, $E_r$. The relationship is
$E_r = \dfrac{1}{2} \mu v_r^2 \, \text{which rearranges to} \, v_r = \left(\dfrac{2 E_r}{\mu} \right)^{1/2} \nonumber$
Thus,
$dv_r = \left(\dfrac{1}{2 \mu E_r} \right)^{1/2} dE_r \nonumber$
Substituting $v_r$ and $dv_r$ into equation $\ref{30.1.3}$ gives
$v_rf(v_r)dv_r = \left(\dfrac{\mu}{k_BT} \right)^{3/2} \left(\dfrac{2}{\pi} \right)^{1/2} \left(\dfrac{2 E_r}{\mu} \right)^{3/2} e^{-E_r/k_BT} \left(\dfrac{1}{2 \mu E_r} \right)^{1/2} dE_r \nonumber$
Substituting this equation into equation $\ref{30.1.2}$ and simplifying gives
$k = \left(\dfrac{2}{k_BT} \right)^{3/2} \left(\dfrac{1}{\mu\pi}\right)^{1/2} \int_0^{\infty} dE_r E_r e^{-E_r/k_BT} \sigma_r(E_r) \label{30.1.4}$
We can then assume that the energy dependent reaction cross section $\sigma_r(E_r)$ will include only those molecules which undergo effective collisions with a kinetic energy that is greater than or equal to a minimum sufficient energy, $E_0$. Thus $\sigma_r(E_r)$ is equal to 0 if $E_r < E_0$ and is equal to $\sigma_{QB}$ if $E_r \geq E_0$.
Thus,
$k = \left(\dfrac{2}{k_BT} \right)^{3/2} \left(\dfrac{1}{\mu\pi}\right)^{1/2} \int_{E_0}^{\infty} dE_r E_r e^{-E_r/k_BT} \sigma_{QB} \nonumber$
$= \left(\dfrac{8k_BT}{\mu\pi}\right)^{1/2} \sigma_{QB} e^{-E_0/k_BT} \left(1 + \dfrac{E_0}{k_BT}\right) \nonumber$
$= \langle v_r \rangle\sigma_{QB} e^{-E_0/k_BT} \left(1 + \dfrac{E_0}{k_BT}\right) \nonumber$
This model results in a value for $k$ that takes into account the temperature and a minimum energy requirement to determine the fraction of successful collisions. However, it is not yet equivalent to the Arrhenius equation, in which $k$ is proportional to $e^{-E_0/k_BT}$ and not to $e^{-E_0/k_BT} \left(1 + \dfrac{E_0}{k_BT}\right)$.
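A quick way to see how strongly the threshold energy suppresses the rate is to evaluate the factor $e^{-E_0/k_BT}\left(1 + \dfrac{E_0}{k_BT}\right)$ numerically. The sketch below assumes an illustrative threshold energy of 50 kJ/mol at 298 K; both numbers are arbitrary choices, not values from the text.

```python
import numpy as np

k_B = 1.380649e-23            # J/K
N_0 = 6.02214076e23           # mol^-1
T = 298.0                     # K
E0_molar = 50.0e3             # J/mol, assumed (illustrative) threshold energy
E0 = E0_molar / N_0           # J per molecule

x = E0 / (k_B * T)
model_factor = np.exp(-x) * (1.0 + x)   # this model's successful-collision factor
arrhenius_factor = np.exp(-x)           # simple Arrhenius-type factor, for comparison

print(f"E_0/(k_B T) = {x:.1f}")
print(f"exp(-E0/kBT)(1 + E0/kBT) = {model_factor:.2e}")
print(f"exp(-E0/kBT)             = {arrhenius_factor:.2e}")
```

For this assumed threshold, only a tiny fraction of collisions is successful, and the extra $(1 + E_0/k_BT)$ factor makes the model prediction noticeably larger than the simple exponential.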
Contributor
• Tom Neils, Grand Rapids Community College
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(LibreTexts)/30%3A_Gas-Phase_Reaction_Dynamics/30.01%3A_The_Rate_of_Bimolecular_Gas-Phase_Reaction_Can_Be_Calculated_Using_Hard-Sphere_Collision_Theory_an.txt
|
In the previous section, it was assumed that all collisions with sufficient energy would lead to a reaction between the Q and B particles. This is an unrealistic assumption because not all collisions occur with a proper alignment of the particles as shown in Figure $1$.
Thus, the energy-dependent reaction cross-section,$\sigma_r(E_r)$, introduced previously is inaccurate and must be modified to take into account the inefficient collisions. One modification is to employ the line-of-centers (loc) model for $\sigma_r(E_r)$. This model incorporates the angle of collision relative to the line drawn between the centers of the two colliding particles, as shown in Figure $2$.
In this model, an effective collision occurs when $E_{loc} > E_0$, where $E_{loc}$ takes into account the fact that not all particle collisions are head-on collisions. If we define $\vec{v}_r$ as the relative velocity of approach of particles Q and B, then $\vec{v}_r = \vec{v}_Q-\vec{v}_B$. The relative kinetic energy, $E_r$, is then $\dfrac{1}{2}\mu v_r^2$. From Figure $2$ we can see that the fraction of $E_r$ that can be applied to the collision, $E_{loc}$, depends upon $b$, the impact parameter, which is the perpendicular distance between the extrapolated paths traveled by the centers of the particles before the collision occurs. If $b$ is 0, then $E_{loc}$ = $E_r$, but for any other value of $b$, $E_{loc}$ < $E_r$. If $b$ is greater than the sum of the radii of Q and B, the particles will not collide, and $E_{loc}$ = 0. The calculation determining the exact relationship between $\sigma_r(E_r)$ and $E_r$ for this line-of-centers model is rather complicated, but the result is that $\sigma_r(E_r)$ is equal to 0 if $E_r < E_0$ and is equal to $\sigma_{QB} \left( 1 - \dfrac{E_0}{E_r} \right)$ if $E_r \geq E_0$.
When compared to the simple hard-sphere collision theory, we see that
$\sigma_r(E_r) = \sigma_{QB} \, \text {if} \, E_r \geq E_0 \, (\text{hard-sphere theory}) \label{30.2.1}$
$\sigma_r(E_r) = \sigma_{QB} \left( 1 - \dfrac{E_0}{E_r} \right) \, \text {if} \, E_r \geq E_0 \, (\text{line of centers theory}) \label{30.2.2}$
If we substitute Equation $\ref{30.2.2}$ into Equation $30.1.4$ we get
\begin{align*} k &= \left(\dfrac{2}{k_BT} \right)^{3/2} \left(\dfrac{1}{\mu\pi}\right)^{1/2} \int_{E_0}^{\infty} dE_r E_r e^{-E_r/k_BT} \sigma_{QB} \left( 1 - \dfrac{E_0}{E_r} \right) \\[4pt] &= \left(\dfrac{8k_BT}{\mu\pi}\right)^{1/2} \sigma_{QB} e^{-E_0/k_BT} \\[4pt] &= \langle v_r \rangle\sigma_{QB} e^{-E_0/k_BT} \end{align*} \nonumber
When compared to the simple hard-sphere collision theory, we see that
\begin{align*} k &= \langle v_r \rangle\sigma_{QB} e^{-E_0/k_BT} \left(1 + \dfrac{E_0}{k_BT}\right) \quad (\text{hard-sphere theory}) \\[4pt] &= \langle v_r \rangle\sigma_{QB} e^{-E_0/k_BT} \quad (\text{line of centers theory})\end{align*} \nonumber
The line-of-centers theory expresses $k$ in the same form as the Arrhenius equation, yet experimental values of $k$ still differ from those predicted by the line-of-centers model. The errors arise because $\sigma_r(E_r)$ is not accurately described by $\sigma_{QB} \left( 1 - \dfrac{E_0}{E_r} \right)$. More work needs to be done to improve the model for describing $A$, the Arrhenius pre-exponential factor.
30.03: The Rate Constant for a Gas-Phase Chemical Reaction May Depend on the Orientations of the Colliding Molecules
In the previous section, the simple hard-sphere model for collisions was modified to take into account the fact that not every collision of particles occurred with sufficient energy to result in a reaction. The line of centers model assumed that all colliding particles were spheres, yet we know that this is definitely not the case. Thus, we need to modify the collision model to factor in the orientation of non-spherical particles. Figure \(1\) shows an example of properly oriented particles and an example of improperly oriented molecules.
Because proper orientation of colliding molecules is necessary, the hard-sphere collision model overestimates the number of effective collisions. This is one of the factors that leads to an incorrect value for $A$ estimated by the hard-sphere collision model.
30.04: The Internal Energy of the Reactants Can Affect the Cross Section of a Reaction
A further modification of our collision model involves including the internal energy of the colliding gas particles. Recall that the internal energy for a polyatomic gas particle includes electronic, vibrational, and, possibly, rotational energy. It is possible for polyatomic molecules to be in a high enough vibrational state that their vibrational energy alone is greater than $E_0$. Such molecules would not require any additional translational energy to react. Thus, for a constant total energy, the value of $\sigma_r$ depends on the vibrational and, to a smaller extent, rotational state of the particle. During a collision between reacting particles, energy can be exchanged between the different degrees of freedom of the particle, so we will need to modify our model to take into account these energy exchanges that occur during a collision. We will see in section 30.5 that using a center-of-mass coordinate system to describe the reaction collision will allow us to incorporate internal energy into our model.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(LibreTexts)/30%3A_Gas-Phase_Reaction_Dynamics/30.02%3A_A_Reaction_Cross_Section_Depends_Upon_the_Impact_Parameter.txt
|
We will apply a center-of-mass coordinate system to the bimolecular reaction of ideal gases
$\ce{B(g) + Q(g) -> F(g) + G(g)} \nonumber$
to develop an improved model for reaction kinetics that produces theoretical values that more closely represent experimental results. In this model, the reactant molecules will travel at velocities $\vec{v}_B$ and $\vec{v}_Q$ before the collision. The product molecules will travel at velocities $\vec{v}_F$ and $\vec{v}_G$ after the collision.
The Velocity and Kinetic Energy of the Center of Mass
The center of mass of two objects must lie along a straight line drawn between the centers of the two objects. Because the two molecules are heading toward each other, the line between them will be changing in size and can be best represented with the vector, $\vec{r}$, where $\vec{r} = \vec{r}_B - \vec{r}_Q$. The pre-reaction center of mass, $\textbf{R}$ is defined as
$\textbf{R} = \dfrac{m_B \textbf{r}_B + m_Q \textbf{r}_Q}{M} \nonumber$
where $m_B$ and $m_Q$ are the masses of the individual reactants, and $M$ is the total mass, $m_B + m_Q$.
Velocity is $\dfrac{d\vec{r}}{dt}$, and so the velocity of the center of mass, $\vec{v}_{cm}$ is
$\vec{v}_{cm} = \dfrac{m_B \vec{v}_B + m_Q \vec{v}_Q}{M} \nonumber$
The kinetic energy of the reacting system is
$\text{KE}_{react} = \dfrac{1}{2}m_Bv_B^2 \, + \, \dfrac{1}{2}m_Qv_Q^2 \nonumber$
This equation can be rewritten as
$\text{KE}_{react} = \dfrac{1}{2}Mv_{cm}^2 \, + \, \dfrac{1}{2}\mu v_r^2 \label{30.5.1}$
Here, $\mu$ is the reduced mass and the relative speed of the colliding particles is $v_r = |\vec{v}_r| = | \vec{v}_B - \vec{v}_Q |$. Because we are assuming we have ideal gases, $\text{KE}_{react}$ is a constant.
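The decomposition in Equation $\ref{30.5.1}$ is easy to verify numerically. The Python sketch below uses arbitrary, assumed masses and velocity vectors for B and Q and checks that the total kinetic energy equals the sum of the center-of-mass and relative-motion terms.

```python
import numpy as np

# Assumed (illustrative) masses and velocity vectors for B and Q
m_B, m_Q = 2.0e-26, 5.0e-26                 # kg
v_B = np.array([600.0, 0.0, 0.0])           # m/s
v_Q = np.array([-400.0, 150.0, 0.0])        # m/s

M = m_B + m_Q                               # total mass
mu = m_B * m_Q / M                          # reduced mass
v_cm = (m_B * v_B + m_Q * v_Q) / M          # velocity of the center of mass
v_r = v_B - v_Q                             # relative velocity

KE_total = 0.5 * m_B * (v_B @ v_B) + 0.5 * m_Q * (v_Q @ v_Q)
KE_cm = 0.5 * M * (v_cm @ v_cm)             # center-of-mass term
KE_rel = 0.5 * mu * (v_r @ v_r)             # relative-motion term

print(f"KE(total)        = {KE_total:.4e} J")
print(f"KE(cm) + KE(rel) = {KE_cm + KE_rel:.4e} J")  # equal to within roundoff
```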
The Reaction
As shown in equation $\ref{30.5.1}$, the kinetic energy of the moving reactants consists of two parts, one due to the relative motion of the two reactant particles, and one due to the motion of the center of mass. The energy due to the motion of the center of mass will not contribute to the reaction, as will soon be shown. Only the energy due to the relative motion of the reacting molecules, $\frac{1}{2}\mu v_r^2$, will contribute to overcoming the energy barrier of the reaction. And only a fraction of that energy will contribute, because the molecules do not collide head-on. Figure $1$ gives a view of the progress of the reaction at several times.
Removing the Center of Mass Term
Once $\ce{F}$ and $\ce{G}$ have formed, the post-reaction center of mass, $\textbf{R}$ is defined as
$\textbf{R} = \dfrac{m_F \vec{r}_F + m_G \vec{r}_G}{M}\nonumber$
where $m_F$ and $m_G$ are the masses of the individual products, and $M$ is the total mass, $m_F + m_G$. That means that the velocity of the center of mass, $\vec{v}_{cm}$, is
$\vec{v}_{cm} = \dfrac{m_F \vec{v}_F + m_G \vec{v}_G}{M} \nonumber$
The kinetic energy of the product system is
$\text{KE}_{react} = \dfrac{1}{2}m_Fv_F^2 \, + \, \dfrac{1}{2}m_Gv_G^2 \nonumber$
This equation can be rewritten as
$\text{KE}_{react} = \dfrac{1}{2}Mv_{cm}^2 \, + \, \dfrac{1}{2}\mu_P v_{P_r}^2 \nonumber$
Because the products have different individual masses than the reactants, the reduced mass $\mu_P$ and the relative speed $v_{P_r}$ of the products must be denoted as having different values from those of the reactants. The total mass is conserved, however, as is $\vec{v}_{cm}$, the velocity of the center of mass. Because linear momentum must be conserved,
$m_B\vec{v}_B + m_Q\vec{v}_Q = m_F\vec{v}_F + m_G\vec{v}_G = M\vec{v}_{cm}\nonumber$
Because mass is conserved and velocity of the center of mass is constant, the energy contribution of the motion of the center of mass is also constant, and thus does not contribute to the kinetic energy used to attain a successful reaction collision.
Estimating the Total Internal Energy
Using the center of mass model for a reaction, we have found the kinetic energy terms for the reactants, $\frac{1}{2}\mu v_r^2$, and products, $\frac{1}{2}\mu_Pv_{P_r}^2$. Thus the law of conservation of energy tells us
$E_{Reactant_{(internal)}} + \dfrac{1}{2}\mu v_r^2 = E_{Product_{(internal)}} + \dfrac{1}{2}\mu_Pv_{P_r}^2 \nonumber$
This relationship can be rewritten as
$E_{Reactant_{(internal)}} + E_{Reactant_{(translational)}} = E_{Product_{(internal)}} + E_{Product_{(translational)}} \nonumber$
or
$E_{R_{(int)}} + E_{R_{(trans)}} = E_{P_{(int)}} + E_{P_{(trans)}} \nonumber$
The velocity of the products can be defined based on the laws of conservation of energy and momentum. Unfortunately, the angle between the velocity vector of the reactants, $\vec v_r$, and the velocity vector of the products, $\vec v_{P_r}$, cannot be determined, because product molecules can theoretically disperse from the collision in any direction.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(LibreTexts)/30%3A_Gas-Phase_Reaction_Dynamics/30.05%3A_A_Reactive_Collision_Can_Be_Described_in_a_Center-of-Mass_Coordinate_System.txt
|
Crossed molecular beam experiments are chemical experiments where two beams of atoms or molecules are collided together to study the dynamics of the chemical reaction. These experiments can detect individual reactive collisions as well as determine the distribution of velocities and the scattering angle of the reaction products.[1]
Technique
In a crossed molecular beam apparatus, two collimated beams of gas-phase atoms or molecules, each dilute enough to ignore collisions within each beam, intersect in a vacuum chamber, as shown in Figure \(\PageIndex{1a}\). The direction and velocity of the resulting product molecules are then measured. These data are frequently coupled with mass spectrometric data to yield information about the partitioning of energy among translational, rotational, and vibrational modes of the product molecules.[2]
History
The crossed molecular beam technique was developed by Dudley Herschbach and Yuan T. Lee, for which they were awarded the 1986 Nobel Prize in Chemistry.[3] While the technique was demonstrated in 1953 by Taylor and Datz of Oak Ridge National Laboratory,[4] Herschbach and Lee refined the apparatus and began probing gas-phase reactions in unprecedented detail.
Early crossed beam experiments investigated alkali metals such as potassium, rubidium, and cesium. When the scattered alkali metal atoms collided with a hot metal filament, they ionized, creating a small electric current. Because this detection method is nearly perfectly efficient, the technique was quite sensitive.[2] Unfortunately, this simple detection system only detects alkali metals. New techniques for detection were needed to analyze main group elements.
Detecting scattered particles with a metal filament gave a good indication of the angular distribution but provided no sensitivity to kinetic energy. In order to gain insight into the kinetic energy distribution, early crossed molecular beam apparatuses used a pair of slotted disks placed between the collision center and the detector. By controlling the rotation speed of the disks, only particles with a certain known velocity could pass through and be detected.[2] With information about the velocity, angular distribution, and identity of the scattered species, useful information about the dynamics of the system can be derived.
Later improvements included the use of quadrupole mass filters to select only the products of interest,[5] as well as time-of-flight mass spectrometers to allow easy measurement of kinetic energy. These improvements also allowed the detection of a vast array of compounds, marking the advent of the "universal" crossed molecular beam apparatus.
The inclusion of supersonic nozzles (figure \(\PageIndex{1b}\)) to collimate the gases expanded the variety and scope of experiments, and the use of lasers to excite the beams (either before impact or at the point of reaction) further broadened the applicability of this technique.[2]
30.07: Reactions Can Produce Vibrationally Excited Molecules
As shown at the end of section 30.5, using the center of mass reaction model allows us to state that
$E_{R_{(int)}} + E_{R_{(trans)}} = E_{P_{(int)}} + E_{P_{(trans)}} \label{30.7.1}$
where $E_{R_{(int)}}$ and $E_{P_{(int)}}$ represent the rotational, vibrational, and electronic energies collectively described as the internal energy of the reactants and the products, respectively. If we apply equation $\ref{30.7.1}$ to the well-studied gas-phase reaction
$\ce{F(g) + D_2(g) -> DF(g) + D(g)} \nonumber$
we can discuss the distribution of a fixed total collision energy between $E_{P_{(int)}}$ and $E_{P_{(trans)}}$. Specifically, if we assume that $\ce{F(g)}$ and $\ce{D(g)}$ are in their ground electronic states, that $\ce{D2(g)}$ is in its ground rotational, vibrational, and electronic states, and that $\ce{DF(g)}$ is in its ground rotational and electronic states, we can focus on the possible vibrational states of the $\ce{DF(g)}$ that can be populated. Figure $1$ shows the potential energy curve of the process that is described below.
Expanding Equation \ref{30.7.1} to describe this assumed process, we get
$E_{R_{(rot)}} + E_{R_{(vib)}} + E_{R_{(elec)}} + E_{R_{(trans)}} = E_{P_{(rot)}} + E_{P_{(vib)}} + E_{P_{(elec)}} + E_{P_{(trans)}} \label{30.7.2}$
Let's assume that $\ce{D2}$ and $\ce{DF}$ are harmonic oscillators with $\tilde{\nu}_{D_2}$ = 2990 cm$^{-1}$ and $\tilde{\nu}_{DF}$ = 2907 cm$^{-1}$.
If $\ce{D2}$ and $\ce{DF}$ are in their ground electronic states, then $E_{R_{(elec)}} = -D_e(D_2)$ and $E_{P_{(elec)}} = -D_e(DF)$. From experiments, we know that
$-D_e(DF) - (-D_e(D_2)) = -140 \, \text{kJ/mol} \nonumber$
We also know from experiments that the activation energy for this reaction is about 6 kJ/mol,$^1$ so if we assume that the relative translational energy of the reactants is 7.1 kJ/mol, they will have sufficient energy to overcome the activation energy barrier.
With these data, we can write Equation $\ref{30.7.2}$ as
$0 + E_{R_{(vib)}} - D_e(D_2) + 7.1 \dfrac{kJ}{mol} = 0 + E_{P_{(vib)}} - D_e(DF) + E_{P_{(trans)}} \nonumber$
then
$E_{R_{(vib)}} + 7.1 \dfrac{kJ}{mol} = E_{P_{(vib)}} + \left(-D_e(DF) - (-D_e(D_2))\right) + E_{P_{(trans)}} \nonumber$
then
$E_{R_{(vib)}} + 7.1 \dfrac{kJ}{mol} = E_{P_{(vib)}} - 140 \dfrac{kJ}{mol} + E_{P_{(trans)}}\nonumber$
Because $\ce{D2}$ is in its ground vibrational state, $E_{R_{(vib)}} = \dfrac{1}{2}h\nu_{D_2} = 17.9 \dfrac{kJ}{mol}$
Putting this altogether,
$17.9 \dfrac{kJ}{mol} + 140 \dfrac{kJ}{mol} + 7.1 \dfrac{kJ}{mol} - E_{P_{(vib)}} = E_{P_{(trans)}} \nonumber$
$165 \dfrac{kJ}{mol} - E_{P_{(vib)}} = E_{P_{(trans)}} \label{30.7.3}$
Equation $\ref{30.7.3}$ tells us that the reaction will occur only if $E_{P_{(vib)}}$ is less than 165 kJ/mol because $E_{P_{(trans)}}$ must be a positive value.
If $\ce{DF(g)}$ is a harmonic oscillator, then
\begin{align*} E_{P_{(vib)}} &= \left( v + \dfrac{1}{2} \right) h\nu_{DF} \\[4pt] &= \left( v + \dfrac{1}{2} \right)(34.8\, kJ/mol) < 165\, kJ/mol \end{align*} \nonumber
Vibrational states $v$ = 0, 1, 2, 3, and 4 will result in $E_{P_{(vib)}}$ $\leq$ 165 kJ/mol. Thus $\ce{DF(g)}$ molecules in these five vibrational states can be produced by the reaction. Note that under this set of assumptions, the $\ce{DF}$ molecules produced in different vibrational states will have correspondingly different values of $E_{P_{(trans)}}$.
$^1$Average of experimental $E_a$ values listed at https://kinetics.nist.gov/kinetics/, accessed 9/22/2021.
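The bookkeeping above can be summarized in a few lines of code. The Python sketch below uses the 165 kJ/mol total and the 34.8 kJ/mol vibrational quantum quoted in the text, and simply lists the vibrational states of $\ce{DF}$ that leave a positive translational energy.

```python
# Energy bookkeeping for F + D2 -> DF + D, values from the text
E_total = 165.0   # kJ/mol available to the products (Eq. 30.7.3)
hv_DF = 34.8      # kJ/mol, harmonic vibrational quantum of DF

v = 0
while (v + 0.5) * hv_DF <= E_total:
    E_vib = (v + 0.5) * hv_DF
    E_trans = E_total - E_vib
    print(f"v = {v}: E_vib = {E_vib:6.1f} kJ/mol   E_trans = {E_trans:6.1f} kJ/mol")
    v += 1
```

The loop stops after $v = 4$, confirming that only the five lowest vibrational states of $\ce{DF}$ can be populated under these assumptions.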
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(LibreTexts)/30%3A_Gas-Phase_Reaction_Dynamics/30.06%3A_Reactive_Collisions_Can_be_Studied_Using_Crossed_Molecular_Beam_Machines.txt
|
Calculation of the speed distribution for DF molecules based upon the vibrational state of the molecule
This section describes how data from crossed molecular beam experiments allow us to describe the velocity and angular distribution of particles produced by the simple bi-molecular collision
$\ce{F(g) + D_2(g) -> DF(g) + D(g)} \label{react1}$
As noted in the previous section, the internal vibrational energy of the products will affect the velocity and distribution because if $\ce{DF(g)}$ is a harmonic oscillator, then
$E_{P_{(trans)}} + E_{P_{(vib)}} = 165 \dfrac{kJ}{mol} \nonumber$
and
$E_{P_{(vib)}} = \left( v + \dfrac{1}{2} \right)(34.8 \dfrac{kJ}{mol}) \nonumber$
Thus
\begin{align} E_{P_{(trans)}} + E_{P_{(vib)}} &= \dfrac{1}{2}\mu_Pv_{P_r}^2 + \left( v + \dfrac{1}{2} \right)\left(34.8 \dfrac{kJ}{mol}\right) \nonumber \\[4pt] &= 165 \dfrac{kJ}{mol} \label{30.8.1} \end{align}
The reduced mass of the products $\mu_P$ is
$\dfrac{(21.01 \, g/mol)(2.014 \, g/mol)}{21.01 \, g/mol + 2.014 \, g/mol} = 1.84 \, g/mol = 0.00184 \, kg/mol \nonumber$
so that solving for the product velocity in Equation $\ref{30.8.1}$ for $v_{P_r}$ gives
$v_{P_r} = \sqrt{\left(\dfrac{2}{0.00184 \, kg/mol}\right) \left(1.65 \times 10^5 \, \dfrac{J}{mol} - \left(v + \dfrac{1}{2}\right)(3.48 \times 10^4) \dfrac{J}{mol} \right) } \nonumber$
It can be shown that the corresponding relative velocity of the $\ce{DF}$ molecule and the center of mass,$|\vec v_{DF} - \vec v_{cm} |$, is equal to $\dfrac{m_D}{M}v_{P_r}$ for this reaction, with $M = m_{D_2} + m_F$. Table $1$ shows the values of $v_{P_r}$ and $|\vec v_{DF} - \vec v_{cm} |$ for vibrational states $v = 0-4$.
Table $1$

| Vibrational state, $v$ | Relative velocity of the products, $v_{P_r}$ (m/s) | $|\vec v_{DF} - \vec v_{cm} |$ (m/s) |
|---|---|---|
| 0 | 12,700 | 1111 |
| 1 | 11,100 | 971 |
| 2 | 9270 | 811 |
| 3 | 6930 | 606 |
| 4 | 3200 | 280 |
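The table values can be reproduced directly from Equation $\ref{30.8.1}$ and the $\dfrac{m_D}{M}v_{P_r}$ relation. The short Python sketch below uses the masses and energies quoted in the text; small differences from the tabulated values reflect rounding.

```python
import numpy as np

# Values from the text for F + D2 -> DF + D
mu_P = 0.00184            # kg/mol, reduced mass of the DF + D products
E_total = 1.65e5          # J/mol available to the products
hv_DF = 3.48e4            # J/mol, DF vibrational quantum
m_D = 2.014               # g/mol
M = 2 * 2.014 + 19.00     # g/mol, total mass M = m(D2) + m(F)

for v in range(5):
    E_trans = E_total - (v + 0.5) * hv_DF      # J/mol left as translation
    v_Pr = np.sqrt(2.0 * E_trans / mu_P)       # relative speed of the products, m/s
    v_DF = (m_D / M) * v_Pr                    # speed of DF relative to the center of mass
    print(f"v = {v}: v_Pr = {v_Pr:7.0f} m/s   |v_DF - v_cm| = {v_DF:5.0f} m/s")
```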
Analysis of Crossed Molecular Beam Data
This link takes you to the 1986 Nobel Prize acceptance lecture of Yuan Tseh Lee, in which he describes the crossed molecular beam studies that his group carried out on the reaction \ref{react1}. On page 3 of the lecture, you will see a center-of-mass velocity flux contour map for the reaction. This map measures the speed distribution and the angular dispersion distribution as a result of collisions between one $D_2$ molecule approaching from the left and one F atom approaching from the right. Moving away from the center of the graph in any direction represents an increase in speed. The dashed line circles represent the maximum speed that a molecule in that given vibrational state can obtain. Notice that the highest vibrational state is closest to the center of the map. Thus, the map is consistent with theory, which claims that as the vibrational energy increases, the translational energy decreases.
The contour areas on the map represent a constant number of $\ce{DF(g)}$ molecules. The more closely spaced the contour lines, the greater the number of $\ce{DF(g)}$ molecules with that velocity in that vibrational state. An estimate of the population distribution is that roughly half the molecules are in the $v = 3$ state, roughly 25% of the molecules occupy the $v = 2$ state, and roughly 25% of the molecules occupy the $v = 4$ state. Very few molecules occupy the $v = 0$ or $v = 1$ states. The fact that there are molecules with velocities between the dashed lines shows that these $\ce{DF}$ molecules are in various rotational states within each vibrational state. If $E_{rot}$ = 0 and $J = 0$, the molecular velocity distribution would be centered on each dashed line. But because these molecules do exist in various rotational states, their $E_{rot}$ is greater than 0, and their translational energy falls below the maximum allowed for their vibrational state, placing their speeds between the dashed circles. The distribution of vibrational states among the $\ce{DF}$ product molecules more closely resembles a normal distribution (i.e., Gaussian-like) than a Boltzmann distribution.
Example $1$
Determine the populations of the five lowest vibrational states of $\ce{DF}(v)$ relative to $\ce{DF}(v=3)$, assuming that the distribution is in thermal equilibrium at 298 K. You can assume that the $\ce{DF}$ molecules act as harmonic oscillators with $\tilde{\nu}_{DF}$ = 2907 cm$^{-1}$.
Solution
If the $\ce{DF(g)}$ molecules are in thermal equilibrium, the population of vibrational states will follow a Boltzmann distribution. Therefore,
$N(v) = \Large{e}^{\small{-\dfrac{(v + 1/2)h\nu_{DF}}{k_BT}}}\nonumber$
and
$N(v =3) = \Large{e}^{\small{-\dfrac{(3 + 1/2)h\nu_{DF}}{k_BT}}}\nonumber$
so
$\dfrac{N(v)}{N(v=3)} = \Large{e}^{\small{-\dfrac{(v -3)h\nu_{DF}}{k_BT}}}\nonumber$
| $v$ | $\dfrac{N(v)}{N(v=3)}$ |
|---|---|
| 0 | $2.03 \times 10^{18}$ |
| 1 | $1.60 \times 10^{12}$ |
| 2 | $1.26 \times 10^{6}$ |
| 3 | 1 |
| 4 | $7.90 \times 10^{-7}$ |
It is clear that when the molecules are at thermal equilibrium, the population of the vibrational states decreases with an increasing $v$, which is not the population pattern found from the experiment.
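The ratios in the table follow directly from the Boltzmann expression derived above. A minimal Python check, using the 34.8 kJ/mol vibrational quantum (2907 cm$^{-1}$) at 298 K, is shown below; slight differences from the tabulated values are rounding.

```python
import numpy as np

R = 8.314          # J mol^-1 K^-1
T = 298.0          # K
hv_DF = 34.8e3     # J/mol, DF vibrational quantum (2907 cm^-1)

for v in range(5):
    ratio = np.exp(-(v - 3) * hv_DF / (R * T))   # N(v)/N(v=3) at thermal equilibrium
    print(f"v = {v}: N(v)/N(v=3) = {ratio:.2e}")
```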
The map also shows that a majority of $\ce{DF(g)}$ molecules generally head back towards the direction from which the F atom originally approached the collision. Some even head directly back from whence they came. This type of collision is called a rebound reaction. We will see in the next section that not all reactions are rebound reactions.
A different style of contour map is shown in figure $1$. In this view, the density of the dots and the intensity of their color represent the number of particles being scattered at specific angles. This map does not show the vibrational state of the DF molecules.
30.09: Not All Gas-Phase Chemical Reactions Are Rebound Reactions
Stripping Reactions
In the previous section we showed that the reaction
$\ce{D_2(g) + F(g) -> DF(g) + D(g)} \nonumber$
is a rebound reaction because the vast majority of $\ce{DF}$ molecules bounce off from the collision with $\ce{D2}$ back toward the general direction from which they came. In this section we will look at stripping reactions, reactions in which a majority of the product molecules continue moving on after the collision in the same direction that the precursor reactant molecules were going.
In stripping reactions, the experimental hard-sphere collision cross section is generally found to be larger than the theoretical estimate. The experimental cross-section is so large that it would be possible for two particles to remain inside the collision area and yet pass by each other without colliding. It has been determined experimentally that an electron transfer between the two particles occurs before they collide. The resulting ions are then drawn toward each other because of the Coulomb potential created by the opposite charges. One such reaction involves $\ce{K(g)}$ and $\ce{I2(g)}$. The mechanism is:
\begin{align} \ce{K(g) + I_2(g)} & \ce{->} \ce{K^+(g)} + \ce{I_2^{-}(g)} \tag{step 1} \\[4pt] \ce{K^+(g) + I_2^{-}(g)} &\ce{->} \ce{KI(g)} + \ce{I(g)} \tag{step 2} \end{align}
After the collision, the $\ce{KI}$ molecule moves off in the same general direction as the incoming $\ce{K}$ atom.
Meta-stable Intermediates
There are other bimolecular gas-phase collision reactions in which the product molecules disperse in both forward and reverse directions after the reaction. There is no explanation for these types of post-collision scattering patterns if we insist on using a simple hard-sphere collision model. However, if the colliding particles form a single atom-molecule structure that lasts long enough for the structure to undergo many rotations before splitting apart into products, it is reasonable for the newly formed products to scatter in both the forward and reverse directions.
The discovery of these various reaction types was made possible by the development and use of the crossed molecular beam instruments described in Section 30.6.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(LibreTexts)/30%3A_Gas-Phase_Reaction_Dynamics/30.08%3A_The_Velocity_and_Angular_Distribution_of_the_Products_of_a_Reactive_Collision_Provide_a_Molecular_.txt
|
A potential energy surface (PES) describes the potential energy of a system, especially a collection of atoms, in terms of certain parameters, normally the positions of the atoms. The surface might define the energy as a function of one or more coordinates; if there is only one coordinate, the surface is called a potential energy curve or energy profile. It is helpful to use the analogy of a landscape: for a system with two degrees of freedom (e.g. two bond lengths), the value of the energy (analogy: the height of the land) is a function of two bond lengths (analogy: the coordinates of the position on the ground). The Potential Energy Surface represents the concept that each geometry (both external and internal) of the atoms of the molecules in a chemical reaction has associated with it a unique potential energy. This creates a smooth energy “landscape” and chemistry can be viewed from a topology perspective of particles evolving as they pass through potential energy "valleys" and "passes".
The PES concept finds application in fields such as chemistry and physics, especially in the theoretical sub-branches of these subjects. It can be used to theoretically explore properties of structures composed of atoms, for example, finding the minimum energy shape of a molecule or computing the rates of a chemical reaction.
Potential Energy Curves (2-D Potential Energy Surfaces)
The energy of a system of two atoms depends on the distance between them. At large distances the energy is zero, meaning “no interaction”. At distances of several atomic diameters, attractive forces dominate, whereas at very close approaches the force is repulsive, causing the energy to rise. The attractive and repulsive effects are balanced at the minimum point in the curve. Plots that illustrate this relationship are quite useful in defining certain properties of a chemical bond (Figure $1$).
The internuclear distance at which the potential energy minimum occurs defines the bond length. This is more correctly known as the equilibrium bond length because thermal motion causes the two atoms to vibrate about this distance. In general, the stronger the bond, the smaller the bond length.
Attractive forces operate between all atoms, but unless the potential energy minimum is at least of the order of RT, the two atoms will not be able to withstand the disruptive influence of thermal energy long enough to result in an identifiable molecule. Thus we can say that a chemical bond exists between the two atoms in H2. The weak attraction between argon atoms does not allow Ar2 to exist as a molecule, but it does give rise to the van der Waals force that holds argon atoms together in their liquid and solid forms.
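For a concrete example of such a potential energy curve, a diatomic interaction is often modeled with a Morse function. The Python sketch below uses assumed, roughly H$_2$-like parameters purely for illustration; it is not a fit to any particular molecule.

```python
import numpy as np

def morse(r, D_e, a, r_e):
    # Morse curve shifted so that V -> 0 at large separation and V(r_e) = -D_e
    return D_e * ((1.0 - np.exp(-a * (r - r_e))) ** 2 - 1.0)

# Assumed, roughly H2-like parameters (illustrative only)
D_e = 458.0     # kJ/mol, well depth
a = 19.4        # nm^-1, controls the curvature (width) of the well
r_e = 0.0741    # nm, equilibrium bond length

for r in (0.05, 0.0741, 0.10, 0.20, 0.50):
    print(f"r = {r:.4f} nm   V = {morse(r, D_e, a, r_e):8.1f} kJ/mol")
```

The printed values show the qualitative shape described above: the energy is lowest at the equilibrium bond length and approaches zero at large separation.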
Potential, Kinetic, and Total Energy for a System
Quantum theory tells us that an electron in an atom possesses kinetic energy $K$ as well as potential energy $V$, so the total energy $E$ is always the sum of the two: $E = V + K$. The relation between them is surprisingly simple: $K = –0.5 V$. This means that when a chemical bond forms (an exothermic process with $ΔE < 0$), the decrease in potential energy is accompanied by an increase in the kinetic energy (embodied in the momentum of the bonding electrons), but the magnitude of the latter change is only half as much, so the change in potential energy always dominates. The bond energy $–ΔE$ has half the magnitude of the decrease in potential energy.
Mathematical Definition and Computation of a Potential Energy Surface
The geometry of a set of atoms can be described by a vector, $r$, whose elements represent the atom positions. The vector $r$ could be the set of the Cartesian coordinates of the atoms, or could also be a set of inter-atomic distances and angles. Given $r$, the potential energy surface $V(r)$ specifies the energy for every geometry $r$ of interest. Using the landscape analogy from the introduction, $V(r)$ gives the height on the "energy landscape" so that the concept of a potential energy surface arises. An example is the PES for the water molecule (Figure $2$), which shows the energy minimum corresponding to the optimized molecular structure for water: O–H bond lengths of 0.0958 nm and an H–O–H bond angle of 104.5°.
The Dimensionality of a Potential Energy Surface
To define an atom's location in 3-dimensional space requires three coordinates (e.g., $x$, $y$, and $z$ in Cartesian coordinates, or $r$, $\theta$, and $\phi$ in spherical coordinates) or degrees of freedom. However, a reaction, and hence the corresponding PES, does not depend on the absolute position of the reaction, only the relative positions (internal degrees). Hence both translation and rotation of the entire system can be removed (each with 3 degrees of freedom, assuming non-linear geometries). So the dimensionality of a PES is
$3N-6\nonumber$
where $N$ is the number of atoms involved in the reaction (i.e., the total number of atoms in the reactants). The PES is a hypersurface with many degrees of freedom, and typically only a few are plotted at any one time for understanding. See Calculate Number of Vibrational Modes for a more detailed picture of how this applies to calculating the number of vibrations in a molecule.
To study a chemical reaction using the PES as a function of atomic positions, it is necessary to calculate the energy for every atomic arrangement of interest. Methods of calculating the energy of a particular atomic arrangement of atoms are well known. For very simple chemical systems or when simplifying approximations are made about inter-atomic interactions, it is sometimes possible to use an analytically derived expression for the energy as a function of the atomic positions. An example is
$\ce{H + H_2 -> H_2 + H}\label{30.10.1}$
a system that is described by a function of the three $\ce{H-H}$ distances. For more complicated systems, calculation of the energy of a particular arrangement of atoms is often too computationally expensive for large-scale representations of the surface to be feasible.
Applications of Potential Energy Surfaces
A PES is a conceptual tool for aiding the analysis of molecular geometry and chemical reaction dynamics. Once the necessary points are evaluated on a PES, the points can be classified according to the first and second derivatives of the energy with respect to position, which respectively are the gradient and the curvature. Stationary points (or points with a zero gradient) have physical meaning: energy minima correspond to physically stable chemical species and saddle points correspond to transition states, the highest energy point on the reaction coordinate (which is the lowest energy pathway connecting a chemical reactant to a chemical product).
A Hypothetical Endothermic Reaction PES
Figure $3$ shows an example of a PES for a hypothetical reaction system, with the corresponding 2-D energy contour map projected on the plane below. The contour map shows isolines of potential energy, using color to differentiate between high and low energies. In this figure, the highest potential energy is indicated with red, and as the color changes along the ROYGBIV scale, the potential energy decreases until violet is reached, indicating the lowest potential energy. In the PES, the $z$ axis represents the potential energy, with the energy of the plane in the PES defined as 0 kJ. The color scheme of the PES has the same meaning as that of the 2-D contour map - red is high potential energy and violet is low potential energy.
Before the reaction, the reactants are found at (or near) the energy minimum on the right. (The deep well in the PES and the smallest, violet oval in the contour map.) As the reaction proceeds, the reactants are shown following the minimum energy pathway toward the products, as designated by the dashed red line. The point "2" designates the transition state at the saddle point, the highest energy point of the reaction process. The reaction then moves on to form the products, which sit in the potential energy well on the left. (Designated by the shallower well on the PES and by the dark blue oval on the left in the contour map.) Because the potential energy of the reactants is greater than the potential energy of the products, this reaction is an endothermic reaction. The point "1" represents a second, higher energy saddle point which is the transition state for an alternative reaction that is possible for this set of chemicals.
The Exchange Reaction $H_A + H_BH_C \rightarrow H_AH_B + H_C$
A specific application of a PES is the mapping of the reaction shown in equation $\ref{30.10.1}$, the exchange of hydrogen atoms in an H2 molecule. In this map, the individual H atoms are labeled as
$H_A + H_BH_C \rightarrow H_AH_B + H_C \nonumber$
We must take into account the collision angle between the $H_A$ atom and the $H_BH_C$ molecule. If we fix this collision angle at 180°, we are able to plot a PES that is dependent on the two parameters of $R_{BC}$ and $R_{AB}$. The resulting PES (figure $\PageIndex{4a}$) and energy contour map (figure $\PageIndex{4b}$) show us that if the particles are far enough apart, the potential energy of the reaction system starts out being described by the potential energy curve for the $H_BH_C$ molecule, and ends up being described by the potential energy curve of the $H_AH_B$ molecule. Individually, these curves both have appearances similar to the curve shown in Figure $1$. The PES and the energy contour map show the symmetrical nature of the potential energy changes that occur in this exchange process that involves products that are equivalent to the reactants.
As the $H_A$ atom more closely approaches the $H_BH_C$ molecule, the interactions between these particles begin to affect the potential energy of the system. There are many possible potential energy pathways that the reaction could follow. Figure $5$ shows the energy contour map for three possible reaction pathways:
In pathway 1, the $H_BH_C$ bond length $R_{BC}$ is held constant as the distance $R_{AB}$ decreases. This type of interaction would lead to a continually increasing potential energy for the system as the $H_A$ atom moves closer and closer to the $H_BH_C$ molecule. Eventually, the new $H_AH_B$ molecule would form, and the $H_C$ atom would break off and would move farther and farther away.
Pathway 2 shows a second possible reaction pathway in which the $H_BH_C$ bond length $R_{BC}$ increases even though the $H_A$ atom is still relatively far away. This pathway is unlikely because it requires a great deal of potential energy to stretch the $H_BH_C$ bond before the attractive force from the $H_A$ atom influences this bond lengthening.
As the particles travel along Pathway 3, the reactants still must pass through a potential energy maximum at the saddle point, but this local maximum is the lowest energy barrier separating reactants from products. As noted above, this saddle point is called the transition state structure (designated by the red dot in pathway 3). This is the structure that is equally poised to return to the reactants or move forward to form products. Pathway 3 is the minimum energy pathway, and thus the one most likely to be followed during a successful reaction.
It is sometimes useful to create energy contour maps that include information about the vibrational state of reactants and products, as shown in figure $6$.
The Exchange Reaction $\ce{F + D_2 -> DF + D}$
When modeling the potential energy for the reaction
$\ce{F(g) + D_2(g) -> DF(g) + D(g)}\nonumber$
it is useful to differentiate between the two deuterium atoms, so we can designate them as $\ce{D}_A$ and $\ce{D}_B$. Doing so allows us to determine how the potential energy of the pre-reaction system is affected by the $\ce{F}$ to $\ce{D}_A$ distance $R_{\ce{D}_A \ce{F}}$ and the $\ce{D}_A$ to $\ce{D}_B$ distance $R_{\ce{D}_A \ce{D}_B}$ (figure $7$).
As the $\ce{F}$ atom more closely approaches the $\ce{D_2}$ molecule, the interactions between these particles begin to affect the potential energy of the system. There are many possible potential energy pathways that the reaction could follow. Figure $8$ shows the energy contour map for three possible reaction pathways:
In pathway 1, the $\ce{D_2}$ bond length $R_{\ce{D}_A \ce{D}_B}$ is held constant as the distance $R_{\ce{D}_A \ce{F}}$ decreases. This type of interaction would lead to a continually increasing potential energy for the system as the $\ce{F}$ atom moves closer and closer to the $\ce{D_2}$ molecule. Eventually, the new $\ce{D}_A \ce{F}$ molecule would form, and the $\ce{D}_B$ atom would break off and would move farther and farther away.
Pathway 2 shows a second possible reaction pathway in which the $\ce{D_2}$ bond length $R_{\ce{D}_A \ce{D}_B}$ increases even though the F atom is still relatively far away. This pathway is unlikely because it requires a great deal of potential energy to stretch the $\ce{D_2}$ bond before the attractive force from the F atom influences this bond lengthening.
As the particles travel along Pathway 3, the reactants still must pass through a potential energy maximum at the saddle point, but this local maximum is the lowest energy barrier separating reactants from products. As noted above, this saddle point is called the transition state structure (designated by the red dot in pathway 3, labeled as $3^{\ddagger}$). This is the structure that is equally poised to return to the reactants or move forward to form products. Pathway 3 is the minimum energy pathway, and thus the one most likely to be followed during a successful reaction.
If we were to plot the potential energy curve for the reaction pathway along the minimum energy pathway, we would get the familiar potential energy curve of a reaction similar to that shown in Figure $9$.
30.E: Gas-Phase Reaction Dynamics (Exercises)
1.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(LibreTexts)/30%3A_Gas-Phase_Reaction_Dynamics/30.10%3A_The_Potential-Energy_Surface_Can_Be_Calculated_Using_Quantum_Mechanics.txt
|
The regular three-dimensional arrangement of atoms or ions in a crystal is usually described in terms of a space lattice and a unit cell. To see what these two terms mean, let us first consider the two-dimensional patterns shown in Figure $1$. We can think of each of these three structures as a large number of repetitions in two directions. Each light purple parallelogram represents one unit cell. Three valid unit cells and one invalid unit cell for the pattern of light and dark dots are shown.
Finding the unit cell in a three-dimensional crystal structure can be a challenge. Figure $2$ shows three representations of one class of three-dimensional structures: the three types of cubic unit cells. The black lines and gray spheres in the top right corner of the bottom pictures show one unit cell within a larger collection of unit cells.
Figure $3$ illustrates the space lattice and the unit cell for a real three-dimensional crystal structure—that of sodium chloride.
A unit cell for this structure is a cube whose corners are all occupied by sodium ions. Alternatively, the unit cell could be chosen with chloride ions at the corners. The unit cell of sodium chloride contains four sodium ions and four chloride ions. In arriving at such an answer we must bear in mind that many of the ions are shared by several adjacent cells (part c of Figure $2$ shows this well). Specifically, the sodium ions at the centers of the square faces of the cell are shared by two cells, so that only half of each lies within the unit cell. Since there are six faces to a cube, this makes a total of three sodium ions. In the middle of each edge of the unit cell is a chloride ion which is shared by four adjacent cells and so counts one-quarter. Since there are twelve edges, this makes three chloride ions. At each corner of the cube, a sodium ion is shared by eight other cells. Since there are eight corners, this totals to one more sodium ion. Finally, there is a chloride ion in the body of the cube unshared by any other cell. The grand total is thus four sodium and four chloride ions.
A general formula can be derived from the arguments just presented for counting N, the number of atoms or ions in a unit cell. It is
$N=N_{\text{body}}\text{ + }\frac{N_{\text{face}}}{\text{2}}\text{ + }\frac{N_{\text{edge}}}{\text{4}}\text{ + }\frac{N_{\text{corner}}}{\text{8}}\nonumber$
Crystal Systems
Unit cells need not be cubes, but they must be parallel-sided, three-dimensional figures. A general example is shown in Figure $4$. Such a cell can be described in terms of the lengths of three adjacent edges, a, b, and c, and the angles between them, α, β, and γ.
Crystals are usually classified as belonging to one of fourteen Bravais lattices, depending on the shape of the unit cell and the number of atoms in the unit cell. These fourteen lattices are shown in Figure $5$ below. Figure $6$ lists the details about each of the lattice systems.
The simplest lattice system is the cubic system, in which all edges of the unit cell are equal and all angles are 90°. The tetragonal and orthorhombic systems also feature rectangular cells, but the edges are not all equal. In the remaining systems, some or all of the angles are not 90°. The least symmetrical is the triclinic, in which no edges are equal and no angles are equal to each other or to 90°.
Close-Packed Systems
An important class of crystal structures is found in many metals and also in the solidified noble gases where the atoms (which are all the same) are packed together as closely as possible. Most of us are familiar with the process of packing spheres together, either from playing with marbles or BB’s as children or from trying to stack oranges or other round fruit into a pyramid. On a level surface we can easily arrange a collection of spheres of the same size into a very compact hexagonal layer in which each sphere is touching six of its fellows, as seen Figure $7$.
Then we can add a second layer so that each added sphere snuggles into a depression between three spheres in the layer below. Within this second layer each sphere also contacts six neighbors, and the layer is identical to the first one. It appears that we can add layer after layer indefinitely, or until we run out of spheres. Each sphere will be touching twelve of its fellows since it is surrounded by six in the same plane and nestles among three in the plane above and three in the plane below. We say that each sphere has a coordination number of 12. It is impossible to make any other structure with a larger coordination number, that is, to pack more spheres within a given volume. Accordingly the structure just described is often referred to as a close-packed structure.
It turns out that there are two ways to create a close-packed structure. These two ways are shown in Figure $8$.
In part a of Figure $8$ the first layer of spheres has been labeled A and the second labeled B to indicate that spheres in the second layer are not directly above those in the first. The third layer is directly above the first, and so it is labeled A. If we continue in the fashion shown, adding alternately A, then B, then A layers, we obtain a structure whose unit cell (shown in part a) has two equal sides with an angle of 120° between them. Other angles are 90°, and so the cell belongs to the hexagonal crystal system. Hence this structure is called hexagonal close packed (hcp).
In part b of Figure $8$ the first layer of spheres has been labeled A and the second labeled B to indicate that spheres in the second layer are not directly above those in the first. In this packing system, the third layer is labeled C because it is not directly above the first or the second layer; it has its own unique orientation. If we continue in the fashion shown, adding alternately A, then B, then C layers, we obtain a structure whose unit cell (shown in part b) has three equal sides with all angles at 90°, and so the cell belongs to the cubic crystal system. Hence this structure is called cubic close packed (ccp). To repeat, in both the hexagonal close-packed and the cubic close-packed structures each sphere has 12 nearest neighbors.
Figure $9$ shows two types of cubic lattices, the face-centered cubic and the body-centered cubic. The unit cell of a body-centered cubic (bcc) crystal is similar to the fcc structure except that, instead of spheres in the faces, there is a single sphere in the center of the cube. This central sphere is surrounded by eight neighbors at the corners of the unit cell, giving a coordination number of 8. Hence the bcc structure is not as compactly packed as the close-packed structures which had a coordination number 12.
Example $1$: Unit Cells
Count the number of spheres in the unit cell of (a) a face-centered cubic structure, and (b) a body-centered cubic structure.
Solution
Referring to Figure $9$ and using the equation:
$N=N_{\text{body}}\text{ + }\frac{N_{\text{face}}}{\text{2}}\text{ + }\frac{N_{\text{edge}}}{\text{4}}\text{ + }\frac{N_{\text{corner}}}{\text{8}} \nonumber$
we find
a) $N=N_{\text{body}}\text{ + }\frac{N_{\text{face}}}{\text{2}}\text{ + }\frac{N_{\text{edge}}}{\text{4}}\text{ + }\frac{N_{\text{corner}}}{\text{8}}=\text{0 + }\frac{\text{6}}{\text{2}}\text{ + 0 + }\frac{\text{8}}{\text{8}}=\text{4} \nonumber$
b) $N=\text{1 + 0 + 0 + }\frac{\text{8}}{\text{8}}=\text{2} \nonumber$
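The counting formula lends itself to a one-line function. The Python sketch below (illustrative only) reproduces the counts for the fcc and bcc cells from this example and for the sodium chloride cell described earlier.

```python
def atoms_per_unit_cell(n_body=0, n_face=0, n_edge=0, n_corner=0):
    # N = N_body + N_face/2 + N_edge/4 + N_corner/8
    return n_body + n_face / 2 + n_edge / 4 + n_corner / 8

print("fcc:      ", atoms_per_unit_cell(n_face=6, n_corner=8))    # 4 atoms
print("bcc:      ", atoms_per_unit_cell(n_body=1, n_corner=8))    # 2 atoms
print("NaCl, Na+:", atoms_per_unit_cell(n_face=6, n_corner=8))    # 4 sodium ions
print("NaCl, Cl-:", atoms_per_unit_cell(n_body=1, n_edge=12))     # 4 chloride ions
```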
Example $2$: Crystal Forms
Silicon has the same crystal structure as diamond. Techniques are now available for growing crystals of this element which are virtually flawless. Analysis of some of these perfect crystals found the side of the unit cell to be 543.102064 pm long. The unit cell is a cube containing eight Si atoms, but is only one of the simple cubic cells discussed already. From the isotope make up, molar mass and density of the crystals, it was determined that one mole of Si in this crystal form has a volume of 12.0588349×10-6 m3. Determine NA from this data.
Solution
This problem uses knowledge of the silicon crystal structure to determine $N_A$. From the edge length, we can obtain the volume of the cubic unit cell. We know that the unit cell contains eight atoms, and since we know the volume of one mole, we can calculate $N_A$, with the Avogadro constant defined as the number of particles per unit amount of substance.
$N_{A}= \frac{N*V_{\text{m}}}{V_{\text{unit cell}}}=\frac{8\times{12.0588349}\times{10}^{-6}\text{m}^{3}}{({ 543.102064}\times{10}^{-12}\text{m})^{3}}={6.02214179}\times{10}^{23} \nonumber$
The values used in this determination were measured on crystals using the X-Ray Crystal Density (XRCD) method to determine the side length. These values were used in the most recent analysis published by the Committee on Data for Science and Technology (CODATA)[1], which standardizes definitions of important scientific constants and units. The value you just calculated is therefore the most accurate determination of Avogadro's constant as of 2007.
It is important to note that the spheres in these models can represent atoms, monatomic ions, polyatomic ions, molecules, or a collection of molecules.
1. Mohr, P.J., Taylor, B.N., and D. B. Newell. "CODATA Recommended Values of the Fundamental Physical Constants: 2006." National Institute of Standards and Technology. December 28, 2007. http://physics.nist.gov/cuu/Constants/codata.pdf
Example $3$
Copper has a density of 8.930 $\dfrac{grams}{cm^3}$ at 20°C. The molar mass of copper is 63.55 $\dfrac{grams}{mole}$. Copper crystallizes as a face-centered cubic lattice. Calculate the crystallographic radius of a copper atom.
Solution
By looking at figure $2$ c, you can determine that there are 4 atoms per unit cell. Thus, the mass of a unit cell is
$\dfrac{(63.55 \, grams/mole)(4 \, atoms/cell)}{6.022 x 10^{23} \, atoms/mole} = 4.221 x 10^{-22} \, grams/cell \nonumber$
The volume of the unit cell is
$V_{cell} = \dfrac{4.221 x 10^{-22} \, grams/cell}{8.930 \, grams/cm^3} = 4.727 x 10^{-23} \, cm^3\nonumber$
The unit cell is cubic, therefore all sides are the same length, and $V_{cell} = a^3$
$a = V_{cell}^{1/3} = 3.616 x 10^{-8} cm = 361.6 \, pm\nonumber$
As shown in figure $2$ c, the diagonal across the face of the unit cell has a length of 4 times the radius of a copper atom. The length of the face diagonal is $\sqrt{2}a$, which equals $\sqrt{2}(361.6 \, \text{pm})$ = 511.4 pm. One fourth of 511.4 pm is 127.8 pm, which is the crystallographic radius of a copper atom.
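The arithmetic in this example can be checked with a short script. The Python sketch below repeats the same steps with the data given above.

```python
import numpy as np

# Data from the example
density = 8.930        # g/cm^3
molar_mass = 63.55     # g/mol
N_A = 6.022e23         # mol^-1
atoms_per_cell = 4     # face-centered cubic

mass_cell = molar_mass * atoms_per_cell / N_A   # g per unit cell
V_cell = mass_cell / density                    # cm^3
a = V_cell ** (1.0 / 3.0)                       # cm, edge of the cubic cell

# Four atomic radii span the face diagonal, which has length sqrt(2)*a
radius = np.sqrt(2.0) * a / 4.0

print(f"a     = {a * 1e10:.1f} pm")             # ~361.6 pm
print(f"r(Cu) = {radius * 1e10:.1f} pm")        # ~127.8 pm
```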
31.02: The Orientation of a Lattice Plane Is Described by its Miller Indices
The orientation of a surface or a crystal plane may be defined by considering how the plane (or indeed any parallel plane) intersects the main crystallographic axes of the solid. The application of a set of rules leads to the assignment of the Miller Indices (hkl), which are a set of numbers which quantify the intercepts and thus may be used to uniquely identify the plane or surface.
Assigning Miller Indices for a cubic crystal system
The following treatment of the procedure used to assign the Miller Indices is a simplified one (it may be best if you simply regard it as a "recipe") and only a cubic crystal system (one having a cubic unit cell with dimensions a x a x a ) will be considered.
The procedure is most easily illustrated using an example so we will first consider the following surface/plane:
Step 1: Identify the intercepts on the x- , y- and z- axes.
In this case the intercept on the x-axis is at x = a (at the point (a,0,0)), but the surface is parallel to the y- and z-axes - strictly, therefore, there is no intercept on these two axes, but we shall consider the intercept to be at infinity (∞) for the special case where the plane is parallel to an axis. The intercepts on the x-, y- and z-axes are thus
Intercepts: a , ∞ , ∞
Step 2: Specify the intercepts in fractional co-ordinates
Co-ordinates are converted to fractional co-ordinates by dividing by the respective cell-dimension - for example, a point (x,y,z) in a unit cell of dimensions a x b x c has fractional co-ordinates of ( x/a , y/b , z/c ). In the case of a cubic unit cell each co-ordinate will simply be divided by the cubic cell constant , a . This gives
Fractional Intercepts: a/a , ∞/a , ∞/a i.e. 1 , ∞ , ∞
Step 3: Take the reciprocals of the fractional intercepts
This final manipulation generates the Miller Indices which (by convention) should then be specified without being separated by any commas or other symbols. The Miller Indices are also enclosed within standard brackets (….) when one is specifying a unique surface such as that being considered here.
The reciprocals of 1 and ∞ are 1 and 0 respectively, thus yielding
Miller Indices: (100)
So the surface/plane illustrated is the (100) plane of the cubic crystal.
Other Examples
1. The (110) surface
Assignment
Intercepts: a , a , ∞
Fractional intercepts: 1 , 1 , ∞
Miller Indices: (110)
2. The (111) surface
Assignment
Intercepts: a , a , a
Fractional intercepts: 1 , 1 , 1
Miller Indices: (111)
The (100), (110) and (111) surfaces considered above are the so-called low index surfaces of a cubic crystal system (the "low" refers to the Miller indices being small numbers - 0 or 1 in this case). These surfaces have a particular importance, but there are an infinite number of other planes that may be defined using Miller index notation. We shall just look at one more …
3. The (210) surface
Assignment
Intercepts: ½ a , a , ∞
Fractional intercepts: ½ , 1 , ∞
Miller Indices: (210)
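The three-step recipe is easy to automate. The Python sketch below (the function name miller_indices is ours, introduced only for illustration) takes the intercepts, expressed in units of the cell edge a, and returns the Miller indices for the surfaces considered above; math.inf marks a plane that is parallel to an axis.

```python
from fractions import Fraction
import math

def miller_indices(intercepts):
    """Return (h, k, l) from axis intercepts given in units of the cell edge a."""
    # Step 3 of the recipe: take reciprocals (the reciprocal of infinity is 0).
    recips = [Fraction(0) if x == math.inf else 1 / Fraction(x) for x in intercepts]
    # Clear any fractions so the indices come out as the smallest whole numbers.
    lcm = math.lcm(*[r.denominator for r in recips])
    return tuple(int(r * lcm) for r in recips)

print(miller_indices([1, math.inf, math.inf]))        # (1, 0, 0)
print(miller_indices([1, 1, math.inf]))               # (1, 1, 0)
print(miller_indices([1, 1, 1]))                      # (1, 1, 1)
print(miller_indices([Fraction(1, 2), 1, math.inf]))  # (2, 1, 0)
```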
Further notes:
1. in some instances the Miller indices are best multiplied or divided through by a common number in order to simplify them by, for example, removing a common factor. This operation of multiplication simply generates a parallel plane which is at a different distance from the origin of the particular unit cell being considered. e.g. (200) is transformed to (100) by dividing through by 2 .
2. if any of the intercepts are at negative values on the axes then the negative sign will carry through into the Miller indices; in such cases the negative sign is actually denoted by overstriking the relevant number. e.g. (00 -1) is instead denoted by ($00\bar{1}$).
3. in the hcp crystal system there are four principal axes; this leads to four Miller Indices e.g. you may see articles referring to an hcp (0001) surface. It is worth noting, however, that the intercepts on the first three axes are necessarily related and not completely independent; consequently the values of the first three Miller indices are also linked by a simple mathematical relationship.
What are symmetry-equivalent surfaces?
In the following diagram the three highlighted surfaces are related by the symmetry elements of the cubic crystal - they are entirely equivalent.
In fact, there are a total of 6 faces related by the symmetry elements and equivalent to the (100) surface - any surface belonging to this set of symmetry-related surfaces may be denoted by the more general notation {100} where the Miller indices of one of the surfaces is instead enclosed in curly-brackets.
Final important note: in the cubic system the (hkl) plane and the vector [hkl], defined in the normal fashion with respect to the origin, are normal to one another but this characteristic is unique to the cubic crystal system and does not apply to crystal systems of lower symmetry.
The perpendicular distance between planes
The symbol $d$ is used to designate the perpendicular spacing between adjacent planes. Table $1$ shows the equations for calculating the value of d for four of the seven crystal systems.
Table $1$: Equations for calculating the spacing between adjacent planes, $d$
System Equation for $d$
cubic $\dfrac{1}{d^2} = \dfrac{h^2+k^2+l^2}{a^2}$
tetragonal $\dfrac{1}{d^2} = \dfrac{h^2+k^2}{a^2}+\dfrac{l^2}{c^2}$
hexagonal $\dfrac{1}{d^2} = \dfrac{4}{3}\left(\dfrac{h^2+hk+k^2}{a^2} \right) +\dfrac{l^2}{c^2}$
orthorhombic $\dfrac{1}{d^2} = \dfrac{h^2}{a^2}+\dfrac{k^2}{b^2} +\dfrac{l^2}{c^2}$
Example $1$
If a tetragonal unit cell has dimensions $a$ = $b$ = 501 pm and $c$ = 451 pm, calculate the perpendicular distance between the 111 planes.
Solution
For a tetragonal unit cell
$\dfrac{1}{d^2} = \dfrac{h^2+k^2}{a^2}+\dfrac{l^2}{c^2}. \nonumber$
Thus,
\begin{align*} \dfrac{1}{d^2} &= \dfrac{1+1}{(501 \, \text{pm})^2}+\dfrac{1}{(451 \, \text{pm})^2} \\[4pt] &= 1.29 \times 10^{-5} \, \text{pm}^{-2}. \end{align*} \nonumber
Then $d = 279 \, \text{pm}$
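A quick way to apply the formulas in Table $1$ is to code them directly. The sketch below (function names are ours) implements the cubic and tetragonal cases and reproduces the answer to this example.

```python
import math

def d_cubic(h, k, l, a):
    """Interplanar spacing for a cubic cell (lengths in pm)."""
    return 1 / math.sqrt((h**2 + k**2 + l**2) / a**2)

def d_tetragonal(h, k, l, a, c):
    """Interplanar spacing for a tetragonal cell (lengths in pm)."""
    return 1 / math.sqrt((h**2 + k**2) / a**2 + l**2 / c**2)

# The 111 planes of the tetragonal cell in the example above
print(f"d(111) = {d_tetragonal(1, 1, 1, a=501, c=451):.0f} pm")   # ~279 pm
```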
Contributors and Attributions
• Roger Nix (Queen Mary, University of London)
• Tom Neils (Grand Rapids Community College)
31.03: The Spacing Between Lattice Planes Can Be Determined from X-Ray Diffraction Measurements
X-ray crystallography is an instrumental technique used to determine the arrangement of atoms of a crystalline solid in three-dimensional space. This technique takes advantage of the interatomic spacing of most crystalline solids by employing them as a diffraction grating for x-ray light, which has wavelengths on the order of 1 angstrom ($10^{-8}$ cm).
Introduction
In 1895, Wilhelm Röntgen discovered x-rays. The nature of x-rays, whether they were particles or electromagnetic radiation, was a topic of debate until 1912. If the wave idea was correct, researchers knew that the wavelength of this light would need to be on the order of 1 Angstrom (Å) ($10^{-8}$ cm). Diffraction and measurement of such small wavelengths would require a grating with spacing on the same order of magnitude as the light.
In 1912, Max von Laue, at the University of Munich in Germany, postulated that atoms in a crystal lattice had a regular, periodic structure with interatomic distances on the order of 1 Å. Without having any evidence to support his claim on the periodic arrangements of atoms in a lattice, he further postulated that the crystalline structure could be used to diffract x-rays, much like a grating in an infrared spectrometer can diffract infrared light. His postulate was based on the following assumptions: the atomic lattice of a crystal is periodic, x-rays are electromagnetic radiation, and the interatomic distance of a crystal is on the same order of magnitude as the x-ray wavelength. Laue's predictions were confirmed when two researchers, Friedrich and Knipping, successfully photographed the diffraction pattern associated with the x-ray radiation of crystalline $CuSO_4 \cdot 5H_2O$. The science of x-ray crystallography was born.
The arrangement of the atoms needs to be in an ordered, periodic structure for them to diffract the x-ray beams. A set of mathematical calculations is then used to produce a diffraction pattern that is characteristic of the particular arrangement of atoms in that crystal. X-ray crystallography remains to this day the primary tool used by researchers in characterizing the structure and bonding of many compounds.
Diffraction
Diffraction is a phenomenon that occurs when light encounters an obstacle. The waves of light can either bend around the obstacle or, in the case of a slit, can travel through the slits. The resulting diffraction pattern will show areas of constructive interference, where two waves interact in phase, and destructive interference, where two waves interact out of phase. Calculation of the phase difference can be explained by examining Figure $1$ below.
In the figure above, two parallel waves are striking a grating at an angle $\alpha_o$. The incident wave on the right travels farther than the one on the left by a distance of BD before reaching the grating. The scattered wave on the left, depicted below the grating, travels farther than the scattered wave on the right by a distance of AC. So the total path difference between the left wave and the right wave is AC - BD. To observe a wave of high intensity (one created through constructive interference) at the angle $\alpha$, the difference AC - BD must equal an integer number of wavelengths: AC - BD = $n\lambda$, where $\lambda$ is the wavelength of the light. Applying some basic trigonometric properties, the following two equations can be shown about the lines:
$BD = x \cos(\alpha_o) \nonumber$
and
$AC = x \cos (\alpha) \nonumber$
where $x$ is the distance between the points where the diffraction repeats. Combining the two equations,
$x(\cos \alpha - \cos \alpha_o) = n \lambda \nonumber$
Rotating Crystal Method
To describe the periodic, three dimensional nature of crystals, the Laue equations are employed:
$a(\cos \alpha – \cos \alpha_o) = nh\lambda \label{eq1}$
$b(\cos \beta – \cos \beta_o) = nk\lambda \label{eq2}$
$c(\cos \gamma – \cos \gamma_o) = nl\lambda \label{eq3}$
where $a$, $b$, and $c$ are the three axes of the unit cell, $\alpha_o$, $\beta_o$, $\gamma_o$ are the angles of incident radiation, and $\alpha$, $\beta$, and $\gamma$ are the angles of the diffracted radiation. The $n$ term refers to the order of the reflections. If $n$ = 1, then the reflections are first-order reflections. If $n$ = 2, then the reflections are second-order reflections.
A diffraction signal (constructive interference) will arise when $h$, $k$, and $l$ are integer values. The rotating crystal method employs these equations. X-ray radiation is shone onto a crystal, surrounded by a cylindrical film, as it rotates around one of its unit cell axes. The beam strikes the crystal at a 90-degree angle. Using Equation $\ref{eq1}$ above, we see that if $\alpha_o$ is 90 degrees, then $\cos \alpha_o = 0$. The equation is then satisfied with $h = 0$ when $\alpha = 90°$. The above three equations will be satisfied at various points as the crystal rotates. This gives rise to a diffraction pattern (shown in Figure $2$ as multiple h values). The cylindrical film is then unwrapped and developed.
The following equation can be used to determine the length of the axis around which the crystal was rotated:
$a = \dfrac{nh \lambda}{\cos \left( \tan^{-1} (r/y) \right)} \nonumber$
where a is the length of the axis, y is the distance from $h=0$ to the $h$ of interest, $r$ is the radius of the film (or the distance from the center of the crystal to the detector), and $\lambda$ is the wavelength of the x-ray radiation used.
Modern diffractometers use electronic scintillation detectors or area detectors that act as a sort of electronic "film." With these detectors, the diffraction data can be sent directly to a computer and analyzed much more rapidly than photographic film. Figure $3$ shows a mounted crystal and its x-ray diffraction pattern.
For a detector with a flat surface, the reflections from a primitive cubic crystal would hit the surface in a pattern similar to that shown in Figure $4$.
Example $1$
Suppose that you wish to measure the length of the unit cell $a$ of a crystal that has a primitive cubic unit cell. A close-up of the experimental setup is shown in Figure $4$. The crystal is aligned so that the incoming x-rays are perpendicular to the a-axis. The distance between the center of the crystal and the detector surface is 5.50 cm. The distance between the detected spots scattered by the 000 planes and the 100 planes is 2.50 cm, and the beam is scattered at an angle of $\alpha$. The wavelength of light from the copper x-ray source is 154.4 pm. Determine the length of the unit cell along the a-axis.
Solution
The angle $\alpha$ can be determined from the fact that $\tan \alpha = \dfrac{5.50}{2.50}$. Thus, $\alpha = \tan^{-1}(2.2) = 65.56°$. We can then find the length of $a$ from the equation above:
$a =\dfrac{(1)(154.4 \, \text{pm})}{\cos (65.56°)} = 373.2 \, \text{pm}$
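The numbers in this example can be checked with a few lines of Python. The sketch below assumes, as in the worked solution, that $\alpha$ is the angle between the diffracted beam and the a-axis, so that $\tan \alpha = r/y$.

```python
import math

wavelength = 154.4   # Cu x-ray wavelength, pm
y = 2.50             # separation of the 000 and 100 spots, cm
r = 5.50             # crystal-to-detector distance, cm
n, h = 1, 1

alpha = math.atan(r / y)                  # angle between the diffracted beam and the a-axis
a = n * h * wavelength / math.cos(alpha)  # from a*cos(alpha) = n*h*lambda
print(f"alpha = {math.degrees(alpha):.2f} deg, a = {a:.1f} pm")   # ~65.6 deg, ~373 pm
```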
Bragg's Law Applied to Crystals
A second way to analyze the x-ray diffraction is to use Bragg's law. Diffraction of an x-ray beam by crystalline solids occurs when the light interacts with the electron cloud surrounding the atoms of the solid. Because of the periodic crystalline structure of a solid, it is possible to describe it as a series of planes with an equal interplanar distance. As an x-ray beam hits the surface of the crystal at an angle $\theta$, some of the light will be diffracted at that same angle away from the solid (Figure $5$). The remainder of the light will travel into the crystal and some of that light will interact with the second plane of atoms. Some of the light will be diffracted at an angle $theta$, and the remainder will travel deeper into the solid. This process will repeat for the many planes in the crystal. The x-ray beams travel different path lengths before hitting the various planes of the crystal, so after diffraction, the beams will interact constructively only if the path length difference is equal to an integer number of wavelengths (just like in the normal diffraction case above). In the figure below, the difference in path lengths of the beam striking the first plane and the beam striking the second plane is equal to BG + GF. So, the two diffracted beams will constructively interfere (be in phase) only if $BG + GF = n \lambda$. Basic trigonometry will tell us that the two segments are equal to one another with the interplanar distance times the sine of the angle $\theta$. So we get:
$BG = GF = d \sin \theta \label{1}$
Thus,
$2d \sin \theta = n \lambda \label{2}$
This equation is known as Bragg's Law, named after W. H. Bragg and his son, W. L. Bragg, who discovered this geometric relationship in 1912. Bragg's Law relates the distance between two planes in a crystal and the angle of reflection to the x-ray wavelength. The x-rays that are diffracted off the crystal have to be in-phase in order to be observed. Only certain angles that satisfy the following condition will register:
$\sin \theta = \dfrac{n \lambda}{2d} \label{3}$
For historical reasons, the resulting diffraction spectrum is represented as intensity vs. $2θ$.
Example $2$
Cesium metal has a body-centered cubic crystal structure with a unit cell length of 605.0 pm. Use the Bragg equation to determine the first two observed diffraction angles from the 110 planes when the wavelength of the x-rays is 154.4 pm.
Solution
In the previous section, we found that the interplanar distance for cubic cells can be calculated using the equation
$\dfrac{1}{d^2} = \dfrac{h^2+k^2+l^2}{a^2} \nonumber$
If we square the Bragg condition (Equation $\ref{3}$), we get
$\sin^2 \theta = \dfrac{n^2 \lambda^2}{4d^2} \nonumber$
Combining these two equations, we get
$\sin^2 \theta = \dfrac{n^2 \lambda^2}{4a^2}(h^2+k^2+l^2) \nonumber$
The smallest diffraction angle occurs when $n = 1$
$\sin^2 \theta = \dfrac{(1^2) (154.4 \, \text{pm})^2}{4(605.0 \, \text{pm})^2}(1^2+1^2+0^2) = 0.03257 \nonumber$
Thus, $\theta$ = 10.40°
The next largest diffraction angle occurs when $n = 2$
$\sin^2 \theta = \dfrac{(2^2) (154.4 \, \text{pm})^2}{4(605.0 \, \text{pm})^2}(1^2+1^2+0^2) = 0.1303 \nonumber$
Thus, $\theta$ = 21.16°
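The same two angles follow from a couple of lines of Python, shown below as an illustrative check; the interplanar spacing is computed from the cubic-cell formula quoted above.

```python
import math

a = 605.0            # unit cell length of bcc Cs, pm
wavelength = 154.4   # x-ray wavelength, pm
h, k, l = 1, 1, 0

d = a / math.sqrt(h**2 + k**2 + l**2)     # interplanar spacing for a cubic cell
for n in (1, 2):
    theta = math.degrees(math.asin(n * wavelength / (2 * d)))
    print(f"n = {n}: theta = {theta:.2f} deg")   # 10.40 and 21.16 degrees
```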
In the next section, we will discuss the factors that determine the phase and the amplitude of the multiple scattered x-rays produced during diffraction.
Contributors
• Roman Kazantsev (UC Davis) and Michelle Towles (UC Davis)
• Tom Neils - Grand Rapids Community College
31.04: The Total Scattering Intensity Is Related to the Periodic Structure of the Electron Density
Systematic Absences
Diffraction data from a face-centered cubic crystal will be missing reflections (spots) from all hkl planes in which h, k, and l are a mixture of even and odd numbers (that is, whenever at least one of h + k, h + l, or k + l is odd). The reason for these absent reflections is that the arrangement of the lattice points leads to total destructive interference of the diffracted X-rays. Although the lack of data may seem like a hindrance to determining a crystal structure, the systematic absence of reflections from specific families of planes actually reveals the presence of symmetry features in the crystal lattice that help identify the lattice type. Crystals with centering features (body-centered cells, face-centered cells, etc.), glide planes, and screw axes will produce diffraction patterns with systematic absences.
The Structure Factor
Systematic reflection absences, as well as variations in reflection intensity, are described mathematically by the structure factor, a mathematical function that incorporates all of the variables that affect how X-rays are scattered by the electrons of the atoms in the crystal lattice. (We do not concern ourselves with the effect of the nucleus on scattering because the mass of the nucleus is so large relative to the mass of an electron.)
The Atomic/Ionic Scattering Factor
The construction of a structure factor begins with the fundamental assumption that the total scattering intensity is related to the electron density of the atoms in the crystal. But the scattering also depends on the scattering angle and the wavelength of the radiation. To account for all of these parameters, an atomic scattering factor, $f$, has been developed:
$f = 4\pi \int_0^{\infty} \rho(r)\dfrac{\sin \, kr}{kr}r^2 \, dr \nonumber$
In this equation, $r$ is the radius of an idealized spherical atom, and $\rho(r)$ is the electron density of this idealized atom. Because the X-ray wavelength used in diffraction experiments is comparable to the size of the atoms it scatters from, there will be interference among the waves scattered by the electrons on a single atom. The more electrons in an atom and the more diffuse the electron cloud around the atom, the greater the destructive interference among the scattered waves. To account for this intra-atomic interference, the $\dfrac{\sin \, kr}{kr}$ term is included in the scattering factor. Here, $k$ is equal to $\dfrac{4\pi \sin \theta}{\lambda}$, where $\theta$ is the scattering angle and $\lambda$ is the wavelength of the X-ray. A plot of $f$ versus $\dfrac{\sin \theta}{\lambda}$ is shown in figure $1$. Notice that at $\theta$ = 0, $f$ is equal to the number of electrons in the particle. Also notice that, although the isoelectronic ions Ca2+ and Cl- have an identical $f$ value at $\theta$ = 0, their $f$ values differ at larger angles because the ions have different electron densities. Finally, notice that H atoms do not scatter a great deal, thus making them difficult to "locate" among the scattering from all of the larger atoms.
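To make the behavior of the scattering-factor integral concrete, the sketch below evaluates it numerically for an assumed hydrogen-like 1s electron density. Tabulated scattering factors are computed from much more accurate atomic densities, so this is only an illustration of the trends described above: $f$ equals the number of electrons at $\theta = 0$ and falls off as $\sin \theta / \lambda$ increases.

```python
import numpy as np
from scipy.integrate import quad

a0 = 0.529e-10   # Bohr radius, m

def rho(r):
    # Assumed model density: a hydrogen-like 1s orbital (one electron in total)
    return np.exp(-2 * r / a0) / (np.pi * a0**3)

def f(k):
    # f = 4*pi * integral of rho(r) * sin(kr)/(kr) * r^2 dr; sin(kr)/(kr) -> 1 at k = 0
    if k == 0:
        integrand = lambda r: 4 * np.pi * rho(r) * r**2
    else:
        integrand = lambda r: 4 * np.pi * rho(r) * np.sin(k * r) * r / k
    return quad(integrand, 0, 20 * a0)[0]

print(f(0.0))     # ~1.0, the number of electrons
print(f(2e10))    # noticeably less than 1 at a larger scattering angle
```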
Scattering by Neighboring Atoms and Neighboring Planes
Along with the scattering factor for individual atoms, we must also take into consideration the interferences caused by neighboring atoms in the lattice. As our example, we will analyze a crystal that contains atoms of two elements, here designated P and Q. We will assume that the respective atoms have scattering factors $f$P and $f$Q. We will also assume that the distance between successive P atoms is the same as the distance between successive Q atoms. This distance is $\dfrac{a}{h}$, where $a$ is the length of the unit cell along the a axis. Finally, we will assume that the distance between P and Q atoms is $x$. In this arrangement, the P atoms lie in neighboring hkl planes, and the Q atoms lie in neighboring hkl planes that are interleaved between the hkl planes of the P atoms. Figure $2$ shows the arrangement of the atoms lying along the a axis.
In this arrangement, the difference in path length of X-rays scattered by successive P atoms, $\Delta_{PP}$ (which equals the difference in path length of X-rays scattered by successive Q atoms, $\Delta_{QQ}$), is described by the von Laue equation
$\Delta_{PP} = \Delta_{QQ} = \dfrac{a}{h}(\cos\alpha - \cos\alpha_0) = n\lambda \nonumber$
The difference in path length for X-rays diffracted by neighboring P and Q atoms is thus
$\Delta_{PQ} = x(\cos\alpha - \cos\alpha_0) = \dfrac{nh\lambda x}{a} \nonumber$
Because of this difference in path length, there is a resulting phase difference between the scattered beams from neighboring P and Q atoms due to the difference in the time required to travel the different distances:
$\phi = 2\pi \dfrac{\Delta_{PQ}}{\lambda} = 2\pi \dfrac{nh\lambda x}{\lambda a} = \dfrac{2\pi n h x}{a} \nonumber$
For first-order diffraction ($n = 1$), $\phi = \dfrac{2\pi h x}{a}$, which is the form used below.
The light scattered from neighboring P and Q atoms will have an amplitude dependent on the scattering factors of each atom and the angular frequency of the X-ray radiation, $\omega$
$A = f_P \, \cos \omega t + f_Q \, \cos(\omega t + \phi) \nonumber$
Replacing the cosine functions with exponential functions will simplify the following mathematical calculation, thus
$A = f_P \, e^{i\omega t} + f_Q \, e^{i(\omega t + \phi)} \label{1}$
Because intensity is proportional to amplitude squared,
$I \propto |A|^2 = (f_P \, e^{i\omega t} + f_Q \, e^{i(\omega t + \phi)})(f_P \, e^{-i\omega t} + f_Q \, e^{-i(\omega t + \phi)})$
which can be simplified to
$f_P^2 + f_Pf_Qe^{i\phi} + f_Pf_Qe^{-i\phi} + f_Q^2$
Switching back to cosine functions, we get
$f_P^2 + f_Q^2 + 2 f_Pf_Q\cos\phi$
The $f_P^2$ term represents the constructive interference of the x-rays scattered from the parallel planes through the P atoms. The $f_Q^2$ term represents the constructive interference of the x-rays scattered from the parallel planes through the Q atoms. The $2 f_Pf_Q\cos\phi$ term represents the interference from the scattering from the two sets of interleaved P and Q planes. From this equation, we can see that the intensity of the scattered light is independent of the angular frequency of the X-ray radiation because there is no term involving $\omega$. Thus, in Equation $\ref{1}$ the frequency-dependent factor $e^{i\omega t}$ can be set equal to 1, and the equation can be rewritten and redefined as the structure factor along the a axis of the crystal, $F(h)$
$F(h) = f_P + f_Q \, e^{i\phi} = f_P + f_Q \, e^{2\pi i h x/a} \label{2}$
For a unit cell with sides of length $a, \, b,$ and $c$ that contains atoms of type $j$ positioned at points $x_j, y_j$ and $z_j$, equation $\ref{2}$ becomes
$F(hkl) = \sum_j f_{j} e^{2\pi i (hx_{j}/a + ky_{j}/b + lz_{j}/c)} \label{3}$
Here $f_j$ is the scattering factor for type $j$ atoms, and hkl are the Miller indices of the diffracting planes. By convention, $x_j, y_j,$ and $z_j$ are expressed in units of $a, b,$ and $c$, so that $x_j^{'} =\dfrac{x_j}{a}$, $y_j^{'} =\dfrac{y_j}{b}$, and $z_j^{'} =\dfrac{z_j}{c}$, to give
$F(hkl) = \sum_j f_j e^{2\pi i (hx_{j}^{'} + ky_{j}^{'} + lz_{j}^{'})} \label{4}$
Combining the Influences
$F(hkl)$ is the structure factor of a crystal, and the intensity of reflections from this crystal is proportional to $|F(hkl)|^2$. The structure factor can thus be used to explain the systematic absences described in the introduction to this section. Whenever $F(hkl)$ = 0 for a set of Miller planes, the reflection will be absent.
Example $1$
Derive an expression for the structure factor of a face-centered cubic unit cell of identical atoms. Determine which hkl planes will not produce reflections.
Solution
The coordinates of the lattice points in a face-centered cubic unit cell are (0,0,0), (1,0,0), (0,1,0), (0,0,1), (1,0,1), (1,1,0), (1,1,1), (0,1,1), (0,1/2,1/2), (1/2,1/2,0), (1/2,0,1/2), (1,1/2,1/2), (1/2,1,1/2), (1/2,1/2,1). The length of the sides of the cell is $a$. Each corner lattice point is shared by 8 cells, so only 1/8 of each of these atoms is considered to be contributing to the scattering. Each face lattice point is shared by 2 cells, so only 1/2 of each of these atoms is considered to be contributing to the scattering. We will use equation $\ref{4}$ to solve the problem, remembering that $a = b = c$ for a cubic cell:
$F(hkl) = \dfrac{1}{8} f(e^{2\pi i(0+0+0)} + e^{2\pi i(h+0+0)} + e^{2\pi i(0+k+0)} + e^{2\pi i(0+0+l)} + e^{2\pi i(h+0+l)} + e^{2\pi i(h+k+0)} + e^{2\pi i(h+k+l)} + e^{2\pi i(0+k+l)}) + \dfrac{1}{2}f(e^{2\pi i(0+k/2+l/2)} + e^{2\pi i(h/2+k/2+0)}+ e^{2\pi i(h/2+0+l/2)} + e^{2\pi i(h+k/2+l/2)} + e^{2\pi i(h/2+k+l/2)} + e^{2\pi i(h/2+k/2+l)}) \label{5}$
Because $e^{2\pi i}$ = 1 and $e^{\pi i}$ = -1, equation $\ref{5}$ becomes
$F(hkl) = \dfrac{1}{8}f(1^0 + 1^h + 1^k + 1^l + 1^{h+l} + 1^{h+k} + 1^{h+k+l} + 1^{k+l}) + \dfrac{1}{2}f((-1)^{k+l} + (-1)^{h+k} + (-1)^{h+l} + (-1)^{2h+k+l} + (-1)^{h+2k+l} + (-1)^{h+k+2l}) \nonumber$
Because $1^n = 1$ for all $n$,
$F(hkl) = \dfrac{1}{8}f(8) + \dfrac{1}{2}f((-1)^{k+l} + (-1)^{h+k} + (-1)^{h+l} + (-1)^{2h+k+l} + (-1)^{h+2k+l} + (-1)^{h+k+2l}) \nonumber$
Upon testing a few hkl planes, you will see that if $h,k,l$ are a mixture of odd and even numbers, then $F(hkl) = 0$, but if $h,k,l$ are all odd or all even, the value of $F(hkl) = 4f$. Three examples:
All odd: $F(111) = f\left(1 + \dfrac{1}{2}\left((-1)^2 + (-1)^2 + (-1)^2 + (-1)^4 + (-1)^4 + (-1)^4\right)\right) = f(1 + 3) = 4f$
All even: $F(222) = f\left(1 + \dfrac{1}{2}\left((-1)^4 + (-1)^4 + (-1)^4 + (-1)^8 + (-1)^8 + (-1)^8\right)\right) = f(1 + 3) = 4f$
Mixed: $F(110) = f\left(1 + \dfrac{1}{2}\left((-1)^1 + (-1)^2 + (-1)^1 + (-1)^3 + (-1)^3 + (-1)^2\right)\right) = f\left(1 + \dfrac{1}{2}(-1 + 1 - 1 - 1 - 1 + 1)\right) = f(1 - 1) = 0$
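These parities can also be checked numerically. The sketch below evaluates $F(hkl)$ for an fcc array of identical atoms using the equivalent description in which the four lattice points (0,0,0), (1/2,1/2,0), (1/2,0,1/2) and (0,1/2,1/2) are assigned wholly to one unit cell; the mixed-parity reflections vanish, while the all-even and all-odd reflections give $|F| = 4f$, in agreement with the results above.

```python
import numpy as np

# Lattice points of the fcc cell, in fractional coordinates
positions = [(0, 0, 0), (0.5, 0.5, 0), (0.5, 0, 0.5), (0, 0.5, 0.5)]

def F(h, k, l, f=1.0):
    """Structure factor for identical atoms with scattering factor f."""
    return sum(f * np.exp(2j * np.pi * (h * x + k * y + l * z)) for x, y, z in positions)

for hkl in [(1, 1, 1), (2, 2, 2), (1, 1, 0), (2, 1, 0)]:
    print(hkl, round(abs(F(*hkl)), 6))
# (1,1,1) and (2,2,2) give |F| = 4; the mixed-parity (1,1,0) and (2,1,0) give 0.
```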
For crystals comprised of atoms of more than one element, there are often reflections that are weaker than other reflections. These lower intensities are sometimes created when the scattering factors of different elements are not identical, leading to incomplete destructive interference, a smaller $F(hkl)$, and thus a reflection of lower intensity. The lower intensity reflections are also a direct result of the structure factor. For example, for an ionically-bonded compound with the formula MX, which crystallizes in a face-centered cubic lattice, the structure factor can be shown to be
$F(hkl) = f_+[1 + (-1)^{h+k} + (-1)^{h+l} + (-1)^{k+l}] + f_-[(-1)^{h+k+l} + (-1)^{h} + (-1)^{k} + (-1)^{l}] \nonumber$
Inspection of this equation reveals that $F(hkl) = 4(f_+ + f_-)$ if $h, k, l$ are all even, and $F(hkl) = 4(f_+ - f_-)$ if $h, k, l$ are all odd. Thus, reflections from the all-even $hkl$ planes are more intense than reflections from the all-odd $hkl$ planes.
See also
The structure factor. P. Coppens. International Tables for Crystallography (2006). Vol. B, ch. 1.2, pp. 10-24
Contributors
• Tom Neils - Grand Rapids Community College
31.05: The Structure Factor and the Electron Density Are Related by a Fourier Transform
Fourier Transform
In mathematics, a Fourier transform is an operation that converts one function into another. In FTIR spectroscopy, for example, a Fourier transform is applied to a function in the time domain to convert it into the frequency domain. One way of thinking about this is to draw the example of music by writing it down on a sheet of paper. Each note is in a so-called "sheet" domain. These same notes can also be expressed by playing them. The process of playing the notes can be thought of as converting the notes from the "sheet" domain into the "sound" domain. Each note played represents exactly what is on the paper, just in a different way. This is precisely what the Fourier transform process does to the collected x-ray diffraction data: it converts the measured structure factors into the electron density around the crystalline atoms in real space. In the previous section, we treated the lattice points as individual, localized electron densities. In reality, the electron density of a unit cell is distributed over a much larger space. The following equations can be used to determine the electrons' positions:
$p(x,y,z) = \sum_{h=-\infty}^{\infty} \sum_{k=-\infty}^{\infty} \sum_{l=-\infty}^{\infty}F(hkl) e ^{-2\pi i (hx/a+ky/b+lz/c)} \label{1}$
Employing a Fourier transform
$F(hkl) \propto \int _{-\infty}^{\infty} \int _{-\infty}^{\infty} \int _{-\infty}^{\infty} p(x,y,z) e ^{2\pi i (hx/a+ky/b+lz/c)} dx\;dy\;dz \label{2}$
$F(q) = | F(q) | e^{i \phi(q)} \label{3}$
where $p(x,y,z)$ is the electron density function in real space and $F(hkl)$ is the structure factor, a function of the reciprocal-space indices h, k, and l. Equation $\ref{1}$ expresses the electron density as a Fourier series whose coefficients are the structure factors; to reconstruct the density, the sum must be evaluated over all values of h, k, and l. Equation $\ref{2}$ is the corresponding Fourier transform, which gives $F(hkl)$ from the electron density. The resulting function $F(hkl)$ is generally expressed as a complex number (as seen in equation $\ref{3}$ above) with $| F(q)|$ representing the magnitude of the function and $\phi$ representing the phase.
The structure factor may also be expressed as
\begin{align} \mathbf{F}_{hkl} &= F_{hkl} e^{i\alpha_{hkl}} = \sum_j f_je^{2\pi i (hx_j + ky_j + lz_j)} = \sum_j f_j\cos[2\pi (hx_j + ky_j + lz_j)] + i\sum_{j} f_j\sin[2\pi (hx_j + ky_j + lz_j)] \\[4pt] &= A_{hkl} + iB_{hkl} \end{align} \nonumber
where the sum is over all atoms in the unit cell, $x_j, y_j, z_j$ are the positional coordinates of the $j$th atom, $f_j$ is the scattering factor of the $j$th atom, and $\alpha_{hkl}$ is the phase of the diffracted beam. The intensity of a diffracted beam is directly related to the amplitude of the structure factor, but the phase must normally be deduced by indirect means. In structure determination, phases are estimated and an initial description of the positions and anisotropic displacements of the scattering atoms is deduced. From this initial model, structure factors are calculated and compared with those experimentally observed. Iterative refinement procedures attempt to minimize the difference between calculation and experiment until a satisfactory fit has been obtained.
Figure $1$ shows an electron density map of a quinoline derivative (and one water molecule) that was determined from X-ray diffraction data.
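The Fourier relationship can be illustrated with a one-dimensional toy model: structure factors are generated from two assumed atomic positions and scattering factors, and the density is then reconstructed by summing the (truncated) Fourier series. Every number in the sketch is invented purely for the illustration.

```python
import numpy as np

atoms = [(0.20, 8.0), (0.65, 17.0)]   # assumed (fractional position, scattering factor)

# "Measured" structure factors F(h) for h = -15 ... 15
h_values = np.arange(-15, 16)
F = np.array([sum(f * np.exp(2j * np.pi * h * x) for x, f in atoms) for h in h_values])

# Fourier synthesis of the electron density along the cell (Equation 1 in one dimension)
x_grid = np.linspace(0, 1, 400)
rho = np.array([np.sum(F * np.exp(-2j * np.pi * h_values * x)).real for x in x_grid])

# The reconstructed density peaks at the assumed atomic positions
peaks = [round(float(x_grid[i]), 2) for i in range(1, len(rho) - 1)
         if rho[i] > rho[i - 1] and rho[i] > rho[i + 1] and rho[i] > 0.3 * rho.max()]
print(peaks)   # [0.2, 0.65]
```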
Contributor
Tom Neils - Grand Rapids Community College
31.06: A Gas Molecule can Physisorb or Chemisorb to a Solid Surface
In this section, we will consider both the energetics of adsorption and factors which influence the kinetics of adsorption by looking at the "potential energy diagram/curve" for the adsorption process. The potential energy curve for the adsorption process is a representation of the variation of the energy (PE or E) of the system as a function of the distance ($d$) of an adsorbate from a surface.
Within this simple one-dimensional (1D) model, the only variable is the distance ($d$) of the adsorbing molecule from the substrate surface (Figure $1$).
Thus, the energy of the system is a function only of this variable i.e.
$E = E(d) \nonumber$
It should be remembered that this is a very simplistic model which neglects many other parameters which influence the energy of the system (a single molecule approaching a clean surface), including for example
• the angular orientation of the molecule
• changes in the internal bond angles and bond lengths of the molecule
• the position of the molecule parallel to the surface plane
The interaction of a molecule with a given surface will also clearly be dependent upon the presence of any existing adsorbed species, whether these be surface impurities or simply pre-adsorbed molecules of the same type (in the latter case we are starting to consider the effect of surface coverage on the adsorption characteristics). Nevertheless, it is useful to first consider the interaction of an isolated molecule with a clean surface using the simple 1D model. For the purposes of this section, we will also not be overly concerned whether the "energy" being referred to should strictly be the internal energy, the enthalpy or free energy of the system. We will also make a distinction between physisorption and chemisorption. These two types of interaction with a surface are differentiated by the strength of interaction between the adsorbed particle and the surface. A physisorbed particle will always be farther away from the surface than an identical particle that is chemisorbed (Figures $2$ and $3$).
CASE I - Physisorption
In the case of pure physisorption, e.g., Ar/metals, the only attraction between the adsorbing species and the surface arises from weak, van der Waals forces. As illustrated in Figure $4$ below, these forces give rise to a shallow minimum in the PE curve at a relatively large distance from the surface (typically $d > 0.3\, nm$) before the strong repulsive forces arising from electron density overlap cause a rapid increase in the total energy.
There is no barrier to prevent the atom or molecule which is approaching the surface from entering this physisorption well, i.e. the process is not activated and the kinetics of physisorption are invariably fast.
CASE II - Physisorption + Molecular Chemisorption
The weak physical adsorption forces and the associated long-range attraction will be present to varying degrees in all adsorbate/substrate systems. However, in cases where chemical bond formation between the adsorbate and substrate can also occur, the PE curve is dominated by a much deeper chemisorption minimum at shorter values of d, as shown in Figure $5$.
The graph above shows the PE curves due to physisorption and chemisorption separately. In practice, the PE curve for any real molecule capable of undergoing chemisorption is best described by a combination of the two curves, with a curve crossing at the point at which chemisorption forces begin to dominate over those arising from physisorption alone (Figure $6$).
The minimum energy pathway obtained by combining the two PE curves is now highlighted in red. Any perturbation of the combined PE curve from the original, separate curves is most likely to be evident close to the highlighted crossing point.
For clarity, we will now consider only the overall PE curve as shown in Figure $7$:
The depth of the chemisorption well is a measure of the strength of binding to the surface. In fact, it is a direct representation of the energy of adsorption, whilst the location of the global minimum on the horizontal axis corresponds to the equilibrium bond distance ($d_{ch}$ ) for the adsorbed molecule on this surface.
The energy of adsorption is negative, and because it corresponds to the energy change upon adsorption, it is better represented as ΔE (ads) or $ΔE_{ads}$. However, you will also often find the depth of this well associated with the enthalpy of adsorption, ΔH (ads).
The "heat of adsorption", $Q$, is taken to be a positive quantity equal in magnitude to the enthalpy of adsorption; i.e., $Q = -ΔH (ads)$.
In this particular case, there is clearly no barrier to be overcome in the adsorption process and there is no activation energy of adsorption (i.e., $E_a^{ads} = 0$, but do remember the previously mentioned limitations of this simple 1D model). There is of course a significant barrier to the reverse, desorption process; the red arrow in Figure $8$ below represents the activation energy for desorption.
Clearly, in this particular case, the magnitudes of the energy of adsorption and the activation energy for desorption can also be equated i.e.
$E_a^{des} = -ΔE (ads) \nonumber$
or
$E_a^{des} \approx -ΔH (ads) \nonumber$
CASE III - Physisorption + Dissociative Chemisorption
In this case, the main differences arise from the substantial changes in the PE curve for the chemisorption process. Again, we start off with the basic PE curve (Figure $9$) for the physisorption process which represents how the molecule can weakly interact with the surface :
If we now consider a specific molecule such as H2 and initially treat it as being completely isolated from the surface ( i.e. when the distance, d, is very large ), then a substantial amount of energy would need to be put into the system in order to cause dissociation of the molecule.
$\ce{H_2 → H + H} \nonumber$
This amount of required energy is the bond dissociation energy, $D_{(H-H)}$, which is about 435 kJ mol$^{-1}$ (4.5 eV).
The red dot in Figure $10$ above thus represents two hydrogen atoms, equidistant (and a long distance) from the surface, and also now well separated from each other. If these atoms are then allowed to approach the surface they may ultimately both form strong chemical bonds to the substrate. This possible bonding of two H atoms to the surface (shown as 2(M-H) in the Figure) corresponds to the minimum in the red curve which represents the chemisorption PE curve for the two H atoms.
In reality, of course, such a mechanism for dissociative hydrogen chemisorption is not practical; the energy down payment associated with breaking the H-H bond is far too severe. Instead, a hydrogen molecule will initially approach the surface along the physisorption curve. If it has sufficient energy it may pass straight into the chemisorption well ( "direct chemisorption" ) as shown below in Figure $11$.
Alternatively, the hydrogen molecule may first undergo transient physisorption, a state from which it can then either desorb back as a molecule into the gas phase or cross over the barrier into the dissociated, chemisorptive state (as illustrated schematically in Figure $12$).
In this latter case, the molecule can be said to have undergone "precursor-mediated" chemisorption. A picture of the process of the approach, physisorption, and dissociative chemisorption of a molecule is shown in Figure $13$. The molecule in this picture is following the general potential energy pathway described in either Figure $11$ or Figure $12$.
The characteristics of this type of dissociative adsorption process are clearly going to be strongly influenced by the position of the crossing point of the two curves (molecular physisorption versus dissociative chemisorption) - relatively small shifts in the position of either of the two curves can significantly alter the size of any barrier to chemisorption.
In the example shown in Figure $14$ below there is no direct activation barrier to dissociative adsorption - the curve crossing is below the initial "zero energy" of the system.
In the case shown in Figure $15$, there is a substantial barrier to chemisorption. Such a barrier has a major influence on the kinetics of adsorption.
The depth of the physisorption well for the hydrogen molecule is actually very small (in some cases negligible), but this is not the case for other molecules and does not alter the basic conclusions regarding dissociative adsorption that result from this model; namely that the process may be either activated or non-activated depending on the exact location of the curve crossing.
At this point, it is useful to return to consider the effect of such a barrier on the relationship between the activation energies for adsorption and desorption, and the energy (or enthalpy) of adsorption.
As shown in Figure $16$,
$E_a^{des} - E_a^{ads} = - ΔE_{ads} \nonumber$
but, because the activation energy for adsorption is nearly always very much smaller than that for desorption, and the difference between the energy and the enthalpy of adsorption is also very small, it is still quite common to see the relationship
$E_a^{des} \approx -ΔH_{ads} \nonumber$
For a slightly more detailed treatment of the adsorption process, you are referred to the following examples of More Complex PE Curves & Multi-Dimensional PE Surfaces.
Contributors and Attributions
Roger Nix (Queen Mary, University of London)
31.07: Isotherms are Plots of Surface Coverage as a Function of Gas Pressure at Constant Temperature
Introduction
Whenever a gas is in contact with a solid there will be an equilibrium established between the molecules in the gas phase and the corresponding adsorbed species (molecules or atoms) which are bound to the surface of the solid. As with all chemical equilibria, the position of equilibrium will depend upon a number of factors:
1. The relative stabilities of the adsorbed and gas phase species involved
2. The temperature of the system (both the gas and surface, although these are normally the same)
3. The pressure of the gas above the surface
In general, factors (2) and (3) exert opposite effects on the concentration of adsorbed species - that is to say that the surface coverage may be increased by raising the gas pressure but will be reduced if the surface temperature is raised.
The Langmuir isotherm was developed by Irving Langmuir in 1916 to describe the dependence of the surface coverage of an adsorbed gas on the pressure of the gas above the surface at a fixed temperature. There are many other types of isotherm (Temkin, Freundlich ...) which differ in one or more of the assumptions made in deriving the expression for the surface coverage; in particular, on how they treat the surface coverage dependence of the enthalpy of adsorption. Whilst the Langmuir isotherm is one of the simplest, it still provides a useful insight into the pressure dependence of the extent of surface adsorption.
Note: Surface Coverage & the Langmuir Isotherm
When considering adsorption isotherms it is conventional to adopt a definition of surface coverage ($θ$) which defines the maximum (saturation) surface coverage of a particular adsorbate on a given surface always to be unity, i.e. $θ_{max} = 1$. This way of defining the surface coverage differs from that usually adopted in surface science where the more common practice is to equate $θ$ with the ratio of adsorbate species to surface substrate atoms (which leads to saturation coverages which are almost invariably less than unity).
The Langmuir Isotherm - Derivation from Equilibrium Considerations
We may derive the Langmuir isotherm by treating the adsorption process as we would any other equilibrium process - except in this case the equilibrium is between the gas phase molecules ($M$), together with vacant surface sites, and the species adsorbed on the surface. Thus, for a non-dissociative (molecular) adsorption process, we consider the adsorption to be represented by the following chemical equation :
$S - * + M_{(g)} \rightleftharpoons S - M \label{eq1}$
where :
• $S - *$ represents a vacant surface site
• $S - M$ represents the adsorption complex (a molecule bound to a surface site)
Assumption 1
In writing Equation $\ref{eq1}$ we are making an inherent assumption that there are a fixed number of localized surface sites present on the surface. This is the first major assumption of the Langmuir isotherm.
We may now define an equilibrium constant ($K$) in terms of the concentrations of "reactants" and "products"
$K = \dfrac{[S-M]}{[S-*][M]}\label{2}$
We may also note that:
• [ S - M ] is proportional to the surface coverage of adsorbed molecules, i.e. proportional to $θ$
• [ S - * ] is proportional to the number of vacant sites, i.e. proportional to $1-θ$
• [ M ] is proportional to the pressure of gas, $P$
Hence, it is also possible to define another equilibrium constant, b , as given below :
$b =\dfrac{\theta}{(1- \theta)P}\label{3}$
Rearrangement then gives the following expression for the surface coverage
$\theta =\dfrac{b P}{1 + bP}\label{4}$
which is the usual form of expressing the Langmuir Isotherm. As with all chemical reactions, the equilibrium constant, $b$, is both temperature-dependent and related to the Gibbs free energy and hence to the enthalpy change for the process.
Assumption 2
$b$ is only a constant (independent of $\theta$) if the enthalpy of adsorption is independent of coverage. This is the second major assumption of the Langmuir Isotherm.
A plot of $\theta$ vs. $bP$ shows that as the pressure increases, $\theta$ approaches 1, meaning that nearly the entire surface is coated with a monolayer of adsorbed gas (Figure $1$).
Equation $\ref{4}$ can be rearranged to the form
$\dfrac{1}{\theta} = 1 + \dfrac{1}{bP} \label{5}$
showing that the inverse of the fraction of occupied surface sites is a linear function of the inverse of the pressure. If we plot experimental data for the adsorption of diatomic oxygen and carbon monoxide onto a silica surface, we can see that the Langmuir adsorption isotherm describes the data well (figure $2$).
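The pressure dependence is easy to explore numerically. The short sketch below evaluates the isotherm (Equation $\ref{4}$) for an assumed value of $b$ and confirms the linear behavior predicted by Equation $\ref{5}$; the numbers are purely illustrative.

```python
import numpy as np

b = 2.0                                   # assumed adsorption coefficient, 1/bar
P = np.array([0.1, 0.5, 1.0, 5.0, 20.0])  # pressures, bar

theta = b * P / (1 + b * P)               # Langmuir coverage
print(theta)                              # approaches 1 as P grows

# Linearized form: 1/theta plotted against 1/P is a line with slope 1/b and intercept 1
slope, intercept = np.polyfit(1 / P, 1 / theta, 1)
print(slope, intercept)                   # ~0.5 (= 1/b) and ~1.0
```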
The Langmuir Isotherm from a Kinetics Consideration
The equilibrium that may exist between gas adsorbed on a surface and molecules in the gas phase is a dynamic state, i.e. the equilibrium represents a state in which the rate of adsorption of molecules onto the surface is exactly counterbalanced by the rate of desorption of molecules back into the gas phase. It should therefore be possible to derive an isotherm for the adsorption process simply by considering and equating the rates for these two processes.
Expressions for the rate of adsorption and rate of desorption have been derived in Sections 2.3 & 2.6 respectively: specifically ,
$R_{ads}=\dfrac{f(\theta)P}{\sqrt{2\pi mkT}} \exp (-E_a^{ads}/RT) \nonumber$
$R_{des} = v\; f'(\theta) \exp (-E_a^{des}/RT) \nonumber$
Equating these two rates yields an equation of the form :
$\dfrac{P \; f(\theta)}{f'(\theta)} = C(T) \label{33Eq1}$
where $\theta$ is the fraction of sites occupied at equilibrium and the terms $f(\theta)$ and $f '(\theta)$ contain the pre-exponential surface coverage dependence of the rates of adsorption and desorption respectively and all other factors have been taken over to the right hand side to give a temperature-dependent "constant" characteristic of this particular adsorption process, $C(T)$. We now need to make certain simplifying assumptions. The first is one of the key assumptions of the Langmuir isotherm.
Assumption 1
Adsorption takes place only at specific localized sites on the surface and the saturation coverage corresponds to complete occupancy of these sites.
Let us initially further restrict our consideration to a simple case of reversible molecular adsorption, i.e.
$S- * + \ce{M_{(g)}} \rightleftharpoons \ce{S-M} \label{33Eq2}$
where
• $S-*$ represents a vacant surface site and
• $\ce{S-M}$ the adsorption complex.
Under these circumstances it is reasonable to assume coverage dependencies for rates of the two processes of the form :
• Adsorption (forward reaction in Equation $\ref{33Eq2}$): $f (θ) = c (1-θ) \label{Eqabsorb}$ i.e. proportional to the fraction of sites that are unoccupied.
• Desorption (reverse reaction in Equation $\ref{33Eq2}$): $f '(θ) = c'θ \label{Eqdesorb}$ i.e. proportional to the fraction of sites which are occupied by adsorbed molecules.
Note
These coverage dependencies in Equations $\ref{Eqabsorb}$ and $\ref{Eqdesorb}$ are exactly what would be predicted by noting that the forward and reverse processes are elementary reaction steps, in which case it follows from standard chemical kinetic theory that
• The forward adsorption process will exhibit kinetics having a first order dependence on the concentration of vacant surface sites and first order dependence on the concentration of gas particles (proportional to pressure).
• The reverse desorption process will exhibit kinetics having a first order dependence on the concentration of adsorbed molecules.
Substitution of Equations $\ref{Eqabsorb}$ and $\ref{Eqdesorb}$ into Equation $\ref{33Eq1}$ yields:
$\dfrac{P(1-\theta)}{\theta}=B(T) \nonumber$
where
$B(T) = \left(\dfrac{c'}{c}\right) C(T). \nonumber$
After rearrangement this gives the Langmuir Isotherm expression for the surface coverage
$\theta = \dfrac{bP}{1+bP} \nonumber$
where $b = 1/B(T)$ is a function of temperature and contains an exponential term of the form
$b \propto \exp [ ( E_a^{des} - E_a^{ads} ) / R T ] = \exp [ - ΔH_{ads} / R T ] \nonumber$
with
$ΔH_{ads} = E_a^{ads} - E_a^{des}. \nonumber$
Assumption 2
$b$ can be regarded as a constant with respect to coverage only if the enthalpy of adsorption is itself independent of coverage; this is the second major assumption of the Langmuir Isotherm.
Contributors and Attributions
• Roger Nix (Queen Mary, University of London)
31.08: The Langmuir Isotherm Can Be Used to Derive Rate Laws for Surface-Catalyzed Gas-Phase Reactions
It is possible to predict how the kinetics of certain heterogeneously-catalyzed reactions might vary with the partial pressures of the reactant gases above the catalyst surface by using the Langmuir isotherm expression for equilibrium surface coverages.
Unimolecular Decomposition
Consider the surface decomposition of a molecule $A$, i.e. the process
$A_{(g)} \rightleftharpoons A_{(ads)} → \text{Products} \nonumber$
Let us assume that :
1. The decomposition reaction occurs uniformly across the surface sites at which molecule $A$ may be adsorbed and is not restricted to a limited number of special sites.
2. The products are very weakly bound to the surface and, once formed, are rapidly desorbed.
3. The rate determining step (rds) is the surface decomposition step.
Under these circumstances, the molecules of $A$ adsorbed on the surface are in equilibrium with those in the gas phase and we may predict the surface concentration of A from the Langmuir isotherm, i.e.
$θ = \dfrac{bP}{1 + bP} \nonumber$
The rate of the surface decomposition (and hence of the reaction) is given by an expression of the form
$\text{rate} = k θ \nonumber$
This is assuming that the decomposition of $A_{(ads)}$ occurs in a simple unimolecular elementary reaction step and that the kinetics are first order with respect to the surface concentration of this adsorbed intermediate. Substituting for the coverage, $θ$, gives us the required expression for the rate in terms of the pressure of gas above the surface
$\text{rate} = \dfrac{k b P}{1 + b P} \label{rate}$
It is useful to consider two extremes:
Low Pressure/Binding Limit
This is the low pressure (or weak binding, i.e., small $b$) limit: under these conditions the steady state surface coverage, $θ$, of the reactant molecule is very small.
$b P \ll 1 \nonumber$
then
$1 + bP \approx 1 \nonumber$
and Equation \ref{rate} can be simplified to
$\text{rate} \approx kbP \nonumber$
Under this limiting case, the kinetics follow a first order reaction (with respect to the partial pressure of $A$) with an apparent first order rate constant $k' = kb$.
High Pressure/Binding Limit
This is the high pressure (or strong binding, i.e., large $b$) limit: under these conditions the steady state surface coverage, $θ$, of the reactant molecule is almost unity and
$bP \gg 1 \nonumber$
then
$1 + bP \approx bP \nonumber$
and Equation $\ref{rate}$ can be simplified to
$rate \approx k \nonumber$
Under this limiting case, the kinetics follow a zero-order reaction (with respect to the partial pressure of $A$). The rate shows the same pressure variation as does the surface coverage, but this is hardly surprising since it is directly proportional to θ.
These two limiting cases can be identified in the general kinetics from Equation $\ref{rate}$ in Figure $1$.
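The two limits are easy to see numerically. The sketch below evaluates Equation $\ref{rate}$ over several decades of pressure for illustrative values of $k$ and $b$.

```python
import numpy as np

k, b = 1.0, 0.5                                      # illustrative values
P = np.array([0.01, 0.1, 1.0, 10.0, 100.0, 1000.0])  # reactant pressure

rate = k * b * P / (1 + b * P)
print(rate)   # ~k*b*P at low pressure (first order), ~k at high pressure (zero order)
```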
Bimolecular Reaction (between molecular adsorbates)
Consider a Langmuir-Hinshelwood reaction of the following type:
$A_{(g)} \rightleftharpoons A_{(ads)} \label{Eq2.1}$
$B_{(g)} \rightleftharpoons B_{(ads)} \label{Eq2.2}$
$A_{(ads)} + B_{(ads)} \overset{slow}{\longrightarrow} AB_{(ads)} \overset{fast}{\longrightarrow} AB_{(g)} \label{Eq2.3}$
We will further assume, as noted in the above scheme, that the surface reaction between the two adsorbed species (left side of Equation $\ref{Eq2.3}$) is the rate determining step.
If the two adsorbed molecules are mobile on the surface and freely intermix then the rate of the reaction will be given by the following rate expression for the bimolecular surface combination step
$Rate = k θ_A θ_B \nonumber$
For a single molecular adsorbate the surface coverage (as given by the standard Langmuir isotherm) is
$θ = \dfrac{bP}{1 + bP} \nonumber$
Where two molecules ($A$ & $B$ ) are competing for the same adsorption sites then the relevant expressions are (see derivation):
$\theta_A = \dfrac{b_AP_A}{1+b_AP_A + b_BP_B} \nonumber$
and
$\theta_B = \dfrac{b_BP_B}{1+b_AP_A + b_BP_B} \nonumber$
Substituting these into the rate expression gives :
$Rate = k \theta_A \theta_B = \dfrac{k b_AP_A b_BP_B }{( 1+b_AP_A + b_BP_B )^2} \nonumber$
Once again, it is interesting to look at several extreme limits
Low Pressure/Binding Limit
$b_A P_A \ll 1 \nonumber$
and
$b_B P_B \ll 1 \nonumber$
In this limit, $θ_A$ and $θ_B$ are both very low, and
$rate → k b_A P_A b_B P_B = k' P_A P_B \nonumber$
i.e. first order in both reactants
Mixed Pressure/Binding Limit
$b_A P_A \ll 1 \ll b_B P_B \nonumber$
In this limit $θ_A → 0$, $θ_B → 1$, and
$Rate → \dfrac{k b_A P_A }{b_B P_B } = \dfrac{k' P_A}{P_B} \nonumber$
i.e. first order in $A$, but negative first order in $B$
Clearly, depending upon the partial pressure and binding strength of the reactants, a given model for the reaction scheme can give rise to a variety of apparent kinetics: this highlights the dangers inherent in the reverse process - namely trying to use kinetic data to obtain information about the reaction mechanism.
Example $1$: CO Oxidation Reaction
On precious metal surfaces (e.g. Pt), the $CO$ oxidation reaction is generally believed to proceed by a Langmuir-Hinshelwood mechanism of the following type :
$CO_{(g)} \rightleftharpoons CO_{(ads)} \nonumber$
$O_{2 (g)} \rightleftharpoons 2 O_{(ads)} \nonumber$
$CO_{(ads)} + O_{(ads)} \overset{slow}{\longrightarrow} CO_{2 (ads)} \overset{fast}{\longrightarrow} CO_{2 (g)} \nonumber$
As CO2 is comparatively weakly-bound to the surface, the desorption of this product molecule is relatively fast and in many circumstances it is the surface reaction between the two adsorbed species that is the rate determining step.
If the two adsorbed molecules are assumed to be mobile on the surface and freely intermix then the rate of the reaction will be given by the following rate expression for the bimolecular surface combination step
$Rate = k \,θ_{CO}\, θ_O \nonumber$
Where two such species (one of which is molecularly adsorbed, and the other dissociatively adsorbed) are competing for the same adsorption sites then the relevant expressions are (see derivation):
$\theta_{CO} = \dfrac{b_{CO}P_{CO}}{1+ \sqrt{b_OP_{O_2}} + b_{CO}P_{CO}} \nonumber$
and
$\theta_{O} = \dfrac{ \sqrt{b_OP_{O_2}} }{1+ \sqrt{b_OP_{O_2}} + b_{CO}P_{CO}} \nonumber$
Substituting these into the rate expression gives:
$rate = k \theta_{CO} \theta_O = \dfrac{ k b_{CO}P_{CO} \sqrt{b_OP_{O_2}} }{(1+ \sqrt{b_OP_{O_2}} + b_{CO}P_{CO})^2} \label{Ex1.1}$
Once again, it is interesting to look at certain limits. If the $CO$ is much more strongly bound to the surface such that
$b_{CO}P_{CO} \gg 1 + \sqrt{b_OP_{O_2}} \nonumber$
then
$1 + \sqrt{b_OP_{O_2}} + b_{CO}P_{CO} \approx b_{CO}P_{CO} \nonumber$
and the Equation $\ref{Ex1.1}$ simplifies to give
$rate \approx \dfrac{k \sqrt{b_OP_{O_2}} } {b_{CO}P_{CO}} = k' \dfrac{P^{1/2}_{O_2}}{P_{CO}} \nonumber$
In this limit the kinetics are half-order with respect to the gas phase pressure of molecular oxygen, but negative order with respect to the $CO$ partial pressure, i.e. $CO$ acts as a poison (despite being a reactant) and increasing its pressure slows down the reaction. This is because the CO is so strongly bound to the surface that it blocks oxygen adsorbing, and without sufficient oxygen atoms on the surface the rate of reaction is reduced.
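This poisoning behavior follows directly from the rate expression. The sketch below evaluates Equation $\ref{Ex1.1}$ at a fixed oxygen pressure for illustrative values of the constants; the rate first rises with $P_{CO}$ and then falls again as CO crowds oxygen off the surface.

```python
import numpy as np

k, b_CO, b_O = 1.0, 50.0, 1.0                  # illustrative constants
P_O2 = 0.2                                     # fixed oxygen partial pressure
P_CO = np.array([0.001, 0.01, 0.1, 1.0, 10.0])

rate = (k * b_CO * P_CO * np.sqrt(b_O * P_O2)
        / (1 + np.sqrt(b_O * P_O2) + b_CO * P_CO) ** 2)
print(rate)   # rises, passes through a maximum (near P_CO ~ 0.03 here), then falls
```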
Contributors and Attributions
• Roger Nix (Queen Mary, University of London)
The kinetics and thermodynamics of the chemical and physical processes that occur on the surface of a solid are greatly dependent on the structure of the surface. Few, if any surfaces are perfectly flat, and thus the cavities, protrusions, ridges, and edges of the surface must be treated differently when studying chemisorption and physisorption. Several types of spectroscopies and microscopies are available to study the atomic-scale structure of surfaces.
Electron Microscopy
The two forms of electron microscopy which are commonly used to provide surface information are
• Secondary Electron Microscopy ( SEM ): which provides a direct image of the topographical nature of the surface from all the emitted secondary electrons
• Scanning Auger Microscopy ( SAM ): which provides compositional maps of a surface by forming an image from the Auger electrons emitted by a particular element.
Both techniques employ the focusing of the probe beam (a beam of high-energy electrons, typically 10 - 50 keV in energy) to obtain spatial localization.
Secondary Electron Microscopy (SEM)
As the primary electron beam is scanned across the surface, electrons of a wide range of energies will be emitted from the surface in the region where the beam is incident. These electrons will include backscattered primary electrons and Auger electrons, but the vast majority will be secondary electrons formed in multiple inelastic scattering processes (these are the electrons that contribute to the background and are completely ignored in Auger spectroscopy). The secondary electron current reaching the detector is recorded and the microscope image consists of a "plot" of this current, I, against probe position on the surface. The contrast in the micrograph arises from several mechanisms, but first and foremost from variations in the surface topography. Consequently, the secondary electron micrograph is virtually a direct image of the real surface structure (Figure $1$).
The attainable resolution of the technique is limited by the minimum spot size that can be obtained with the incident electron beam, and ultimately by the scattering of this beam as it interacts with the substrate. With modern instruments, a resolution of better than 5 nm is achievable. This is more than adequate for imaging semiconductor device structures, for example, but insufficient to enable many supported metal catalysts to be studied in any detail.
Scanning Auger Microscopy ( SAM )
The incident primary electrons cause the ionization of atoms within the region illuminated by the focused beam. Subsequent relaxation of the ionized atoms leads to the emission of Auger electrons characteristic of the elements present in this part of the sample surface. As with SEM , the attainable resolution is again ultimately limited by the incident beam characteristics. More significantly, however, the resolution is also limited by the need to acquire sufficient Auger signal to form a respectable image within a reasonable time period, and for this reason, the instrumental resolution achievable is rarely better than about 15-20 nm.
Low-Energy Electron Diffraction (LEED) Spectroscopy
LEED is the principal technique for the determination of surface structures. It may be used in one of two ways:
1. Qualitatively: where the diffraction pattern is recorded and analysis of the spot positions yields information on the size, symmetry, and rotational alignment of the adsorbate unit cell with respect to the substrate unit cell.
2. Quantitatively: where the intensities of the various diffracted beams are recorded as a function of the incident electron beam energy to generate so-called I-V curves which, by comparison with theoretical curves, may provide accurate information on atomic positions.
The LEED experiment uses a beam of electrons of a well-defined low energy (typically in the range 20 - 200 eV) incident normally on the sample. The sample itself must be a single crystal with a well-ordered surface structure in order to generate a back-scattered electron diffraction pattern. A typical experimental setup is shown in figure $3$ below.
Only the elastically-scattered electrons contribute to the diffraction pattern; the lower energy (secondary) electrons are removed by energy-filtering grids placed in front of the fluorescent screen that is employed to display the pattern.
Example $1$
As shown by the Bragg equation, for the electrons to be diffracted, the de Broglie wavelength of the electrons has to be less than $2d$, twice the distance between the atomic planes. The equation used to calculate the wavelength (in nm) of backscattered electrons that are accelerated through a potential $\phi$ is
$\lambda = \left( \dfrac{1.504 \, V nm^2}{\phi} \right)^{1/2} \nonumber$
Calculate the minimum acceleration voltage needed to observe electron diffraction from the surface of a crystal with an interplanar spacing of 0.1250 nm.
Solution
To observe diffraction, the wavelength must be less than 2$d$, so $\lambda$ must be less than 0.2500 nm. Therefore,
$0.2500 \, nm = \left( \dfrac{1.504 \, V nm^2}{\phi} \right)^{1/2} \nonumber$
$\phi = \dfrac{1.504 \, V nm^2}{(0.2500 \, nm)^2} \nonumber$
$\phi = 24.06 V \nonumber$
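The same arithmetic can be checked with a few lines of Python; the variable names below are ours, and the constant $1.504\ \text{V nm}^2$ comes directly from the wavelength equation above.

```python
# Minimum accelerating potential for diffraction: lambda < 2d, with
# lambda (nm) = sqrt(1.504 V nm^2 / phi).
d = 0.1250                      # interplanar spacing, nm
lam_max = 2 * d                 # largest usable de Broglie wavelength, nm
phi_min = 1.504 / lam_max**2    # volts
print(f"phi_min = {phi_min:.2f} V")   # ~24.06 V
```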
Contributors and Attributions
• Roger Nix (Queen Mary, University of London)
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(LibreTexts)/31%3A_Solids_and_Surface_Chemistry/31.09%3A_The_Structure_of_a_Surface_is_Different_from_that_of_a_Bulk_Solid.txt
|
The Haber-Bosch Process for Synthesis of Ammonia
An example of an industrial catalytic process is the Haber-Bosch process. Karl Bosch (1874–1940) was a German chemical engineer who was responsible for designing the process that took advantage of Fritz Haber's discoveries regarding the N2 + H2/NH3 equilibrium to make ammonia synthesis via this route cost-effective. He received the Nobel Prize in Chemistry in 1931 for his work. The industrial process, called either the Haber process or the Haber-Bosch process, is used to synthesize ammonia via the following reaction:
$N_{2(g)}+3H_{2(g)} \rightleftharpoons 2NH_{3(g)} \nonumber$
with
$ΔH_{rxn}=−91.8\; kJ/mol \nonumber$
Because the reaction converts 4 mol of gaseous reactants to only 2 mol of gaseous product, Le Chatelier’s principle predicts that the formation of NH3 will be favored when the pressure is increased. The reaction is exothermic, however (ΔHrxn = −91.8 kJ/mol), so the equilibrium constant decreases with increasing temperature, which causes an equilibrium mixture to contain only relatively small amounts of ammonia at high temperatures (Figure $1$). Taken together, these considerations suggest that the maximum yield of NH3 will be obtained if the reaction is carried out at as low a temperature and as high a pressure as possible. Unfortunately, at temperatures less than approximately 300°C, where the equilibrium yield of ammonia would be relatively high, the reaction is too slow to be of any commercial use. The industrial process, therefore, uses a mixed oxide (Fe2O3/K2O) catalyst that enables the reaction to proceed at a significant rate at temperatures of 400°C–530°C, where the formation of ammonia is less unfavorable than at higher temperatures.
Because of the low value of the equilibrium constant at high temperatures (e.g., K = 0.039 at 800 K), there is no way to produce an equilibrium mixture that contains large proportions of ammonia at high temperatures. We can, however, control the temperature and the pressure while using a catalyst to convert a fraction of the N2 and H2 in the reaction mixture to NH3, as is done in the Haber-Bosch process. This process also makes use of the fact that the product—ammonia—is less volatile than the reactants. Because NH3 is a liquid at room temperature at pressures greater than 10 atm, cooling the reaction mixture causes NH3 to condense from the vapor as liquid ammonia, which is easily separated from unreacted N2 and H2. The unreacted gases are recycled until the complete conversion of hydrogen and nitrogen to ammonia is eventually achieved. Figure $2$ is a simplified layout of a Haber-Bosch process plant.
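As a rough check on the claim that the equilibrium constant falls as the temperature rises, one can integrate the van 't Hoff equation, assuming $ΔH_{rxn}$ is approximately independent of temperature. The sketch below is illustrative only: it anchors on the quoted value $K = 0.039$ at 800 K and neglects any temperature dependence of $ΔH_{rxn}$.

```python
import numpy as np

# van 't Hoff: ln(K2/K1) = -(dH/R) * (1/T2 - 1/T1), with dH assumed constant.
R = 8.314                # J mol^-1 K^-1
dH = -91.8e3             # J mol^-1, reaction enthalpy quoted in the text
K1, T1 = 0.039, 800.0    # reference point quoted in the text

def K_at(T2):
    return K1 * np.exp(-(dH / R) * (1.0 / T2 - 1.0 / T1))

for T in (500.0, 700.0, 900.0):
    print(T, K_at(T))    # K grows rapidly as T drops (exothermic reaction)
```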
We can write the seven reactions that are involved in the process, where $(ad)$ indicates that the molecule or atom is adsorbed on the catalyst surface:
$H_{2}(g) + 2 S(s) \rightleftharpoons 2H \mathit{(ad)} \label{1}$
$N_{2}(g) \rightleftharpoons N_2 \mathit{(ad)} \label{2}$
$N_2 \mathit{(ad)} + 2 S(s) \rightleftharpoons 2N \mathit{(ad)} \label{3}$
$H \mathit{(ad)} + N \mathit{(ad)} \rightleftharpoons NH \mathit{(ad)} \label{4}$
$NH \mathit{(ad)} + H \mathit{(ad)} \rightleftharpoons NH_2 \mathit{(ad)} \label{5}$
$NH_2 \mathit{(ad)} + H \mathit{(ad)} \rightleftharpoons NH_3 \mathit{(ad)} \label{6}$
$NH_3 \mathit{(ad)} \rightleftharpoons NH_{3}(g) \label{7}$
Reaction $\ref{3}$ is by far the slowest and is therefore the rate-determining step. Figure $3$ summarizes the reaction scheme.
Gerhard Ertl worked out the energetics of the reaction. Figure $4$ below shows the amount of energy per mole needed at each step on the catalyst and that which would be needed in the gas phase. Ertl's Nobel Prize lecture about his work on the catalytic reactions forming ammonia and other catalytic reactions can be viewed online.
Further studies of the Fe2O3/K2O catalyst have shown that the rate of the reaction depends on the particular surface on which the reaction is occurring. Figure $5$ shows the reaction rates for the synthesis of ammonia on five different surfaces of the iron.
Contributors
• Anonymous
Modified by Joshua Halpern (Howard University), Scott Sinex, and Scott Johnson (PGCC)
Figure $3$ is from the ESA, now available on the internet wayback machine
Figure $4$ is from the Wikipedia Commons
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(LibreTexts)/31%3A_Solids_and_Surface_Chemistry/31.10%3A_The_Haber-Bosch_Reaction_Can_Be_Surface_Catalyzed.txt
|
• 32.1: Complex Numbers
• 32.2: Probability and Statistics
A random variable X can have more than one value x as an outcome. Which value the variable has in a particular case is a matter of chance and cannot be predicted other than that we associate a probability to the outcome. Probability p is a number between 0 and 1 that indicates the likelihood that the variable X has a particular outcome x . The set of outcomes and their probabilities form a probability distribution.
• 32.3: Vectors
In this chapter we will review a few concepts you probably know from your physics courses. This chapter does not intend to cover the topic in a comprehensive manner, but instead touch on a few concepts that you will use in your physical chemistry classes.
• 32.4: Spherical Coordinates
Understand the concept of area and volume elements in cartesian, polar and spherical coordinates. Be able to integrate functions expressed in polar or spherical coordinates. Understand how to normalize orbitals expressed in spherical coordinates, and perform calculations involving triple integrals.
• 32.5: Determinants
The determinant is a useful value that can be computed from the elements of a square matrix
• 32.6: Matrices
Learn the nomenclature used in linear algebra to describe matrices (rows, columns, triangular matrices, diagonal matrices, trace, transpose, singularity, etc). Learn how to add, subtract and multiply matrices. Learn the concept of inverse. Understand the use of matrices as symmetry operators.
• 32.7: Numerical Methods
• 32.8: Partial Differentiation
The development of thermodynamics would have been unthinkable without calculus in more than one dimension (multivariate calculus) and partial differentiation is essential to the theory.
• 32.9: Series and Limits
Learn how to obtain Maclaurin and Taylor expansions of different functions. Learn how to express infinite sums using the summation operator ( Σ ) Understand how a series expansion can be used in the physical sciences to obtain an approximation that is valid in a particular regime (e.g. low concentration of solute, low pressure of a gas, small oscillations of a pendulum, etc). Understand how a series expansion can be used to prove a mathematical relationship.
• 32.10: Fourier Analysis
The Fourier transform converts a function vs. continuous (or descrete) time and maps it into a function vs. continuous (or descrete) frequencies. Hence, the transform converts time-domain data into frequency-domain data (and vice versa). This decomposion of a function into sinusoids of different frequencies is a powerful approach to many experimental and theoretical problems.
• 32.11: The Binomial Distribution and Stirling's Approximation
32: Math Chapters
Real Numbers
Let us think of the ordinary numbers as set out on a line which goes to infinity in both positive and negative directions. We could start by taking a stretch of the line near the origin (that is, the point representing the number zero) and putting in the integers as follows:
Next, we could add in rational numbers, such as ½, 23/11, etc., then the irrationals like $\sqrt{2}$, then numbers like $\pi$, and so on, so any number you can think of has its place on this line. Now let’s take a slightly different point of view, and think of the numbers as represented by a vector from the origin to that number, so 1 is
and, for example, –2 is represented by:
Note that if a number is multiplied by –1, the corresponding vector is turned through 180 degrees. In pictures,
The “vector” 2 is turned through $\pi$, or 180 degrees, when you multiply it by –1.
Example A.1
What are the square roots of 4?
Solution
Well, 2, obviously, but also –2, because multiplying the backwards pointing vector –2 by –2 not only doubles its length, but also turns it through 180 degrees, so it is now pointing in the positive direction. We seem to have invented a hard way of stating that multiplying two negatives gives a positive, but thinking in terms of turning vectors through 180 degrees will pay off soon.
Solving Quadratic Equations
In solving the standard quadratic equation
$ax^2 + bx + c = 0 \label{A.1}$
we find the solution to be:
$x =\dfrac{-b \pm \sqrt{b^2-4ac}}{2a} \label{A.2}$
The problem with this is that sometimes the expression inside the square root is negative. What does that signify? For some problems in physics, it means there is no solution. For example, if I throw a ball directly upwards at 10 meters per sec, and ask when will it reach a height of 20 meters, taking g = 10 m per sec2, the solution of the quadratic equation for the time t has a negative number inside the square root, and that means that the ball doesn’t get to 20 meters, so the question didn’t really make sense.
We shall find, however, that there are other problems, in wide areas of physics, where negative numbers inside square roots have an important physical significance. For that reason, we need to come up with a scheme for interpreting them.
The simplest quadratic equation that gives trouble is:
$x^2 + 1 = 0 \label{A.3}$
the solutions being
$x = \pm \sqrt{-1}\label{A.4}$
What does that mean? We’ve just seen that the square of a positive number is positive, and the square of a negative number is also positive, since multiplying one negative number, which points backwards, by another, which turns any vector through 180 degrees, gives a positive vector. Another way of saying the same thing is to regard the minus sign itself, -, as an operator which turns the number it is applied to through 180 degrees. Now $(-2)\times (-2)$ has two such rotations in it, giving the full 360 degrees back to the positive axis.
To make sense of the square root of a negative number, we need to find something which when multiplied by itself gives a negative number. Let’s concentrate for the moment on the square root of –1, from the quadratic equation above. Think of –1 as the operator – acting on the vector 1, so the – turns the vector through 180 degrees. We need to find the square root of this operator, the operator which applied twice gives the rotation through 180 degrees. Put like that, it is pretty obvious that the operator we want rotates the vector 1 through 90 degrees.
But if we take a positive number, such as 1, and rotate its vector through 90 degrees only, it isn’t a number at all, at least in our original sense, since we put all known numbers on one line, and we’ve now rotated 1 away from that line. The new number created in this way is called a pure imaginary number, and is denoted by $i$.
Once we’ve found the square root of –1, we can use it to write the square root of any other negative number—for example, $2i$ is the square root of $–4$. Putting together a real number from the original line with an imaginary number (a multiple of i) gives a complex number. Evidently, complex numbers fill the entire two-dimensional plane. Taking ordinary Cartesian coordinates, any point $P$ in the plane can be written as $(x, y)$ where the point is reached from the origin by going $x$ units in the direction of the positive real axis, then y units in the direction defined by $i$, in other words, the $y$ axis.
Thus the point P with coordinates (x, y) can be identified with the complex number z, where
$z = x + iy. \label{A.5}$
The plane is often called the complex plane, and representing complex numbers in this way is sometimes referred to as an Argand Diagram.
Visualizing the complex numbers as two-dimensional vectors, it is clear how to add two of them together. If z1 = x1 + iy1, and z2 = x2 + iy2, then z1 + z2 = (x1 + x2) + i(y1 + y2). The real parts and imaginary parts are added separately, just like vector components.
Multiplying two complex numbers together does not have quite such a simple interpretation. It is, however, quite straightforward—ordinary algebraic rules apply, with i2 replaced where it appears by -1. So for example, to multiply z1 = x1 + iy1 by z2 = x2 + iy2,
$z_1z_2 = (x_1 + iy_1)( x_2 + iy_2) = (x_1x_2 - y_1y_2) + i(x_1y_2 + x_2y_1). \label{A.6}$
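Equation \ref{A.6} is easy to verify with Python's built-in complex type; the sample values of $z_1$ and $z_2$ below are arbitrary.

```python
# Multiply two complex numbers directly and via Equation A.6.
z1 = 2 + 3j
z2 = 1 - 4j

direct = z1 * z2
by_formula = (z1.real * z2.real - z1.imag * z2.imag) \
             + 1j * (z1.real * z2.imag + z2.real * z1.imag)
print(direct, by_formula)   # both give (14-5j)
```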
Polar Coordinates
Some properties of complex numbers are most easily understood if they are represented by using the polar coordinates $r, \theta$ instead of $(x, y)$ to locate $z$ in the complex plane.
Note that $z = x + iy$ can be written $r(\cos \theta + i \sin \theta)$ from the diagram above. In fact, this representation leads to a clearer picture of multiplication of two complex numbers:
\begin{align} z_1z_2 &= r_1 ( \cos \theta_1 + i\sin \theta_1)\, r_2( \cos \theta_2 + i\sin \theta_2) \label{A.7} \[4pt] & = r_1r_2 \left[ (\cos \theta_1 \cos \theta_2 - \sin \theta_1 \sin \theta_2) + i (\sin \theta_1 \cos \theta_2 + \cos \theta_1 \sin \theta_2) \right] \label{A.8} \[4pt] & = r_1r_2 \left[ \cos(\theta_1+\theta_2) + i\sin (\theta_1+\theta_2) \right] \label{A.9} \end{align}
So, if
$z = r(\cos \theta + i\sin \theta ) = z_1z_2 \label{A.10}$
then
$r = r_1r_2 \label{A.11}$
and
$\theta=\theta_1+\theta_2 \label{A.12}$
That is to say, to multiply together two complex numbers, we multiply the r’s – called the moduli – and add the phases, the $\theta$ ’s. The modulus $r$ is often denoted by $|z|$, and called mod z, the phase $\theta$ is sometimes referred to as arg z. For example, $|i| = 1$, $\text{arg}\; i = \pi/2$.
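The "multiply the moduli, add the phases" rule can be checked with the standard-library cmath module; the radii and angles below are arbitrary.

```python
import cmath

z1 = cmath.rect(2.0, 0.3)   # r1 = 2,   theta1 = 0.3 rad
z2 = cmath.rect(1.5, 1.1)   # r2 = 1.5, theta2 = 1.1 rad

z = z1 * z2
print(abs(z), cmath.phase(z))          # 3.0, 1.4
print(abs(z1) * abs(z2), 0.3 + 1.1)    # moduli multiply, phases add
```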
We can now see that, although we had to introduce these complex numbers to have a $\sqrt{-1}$, we do not need to bring in new types of numbers to get $\sqrt{-1}$, or $\sqrt{i}$. Clearly, $|\sqrt{i}|=1$, $arg \sqrt{i} = 45°$. It is on the circle of unit radius centered at the origin, at 45°, and squaring it just doubles the angle.
The Unit Circle
In fact this circle—called the unit circle—plays an important part in the theory of complex numbers and every point on the circle has the form
$z = \cos \theta + i \sin \theta = Cis(\theta) \label{A.13}$
Since all points on the unit circle have $|z| = 1$, by definition, multiplying any two of them together just amounts to adding the angles, so our new function $Cis(\theta)$ satisfies
$Cis(\theta_1)Cis(\theta_2)=Cis(\theta_1+\theta_2). \label{A.14}$
But that is just how multiplication works for exponents! That is,
$a^{\theta_1}a^{\theta_2} = a^{\theta_1+\theta_2} \label{A.15}$
for $a$ any constant, which strongly suggests that maybe our function $Cis(\theta)$ is nothing but some constant $a$ raised to the power $\theta$, that is,
$Cis(\theta) = a^{\theta}\label{A.16}$
It turns out to be convenient to write $a^{\theta} = e^{(\ln a)\theta} = e^{A \theta}$, where $A = \ln a$. This line of reasoning leads us to write
$\cos \theta + i\sin \theta = e^{A\theta} \label{A.17}$
Now, for the above “addition formula” to work for multiplication, $A$ must be a constant, independent of $\theta$. Therefore, we can find the value of A by choosing $\theta$ for which things are simple. We take $\theta$ to be very small—in this limit:
$\cos \theta = 1 \nonumber$
$\sin \theta = \theta \nonumber$
$e^{A\theta} = 1+ A\theta \nonumber$
where we drop terms of order $\theta^2$ and higher.
Substituting these values into Equation \ref{A.17} gives $1 + i\theta = 1 + A\theta$, from which $A = i$.
So we find:
$\cos \theta + i \sin \theta = e^{i \theta} \label{A.18}$
To test this result, we expand $e^{i \theta}$:
\begin{align} e^{i \theta} &= 1 + i\theta + \dfrac{(i\theta)^2}{2!} + \dfrac{(i\theta)^3}{3!} + \dfrac{(i\theta)^4}{4!} + \dfrac{(i\theta)^5}{5!} ... \label{A.19a} \[4pt] &= 1 + i\theta - \dfrac{\theta^2}{2!} - \dfrac{i\theta^3}{3!} +\dfrac{\theta^4}{4!} +\dfrac{i\theta^5}{5!} ... \label{A.19b} \[4pt] &= \left( 1 - \dfrac{\theta^2}{2!} + \dfrac{\theta^4}{4!} \right) + i \left(\theta - \dfrac{\theta^3}{3!}+\dfrac{\theta^5}{5!} \right) \label{A.19c} \[4pt] &= \cos \theta + i\sin \theta \label{A.19d} \end{align}
We write $\cos \theta + i\sin \theta$ in Equation \ref{A.19d} because the series in the brackets are precisely the Taylor series for $\cos \theta$ and $\sin \theta$, confirming our equation for $e^{i\theta}$. Changing the sign of $\theta$, it is easy to see that
$e^{-i \theta} = \cos \theta - i\sin \theta \label{A.20}$
so the two trigonometric functions can be expressed in terms of exponentials of complex numbers:
$\cos (\theta) = \dfrac{1}{2} \left( e^{i\theta} + e^{-i \theta} \right) \nonumber$
$\sin (\theta) = \dfrac{1}{2i} \left( e^{i\theta} - e^{-i \theta} \right) \nonumber$
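Both expressions are easy to verify numerically; this sketch uses Python's cmath module with an arbitrary value of $\theta$.

```python
import cmath, math

theta = 0.7
cos_from_exp = (cmath.exp(1j * theta) + cmath.exp(-1j * theta)) / 2
sin_from_exp = (cmath.exp(1j * theta) - cmath.exp(-1j * theta)) / (2j)

print(math.cos(theta), cos_from_exp.real)   # agree
print(math.sin(theta), sin_from_exp.real)   # agree
```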
Euler Formula
The Euler formula states that
$e^{i \theta} = \cos \theta + i\sin \theta \nonumber$
so that any complex number can be written in polar form as $z = re^{i\theta}$.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(LibreTexts)/32%3A_Math_Chapters/32.01%3A_Complex_Numbers.txt
|
A random variable X can have more than one value x as an outcome. Which value the variable has in a particular case is a matter of chance and cannot be predicted other than that we associate a probability to the outcome. Probability $p$ is a number between 0 and 1 that indicates the likelihood that the variable $X$ has a particular outcome $x$. The set of outcomes and their probabilities form a probability distribution. There are two kinds of distributions:
1. discrete ones
2. continuous ones
Total probability should always add up to unity.
Discrete Distributions
A good example of a discrete distribution is a true coin. The random variable X can have two values:
1. heads (0)
2. tails (1)
Both have equal probability and as the sum must equal unity, the probability must be ½ for each. 'The probability that X=heads' is written formally as:
$Pr(X=heads) = Pr(X=0) = 0.5 \nonumber$
The random function is written as a combination of three statements.
• Pr(X=0) = ½
• Pr(X=1) = ½
• elsewhere Pr = 0
Continuous Distributions
Now consider a spherical die. One could say it has an infinite number of facets that it can land on. Thus the number of outcomes is $n = ∞$, which makes each probability
$p = 1/∞=0. \nonumber$
This creates a bit of a mathematical problem, because how can we get a total probability of unity by adding up zeros? Also, if we divide the sphere in a northern and a southern hemisphere clearly the probability that it lands on a point in say, the north should be ½. Still, p = 0 for all points. We introduce a new concept: probability density over which we integrate rather than sum. We assign an equal density to each point of the sphere and make sure that if we integrate over a hemisphere we get ½. (This involves two angles θ and φ and an integration over them and I won't go into that).
A bit simpler example of a continuous distribution than the spherical die is the 1D uniform distribution. It is the one that the Excel function =RAND() produces to good approximation. Its probability density is defined as
• f(x) = 1 for 0 < x < 1
• f(x) = 0 elsewhere
The figure shows a (bivariate) uniform distribution.
The probability that the outcome is smaller than 0.5 is written as Pr(X<0.5) and is found by integrating from 0 to 0.5 over f(x).
$Pr(X<0.5) = \int_0^{0.5} f(x)\,dx = \int_0^{0.5} 1\,dx = [x]_0^{0.5} = 0.5 \nonumber$
Notice that in each individual outcome b the probability is indeed zero because an integral from b to b is always zero, even if the probability density f(b) is not zero. Clearly the probability and the probability density are not the same. Unfortunately the distinction between probability and probability density is often not properly made in the physical sciences. Moments can also be computed for continuous distributions by integrating over the probability density
Another well-known continuous distribution is the normal (or Gaussian) distribution, defined as:
$f(x) = \dfrac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\dfrac{1}{2}\left[\dfrac{x-\mu}{\sigma}\right]^2\right) \nonumber$
(Notice the normalization factor $1/[\sqrt{2\pi}\,\sigma]$.)
We can also compute moments of continuous distribution. Instead of using a summation we now have to evaluate an integral:
$\langle X \rangle = \int [f(x)^*x] dx \nonumber$
$\langle X^2 \rangle =\int [f(x)^*x^2] dx \nonumber$
For the normal distribution, $\langle X \rangle = \mu$.
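These moments can be checked by numerical integration; in the sketch below $\mu$ and $\sigma$ are arbitrary, and the trapezoidal rule is applied over a wide but finite range.

```python
import numpy as np

mu, sigma = 1.5, 0.4
x = np.linspace(mu - 10 * sigma, mu + 10 * sigma, 200001)
f = np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)

print(np.trapz(f, x))          # ~1.0  : zeroth moment (normalization)
print(np.trapz(f * x, x))      # ~1.5  : first moment, <X> = mu
print(np.trapz(f * x**2, x))   # ~2.41 : second moment, mu^2 + sigma^2
```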
Exercise
Compute $\langle X^2 \rangle$ and $\langle X^3 \rangle$ for the uniform distribution.
Indistinguishable Outcomes
When flipping two coins we could get four outcomes: two heads (0), heads plus tails (1), tails plus heads (1), two tails (2)
Each outcome is equally likely, this implies a probability of ¼ for each:
Xtot = X1 + X2 = 0 + 0 = 0 p=¼
Xtot = X1 + X2 = 0 + 1 = 1 p=¼
Xtot = X1 + X2 = 1 + 0 = 1 p=¼
Xtot = X1 + X2 = 1 + 1 = 2 p=¼
The probability of a particular outcome is often abbreviated simply to p. The two middle outcomes collapse into one with p=¼+¼= ½ if the coins are indistinguishable. We will see that this concept has very important consequences in statistical thermodynamics.
If we cannot distinguish the two outcomes leading to Xtot=1 we get the following random function:
• Pr(Xtot=0) = ¼
• Pr(Xtot=1) = ½
• Pr(Xtot=2) = ¼
• elsewhere Pr = 0
Notice that it is quite possible to have a distribution where the probabilities differ from outcome to outcome. Often the p values are given as f(x), a function of x. An example:
X3 defined as:
• f(x) = (x+1)/6 for x=0,1,2;
• f(x) =0 elsewhere;
The factor 1/6 makes sure the probabilities add up to unity. Such a factor is known as a normalization factor. Again this concept is of prime importance in statistical thermodynamics.
Another example of a discrete distribution is a die. If it has 6 sides (the most common die) there are six outcomes, each with p= 1/6. There are also dice with n=4, 12 or 20 sides. Each outcome will then have p= 1/n.
Moments of Distributions
An important aspect of probability distributions is the set of moments of the distribution. They are values computed by summing over the whole distribution.
The zero order moment is simply the sum of all p and that is unity:
$\langle X^0 \rangle = \sum X^0 p = \sum p = 1 \nonumber$
The first moment multiplies each outcome with its probability and sums over all outcomes:
$\langle X \rangle = \sum X p \nonumber$
This moment is known as the average or mean. (It is what we have done to your grades for years...)
For one coin $\langle X \rangle$ is ½; for two coins $\langle X_{tot} \rangle$ is 1. (Exercise: verify this.)
The second moment is computed by summing the product of the square of X and p:
$\langle X^2 \rangle = \sum X^2 p \nonumber$
For one coin we have $\langle X^2 \rangle = \tfrac{1}{2}$.
For two coins $\langle X_{tot}^2 \rangle = 0\cdot\tfrac{1}{4} + 1\cdot\tfrac{1}{2} + 4\cdot\tfrac{1}{4} = 1.5$.
What is $\langle X^2 \rangle$ for $X_3$?
The $\langle \cdots \rangle$ notation is used a lot in quantum mechanics, often in the form $\langle \psi^*\psi \rangle$ or $\langle \psi^*|h|\psi \rangle$. The $\langle \cdots |$ part is known as the bra, the $| \cdots \rangle$ part as the ket. (Together: bra(c)ket.)
Intermezzo: The strange employer
You have a summer job, but your employer likes games of chance. At the end of every day he rolls a die and pays you the square of the outcome in dollars per hour. So on a lucky day you'd make $36 per hour, but on a bad day only $1. Is this a bad deal? What would you make on average over a longer period?
To answer this we must compute the second moment $\langle X^2 \rangle$ of the distribution:
$\langle X^2 \rangle = \tfrac{1}{6}\,[1+4+9+16+25+36] = \tfrac{91}{6} = \$15.17 \text{ per hour} \nonumber$
(I have taken p=1/6 out of brackets because the value is the same for all six outcomes)
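The same sum takes a few lines of Python; the outcomes and probabilities are exactly those of the fair six-sided die described above.

```python
# Second moment of a fair die: the expected payout per hour.
outcomes = range(1, 7)
p = 1 / 6
second_moment = sum(p * x**2 for x in outcomes)
print(second_moment)   # 91/6 ≈ 15.17
```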
As you see in the intermezzo, the value of the second moment is in this case what you expect to be making over the long term. Moments are examples of what is known as expectation values. Another term you may run into is that of a functional. A functional is a number computed by some operation (such as summation or integration) over a whole function. Moments are clearly an example of that too.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(LibreTexts)/32%3A_Math_Chapters/32.02%3A_Probability_and_Statistics.txt
|
In this chapter we will review a few concepts you probably know from your physics courses. This chapter does not intend to cover the topic in a comprehensive manner, but instead touch on a few concepts that you will use in your physical chemistry classes.
A vector is a quantity that has both a magnitude and a direction, and as such they are used to specify the position, velocity and momentum of a particle, or to specify a force. Vectors are usually denoted by boldface symbols (e.g. $\mathbf{u}$) or with an arrow above the symbol (e.g. $\vec{u}$). A tilde placed above or below the name of the vector is also commonly used in shorthand ($\widetilde{u}$,$\underset{\sim}{u}$).
If we multiply a number $a$ by a vector $\mathbf{v}$, we obtain a new vector that is parallel to the original but with a length that is $a$ times the length of $\mathbf{v}$. If $a$ is negative $a\mathbf{v}$ points in the opposite direction than $\mathbf{v}$. We can express any vector in terms of the so-called unit vectors. These vectors, which are designated $\hat{\mathbf{i}}$, $\hat{\mathbf{j}}$ and $\hat{\mathbf{k}}$, have unit length and point along the positive $x, y$ and $z$ axis of the cartesian coordinate system (Figure $1$). The symbol $\hat{\mathbf{i}}$ is read "i-hat". Hats are used to denote that a vector has unit length.
The length of $\mathbf{u}$ is its magnitude (or modulus), and is usually denoted by $u$:
$\label{eq:vectors1} u=|u|=(u_x^2+u_y^2+u_z^2)^{1/2}$
If we have two vectors $\mathbf{u}=u_x\hat{\mathbf{i}}+u_y \hat{\mathbf{j}}+u_z \hat{\mathbf{k}}$ and $\mathbf{v}=v_x \hat{\mathbf{i}}+v_y \hat{\mathbf{j}}+v_z \hat{\mathbf{k}}$, we can add them to obtain
$\mathbf{u}+\mathbf{v}=(u_x+v_x)\hat{\mathbf{i}}+(u_y+v_y)\hat{\mathbf{j}}+(u_z+v_z)\hat{\mathbf{k}} \nonumber$
or subtract them to obtain:
$\mathbf{u}-\mathbf{v}=(u_x-v_x)\hat{\mathbf{i}}+(u_y-v_y)\hat{\mathbf{j}}+(u_z-v_z)\hat{\mathbf{k}} \nonumber$
When it comes to multiplication, we can perform the product of two vectors in two different ways. The first, which gives a scalar (a number) as the result, is called scalar product or dot product. The second, which gives a vector as a result, is called the vector (or cross) product. Both are important operations in physical chemistry.
The Scalar Product
The scalar product of vectors $\mathbf{u}$ and $\mathbf{v}$, also known as the dot product or inner product, is defined as (notice the dot between the symbols representing the vectors)
$\mathbf{u}\cdot \mathbf{v}=|\mathbf{u}||\mathbf{v}|\cos \theta \nonumber$
where $\theta$ is the angle between the vectors. Notice that the dot product is zero if the two vectors are perpendicular to each other, and equals the product of their absolute values if they are parallel. It is easy to prove that
$\mathbf{u}\cdot \mathbf{v}=u_xv_x+u_yv_y+u_zv_z \nonumber$
Example $1$
Show that the vectors
\begin{align*} \mathbf{u_1} &=\dfrac{1}{\sqrt{3}}\hat{\mathbf{i}}+\dfrac{1}{\sqrt{3}}\hat{\mathbf{j}}+\dfrac{1}{\sqrt{3}}\hat{\mathbf{k}} \[4pt] \mathbf{u_2} &=\dfrac{1}{\sqrt{6}}\hat{\mathbf{i}}-\dfrac{2}{\sqrt{6}}\hat{\mathbf{j}}+\dfrac{1}{\sqrt{6}}\hat{\mathbf{k}} \[4pt] \mathbf{u_3} &=-\dfrac{1}{\sqrt{2}}\hat{\mathbf{i}}+\dfrac{1}{\sqrt{2}}\hat{\mathbf{k}} \end{align*} \nonumber
are of unit length and are mutually perpendicular.
Solution
The length of the vectors are:
\begin{align*} |\mathbf{u_1}|&=\left[\left(\dfrac{1}{\sqrt{3}}\right)^2+\left(\dfrac{1}{\sqrt{3}}\right)^2+\left(\dfrac{1}{\sqrt{3}}\right)^2\right]^{1/2}=\left[\dfrac{1}{3}+\dfrac{1}{3}+\dfrac{1}{3}\right]^{1/2}=1 \[4pt] |\mathbf{u_2}| &=\left[\left(\dfrac{1}{\sqrt{6}}\right)^2+\left(-\dfrac{2}{\sqrt{6}}\right)^2+\left(\dfrac{1}{\sqrt{6}}\right)^2\right]^{1/2}=\left[\dfrac{1}{6}+\dfrac{4}{6}+\dfrac{1}{6}\right]^{1/2}=1 \[4pt] |\mathbf{u_3}| &=\left[\left(-\dfrac{1}{\sqrt{2}}\right)^2+\left(\dfrac{1}{\sqrt{2}}\right)^2\right]^{1/2}=\left[\dfrac{1}{2}+\dfrac{1}{2}\right]^{1/2}=1 \end{align*} \nonumber
To test if two vectors are perpendicular, we perform the dot product:
\begin{align*} \mathbf{u_1}\cdot \mathbf{u_2}&=\left(\dfrac{1}{\sqrt{3}}\dfrac{1}{\sqrt{6}}-\dfrac{1}{\sqrt{3}}\dfrac{2}{\sqrt{6}}+\dfrac{1}{\sqrt{3}}\dfrac{1}{\sqrt{6}}\right)=0 \[4pt] \mathbf{u_1}\cdot \mathbf{u_3} &=\left(-\dfrac{1}{\sqrt{3}}\dfrac{1}{\sqrt{2}}+\dfrac{1}{\sqrt{3}}\dfrac{1}{\sqrt{2}}\right)=0 \[4pt] \mathbf{u_2}\cdot \mathbf{u_3} &=\left(-\dfrac{1}{\sqrt{6}}\dfrac{1}{\sqrt{2}}+\dfrac{1}{\sqrt{6}}\dfrac{1}{\sqrt{2}}\right)=0 \end{align*} \nonumber
Therefore, we just proved that the three pairs are mutually perpendicular, and the three vectors have unit length. In other words, these vectors are the vectors $\hat{\mathbf{i}}$, $\hat{\mathbf{j}}$ and $\hat{\mathbf{k}}$ rotated in space.
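The same checks can be carried out with NumPy; the arrays below encode the three vectors of this example.

```python
import numpy as np

u1 = np.array([1, 1, 1]) / np.sqrt(3)
u2 = np.array([1, -2, 1]) / np.sqrt(6)
u3 = np.array([-1, 0, 1]) / np.sqrt(2)

for v in (u1, u2, u3):
    print(np.linalg.norm(v))                             # each has unit length
print(np.dot(u1, u2), np.dot(u1, u3), np.dot(u2, u3))    # all zero: mutually perpendicular
```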
If the dot product of two vectors (of any dimension) is zero, we say that the two vectors are orthogonal. If the vectors have unit length, we say they are normalized. If two vectors are both normalized and they are orthogonal, we say they are orthonormal. The set of vectors shown in the previous example form an orthonormal set.[vectors:orthonormal] These concepts also apply to vectors that contain complex entries, but how do we perform the dot product in this case?
In general, the square of the modulus of a vector is
$|\mathbf{u}|^2=\mathbf{u}\cdot \mathbf{u}=u_x^2+u_y^2+u_z^2. \nonumber$
However, this does not work correctly for complex vectors. The square of $i$ is $-1$, so simply summing the squares of complex components can give a result that is negative or complex rather than a positive length. To address this issue, we introduce a more general version of the dot product:
$\mathbf{u}\cdot \mathbf{v}=u_x^*v_x+u_y^*v_y+u_z^*v_z, \nonumber$
where the “$*$ ” refers to the complex conjugate. Therefore, to calculate the modulus of a vector $\mathbf{u}$ that has complex entries, we use its complex conjugate:
$|\mathbf{u}|^2=\mathbf{u}^*\cdot \mathbf{u} \nonumber$
Example $2$: Calculating the Modulus of a vector
Calculate the modulus of the following vector:
$\mathbf{u}=\hat{\mathbf{i}}+i \hat{\mathbf{j}} \nonumber$
Solution
$|\mathbf{u}|^2=\mathbf{u}^*\cdot \mathbf{u}=(\hat{\mathbf{i}}-i \hat{\mathbf{j}})(\hat{\mathbf{i}}+i \hat{\mathbf{j}})=(1)(1)+(-i)(i)=2\rightarrow |\mathbf{u}|=\sqrt{2} \nonumber$
Analogously, if vectors contain complex entries, we can test whether they are orthogonal or not by checking the dot product $\mathbf{u}^*\cdot \mathbf{v}$.
Example $3$: Confirming orthogonality
Determine if the following pair of vectors is orthogonal (do not confuse the imaginary number $i$ with the unit vector $\hat{\mathbf{i}}$!)
$\mathbf{u}=\hat{\mathbf{i}}+(1-i)\hat{\mathbf{j}} \nonumber$
and
$\mathbf{v}=(1+i)\hat{\mathbf{i}}+\hat{\mathbf{j}} \nonumber$
Solution
$\mathbf{u}^*\cdot \mathbf{v}=(\hat{\mathbf{i}}+(1+i)\hat{\mathbf{j}})((1+i)\hat{\mathbf{i}}+\hat{\mathbf{j}})=(1)(1+i)+(1+i)(1)=2+2i\neq 0 \nonumber$
Therefore, the vectors are not orthogonal.
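NumPy's vdot conjugates its first argument, which is exactly the generalized dot product defined above; here is a quick check using the vectors of this example (the variable names are ours).

```python
import numpy as np

u = np.array([1, 1 - 1j])    # i-hat + (1 - i) j-hat
v = np.array([1 + 1j, 1])    # (1 + i) i-hat + j-hat

print(np.vdot(u, u).real)    # |u|^2 = 3
print(np.vdot(u, v))         # (2+2j), nonzero: the vectors are not orthogonal
```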
The Vector Product
The vector product of two vectors is a vector defined as
$\mathbf{u}\times \mathbf{v}=|\mathbf{u}| |\mathbf{v}| \mathbf{n} \sin\theta \nonumber$
where $\theta$ is again the angle between the two vectors, and $\mathbf{n}$ is the unit vector perpendicular to the plane formed by $\mathbf{u}$ and $\mathbf{v}$. The direction of the vector $\mathbf{n}$ is given by the right-hand rule. Extend your right hand and point your index finger in the direction of $\mathbf{u}$ (the vector on the left side of the $\times$ symbol) and your middle finger in the direction of $\mathbf{v}$. The direction of $\mathbf{n}$, which determines the direction of $\mathbf{u}\times \mathbf{v}$, is the direction of your thumb. If you want to reverse the multiplication, and perform $\mathbf{v}\times \mathbf{u}$, you need to point your index finger in the direction of $\mathbf{v}$ and your middle finger in the direction of $\mathbf{u}$ (still using the right hand!). The resulting vector will point in the opposite direction (Figure $1$).
The magnitude of $\mathbf{u}\times \mathbf{v}$ is the product of the magnitudes of the individual vectors times $\sin \theta$. This magnitude has an interesting geometrical interpretation: it is the area of the parallelogram formed by the two vectors (Figure $1$).
The cross product can also be expressed as a determinant:
$\mathbf{u}\times \mathbf{v}= \begin{vmatrix} \hat{\mathbf{i}}&\hat{\mathbf{j}}&\hat{\mathbf{k}}\ u_x&u_y&u_z\ v_x&v_y&v_z\ \end{vmatrix} \nonumber$
Example $1$:
Given $\mathbf{u}=-2 \hat{\mathbf{i}}+\hat{\mathbf{j}}+\hat{\mathbf{k}}$ and $\mathbf{v}=3 \hat{\mathbf{i}}-\hat{\mathbf{j}}+\hat{\mathbf{k}}$, calculate $\mathbf{w}=\mathbf{u}\times \mathbf{v}$ and verify that the result is perpendicular to both $\mathbf{u}$ and $\mathbf{v}$.
Solution
\begin{align*} \mathbf{u}\times \mathbf{v} &= \begin{vmatrix} \hat{\mathbf{i}}&\hat{\mathbf{j}}&\hat{\mathbf{k}}\ u_x&u_y&u_z\ v_x&v_y&v_z\ \end{vmatrix}=\begin{vmatrix} \hat{\mathbf{i}}&\hat{\mathbf{j}}&\hat{\mathbf{k}}\ -2&1&1\ 3&-1&1\ \end{vmatrix} \[4pt] &=\hat{\mathbf{i}}(1+1)-\hat{\mathbf{j}}(-2-3)+\hat{\mathbf{k}}(2-3) \[4pt] &=\displaystyle{\color{Maroon}2 \hat{\mathbf{i}}+5 \hat{\mathbf{j}}-\hat{\mathbf{k}}} \end{align*} \nonumber
To verify that two vectors are perpendicular we perform the dot product:
$\mathbf{u} \cdot \mathbf{w}=(-2)(2)+(1)(5)+(1)(-1)=0 \nonumber$
$\mathbf{v} \cdot \mathbf{w}=(3)(2)+(-1)(5)+(1)(-1)=0 \nonumber$
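NumPy provides the cross product directly; this sketch repeats the check of the example above.

```python
import numpy as np

u = np.array([-2, 1, 1])
v = np.array([3, -1, 1])

w = np.cross(u, v)
print(w)                             # [ 2  5 -1]
print(np.dot(u, w), np.dot(v, w))    # both 0: w is perpendicular to u and v
```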
An important application of the cross product involves the definition of the angular momentum. If a particle with mass $m$ moves a velocity $\mathbf{v}$ (a vector), its (linear) momentum is $\mathbf{p}=m\mathbf{v}$. Let $\mathbf{r}$ be the position of the particle (another vector), then the angular momentum of the particle is defined as
$\mathbf{l}=\mathbf{r}\times\mathbf{p} \nonumber$
The angular momentum is therefore a vector perpendicular to both $\mathbf{r}$ and $\mathbf{p}$. Because the position of the particle needs to be defined with respect to a particular origin, this origin needs to be specified when defining the angular momentum.
Vector Normalization
A vector of any given length can be divided by its modulus to create a unit vector (i.e. a vector of unit length). We will see applications of unit (or normalized) vectors in the next chapter.
For example, the vector
$\mathbf{u}=\hat{\mathbf{i}}+\hat{\mathbf{j}}+i\hat{\mathbf{k}} \nonumber$
has a magnitude:
$|\mathbf{u}|^2=1^2+1^2+(-i)(i)=3\rightarrow |\mathbf{u}|=\sqrt{3} \nonumber$
Therefore, to normalize this vector we divide all the components by its length:
$\hat{\mathbf{u}}=\frac{1}{\sqrt{3}}\hat{\mathbf{i}}+\frac{1}{\sqrt{3}}\hat{\mathbf{j}}+\frac{i}{\sqrt{3}}\hat{\mathbf{k}} \nonumber$
Notice that we use the “hat” to indicate that the vector has unit length.
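A minimal sketch of the same normalization with NumPy, using vdot so the complex component is handled with its conjugate:

```python
import numpy as np

u = np.array([1, 1, 1j])
norm = np.sqrt(np.vdot(u, u).real)         # sqrt(3)
u_hat = u / norm
print(u_hat, np.vdot(u_hat, u_hat).real)   # unit length
```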
Need help? The links below contain solved examples.
Operations with vectors: http://tinyurl.com/mw4qmz8
External links:
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(LibreTexts)/32%3A_Math_Chapters/32.03%3A_Vectors.txt
|