21.02: Eigenfunctions and Eigenvalues
As we have already seen, an eigenfunction of an operator $\hat{A}$ is a function $f$ such that the application of $\hat{A}$ on $f$ gives $f$ again, times a constant:
$\hat{A} f = k f, \label{22.2.1}$
where $k$ is a constant called the eigenvalue.
When a system is in an eigenstate of observable $A$ (i.e., when the wave function is an eigenfunction of the operator $\hat{A}$), the expectation value of $A$ is the corresponding eigenvalue. That is, if:
$\hat{A} \psi({\bf r}) = a \psi({\bf r}), \label{22.2.2}$
then:
\begin{aligned}\langle A \rangle &= \int \psi^{*}({\bf r}) \hat{A} \psi({\bf r}) d{\bf r} \\ &= \int \psi^{*}({\bf r}) a \psi({\bf r}) d{\bf r} \\ &= a \int \psi^{*}({\bf r}) \psi({\bf r}) d{\bf r} = a, \end{aligned} \label{22.2.3}
which implies that:
$\int \psi^{*}({\bf r}) \psi({\bf r}) d{\bf r} = 1. \label{22.2.4}$
This property of wave functions is called normalization, and in the one-electron TISEq it guarantees that the total probability of finding the electron anywhere in space is one.1
A unique property of quantum mechanics is that a wave function can be expressed not just as a simple eigenfunction, but also as a combination of several of them. We have in part already encountered such a property in the previous chapter, where the complex hydrogen orbitals were linearly combined to form the corresponding real ones. As a general example, let us consider a wave function written as a linear combination of two eigenstates of $\hat{A}$, with eigenvalues $a$ and $b$:
$\psi = c_a \psi_a + c_b \psi_b, \label{22.2.5}$
where $\hat{A} \psi_a = a \psi_a$ and $\hat{A} \psi_b = b \psi_b$. Then, since $\psi_a$ and $\psi_b$ are orthogonal and normalized (usually abbreviated as orthonormal), the expectation value of $A$ is:
\begin{aligned}\langle A \rangle &= \int \psi^{*} \hat{A} \psi d{\bf r} \\ &= \int \left[ c_a \psi_a + c_b \psi_b \right]^{*} \hat{A} \left[ c_a \psi_a + c_b \psi_b \right] d{\bf r}\\ &= \int \left[ c_a \psi_a + c_b \psi_b \right]^{*} \left[ a c_a \psi_a + b c_b \psi_b \right] d{\bf r}\\ &= a \vert c_a\vert^2 \int \psi_a^{*} \psi_a d{\bf r} + b c_a^{*} c_b \int \psi_a^{*} \psi_b d{\bf r} + a c_b^{*} c_a \int \psi_b^{*} \psi_a d{\bf r} + b \vert c_b\vert^2 \int \psi_b^{*} \psi_b d{\bf r}\\ &= a \vert c_a\vert^2 + b \vert c_b\vert^2. \end{aligned} \label{22.2.6}
This result shows that the average value of $A$ is a weighted average of the eigenvalues, with the weights being the squared magnitudes of the coefficients of the eigenfunctions in the overall wave function.2
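For readers who want to see this result numerically, the following minimal sketch (assuming the NumPy library is available; the eigenvalues and coefficients are made-up illustrative numbers, not values from the text) evaluates Equation \ref{22.2.6} directly from the expansion coefficients.

```python
# Expectation value of A for psi = c_a * psi_a + c_b * psi_b, with orthonormal
# eigenfunctions psi_a, psi_b of eigenvalues a and b (illustrative numbers only).
import numpy as np

a, b = 1.0, 3.0                      # hypothetical eigenvalues of A
c_a, c_b = 0.6, 0.8j                 # hypothetical coefficients, |c_a|^2 + |c_b|^2 = 1

weights = np.abs(c_a)**2, np.abs(c_b)**2
expectation_A = weights[0] * a + weights[1] * b
print(expectation_A)                 # 0.36 * 1.0 + 0.64 * 3.0 = 2.28
```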
1. Imposing the normalization condition is the best way to find the constant $A$ in the solution of the TISEq for the particle in a box, a topic that we delayed in chapter 20.
2. This section was adapted in part from Prof. C. David Sherrill’s A Brief Review of Elementary Quantum Chemistry Notes available here.
21.03: Common Operators in Quantum Mechanics
Some common operators occurring in quantum mechanics are collected in the table below:
| Observable Name | Symbol | Operator | Operation |
|---|---|---|---|
| Position | ${\bf r}$ | $\hat{\bf r}$ | Multiply by ${\bf r}$ |
| Momentum | ${\bf p}$ | $\hat{\bf p}$ | $-i \hbar \left(\hat{i}\dfrac{\partial}{\partial x} +\hat{j} \dfrac{\partial}{\partial y}+\hat{k} \dfrac{\partial}{\partial z} \right)$ |
| Kinetic energy | $K$ | $\hat{K}$ | $- \dfrac{\hbar^2}{2m} \left(\dfrac{\partial^2}{\partial x^2} +\dfrac{\partial^2}{\partial y^2} +\dfrac{\partial^2}{\partial z^2} \right)$ |
| Potential energy | $V({\bf r})$ | $\hat{V}({\bf r})$ | Multiply by $V({\bf r})$ |
| Total energy | $E$ | $\hat{H}$ | $-\dfrac{\hbar^2}{2m} \left(\dfrac{\partial^2}{\partial x^2} +\dfrac{\partial^2}{\partial y^2} +\dfrac{\partial^2}{\partial z^2} \right) +V({\bf r})$ |
| Angular momentum (squared) | $L^2$ | $\hat{L}^2$ | $\hat{L}_x^2+\hat{L}_y^2+\hat{L}_z^2$ |
| | $L_x$ | $\hat{L}_x$ | $-i\hbar\left(y\dfrac{\partial}{\partial z} - z \dfrac{\partial}{\partial y} \right)$ |
| | $L_y$ | $\hat{L}_y$ | $-i \hbar \left(z\dfrac{\partial}{\partial x} - x \dfrac{\partial}{\partial z} \right)$ |
| | $L_z$ | $\hat{L}_z$ | $-i \hbar \left(x\dfrac{\partial}{\partial y} - y \dfrac{\partial}{\partial x} \right)$ |
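As a quick symbolic check of how the differential operators in the table act, the following sketch (assuming the SymPy library; the plane wave is just an illustrative test function) applies the one-dimensional momentum and kinetic-energy operators to $e^{ikx}$ and recovers the expected eigenvalues.

```python
# Apply the momentum and kinetic-energy operators of the table (1D versions)
# to a plane wave exp(i k x) and read off the eigenvalues.
import sympy as sp

x = sp.symbols('x', real=True)
k, hbar, m = sp.symbols('k hbar m', positive=True)
psi = sp.exp(sp.I * k * x)                       # plane wave along x

p_psi = -sp.I * hbar * sp.diff(psi, x)           # momentum operator acting on psi
K_psi = -hbar**2 / (2 * m) * sp.diff(psi, x, 2)  # kinetic-energy operator acting on psi

print(sp.simplify(p_psi / psi))                  # hbar*k
print(sp.simplify(K_psi / psi))                  # hbar**2*k**2/(2*m)
```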
In the sections below we analyze in detail the two main operators, the one for the energy and the one for the angular momentum.
Hamiltonian Operator
The central quantity of interest in quantum mechanics is the total energy of the system, $E$. The operator corresponding to this observable is called the Hamiltonian:
$\hat{H} = - \dfrac{\hbar^2}{2} \sum_i \dfrac{1}{m_i} \nabla_i^2 + V, \label{22.3.1}$
where $i$ is an index over all the particles of the system. Using the formalism of operators in conjunction with Equation \ref{22.3.1}, we can write the TISEq simply as:
$\hat{H} \psi = E\psi. \label{22.3.2}$
Comparing Equation \ref{22.3.1} to the classical analog in Equation 18.3.2, we notice how the first term in the Hamiltonian operator represents the corresponding kinetic energy operator, $\hat{K}$, while the second term represents the potential energy operator, $\hat{V}$. For a one-electron system—such as the ones we studied in chapter 20—we can write:
$\hat{K}=- \dfrac{\hbar^2}{2m} \left(\dfrac{\partial^2}{\partial x^2} + \dfrac{\partial^2}{\partial y^2} + \dfrac{\partial^2}{\partial z^2} \right) = -\dfrac{\hbar^2}{2m} \nabla^2, \label{22.3.3}$
which is universal and applies to all systems. The potential energy operator $\hat{V}$ is what differentiates one system from another. Using Equation \ref{22.3.2}, we can then obtain the TISEq for each of the first three models discussed in chapter 20 simply by using:
\begin{aligned} \text{Free particle:}\qquad \hat{V} &= 0, \\ \text{Particle in a box:}\qquad \hat{V} &= 0 \; \text{inside the box, } \hat{V} = \infty \; \text{outside the box},\\ \text{Harmonic oscillator:}\qquad \hat{V} &= \dfrac{1}{2}kx^2. \end{aligned} \label{22.3.4}
While these three cases are simple to set up, the rigid rotor is more complicated, since its kinetic energy operator needs to be expressed in spherical polar coordinates, as we will show in the next section.
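The following rough numerical sketch (assuming NumPy; the grid, the units with $\hbar=m=1$, and the force constant are illustrative choices, not values from the text) shows one common way of solving the TISEq for such potentials: the kinetic- and potential-energy operators are represented as matrices on a grid and the Hamiltonian $\hat{H}=\hat{K}+\hat{V}$ is diagonalized.

```python
# Finite-difference solution of the 1D TISEq: build H = K + V on a grid and diagonalize.
import numpy as np

hbar, m = 1.0, 1.0                       # convenient units, for illustration only
n, L = 2000, 20.0
x = np.linspace(-L/2, L/2, n)
dx = x[1] - x[0]

# kinetic-energy operator -hbar^2/(2m) d^2/dx^2 via the three-point stencil
K = -hbar**2 / (2 * m * dx**2) * (np.diag(np.ones(n - 1), -1)
                                  - 2 * np.eye(n)
                                  + np.diag(np.ones(n - 1), 1))

V_harm = 0.5 * 1.0 * x**2                # harmonic oscillator with k = 1
E, _ = np.linalg.eigh(K + np.diag(V_harm))
print(E[:3])                             # ~0.5, 1.5, 2.5 = (n + 1/2) hbar*omega

# With V = 0 the same grid gives particle-in-a-box levels, since the grid
# edges act as infinite walls; the free particle has no such confinement.
```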
Angular Momentum Operator
To write the kinetic energy operator $\hat{K}$ for the rigid rotor, we need to express the Laplacian, $\nabla^2$, in spherical polar coordinates:
$\nabla^2=\nabla^2_r - \dfrac{\hat{L}^2}{\hbar^2 r^2}, \label{22.3.5}$
where $\nabla_r^2 = \dfrac{1}{r^2}\dfrac{\partial}{\partial r} \left( r^2\dfrac{\partial}{\partial r} \right)$ is the radial Laplacian, and $\hat{L}^2$ is the square of the total angular momentum operator, which is:
\begin{aligned} \hat{L}^2 &=\hat{L}\cdot\hat{L}=\left(\mathbf{i}\hat{L}_x+\mathbf{j}\hat{L}_y+\mathbf{k}\hat{L}_z\right)\cdot\left(\mathbf{i}\hat{L}_x+\mathbf{j}\hat{L}_y+\mathbf{k}\hat{L}_z \right) \\ &=\hat{L}_x^2+\hat{L}_y^2+\hat{L}_z^2, \end{aligned} \label{22.3.6}
with $\left\{\mathbf{i},\mathbf{j},\mathbf{k}\right\}$ the unit vectors in three-dimensional space. The components along each direction, $\left\{\hat{L}_x,\hat{L}_y,\hat{L}_z\right\}$, are expressed in Cartesian coordinates by the following formulas:
\begin{aligned} \hat{L}_x &= -i\hbar\left(y\dfrac{\partial}{\partial z} - z \dfrac{\partial}{\partial y} \right), \\ \hat{L}_y &= -i \hbar \left(z\dfrac{\partial}{\partial x} - x \dfrac{\partial}{\partial z} \right), \\ \hat{L}_z &= -i \hbar \left(x\dfrac{\partial}{\partial y} - y \dfrac{\partial}{\partial x} \right). \end{aligned} \label{22.3.7}
The eigenvalue equation corresponding to the total angular momentum is:
$\hat{L}^2 Y_{\ell}^{m_{\ell}}(\theta, \varphi) = \hbar^2 \ell(\ell+1) Y_{\ell}^{m_{\ell}}(\theta, \varphi), \label{22.3.8}$
where $\ell$ is the azimuthal quantum number and $Y_{\ell}^{m_{\ell}}(\theta, \varphi)$ are the spherical harmonics, both of which we already encountered in chapter 20. Recall once again that each energy level $E_{\ell}$ is $(2\ell+1)$-fold degenerate in $m_{\ell}$, since $m_{\ell}$ can have values $-\ell, -\ell+1, \ldots, \ell-1, \ell$. This means that there are $(2\ell+1)$ states with the same energy $E_{\ell}$, each characterized by the magnetic quantum number $m_{\ell}$. This quantum number can be determined using the following eigenvalue equation:
$\hat{L}_z Y_{\ell}^{m_{\ell}}(\theta, \varphi) = \hbar m_{\ell} Y_{\ell}^{m_{\ell}}(\theta, \varphi). \label{22.3.9}$
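Both eigenvalue equations can be verified explicitly for a particular case. The following sketch (assuming SymPy, whose `Ynm` function provides the spherical harmonics) does so for $\ell=2$, $m_{\ell}=1$, using the standard angular part of the Laplacian (cf. Equation \ref{22.3.5}) as the expression for $\hat{L}^2$.

```python
# Check L_z Y = hbar*m*Y and L^2 Y = hbar^2 l(l+1) Y for l = 2, m = 1.
import sympy as sp

theta, phi, hbar = sp.symbols('theta phi hbar', positive=True)
l, m = 2, 1
Y = sp.expand_func(sp.Ynm(l, m, theta, phi))      # spherical harmonic Y_2^1

Lz_Y = -sp.I * hbar * sp.diff(Y, phi)             # L_z = -i hbar d/dphi
print(sp.simplify(Lz_Y / Y))                      # hbar (= m*hbar with m = 1)

# L^2 = -hbar^2 [ (1/sin t) d/dt (sin t d/dt) + (1/sin^2 t) d^2/dphi^2 ]
L2_Y = -hbar**2 * (sp.diff(sp.sin(theta) * sp.diff(Y, theta), theta) / sp.sin(theta)
                   + sp.diff(Y, phi, 2) / sp.sin(theta)**2)
print(sp.simplify(L2_Y / Y))                      # should reduce to 6*hbar**2 = l(l+1)*hbar**2
```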
The interpretation of these results is rather complicated, since the angular momenta are quantum operators and they cannot be drawn as vectors like in classical mechanics. Nevertheless, it is common to depict them heuristically as in figure $1$,1 where a set of states with quantum numbers $\ell =2$, and $m_{\ell}=-2,-1,0,1,2$ are reported. Since $|L|={\sqrt {L^{2}}}=\hbar {\sqrt {6}}$, the vectors are all shown with length $\hbar \sqrt{6}$. The rings represent the fact that $L_{z}$ is known with certainty, but $L_{x}$ and $L_{y}$ are unknown; therefore every classical vector with the appropriate length and $z$-component is drawn, forming a cone. The expected value of the angular momentum for a given ensemble of systems in the quantum state characterized by $\ell$ and $m_{\ell}$, could be somewhere on this cone but it cannot be defined for a single system.
1. This diagram is taken from Wikipedia by user Maschen, and is in the public domain.
Spin is a special property of particles that has no classical analogue. Spin is an intrinsic form of angular momentum carried by elementary particles, such as the electron.
• 22.1: Stern-Gerlach Experiment
In 1920, Otto Stern and Walter Gerlach designed an experiment that unintentionally led to the discovery that electrons possess their own intrinsic angular momentum (spin), even as they move along their orbitals in an atom. The experiment was done by putting a silver foil in an oven to vaporize its atoms.
• 22.2: Sequential Stern-Gerlach Experiments
An interesting result can be obtained if we link multiple Stern–Gerlach apparatuses into one experiment and we perform the measurement along two orthogonal directions in space.
• 22.3: Spin Operators
The mathematics of quantum mechanics tell us that Sz and Sx do not commute. When two operators do not commute, the two measurable quantities that are associated with them cannot be known at the same time.
22: Spin
22.01: Stern-Gerlach Experiment
In 1920, Otto Stern and Walter Gerlach designed an experiment that unintentionally led to the discovery that electrons possess their own intrinsic angular momentum (spin), even as they move along their orbitals in an atom. The experiment was done by putting a silver foil in an oven to vaporize its atoms. The silver atoms were collected into a beam that passed through an inhomogeneous magnetic field. The result was that the magnetic field split the beam into two (and only two) separate ones. The Stern–Gerlach experiment demonstrated that the spatial orientation of angular momentum is quantized into two components (up and down). Thus an atomic-scale system was shown to have intrinsically quantum properties. The experiment is normally conducted using electrically neutral particles such as silver atoms. This avoids the large deflection in the path of a charged particle moving through a magnetic field and allows spin-dependent effects to dominate.
If the particle is treated as a classical spinning magnetic dipole, it will precess in a magnetic field because of the torque that the magnetic field exerts on the dipole. If it moves through a homogeneous magnetic field, the forces exerted on opposite ends of the dipole cancel each other out and the trajectory of the particle is unaffected. However, if the magnetic field is inhomogeneous then the force on one end of the dipole will be slightly greater than the opposing force on the other end, so that there is a net force which deflects the particle’s trajectory. If the particles were classical spinning objects, one would expect the distribution of their spin angular momentum vectors to be random and continuous. Each particle would be deflected by an amount proportional to its magnetic moment, producing some density distribution on the detector screen. Instead, the particles passing through the Stern–Gerlach apparatus are equally distributed among two possible values, with half of them ending up at an upper spot (“spin up”), and the other half at the lower spot (“spin down”). Since the particles are deflected by a magnetic field, spin is a magnetic property that is associated to some intrinsic form of angular momentum. As we saw in chapter 6, the quantization of the angular momentum gives energy levels that are $(2\ell+1)$-fold degenerate. Since along the direction of the magnet we observe only two possible eigenvalues for the spin, we conclude the following value for $s$:
$2s+1=2 \quad\Rightarrow\quad s=\dfrac{1}{2}. \label{23.1.1}$
The Stern-Gerlach experiment proves that electrons are spin-$\dfrac{1}{2}$ particles. These have only two possible spin angular momentum values measured along any axis, $+\dfrac {\hbar }{2}$ or $-\dfrac {\hbar }{2}$, a purely quantum mechanical phenomenon. Because its value is always the same, it is regarded as an intrinsic property of electrons, and is sometimes known as “intrinsic angular momentum” (to distinguish it from orbital angular momentum, which can vary and depends on the presence of other particles).
The act of observing (measuring) the spin angular momentum along the $z$ direction corresponds to the operator $\hat{S}_z$, which projects the spin onto the $z$ axis. The eigenvalue equation of this operator is:
$\hat{S}_z \phi = \hbar m_s \phi, \label{23.1.2}$
where $m_s=\left\{-s,+s\right\}=\left\{-\dfrac{1}{2},+\dfrac{1}{2}\right\}$ is the spin quantum number along the $z$ component. The eigenvalues for the total spin operator $\hat{S}^2$—similarly to the angular momentum operator $\hat{L}^2$ seen in Equation 22.3.6—are:
$\hat{S}^2 \phi = \hbar^2 s(s+1) \phi, \label{23.1.3}$
The initial state of the particles in the Stern-Gerlach experiment is given by the following wave function:
$\phi = c_1\, \phi_{\uparrow} + c_2 \,\phi_{\downarrow}, \label{23.1.4}$
where $\phi_{\uparrow}$ and $\phi_{\downarrow}$ are the eigenstates of $\hat{S}_z$ with eigenvalues $+\dfrac{\hbar}{2}$ and $-\dfrac{\hbar}{2}$, respectively, and the coefficients $c_1$ and $c_2$ are complex numbers. In this initial state, the spin can point in any direction. The expectation value of the operator $\hat{S}_z$ (the quantity that the Stern-Gerlach experiment measures) can be obtained using Equation 22.2.6:
\begin{aligned} \langle S_z \rangle &= \int \phi^{*} \hat{S}_z \phi \, d\mathbf{s} \\ &= +\dfrac{\hbar}{2} \vert c_1\vert^2 -\dfrac{\hbar}{2} \vert c_2\vert^2, \end{aligned} \label{23.1.5}
where the integration is performed over the spin coordinate $\mathbf{s}$, which can take only two values. Applying the normalization condition, Equation 22.2.4, we obtain:
$|c_{1}|^{2}+|c_{2}|^{2}=1 \quad\longrightarrow\quad |c_{1}|^{2}=|c_{2}|^{2}=\dfrac{1}{2}. \label{23.1.6}$
This equation is not sufficient to determine the values of the coefficients since they are complex numbers. Equation \ref{23.1.6}, however, tells us that the squared magnitudes of the coefficients can be interpreted as probabilities of outcome from the experiment. This is true because their values are obtained from the normalization condition, and the normalization condition guarantees that the system is observed with probability equal to one. Summarizing, since we started with random initial directions, each of the two states, $\phi_{\uparrow}$ and $\phi_{\downarrow}$, will be observed with equal probability of $\dfrac{1}{2}$.
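A compact way to check these results numerically is to represent $\phi_{\uparrow}$ and $\phi_{\downarrow}$ as two-component vectors and $\hat{S}_z$ as a $2\times 2$ matrix. This matrix representation is a standard convention that is not introduced explicitly until the next sections, so the following sketch (assuming NumPy) should be read as an illustration only.

```python
# <S_z> and outcome probabilities for a spin-1/2 state phi = c1*up + c2*down.
import numpy as np

hbar = 1.054571817e-34                   # J*s
up = np.array([1.0, 0.0])                # phi_up
down = np.array([0.0, 1.0])              # phi_down
Sz = hbar / 2 * np.array([[1.0, 0.0], [0.0, -1.0]])

c1, c2 = 1 / np.sqrt(2), 1j / np.sqrt(2) # one choice with |c1|^2 + |c2|^2 = 1
phi = c1 * up + c2 * down

exp_Sz = np.vdot(phi, Sz @ phi).real     # hbar/2 (|c1|^2 - |c2|^2) = 0 here
p_up, p_down = abs(c1)**2, abs(c2)**2    # probabilities of "up" and "down" (1/2 each)
print(exp_Sz, p_up, p_down)
```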
22.02: Sequential Stern-Gerlach Experiments
An interesting result can be obtained if we link multiple Stern–Gerlach apparatuses into one experiment and we perform the measurement along two orthogonal directions in space. As we showed in the previous section, all particles leaving the first Stern-Gerlach apparatus are in an eigenstate of the $\hat{S}_z$ operator (i.e., their spin is either “up” or “down” with respect to the $z$-direction). We can then take either one of the two resulting beams (for simplicity let’s take the “spin up” output), and perform another spin measurement on it. If the second measurement is also aligned along the $z$-direction, then only one outcome will be measured, since all particles are already in the “spin up” eigenstate of $\hat{S}_z$. In other words, measuring a particle that is already in an eigenstate of the corresponding operator leaves its state unchanged.
If, however, we perform the spin measurement along a direction perpendicular to the original $z$-axis (i.e., the $x$-axis), then the output will be equally distributed between “spin up” and “spin down” along the $x$-direction, which, to avoid confusion, we can call “spin left” and “spin right”. Thus, even though we knew the state of the particles beforehand, in this case the measurement results in a random spin flip along either of the measurement directions. Mathematically, this property is expressed by the nonvanishing of the commutator of the spin operators:
$\left[\hat{S}_z,\hat{S}_x \right] \neq 0. \label{23.2.1}$
We can finally repeat the measurement a third time, with the magnet aligned along the original $z$-direction. According to classical physics, after the second apparatus, we would expect to have one beam with characteristic “spin up” and “spin left”, and another with characteristic “spin up” and “spin right”. The outcome of the third measurement along the original $z$-axis should then be a single output with characteristic “spin up”, regardless of which beam the magnet is applied to (since the “spin down” component should have been “filtered out” by the first apparatus, while the second apparatus merely separated the “spin left” and “spin right” components). This is not what is observed. The output of the third measurement is—once again—two beams in the $z$ direction, one with “spin up” character and the other with “spin down”.
This experiment shows that spin is not a classical property. The Stern-Gerlach apparatus does not behave as a simple filter, selecting beams with one specific pre-determined characteristic. The second measurement along the $x$ axis destroys the previous determination of the angular momentum along the $z$ direction. This means that this property cannot be measured along two perpendicular directions at the same time.
22.03: Spin Operators
The mathematics of quantum mechanics tell us that $\hat{S}_z$ and $\hat{S}_x$ do not commute. When two operators do not commute, the two measurable quantities that are associated with them cannot be known at the same time.
In 3-dimensional space there are three directions that are orthogonal to each other, $\left\{x,y,z\right\}$. Thus, we can define a third spin projection operator along the $y$ direction, $\hat{S}_y$, corresponding to a new set of Stern-Gerlach experiments where the second magnet is oriented along a direction that is orthogonal to the two that we considered in the previous section. The total spin operator, $\hat{S}^2$, can then be constructed similarly to the total angular momentum operator of Equation 22.3.6, as:
\begin{aligned} \hat{S}^2 &=\hat{S}\cdot\hat{S}=\left(\mathbf{i}\hat{S}_x+\mathbf{j}\hat{S}_y+\mathbf{k}\hat{S}_z\right)\cdot\left(\mathbf{i}\hat{S}_x+\mathbf{j}\hat{S}_y+\mathbf{k}\hat{S}_z \right) \\ &=\hat{S}_x^2+\hat{S}_y^2+\hat{S}_z^2, \end{aligned}\label{23.3.1}
with $\left\{\mathbf{i},\mathbf{j},\mathbf{k}\right\}$ the unitary vectors in three-dimensional space.
Wolfgang Pauli explicitly derived the relationships between all three spin projection operators. Assuming the magnetic field along the $z$ axis, Pauli’s relations can be written using simple equations involving the two possible eigenstates $\phi_{\uparrow}$ and $\phi_{\downarrow}$:
\begin{aligned} \hat{S}_x \phi_{\uparrow} = \dfrac{\hbar}{2} \phi_{\downarrow} \qquad \hat{S}_y \phi_{\uparrow} &= \dfrac{\hbar}{2} i \phi_{\downarrow} \qquad \hat{S}_z \phi_{\uparrow} = \dfrac{\hbar}{2} \phi_{\uparrow} \\ \hat{S}_x \phi_{\downarrow} = \dfrac{\hbar}{2} \phi_{\uparrow} \qquad \hat{S}_y \phi_{\downarrow} &= - \dfrac{\hbar}{2} i \phi_{\uparrow} \qquad \hat{S}_z \phi_{\downarrow} = -\dfrac{\hbar}{2} \phi_{\downarrow}, \end{aligned}\label{23.3.2}
where $i$ is the imaginary unit ($i^2=-1$). In other words, for $\hat{S}_z$ we have eigenvalue equations, while the remaining components have the effect of permuting state $\phi_{\uparrow}$ with state $\phi_{\downarrow}$ after multiplication by suitable constants. We can use these equations, together with the definition of the commutator, $\left[\hat{A},\hat{B}\right]=\hat{A}\hat{B}-\hat{B}\hat{A}$, to calculate the commutator for each pair of spin projection operators:
\begin{aligned} \left[\hat{S}_x, \hat{S}_y\right] &= i\hbar\hat{S}_z \\ \left[\hat{S}_y, \hat{S}_z\right] &= i\hbar\hat{S}_x \\ \left[\hat{S}_z, \hat{S}_x\right] &= i\hbar\hat{S}_y, \end{aligned}\label{23.3.3}
which shows that the three projection operators do not commute with each other.
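These relations can also be verified numerically. Using the standard $2\times 2$ matrix representation of the spin operators (the Pauli matrices multiplied by $\hbar/2$, a representation consistent with Equations \ref{23.3.2} but not derived in this section), a short sketch (assuming NumPy) confirms all three commutators:

```python
# Verify [S_x, S_y] = i*hbar*S_z and cyclic permutations with 2x2 matrices.
import numpy as np

hbar = 1.0                                          # work in units of hbar
Sx = hbar / 2 * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = hbar / 2 * np.array([[0, -1j], [1j, 0]])
Sz = hbar / 2 * np.array([[1, 0], [0, -1]], dtype=complex)

def comm(A, B):
    return A @ B - B @ A

print(np.allclose(comm(Sx, Sy), 1j * hbar * Sz))    # True
print(np.allclose(comm(Sy, Sz), 1j * hbar * Sx))    # True
print(np.allclose(comm(Sz, Sx), 1j * hbar * Sy))    # True
```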
Example $1$
Proof of Commutator Between Spin Projection Operators.
Solution
The equations in \ref{23.3.3} can be proved by writing the full eigenvalue equation and solving it using the definition of the commutator in conjunction with Pauli’s relations, Equations \ref{23.3.2}. For example, for the first pair:
\begin{aligned} \left[\hat{S}_x, \hat{S}_y\right] \phi_{\uparrow} &= \hat{S}_x\hat{S}_y\phi_{\uparrow}-\hat{S}_y\hat{S}_x\phi_{\uparrow} \\ &= \hat{S}_x \left(\dfrac{\hbar}{2}i \phi_{\downarrow} \right)-\hat{S}_y \left(\dfrac{\hbar}{2} \phi_{\downarrow} \right) \\ &= \dfrac{\hbar}{2}i \left(\dfrac{\hbar}{2} \phi_{\uparrow} \right)- \dfrac{\hbar}{2} \left(-\dfrac{\hbar}{2}i \phi_{\uparrow} \right) \\ &= \left(\dfrac{\hbar^2}{4}+\dfrac{\hbar^2}{4}\right)i\phi_{\uparrow} \\ &= i\hbar\left(\dfrac{\hbar}{2}\phi_{\uparrow}\right) \\ &= i\hbar\hat{S}_z \phi_{\uparrow} \end{aligned}\label{23.3.4}
In order to understand quantum mechanics at a deeper level, scientists have derived a series of axioms that result in what are called the postulates of quantum mechanics. These are, in fact, assumptions that we need to make to understand how the measured reality relates to the mathematics of quantum mechanics. It is important to notice that the postulates are necessary for the interpretation of the theory, but not for the mathematics behind it. Regardless of whether we interpret it or not, the mathematics is complete and consistent. In fact, as we will see in the next chapter, several controversies regarding the interpretation of the mathematics are still open, and different philosophies have been developed to rationalize the results. Recall also that there are different ways of writing the equations of quantum mechanics, all equivalent to each other (e.g., Schrödinger’s differential formulation and Heisenberg’s algebraic formulation that we saw in chapter 3). For these reasons, there is no agreement on the number of postulates that are necessary to interpret the theory, and some philosophies and/or formulations might require more postulates than others. In this chapter, we will discuss the six postulates as they are usually presented in chemistry and introductory physics textbooks and as they relate to a basic statistical interpretation of quantum mechanics. Regardless of the philosophical considerations on the meaning and number of the postulates, as well as their physical origin, these statements will make the interpretation of the theory a little easier, as we will see in the next chapter.
23: Postulates of Quantum Mechanics
23.01: Postulate 1- The Wave Function Postulate
The state of a quantum mechanical system is completely specified by a function $\Psi({\bf r}, t)$ that depends on the coordinates of the particle(s) and on time. This function, called the wave function or state function, has the important property that $\Psi^{*}({\bf r}, t)\Psi({\bf r}, t) d\tau$ is the probability that the particle lies in the volume element $d\tau$ located at ${\bf r}$ at time $t$.
The wave function must satisfy certain mathematical conditions because of this probabilistic interpretation. For the case of a single particle, the probability of finding it somewhere is 1, so that we have the normalization condition
$\int_{-\infty}^{\infty} \Psi^{*}({\bf r}, t) \Psi({\bf r}, t) d\tau = 1 \label{24.1.1}$
It is customary to also normalize many-particle wave functions to 1. As we already saw for the particle in a box in chapter 20, a consequence of the first postulate is that the wave function must also be single-valued, continuous, and finite, so that derivatives can be defined and calculated at each point in space. This consequence allows operators (which typically involve differentiation) to be applied without mathematical issues.
23.02: Postulate 2- Experimental Observables
To every observable in classical mechanics there corresponds a linear, Hermitian operator in quantum mechanics. We have in part already discussed this postulate in chapter 22, although we did not call it by this name. This postulate is necessary if we require the expectation value of an operator $\hat{A}$ to be real, as it should be.
23.03: Postulate 3- Individual Measurements
In any measurement of the observable associated with operator $\hat{A}$, the only values that will ever be observed are the eigenvalues $a$ that satisfy the eigenvalue equation:
$\hat{A} \Psi = a \Psi. \label{24.3.1}$
This postulate captures the central point of quantum mechanics: the values of dynamical variables can be quantized (although it is still possible to have a continuum of eigenvalues in the case of unbound states). If the system is in an eigenstate of $\hat{A}$ with eigenvalue $a$, then any measurement of the quantity $A$ will yield $a$. Although measurements must always yield an eigenvalue, the state does not have to be an eigenstate of $\hat{A}$ initially.
An arbitrary state can be expanded in the complete set of eigenvectors of $\hat{A}$ $\left(\hat{A}\Psi_i = a_i \Psi_i\right)$ as:
$\Psi = \sum_i^{n} c_i \Psi_i, \label{24.3.2}$
where $n$ may go to infinity. In this case, we only know that the measurement of $A$ will yield one of the values $a_i$, but we don’t know which one. However, we do know the probability that eigenvalue $a_i$ will occur (it is the absolute value squared of the coefficient, $\vert c_i\vert^2$, as we obtained already in chapter 22), leading to the fourth postulate below.
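The same bookkeeping can be illustrated with a small numerical sketch (assuming NumPy; the Hermitian matrix and the state below are made up for illustration): expanding a state in the eigenvectors of the matrix gives coefficients whose squared magnitudes are the probabilities of the individual outcomes.

```python
# Expand a state in the eigenvectors of a Hermitian "operator" and read off
# the outcome probabilities |c_i|^2 and the expectation value.
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])   # made-up Hermitian matrix standing in for A-hat
a, V = np.linalg.eigh(A)                 # eigenvalues a_i and eigenvectors (columns of V)

Psi = np.array([1.0, 0.0])               # a normalized state that is not an eigenstate
c = V.conj().T @ Psi                     # expansion coefficients c_i
probs = np.abs(c)**2                     # probabilities of measuring each a_i

print(a, probs)                          # eigenvalues 1 and 3, probability 1/2 each
print(np.sum(probs * a))                 # expectation value <A> = 2.0
```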
23.04: Postulate 4- Expectation Values and Collapse of the Wavefunction
If a system is in a state described by a normalized wave function $\Psi$, then the average value of the observable corresponding to $\hat{A}$ is given by:
$\langle A \rangle = \int_{-\infty}^{\infty} \Psi^{*} \hat{A} \Psi d\tau. \label{24.4.1}$
An important consequence of the fourth postulate is that, after a measurement of $\Psi$ yields some eigenvalue $a_i$, the wave function immediately “collapses” into the corresponding eigenstate $\Psi_i$. In other words, measurement affects the state of the system. This fact is used in many experimental tests of quantum mechanics, such as the Stern-Gerlach experiment. Think again about the sequential experiment that we discussed in chapter 23. The act of measuring the spin along one coordinate is not simply a “filtration” of some pre-existing feature of the wave function, but rather an act that changes the nature of the wave function itself, affecting the outcome of future experiments. To this act corresponds the collapse of the wave function, a process that remains unexplained to date. Notice how the controversy is not in the mathematics of the experiment, which we already discussed in the previous chapter without issues. The issues rather arise because we don’t know how to define the measurement act in itself (other than the fact that it is some form of quantum mechanical procedure with clear and well-defined macroscopic outcomes). This is the reason why the collapse of the wave function is also sometimes called the measurement problem of quantum mechanics, and it is still a source of research and debate among modern scientists.
23.05: Postulate 5- Time Evolution
The wave function of a system evolves in time according to the time-dependent Schrödinger equation:
$\hat{H} \Psi({\bf r}, t) = i \hbar \dfrac{\partial \Psi}{\partial t}. \label{24.5.1}$
The central equation of quantum mechanics must be accepted as a postulate.
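For a finite-dimensional model system the formal solution of this equation, $\Psi(t)=e^{-i\hat{H}t/\hbar}\,\Psi(0)$, can be evaluated directly. The sketch below (assuming NumPy, with a made-up $2\times 2$ Hamiltonian in units where $\hbar=1$) illustrates that this evolution preserves the norm of the wave function.

```python
# Time evolution Psi(t) = V exp(-i E t / hbar) V^dagger Psi(0) for a 2-level system.
import numpy as np

hbar = 1.0
H = np.array([[0.0, 0.5], [0.5, 1.0]])       # illustrative Hermitian Hamiltonian
E, V = np.linalg.eigh(H)

def evolve(psi0, t):
    return V @ (np.exp(-1j * E * t / hbar) * (V.conj().T @ psi0))

psi0 = np.array([1.0, 0.0], dtype=complex)
psi_t = evolve(psi0, t=2.0)
print(np.vdot(psi_t, psi_t).real)            # stays equal to 1: the evolution is unitary
```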
23.06: Postulate 6- Pauli Exclusion Principle
The total wave function of a system with $N$ spin-$\dfrac{1}{2}$ particles (also called fermions) must be antisymmetric with respect to the interchange of all coordinates of one particle with those of another. For particles with integer spin (also called bosons), the wave function must be symmetric:
\begin{aligned} \Psi\left({\bf r}_1,{\bf r}_2,\ldots, {\bf r}_N\right) &= - \Psi\left({\bf r}_2,{\bf r}_1,\ldots, {\bf r}_N\right) \quad \text{fermions}, \\ \Psi\left({\bf r}_1,{\bf r}_2,\ldots, {\bf r}_N\right) &= + \Psi\left({\bf r}_2,{\bf r}_1,\ldots, {\bf r}_N\right) \quad \text{bosons}. \end{aligned} \label{24.6.1}
Electronic spin must be included in this set of coordinates. As we will see in chapter 26, the mathematical treatment of the antisymmetry postulate gives rise to the Pauli exclusion principle, which states that two or more identical fermions cannot occupy the same quantum state simultaneously (while bosons are perfectly capable of doing so).
In this chapter, we will delve deeper into the strangeness of quantum mechanics. In particular, we will explore quantum phenomena that don’t have a classical counterpart, starting from perhaps the most simple but also one of the most revealing: the double-slit experiment.
• 24.1: The Double-slit Experiment
The double-slit experiment is considered by many the seminal experiment in quantum mechanics. The reason why we see it only at this advanced point is that its interpretation is not as straightforward as it might seem from a superficial analysis.
• 24.2: Heisenberg's Uncertainty Principle
Let’s now revisit the simple case of a free particle.
• 24.3: Tunneling
Tunneling is a phenomenon where a particle may cross a barrier even if it does not have sufficient kinetic energy to overcome the potential of the barrier itself. In this situation, the particle is said to “tunnel through” the barrier following a purely quantum mechanical phenomenon (figure 25.3.1).
24: Quantum Weirdness
24.01: The Double-slit Experiment
The double-slit experiment is considered by many the seminal experiment in quantum mechanics. The reason why we see it only at this advanced point is that its interpretation is not as straightforward as it might seem from a superficial analysis. The famous physicist Richard Feynman was so fond of this experiment that he used to say that all of quantum mechanics can be understood from carefully thinking through its implications.
The premises of the experiment are very simple: cut two slits in a solid material (such as a sheet of metal), send light or electrons through them, and observe what happens on a screen positioned at some distance on the other side. The results of this experiment, though, are far from straightforward.
Let’s first consider the single-slit case. If light consisted of classical particles, and these particles were sent in a straight line through a single-slit and allowed to strike a screen on the other side, we would expect to see a pattern corresponding to the size and shape of the slit. However, when this “single-slit experiment” is actually performed, the pattern on the screen is a diffraction pattern in which the light is spread out. The smaller the slit, the greater the angle of spread. This behavior is typical of waves, where diffraction explains the pattern as being the result of the interference of the waves with the slit.
If one illuminates two parallel slits, the light from the two slits again interferes. Here the interference is a more pronounced pattern with a series of alternating light and dark bands. The width of the bands is a property of the frequency of the illuminating light. The pattern observed on the screen is the result of this interference, as shown in figure \(1\).1
The interference pattern resulting from the double-slit experiment is observed not only with light, but also with a beam of electrons and other small particles.
The individual particles experiment
The first twist in the plot comes if we perform the experiment by sending individual particles (e.g., individual photons or individual electrons). Sending particles through a double-slit apparatus one at a time results in single particles appearing on the screen, as expected. Remarkably, however, an interference pattern emerges when these particles are allowed to build up one by one (figure \(2\))2. The resulting pattern on the screen is the same as if each individual particle had passed through both slits.
This variation of the double-slit experiment demonstrates the wave–particle duality: the particle is measured as a single pulse at a single position, while the wave describes the probability of absorbing the particle at a specific place on the screen.
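A minimal numerical sketch of this last statement (assuming NumPy; the wavelength, slit separation, and screen distance are arbitrary illustrative numbers) compares the intensity obtained by adding the two waves before squaring with the one obtained by adding the two squared intensities, as a classical particle picture would suggest.

```python
# Two-slit intensity: |psi1 + psi2|^2 (interference) vs |psi1|^2 + |psi2|^2 (no interference).
import numpy as np

wavelength = 500e-9                       # illustrative: green light
k = 2 * np.pi / wavelength
d, L = 20e-6, 1.0                         # slit separation and slit-to-screen distance

y = np.linspace(-0.05, 0.05, 7)           # a few positions on the screen
r1 = np.sqrt(L**2 + (y - d/2)**2)         # path length from slit 1
r2 = np.sqrt(L**2 + (y + d/2)**2)         # path length from slit 2

psi1, psi2 = np.exp(1j * k * r1), np.exp(1j * k * r2)
print(np.abs(psi1 + psi2)**2)             # oscillates between ~0 and ~4
print(np.abs(psi1)**2 + np.abs(psi2)**2)  # flat, always 2
```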
“Which way” experiment
A second twist happens if we place particle detectors at the slits with the intent of showing through which slit a particle goes. The interference pattern in this case will disappear.
This experiment illustrates that photons (and electrons) can behave as either particles or waves, but cannot be observed as both at the same time. The simplest interpretation of this experiment is that the wave function of the photon collapses into a deterministic position due to the interaction with the detector on the slit, and the interference pattern is therefore lost. This result also proves that in order to measure (detect) a photon, we must interact with it, an act that changes its wave function.
The interpretation of the results of this experiment is not simple. As for other situations in quantum mechanics, the problem arises not because we cannot describe the experiment in mathematical terms, but because the math that we need to describe it cannot be related to the macroscopic classical world we live in. According to the math, in fact, particles in the experiment are described exclusively in probabilistic terms (given by the square of the wave function). The macroscopic world, however, is not probabilistic, and outcomes of experiments can be univocally measured. Several different ways of resolving this controversy have been proposed, including for example the possibility that quantum mechanics is incomplete (the emergence of probability is due to the ignorance of some more fundamental deterministic feature of nature), or assuming that every time a measurement is done on a quantum system, the universe splits, and every possible measurable outcome is observed in different branches of our universe (we only happen to live in one such branch, so we observe only one non-probabilistic result).3 The interpretation of quantum mechanics is still an unsolved problem in modern physics (luckily, it does not prevent us from using quantum mechanics in chemistry).
1. This diagram is taken from Wikipedia by user Jordgette, and distributed under CC BY-SA 3.0 license.︎
2. This diagram is taken from Wikipedia by user Alexandre Gondran, and distributed under CC BY-SA 4.0 license︎
3. The interested student can read more about different interpretations HERE.
24.02: Heisenberg's Uncertainty Principle
Let’s now revisit the simple case of a free particle. As we saw in chapter 20, the wave function that solved the TISEq:
$\psi(x) = A \exp(\pm ikx), \label{25.2.1}$
is the equation of a plane wave along the $x$ direction. This result is in agreement with the de Broglie hypothesis, which says that every object in the universe is a wave. If this wave function describes a particle with mass (such as an electron), freely moving along one spatial direction $x$, it would be reasonable to ask the question: where is the particle located? Analyzing Equation \ref{25.2.1}, however, it is not possible to answer this question since $\psi(x)$ is delocalized in space from $x=-\infty$ to $x=+\infty$.1 In other words, the particle position is extremely uncertain because it could be essentially anywhere along the wave.
Thus for a free particle, the particle side of the wave-particle duality seems completely lost. We can, however, bring it back into the picture by writing the wave function as a sum of many plane waves, called a wave packet:
$\psi (x)\propto \sum _{n}A_{n}\exp\left(\dfrac{ip_n x}{\hbar} \right), \label{25.2.2}$
where $A_n$ represents the relative contribution of the mode $p_n$ to the overall total. We are allowed to write the wave function this way because each individual plane wave is a solution of the TISEq, and as we already saw in chapter 22 and several other places, a sum of individual solutions is also a solution. An interesting consequence of writing the wave function as a wave packet is that when we sum different waves, they interfere with each other, and they might localize in some region of space. Thus for a wave function written as in Equation \ref{25.2.2}, the wave packet can become more localized. We may also take this procedure a step further to the continuum limit, where the wave function goes from a sum to an integral over all possible modes:
$\psi (x)=\dfrac {1}{\sqrt{2\pi\hbar}}\int_{-\infty }^{\infty }\varphi (p)\cdot \exp \left(\dfrac{ip x}{\hbar} \right)\,dp, \label{25.2.3}$
where $\varphi(p)$ represents the amplitude of these modes and is called the wave function in momentum space. In mathematical terms, we say that $\varphi (p)$ is the Fourier transform of $\psi (x)$ and that $x$ and $p$ are conjugate variables. Adding together all of these plane waves comes at a cost; namely, the momentum has become less precise since it becomes a mixture of waves of many different momenta.
One way to quantify the precision of the position and momentum is the standard deviation, $\sigma$. Since $|\psi (x)|^{2}$ is a probability density function for position, we calculate its standard deviation. The precision of the position is improved—i.e., reduced $\sigma_x$—by using many plane waves, thereby weakening the precision of the momentum—i.e., increased $\sigma_p$. Another way of stating this is that $\sigma_x$ and $\sigma_p$ have an inverse relationship (once we know one with absolute precision, the other becomes completely unknown). This fact was discovered by Werner Heisenberg and is now called Heisenberg’s uncertainty principle. The mathematical treatment of this procedure results in the simple formula:
$\sigma_{x}\sigma_{p} \geq \dfrac{\hbar }{2}. \label{25.2.4}$
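The following sketch (assuming NumPy; the grid and the packet width are arbitrary, and units with $\hbar=1$ are used) builds a Gaussian wave packet, obtains $\varphi(p)$ with a fast Fourier transform, and checks the product $\sigma_x\sigma_p$; a Gaussian packet saturates the bound at exactly $\hbar/2$.

```python
# Check sigma_x * sigma_p >= hbar/2 for a Gaussian wave packet.
import numpy as np

hbar = 1.0
n, span = 4096, 400.0
x = np.linspace(-span/2, span/2, n, endpoint=False)
dx = x[1] - x[0]

sigma0 = 2.0
psi = np.exp(-x**2 / (4 * sigma0**2))                  # Gaussian wave packet
psi = psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)       # normalize

p = 2 * np.pi * hbar * np.fft.fftfreq(n, d=dx)         # conjugate momentum grid
dp = 2 * np.pi * hbar / (n * dx)
phi = np.fft.fft(psi) * dx / np.sqrt(2 * np.pi * hbar) # wave function in momentum space

def std(q, density, dq):
    mean = np.sum(q * density) * dq
    return np.sqrt(np.sum((q - mean)**2 * density) * dq)

sigma_x = std(x, np.abs(psi)**2, dx)
sigma_p = std(p, np.abs(phi)**2, dp)
print(sigma_x * sigma_p, hbar / 2)                     # ~0.5 vs 0.5
```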
The uncertainty principle can be extended to any pair of conjugate variables, including, for example, energy and time, angular momentum components along perpendicular directions, spin components along perpendicular directions, etc. It is also easy to show that conjugate variables in quantum mechanics correspond to non-commuting operators.2
1. The time-dependent picture does not help us either, but since it is a little more complicated to work with the TDSEq, we are not showing it here.︎
2. Therefore, a simpler way of finding if two variables are subject to the uncertainty principle is to check if their corresponding operators commute.
24.03: Tunneling
Tunneling is a phenomenon where a particle may cross a barrier even if it does not have sufficient kinetic energy to overcome the potential of the barrier itself. In this situation, the particle is said to “tunnel through” the barrier following a purely quantum mechanical phenomenon (figure $1$).1
To explain tunneling we must resort once again to the TISEq. A traveling or standing wave function incident on a non-infinite potential barrier ($V_0$) decays inside the barrier as $A_0\exp(-\alpha x)$, where $A_0$ is the amplitude at the boundary, $\alpha$ is a constant determined by the height of the barrier (see below), and $x$ is the distance into the barrier. If a second well exists at infinite distance from the first well, the probability goes to zero, so the probability of a particle existing in the second well is zero. If a second well is brought closer to the first well, the amplitude of the wave function at this boundary is not zero, so the particle may tunnel into that well from the first well. It would appear that the particle is “leaking” through the barrier; it can travel through it without having to surmount it. An important point to keep in mind is that tunneling conserves energy. The final sum of the kinetic and potential energy of the system cannot exceed the initial sum. Therefore, the potential on both sides of the barrier does not need to be the same, but the sum of the ground state energy and the potential on the opposite side of the barrier may not be larger than the initial particle energy and potential.
Tunneling can be described using the TISEq, Equation 22.3.2. For the tunneling problem we can take the potential $V$ to be zero for all space, except for the region inside the barrier (between $0$ and $a$):
$V=\begin{cases} 0\quad&\text{if}\; -\infty<x\leq 0 \\ V_0\quad&\text{if}\; 0<x<a \\ 0\quad&\text{if}\; a\leq x< \infty \end{cases}. \label{25.3.1}$
To solve the TISEq with this potential, we must solve it separately for each region, but we should make sure that the wave function stays single-valued, continuous and everywhere continuously differentiable. The general solution for each region, before applying the boundary conditions, is:
$\psi=\begin{cases} A\sin (kx)+B\cos (kx)\quad&\text{if}\; -\infty<x\leq 0 \\ C \exp(-\alpha x)+D\exp(\alpha x) \quad&\text{if}\; 0<x<a \\ E\sin (kx)+F\cos (kx) \quad&\text{if}\; a\leq x< \infty \end{cases} \label{25.3.2}$
where $k=\dfrac{\sqrt{2mE}}{\hbar}$, and $\alpha=\dfrac{\sqrt{2m(V_0-E)}}{\hbar}$. To enforce continuity, we must have at the first boundary:
$A\sin(0) +B \cos(0)=C\exp(0)+D\exp(0), \label{25.3.3}$
which implies that $A=0$, and $B=C+D$. At the opposite boundary:
$A\sin(ka) +B \cos(ka)=C\exp(-\alpha a)+D\exp(\alpha a). \label{25.3.4}$
We notice that, as $a$ goes to infinity, the right hand side of Equation \ref{25.3.4} goes to infinity, which does not make physical sense. To reconcile this, we must set $D=0$.
For the final region, the coefficients $E$ and $F$ present a potentially intractable problem. However, if one realizes that the value at the boundary $a$ is what drives the wave in the region from $a$ to infinity, one may also realize that the wave function can be rewritten as $C\exp(-\alpha a)\cos[k(x-a)]$, phase-shifting the wave function by the value of $a$ and setting its amplitude to the boundary value. Summarizing, the wave function is:
$\psi=\begin{cases} B\cos (kx)\quad&\text{if}\; -\infty<x\leq 0 \\ B \exp(-\alpha x) \quad&\text{if}\; 0<x<a \\ B\exp(-\alpha a)\cos[k(x-a)] \quad&\text{if}\; a\leq x< \infty. \end{cases} \label{25.3.5}$
Comparing the wave function on the left of the barrier with the one on its right, we notice how the amplitude is attenuated by the barrier as $\exp\left(-a\dfrac{\sqrt{2m(V_0-E)}}{\hbar}\right)$, where $a$ is the width of the barrier, and $(V_0-E)$ is the difference between the potential energy of the barrier and the current energy of the particle. Since the square of the wave function is the probability distribution, the probability of transmission through a barrier is:
$\exp\left(-2a\dfrac{\sqrt{2m(V_0-E)}}{\hbar}\right). \label{25.3.6}$
As the barrier width or height approaches zero, the probability of a particle tunneling through the barrier becomes one. We can also note that $k$ is unchanged on the other side of the barrier. This implies that the energy of the particle is exactly the same as it was before it tunneled through the barrier, as stated earlier; the only thing that changes is the fraction of particles going in that direction. The rest are reflected off the barrier and go back the way they came. On the opposite end, as the barrier width or height approaches infinity, the probability of a particle tunneling through the barrier becomes zero, and the barrier behaves similarly to those that contained the particle in the particle-in-a-box example discussed in chapter 20.
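To get a feeling for the numbers, the following sketch (assuming NumPy; the electron energy, barrier height, and widths are illustrative choices, not values from the text) evaluates Equation \ref{25.3.6} and shows how quickly the transmission probability decays with the barrier width.

```python
# Approximate transmission probability exp(-2 a sqrt(2 m (V0 - E)) / hbar) for an electron.
import numpy as np

hbar = 1.054571817e-34     # J*s
m_e = 9.109e-31            # kg, electron mass
eV = 1.602e-19             # J

def transmission(V0_eV, E_eV, a_m):
    kappa = np.sqrt(2 * m_e * (V0_eV - E_eV) * eV) / hbar
    return np.exp(-2 * kappa * a_m)

# a 1 eV electron hitting a 5 eV barrier of increasing width
for a in (0.1e-9, 0.5e-9, 1.0e-9):
    print(a, transmission(5.0, 1.0, a))   # drops from ~0.13 to ~1e-9
```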
1. This diagram is taken from Wikipedia by user Felix Kling, and distributed under CC BY-SA 3.0 license.
When two or more electrons are present in a system, the TISEq cannot be solved analytically. Thus for the vast majority of chemical applications, we must rely on approximate methods. We will explore some of these approximations in this and the following chapters, starting from the many-electron atoms (all atoms other than hydrogen). It is important to stress that, because of the nature of approximations, this is still a very active field of scientific research, and improved methods are developed every year.
The electronic Hamiltonian for a many-electron atom can be written as:
$\hat{H}({\bf r}_1,{\bf r}_2,\ldots,{\bf r}_N)=\sum_{i=1}^N \left(-{\dfrac {\hbar ^{2}}{2m_e }}\nabla_{i}^{2}-{\dfrac {Ze^{2}}{4\pi \varepsilon _{0}r_{i}}} \right)+{\dfrac {e^{2}}{4\pi \varepsilon _{0}}}\sum_{i<j}\dfrac{1}{r_{i j}}, \label{26.1}$
where $Z$ is the nuclear charge, $m_e$ and $e$ are respectively the mass and charge of an electron, ${\bf r}_i$ and $\nabla_i^2$ are the spatial coordinates and the Laplacian of each electron, $r_{i}=|{\bf r}_i|$, and $r_{ij}=|{\bf r}_i-{\bf r}_j|$ is the distance between two electrons (all other symbols have been explained in previous chapters). The TISEq is easily written using Equation 22.3.1.
• 25.1: Many-Electron Wave Functions
When we have more than one electron, the sixth postulate that we discussed in chapter 24 comes into play. In other words, we need to account for the spin of the electrons and we need the wave function to be antisymmetric with respect to the exchange of the coordinates of any two electrons.
• 25.2: Approximated Hamiltonians
In order to solve the TISEq for a many-electron atom we also need to approximate the Hamiltonian, since analytic solutions using the full Hamiltonian of Equation 26.1 are impossible to find. The most significant approximation used in chemistry is called the variational method.
25: Many-Electron Atoms
25.01: Many-Electron Wave Functions
When we have more than one electron, the sixth postulate that we discussed in chapter 24 comes into play. In other words, we need to account for the spin of the electrons and we need the wave function to be antisymmetric with respect to the exchange of the coordinates of any two electrons. In order to do so, we can define a new variable ${\bf x}$ which represents the set of all four coordinates associated with an electron: three spatial coordinates ${\bf r}$, and one spin coordinate $\mathbf{s}$, i.e., ${\bf x} = \{ {\bf r}, {\bf s} \}$. We can then write the electronic wave function as $\Psi({\bf x}_1, {\bf x}_2, \ldots, {\bf x}_N)$, and we require the sixth postulate to hold by writing:
$\Psi\left({\bf x}_1,{\bf x}_2,\ldots, {\bf x}_N\right) = - \Psi\left({\bf x}_2,{\bf x}_1,\ldots, {\bf x}_N\right) \label{26.1.1}$
A very important step in simplifying $\Psi({\bf x})$ is to expand it in terms of a set of one-electron functions. Since we need to take into account the spin coordinate as well, we can define a new function, called spin-orbital, by multiplying a spatial orbital by one of the two spin functions:
\begin{aligned} \chi({\bf x}) &= \psi({\bf r}) \phi_{\uparrow}({\bf s}), \\ \chi({\bf x}) &= \psi({\bf r}) \phi_{\downarrow}({\bf s}). \end{aligned} \label{26.1.2}
Notice that for a given spatial orbital $\psi({\bf r})$, we can form two spin orbitals, one with $\uparrow$ spin, and one with $\downarrow$ spin (since the spin coordinate ${\bf s}$ has only two possible values, as already discussed in chapter 23). For the spatial orbitals we can use the same one-particle functions that solve the TISEq for the hydrogen atom, $\psi_{n\ell m_{\ell}}({\bf r})$(eq. 21.7 in chapter 21). Notice how each spin-orbital now depends on four quantum numbers, the three for the spatial part, $n,\ell,m_{\ell}$, plus the spin quantum number $m_s$. We need to keep in mind, however, that the spin-orbitals, $\chi_{n\ell m_{\ell} m_{s}}$, are not analytic solutions to the TISEq, so the resulting wave function is not the exact wave function of the system, but just an approximation.
Once we have defined one-electron spin-orbitals for each electron in the system, we can use them as the basis for our many-electron wave function. While doing so, we need to make sure to enforce the antisymmetry property of the overall wave function. We will start from the simplest case of an atom with two electrons with coordinates $\mathbf{x}_1$ and $\mathbf{x}_2$, which we put in two spin-orbitals $\chi_1$ and $\chi_2$. We can write the total wave function as a linear combination of the two spin-orbitals as:
\begin{aligned} \Psi({\bf x}_1, {\bf x}_2) =& \, b_{11} \chi_1({\bf x}_1) \chi_1({\bf x}_2) + b_{12} \chi_1({\bf x}_1) \chi_2({\bf x}_2) + \\ & b_{21} \chi_2({\bf x}_1) \chi_1({\bf x}_2) + b_{22} \chi_2({\bf x}_1) \chi_2({\bf x}_2). \end{aligned} \label{26.1.3}
We then notice that in order for the antisymmetry principle to be obeyed, we need $b_{12} = -b_{21}$ and $b_{11} = b_{22} = 0$, which give:
$\Psi({\bf x}_1, {\bf x}_2) = b_{12} \left[ \chi_1({\bf x}_1) \chi_2({\bf x}_2) - \chi_2({\bf x}_1) \chi_1({\bf x}_2)\right]. \label{26.1.4}$
This wave function is sufficient to describe two-electron atoms and ions, such as helium. The numerical coefficient can be determined imposing the normalization condition, and is equal to $b_{12} = \dfrac{1}{\sqrt{2}}$. For the ground state of helium, we can replace the spatial component of each spin-orbital with the $1s$ hydrogenic orbital, $\psi_{100}$, resulting in:
\begin{aligned} \Psi({\bf x}_1, {\bf x}_2) &= \dfrac{1}{\sqrt{2}} \left[ \psi_{100}({\bf r}_1)\phi_{\uparrow} \; \psi_{100}({\bf r}_2)\phi_{\downarrow} - \psi_{100}({\bf r}_1)\phi_{\downarrow} \; \psi_{100}({\bf r}_2)\phi_{\uparrow} \right] \\ &= \psi_{100}({\bf r}_1)\psi_{100}({\bf r}_2) \dfrac{1}{\sqrt{2}} \left[ \phi_{\uparrow}\phi_{\downarrow} - \phi_{\downarrow}\phi_{\uparrow} \right], \end{aligned} \label{26.1.5}
which clearly shows how we need just one spatial orbital, $\psi_{100}$, to describe the system, while the antisymmetry is taken care of by a suitable combination of spin functions, $\dfrac{1}{\sqrt{2}} \left[ \phi_{\uparrow}\phi_{\downarrow} - \phi_{\downarrow}\phi_{\uparrow} \right]$. Notice also that we commit a small inaccuracy when we say, as is typically found in general chemistry textbooks: “two electrons occupy one orbital; one electron has spin up, and the other electron has spin down, with configuration $[\uparrow\downarrow]$”. The reality of the spin configuration is indeed more complicated, and the ground state of helium should be represented as $\dfrac{1}{\sqrt{2}}\left[\uparrow\downarrow-\downarrow\uparrow\right]$.
In order to generalize from two electrons to $N$, we can first observe how Equation (26.1.4) could be easily constructed by placing the spin-orbitals into a $2\times2$ matrix and calculating its determinant:
$\Psi({\bf x}_1, {\bf x}_2)= \dfrac{1}{\sqrt{2}}{\begin{vmatrix} \chi_1({\bf x}_1)&\chi_2({\bf x}_1)\\ \chi_1({\bf x}_2)&\chi_2({\bf x}_2) \end{vmatrix}}, \label{26.1.6}$
where each column contains one spin-orbital, each row contains the coordinates of a single electron, and the vertical bars around the matrix mean that we need to calculate its determinant. This notation is called the Slater determinant, and it is the preferred way of building any $N$-electron wave function. Slater determinants are useful because they can be easily built for any case of $N$ electrons in $N$ spin-orbitals, and they also automatically enforce the antisymmetry of the resulting wave function. A general Slater determinant is written:
$\Psi (\mathbf{x} _{1},\mathbf{x} _{2},\ldots ,\mathbf{x} _{N})={\dfrac {1}{\sqrt {N!}}}{\begin{vmatrix}\chi _{1}(\mathbf{x} _{1})&\chi _{2}(\mathbf{x} _{1})&\cdots &\chi _{N}(\mathbf{x} _{1})\\ \chi _{1}(\mathbf{x} _{2})&\chi _{2}(\mathbf{x} _{2})&\cdots &\chi _{N}(\mathbf{x} _{2})\\ \vdots &\vdots &\ddots &\vdots \\ \chi _{1}(\mathbf{x} _{N})&\chi _{2}(\mathbf{x} _{N})&\cdots &\chi _{N}(\mathbf{x} _{N})\end{vmatrix}} = |\chi _{1},\chi _{2},\cdots ,\chi _{N}\rangle, \label{26.1.7}$
where the notation $|\cdots\rangle$ is a shorthand to indicate the Slater determinant where only the diagonal elements are reported.
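The antisymmetry that the Slater determinant enforces can be illustrated with a tiny numerical sketch (assuming NumPy; the orbital values are made-up numbers): exchanging the coordinates of two electrons swaps two rows of the matrix and therefore flips the sign of the determinant.

```python
# Value of a Slater determinant and its sign change upon electron exchange.
import numpy as np
from math import factorial

def slater(chi):
    """chi[i, j] = chi_j(x_i): spin-orbital j evaluated at the coordinates of electron i."""
    N = chi.shape[0]
    return np.linalg.det(chi) / np.sqrt(factorial(N))

M = np.array([[0.3, 0.8],        # made-up values of chi_1(x1), chi_2(x1)
              [0.5, 0.1]])       # made-up values of chi_1(x2), chi_2(x2)
print(slater(M))                 # Psi(x1, x2)
print(slater(M[::-1]))           # Psi(x2, x1) = -Psi(x1, x2)
```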
25.02: Approximated Hamiltonians
In order to solve the TISEq for a many-electron atom we also need to approximate the Hamiltonian, since analytic solutions using the full Hamiltonian of Equation 26.1 are impossible to find. The most significant approximation used in chemistry is called the variational method.
Variational method
The basic idea of the variational method is to guess a “trial” wave function for the problem consisting of some adjustable parameters called “variational parameters”. These parameters are adjusted until the energy of the trial wave function is minimized. The resulting trial wave function and its corresponding energy are variational method approximations to the exact wave function and energy.
Why would it make sense that the best approximate trial wave function is the one with the lowest energy? This results from the Variational Theorem, which states that the energy of any trial wave function $E$ is always an upper bound to the exact ground state energy ${\cal E}_0$. This can be proven easily. Let the trial wave function be denoted $\Phi$. Any trial function can formally be expanded as a linear combination of the exact eigenfunctions $\Psi_i$. Of course, in practice, we don’t know the $\Psi_i$, since we are applying the variational method to a problem we can’t solve analytically. Nevertheless, that doesn’t prevent us from using the exact eigenfunctions in our proof, since they certainly exist and form a complete set, even if we don’t happen to know them. So, the trial wave function can be written:
$\Phi = \sum_i c_i \Psi_i, \label{26.2.1}$
and the approximate energy corresponding to this wave function is:
$E[\Phi] = \dfrac{\int \Phi^* {\hat H} \Phi d\mathbf{\tau}}{\int \Phi^* \Phi d\mathbf{\tau}}, \label{26.2.2}$
where $\mathbf{\tau}=\left(\mathbf{r}_1,\mathbf{r}_2,\ldots,\mathbf{r}_N\right)$ is the ensemble of the spatial coordinates of each electron and the integral symbol is assumed as a $3N$-dimensional integration. Replacing the expansion over the exact wave functions, we obtain:
$E[\Phi] = \dfrac{\sum_{ij} c_i^* c_j \int \Psi_i^* {\hat H} \Psi_jd\mathbf{\tau}}{ \sum_{ij} c_i^* c_j \int \Psi_i^* \Psi_jd\mathbf{\tau}}. \label{26.2.3}$
Since the functions $\Psi_j$ are the exact eigenfunctions of ${\hat H}$, we can use ${\hat H} \Psi_j = {\cal E}_j \Psi_j$ to obtain:
$E[\Phi] = \dfrac{\sum_{ij} c_i^* c_j {\cal E}_j \int \Psi_i^* \Psi_j d\mathbf{\tau}}{ \sum_{ij} c_i^* c_j \int \Psi_i^* \Psi_j d\mathbf{\tau}}. \label{26.2.4}$
Now using the fact that eigenfunctions of a Hermitian operator form an orthonormal set (or can be made to do so), we can write:
$E[\Phi] = \dfrac{\sum_{i} c_i^* c_i {\cal E}_i}{\sum_{i} c_i^* c_i}. \label{26.2.5}$
We now subtract the exact ground state energy ${\cal E}_0$ from both sides to obtain
$E[\Phi] - {\cal E}_0 = \dfrac{\sum_i c_i^* c_i ( {\cal E}_i - {\cal E}_0)}{ \sum_i c_i^* c_i}. \label{26.2.6}$
Since every term on the right-hand side is greater than or equal to zero ($c_i^* c_i = \vert c_i\vert^2 \geq 0$ and ${\cal E}_i - {\cal E}_0 \geq 0$, because ${\cal E}_0$ is the lowest eigenvalue), the left-hand side must also be greater than or equal to zero:
$E[\Phi] \geq {\cal E}_0. \label{26.2.7}$
In other words, the energy of any approximate wave function is always greater than or equal to the exact ground state energy ${\cal E}_0$.
This explains the strategy of the variational method: since the energy of any approximate trial function is always above the true energy, then any variations in the trial function which lower its energy are necessarily making the approximate energy closer to the exact answer. (The trial wave function is also a better approximation to the true ground state wave function as the energy is lowered, although not necessarily in every possible sense unless the limit $\Phi = \Psi_0$ is reached).
Approximated solution for the helium atom
We now have all the ingredients to attempt the simplest approximated solution to the TISEq of a many-electron atom. We can start by writing the total wave function using the Slater determinant of Equation 26.1.7 in terms of spin-orbitals:
$\Psi (\mathbf{x}_{1},\mathbf{x}_{2},\ldots ,\mathbf{x}_{N})= |\chi_{1},\chi_{2},\cdots ,\chi_{N}\rangle = |\psi_{1}\phi_{\uparrow},\psi_{1}\phi_{\downarrow},\cdots ,\psi_{\dfrac{N}{2}}\phi_{\uparrow},\psi_{\dfrac{N}{2}}\phi_{\downarrow}\rangle, \label{26.2.8}$
and then we can replace it into the TISEq for an $N$-electron system. This results in a set of $N$ one-electron equations, one for each electron. When we attempt to solve each individual equation, however, we run into a problem: the potential energy in the Hamiltonian of Equation 26.1 does not have spherical symmetry because of the electron-electron repulsion term. As such, the one-electron TISEq cannot be simply solved in spherical polar coordinates, as we did for the hydrogen atom in chapter 21. The simplest way of circumventing the problem is to neglect the electron-electron repulsion term (i.e., assume that the electrons are not correlated and do not interact with each other). For a 2-electron atom this procedure is straightforward, since the Hamiltonian can be written as a sum of one-electron Hamiltonians:
$\hat{H} =\hat{H}_1+\hat{H}_2, \label{26.2.9}$
with $\hat{H}_1$ and $\hat{H}_2$ looking identical to those used in the TISEq of the hydrogen atom. This one-particle Hamiltonian does not depend on the spin of the electron, and therefore, we can neglect the spin component of the Slater determinant and write the total wave function for the ground state of helium, Equation 26.1.4, simply as:
$\Psi({\bf r}_1, {\bf r}_2) = \psi_{100}({\bf r}_1)\psi_{100}({\bf r}_2). \label{26.2.10}$
The overall TISEq reduces to a set of two single-particle equations:
\begin{aligned} \hat{H}_1 \psi_{100}({\bf r}_1) &= E_1\psi_{100}({\bf r}_1) \ \hat{H}_2 \psi_{100}({\bf r}_2) &= E_2\psi_{100}({\bf r}_2), \end{aligned} \label{26.2.11}
which can then be solved similarly to those for the hydrogen atom, and the solutions combined to give:
$E = E_1+E_2. \label{26.2.12}$
In other words, the resulting energy eigenvalue for the ground state of the helium atom in this approximation is equal to twice the energy of a $\psi_{100}$, $1s$, orbital. The resulting approximated value for the energy of the helium atom is $7,217 \text{ kJ/mol}$, compared with the exact value of $7,620 \text{ kJ/mol}$.
The nuclear charge $Z$ in the $\psi_{100}$ orbital can be used as a variational parameter in the variational method to obtain a more accurate value of the energy. This method provides a result for the ground-state energy of the helium atom of $7,478 \text{ kJ/mol}$ (only $142 \text{ kJ/mol}$ lower than the exact value), with the nuclear charge parameter minimized at $Z_{\text{min}}=1.6875$. This new value of the nuclear charge can be interpreted as the effective nuclear charge felt by one electron when a second electron is present in the atom. This value is lower than the real nuclear charge ($Z=2$) because the interaction between each electron and the nucleus is shielded by the presence of the second electron.
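A minimal sketch of this optimization is shown below. It relies on the standard closed-form result $E(Z) = Z^2 - \dfrac{27}{8}Z$ (in hartrees) for the expectation value of the helium Hamiltonian over a product of two $1s$ orbitals with effective charge $Z$; this formula is not derived in the text and should be taken as an assumption of the sketch:

```python
import numpy as np
from scipy.optimize import minimize_scalar

HARTREE_TO_KJ_MOL = 2625.5  # approximate conversion factor

# Closed-form variational energy (in hartree) for helium with a product of two
# 1s orbitals of effective nuclear charge Z (standard result, assumed here):
def E_trial(Z):
    return Z**2 - 27.0 * Z / 8.0

res = minimize_scalar(E_trial, bounds=(1.0, 2.0), method="bounded")
print(res.x)                            # ~1.6875 (= 27/16), the Z_min quoted above
print(res.fun * HARTREE_TO_KJ_MOL)      # ~-7,477 kJ/mol, close to the value quoted above
```

The minimization reproduces $Z_{\text{min}}=1.6875$ and an energy magnitude close to the $7,478\text{ kJ/mol}$ quoted above, while remaining above the exact energy, as the variational theorem requires.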
This procedure can be extended to atoms with more than two electrons, resulting in the so-called Hartree-Fock method. The procedure, however, is not straightforward. We will explain it in more detail in the next chapter, since it is the simplest approximation that also describes the chemical bond.
• 26.1: The Molecular Hamiltonian
For a molecule, we can decompose the Hamiltonian operator.
• 26.2: The Born-Oppenheimer Approximation
As we already saw in the previous chapter, if a Hamiltonian is separable into two or more terms, then the total eigenfunctions are products of the individual eigenfunctions of the separated Hamiltonian terms.
• 26.3: Solving the Electronic Eigenvalue Problem
Once we have invoked the Born-Oppenheimer approximation, we can attempt to solve the electronic TISEq in Equation 27.2.7. However, for molecules with more than one electron, we need to—once again—keep in mind the antisymmetry of the wave function. This obviously means that we need to write the electronic wave function as a Slater determinant (i.e., all molecules but H+2 and a few related highly exotic ions).
26: Introduction to Molecules
For a molecule, we can decompose the Hamiltonian operator as:
$\hat{H} = \hat{K}_N +\hat{K}_{e} + \hat{V}_{NN} + \hat{V}_{eN} + \hat{V}_{ee} \label{27.1.1}$
where we have decomposed the kinetic energy operator into nuclear and electronic terms, $\hat{K}_N$ and $\hat{K}_e$, as well as the potential energy operator into terms representing the interactions between nuclei, $\hat{V}_{NN}$, between electrons, $\hat{V}_{ee}$, and between electrons and nuclei, $\hat{V}_{eN}$. Each term can then be calculated using:
\begin{aligned} \hat{K}_N &=-\sum_i^{\text {nuclei }} \dfrac{\hbar^2}{2 M_i} \nabla_{\mathbf{R}_i}^2 \ \hat{K}_e &=-\sum_i^{\text {electrons }} \dfrac{\hbar^2}{2 m_e} \nabla_{\mathbf{r}_i}^2 \ \hat{V}_{N N} &=\sum_i \sum_{j>i} \dfrac{Z_i Z_j e^2}{4 \pi \varepsilon_0\left|\mathbf{R}_i-\mathbf{R}_j\right|} \ \hat{V}_{e N} &=-\sum_i \sum_j \dfrac{Z_i e^2}{4 \pi \varepsilon_0\left|\mathbf{R}_i-\mathbf{r}_j\right|} \ \hat{V}_{e e} &=\sum_i \sum_{i<j} \dfrac{e^2}{4 \pi \varepsilon_0\left|\mathbf{r}_i-\mathbf{r}_j\right|} \end{aligned} \label{27.1.2}
where $M_i$, $Z_i$, and $\mathbf{R}_i$ are the mass, atomic number, and coordinates of nucleus $i$, respectively, and all other symbols are the same as those used in Equation 26.1 for the many-electron atom Hamiltonian.
Small terms in the molecular Hamiltonian
The operator in Equation \ref{27.1.1} is known as the “exact” nonrelativistic Hamiltonian in field-free space. However, it is important to remember that it neglects at least two effects. Firstly, although the speed of an electron in a hydrogen atom is less than 1% of the speed of light, relativistic mass corrections can become appreciable for the inner electrons of heavier atoms. Secondly, we have neglected spin-orbit effects, which are explained as follows. From the point of view of an electron, it is being orbited by a nucleus which produces a magnetic field (proportional to ${\bf L}$); this field interacts with the electron’s magnetic moment (proportional to ${\bf S}$), giving rise to a spin-orbit interaction (proportional to ${\bf L} \cdot {\bf S}$ for a diatomic). Although spin-orbit effects can be important, they are generally neglected in quantum chemical calculations, and we will neglect them in the remainder of this textbook as well.
26.02: The Born-Oppenheimer Approximation
As we already saw in the previous chapter, if a Hamiltonian is separable into two or more terms, then the total eigenfunctions are products of the individual eigenfunctions of the separated Hamiltonian terms. The total eigenvalues are then sums of individual eigenvalues of the separated Hamiltonian terms.
For example, let’s consider a Hamiltonian that is separable into two terms, one involving coordinate $q_1$ and the other involving coordinate $q_2$:
$\hat{H} = \hat{H}_1(q_1) + \hat{H}_2(q_2) \label{27.2.1}$
with the overall Schrödinger equation being:
$\hat{H} \psi(q_1, q_2) = E \psi(q_1, q_2). \label{27.2.2}$
If we assume that the total wave function can be written in the form:
$\psi(q_1, q_2) = \psi_1(q_1) \psi_2(q_2), \label{27.2.3}$
where $\psi_1(q_1)$ and $\psi_2(q_2)$ are eigenfunctions of $\hat{H}_1$ and $\hat{H}_2$ with eigenvalues $E_1$ and $E_2$, then:
\begin{aligned}\displaystyle \hat{H} \psi(q_1, q_2) &= ( \hat{H}_1 + \hat{H}_2 ) \psi_1(q_1) \psi_2(q_2) \ &= \hat{H}_1 \psi_1(q_1) \psi_2(q_2) + \hat{H}_2 \psi_1(q_1) \psi_2(q_2) \ &= E_1 \psi_1(q_1) \psi_2(q_2) + E_2 \psi_1(q_1) \psi_2(q_2) \ &= (E_1 + E_2) \psi_1(q_1) \psi_2(q_2) \ &= E \psi(q_1, q_2) \end{aligned} \label{27.2.4}
Thus the eigenfunctions of $\hat{H}$ are products of the eigenfunctions of $\hat{H}_1$ and $\hat{H}_2$, and the eigenvalues are the sums of eigenvalues of $\hat{H}_1$ and $\hat{H}_2$.
If we examine the nonrelativistic Hamiltonian in Equation 27.1.1, we see that the $\hat{V}_{eN}$ term prevents us from cleanly separating the electronic and nuclear coordinates and writing the total wave function as a product of an electronic and a nuclear part. If we neglect this term, we can write the total wave function as:
$\psi({\bf r}, {\bf R}) = \psi_e({\bf r}) \psi_N({\bf R}), \label{27.2.5}$
This approximation is called the Born-Oppenheimer approximation, and allows us to treat the nuclei as nearly fixed with respect to electron motion. The Born-Oppenheimer approximation is almost always quantitatively correct, since the nuclei are much heavier than the electrons and the (fast) motion of the latter does not affect the (slow) motion of the former. Using this approximation, we can fix the nuclear configuration at some value, ${\bf R_a}$, and solve for the electronic portion of the wave function, which is dependent only parametrically on ${\bf R}$ (we write this wave function as $\psi_e \left({\bf r}; {\bf R_a} \right)$, where the semicolon indicates the parametric dependence on the nuclear configuration). To solve the TISEq we can then write the electronic Hamiltonian as:
$\hat{H}_{\text{e}} = \hat{K}_e({\bf r}) + \hat{V}_{eN}\left({\bf r}; {\bf R_a} \right) + \hat{V}_{ee}({\bf r}) \label{27.2.6}$
where we have also factored out the nuclear kinetic energy, $\hat{K}_N$ (since it is smaller than $\hat{K}_e$ by a factor of $\dfrac{M_i}{m_e}$), as well as $\hat{V}_{NN}({\bf R})$. This latter approximation is justified, since in the Born-Oppenheimer approximation ${\bf R}$ is just a parameter, and $\hat{V}_{NN}({\bf R_a})$ is a constant that shifts the eigenvalues only by some fixed amount. This electronic Hamiltonian results in the following TISEq:
$\hat{H}_{e} \psi_e \left({\bf r}; {\bf R_a} \right) = E_{e} \psi_e \left({\bf r}; {\bf R_a} \right), \label{27.2.7}$
which is the equation that is used to explain the chemical bond in the next section. Notice that Equation \ref{27.2.7} is not the total TISEq of the system, since the nuclear eigenfunction and its eigenvalues (which can be obtained by solving the Schrödinger equation with the nuclear Hamiltonian) are neglected. As a final note, in the remainder of this textbook we will use the term “total energy” to mean “total energy at fixed geometry”, as is customary in many other quantum chemistry textbooks (i.e., we are neglecting the nuclear kinetic energy). This is just $E_{e}$ of Equation \ref{27.2.7}, plus the constant shift, $\hat{V}_{NN}({\bf R_a})$, given by the nuclear-nuclear repulsion.
Once we have invoked the Born-Oppenheimer approximation, we can attempt to solve the electronic TISEq in Equation 27.2.7. However, for molecules with more than one electron (i.e., all molecules but $\mathrm{H}_2^+$ and a few related highly exotic ions), we need to—once again—keep in mind the antisymmetry of the wave function. This means that we need to write the electronic wave function as a Slater determinant. Once this is done, we can work on approximating the Hamiltonian, a task that is necessary because the presence of the electron-electron repulsion term forbids its analytic treatment. Similarly to the many-electron atom case, the simplest approximation to solve the molecular electronic TISEq is to use the variational method and to neglect the electron-electron repulsion. As we noticed in the previous chapter, this approximation is called the Hartree-Fock method.
The Hartree-Fock Method
The main difference when we apply the variational principle to a molecular Slater determinant is that we need to build orbitals (one-electron wave functions) that encompass the entire molecule. This can be done by assuming that the atomic contributions to the molecular orbitals will closely resemble the orbitals that we obtained for the hydrogen atom. The total molecular orbital can then be built by linearly combining these atomic contributions. This method is called linear combination of atomic orbitals (LCAO). A consequence of the LCAO method is that the atomic orbitals on two different atomic centers are not necessarily orthogonal, and Equation 26.2.4 cannot be simplified easily. If we replace each atomic orbital $\psi(\mathbf{r})$ with a linear combination of suitable basis functions $\phi_i(\mathbf{r})$:
$\psi(\mathbf{r}) = \sum_i^m c_{i} \phi_i(\mathbf{r}), \label{27.3.1}$
we can then use the following notation:
$\displaystyle H_{ij} = \int \phi_i^* {\hat H} \phi_j d\mathbf{\tau}\;, \qquad \displaystyle S_{ij} = \int \phi_i^* \phi_jd\mathbf{\tau}, \label{27.3.2}$
to simplify Equation 26.2.4 to:
$E[\Phi] = \dfrac{\sum_{ij} c_i^* c_j H_{ij}}{\sum_{ij} c_i^* c_j S_{ij}}. \label{27.3.3}$
Requiring the derivatives of this energy with respect to the expansion coefficients $c_i$ to vanish yields a non-trivial solution only if the following “secular determinant” equals zero:
$\begin{vmatrix} H_{11}-ES_{11} & H_{12}-ES_{12} & \cdots & H_{1m}-ES_{1m}\\ H_{21}-ES_{21} & H_{22}-ES_{22} & \cdots & H_{2m}-ES_{2m}\\ \vdots & \vdots & \ddots & \vdots\\ H_{m1}-ES_{m1} & H_{m2}-ES_{m2} & \cdots & H_{mm}-ES_{mm} \end{vmatrix}=0 \label{27.3.4}$
where $m$ is the number of basis functions used to expand the atomic orbitals. Solving this set of equations with a Hamiltonian where the electron-electron correlation is neglected is non-trivial, but possible. The complications arise because, even if we neglect the direct interaction between electrons, each of them interacts with the nuclei through an interaction that is screened by the average field of all other electrons, similarly to what we saw for the helium atom. This means that the Hamiltonian itself and the values of the coefficients $c_i$ in the wave function mutually depend on each other. A solution to this problem can be achieved numerically using specialized computer programs that use a cycle called the self-consistent-field (SCF) procedure. Starting from an initial guess of the coefficients, an approximated Hamiltonian operator is built from them and used to solve Equation \ref{27.3.4}. This solution gives updated values of the coefficients, which can then be used to create an improved version of the approximated Hamiltonian. This procedure is repeated until both the coefficients and the operator no longer change. From this final solution, the energy of the molecule is then calculated.
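The following Python sketch shows the skeleton of such an SCF cycle for a closed-shell system. It assumes that the one-electron matrix, the two-electron integrals, and the overlap matrix are already available from an integral code, and it builds the effective one-electron operator in the form that will be given explicitly in Equation 29.2.2; all names and the convergence threshold are illustrative, not part of the text:

```python
import numpy as np
from scipy.linalg import eigh

def scf_cycle(h, eri, S, n_occ, max_iter=50, tol=1e-8):
    """Schematic SCF loop (closed shell).

    h     : one-electron matrix (kinetic + nuclear attraction), shape (m, m)
    eri   : two-electron integrals (ij|kl), shape (m, m, m, m)
    S     : overlap matrix, shape (m, m)
    n_occ : number of doubly occupied molecular orbitals
    """
    m = h.shape[0]
    P = np.zeros((m, m))            # initial guess: zero density, so the first operator is h
    E_old = 0.0
    for iteration in range(max_iter):
        # Build the effective one-electron (Fock-like) matrix from the current density.
        J = np.einsum("ijkl,kl->ij", eri, P)   # Coulomb-type term
        K = np.einsum("ikjl,kl->ij", eri, P)   # exchange-type term
        F = h + J - 0.5 * K
        # Solve the secular problem F C = S C eps (generalized eigenvalue problem).
        eps, C = eigh(F, S)
        C_occ = C[:, :n_occ]
        P = 2.0 * C_occ @ C_occ.T              # updated density matrix
        E_elec = 0.5 * np.sum(P * (h + F))     # one common expression for the electronic energy
        if abs(E_elec - E_old) < tol:          # self-consistency reached
            break
        E_old = E_elec
    return E_elec, eps, C
```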
In this chapter we will see a couple of examples of how the concept and mathematics of quantum mechanics can be applied to understand the chemical bond in molecules. We will start from the simplest molecule, the $\mathrm{H}_2^+$ molecular ion, and then we will move on to the simplest two-electron bond in the hydrogen molecule. To simplify the notation in this chapter, we will move away from S.I. units and use a set tailored for molecules, called atomic units (a.u.). This set of units is built by setting $\hbar=e=m_e=a_0=1$. As an example of the simplification that a.u. allows, the energy eigenvalues of the hydrogen atom, Equation 21.8, simply becomes $E_n=-\dfrac{1}{2n^2}$ in the a.u. of energy, which are called Hartrees.
27: The Chemical Bond in Diatomic Molecules
This system has only one electron, but since its geometry is not spherical (figure $1$), the TISEq cannot be solved analytically as for the hydrogen atom.
The electron is at point $P$, while the two protons are at position $A$ and $B$ at a fixed distance $R$. Using the Born-Oppenheimer approximation we can write the one-electron molecular Hamiltonian in a.u. as:
$\hat{H} = \hat{H}_e+\dfrac{1}{R} = \left( -\dfrac{1}{2}\nabla^2-\dfrac{1}{\mathbf{r}_A}-\dfrac{1}{\mathbf{r}_B} \right)+\dfrac{1}{R} \label{28.1.1}$
As a first approximation to the variational wave function, we can build the one-electron molecular orbital (MO) by linearly combine two $1s$ hydrogenic orbitals centered at $A$ and $B$, respectively:
$\varphi = c_1 a + c_2 b, \label{28.1.2}$
with:
\begin{aligned} a &= 1s_A = \left( \psi_{100} \right)_A\ b &= 1s_B = \left( \psi_{100} \right)_B. \end{aligned}\label{28.1.3}
Using Equation 27.3.2 and considering that the nuclei are identical, we can define the integrals $H_{aa}=H_{bb}, H_{ab}=H_{ba}$ and $S_{ab}=S$ (while $S_{aa}=1$ because the hydrogen atom orbitals are normalized). The secular equation, Equation 27.3.4 can then be written:
$\begin{vmatrix} H_{aa}-E & H_{ab}-ES \\ H_{ab}-ES & H_{aa}-E \end{vmatrix}=0 \label{28.1.4}$
The expansion of the determinant results into:
\begin{aligned} (H_{aa}-E)^2 &=(H_{ab}-ES)^2 \ H_{aa}-E &= \pm (H_{ab}-ES), \ \end{aligned}\label{28.1.5}
with roots:
\begin{aligned} E_{+} &= \dfrac{H_{aa}+H_{ab}}{1+S} = H_{aa}+\dfrac{H_{ba}-SH_{aa}}{1+S}, \ E_{-} &= \dfrac{H_{aa}-H_{ab}}{1-S} = H_{aa}-\dfrac{H_{ba}-SH_{aa}}{1-S}, \end{aligned}\label{28.1.6}
the first corresponding to the ground state, the second to the first excited state. Solving for the best value for the coefficients of the linear combination for the ground state $E_{+}$, we obtain:
$c_1=c_2=\dfrac{1}{\sqrt{2+2S}}, \label{28.1.7}$
which gives the bonding MO:
$\varphi_{+}=\dfrac{a+b}{\sqrt{2+2S}}. \label{28.1.8}$
Proceeding similarly for the excited state, we obtain:
$c_1=\dfrac{1}{\sqrt{2-2S}}\;\quad c_2=-\dfrac{1}{\sqrt{2-2S}}, \label{28.1.9}$
which gives the antibonding MO:
$\varphi_{-}=\dfrac{b-a}{\sqrt{2-2S}}. \label{28.1.10}$
These results can be summarized in the molecular orbital diagram of figure $2$. We notice that the splitting of the doubly degenerate atomic level under the interaction is non-symmetric for $S\neq0$, the antibonding level being more repulsive and the bonding less attractive than in the symmetric case occurring for $S = 0$.
Calculating the values for the integrals and repeating these calculations for different internuclear distances, $R$, results in the plot of figure $3$. As we see from the plot, the ground state solution is negative for a vast portion of the curve. The energy is negative because the electronic energy calculated with the bonding orbital is lower than the nuclear repulsion. In other words, the creation of the molecular orbital stabilizes the molecular configuration versus the isolated fragments (one hydrogen atom and one proton).
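The algebra of Equations 28.1.4 through 28.1.7 can be checked numerically. The values of $H_{aa}$, $H_{ab}$, and $S$ below are purely illustrative (the true integrals depend on $R$); the point is only that a generalized eigenvalue solver reproduces the closed-form roots and the bonding coefficients:

```python
import numpy as np
from scipy.linalg import eigh

# Illustrative values only (the actual integrals depend on the internuclear distance R).
Haa, Hab, S = -0.9, -0.7, 0.5

H = np.array([[Haa, Hab], [Hab, Haa]])
M = np.array([[1.0, S], [S, 1.0]])

# Closed-form roots, Equation 28.1.6
E_plus = (Haa + Hab) / (1.0 + S)
E_minus = (Haa - Hab) / (1.0 - S)

# Generalized eigenvalue problem H c = E M c
E, C = eigh(H, M)
print(E)                       # ascending eigenvalues, matching [E_plus, E_minus]
print(E_plus, E_minus)

# Normalized bonding coefficients, Equation 28.1.7
c = 1.0 / np.sqrt(2.0 + 2.0 * S)
print(c, np.abs(C[:, 0]))      # both components equal 1/sqrt(2+2S)
```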
We can now examine the formation of the two-electron chemical bond in the $\text{H}_2$ molecule. With reference to figure $1$, the molecular Hamiltonian for $\text{H}_2$ in a.u. in the Born-Oppenheimer approximation will be:
\begin{aligned} \hat{H} &= \hat{H}_e+\dfrac{1}{R} \ &=\left( -\dfrac{1}{2}\nabla^2_1-\dfrac{1}{\mathbf{r}_{A1}}-\dfrac{1}{\mathbf{r}_{B1}} \right)+\left( -\dfrac{1}{2}\nabla^2_2-\dfrac{1}{\mathbf{r}_{A2}}-\dfrac{1}{\mathbf{r}_{B2}} \right)+\dfrac{1}{r_{12}}+\dfrac{1}{R}\ &= \hat{h}_1+\hat{h}_2+\dfrac{1}{r_{12}}+\dfrac{1}{R}, \end{aligned}\label{28.2.1}
where $\hat{h}$ is the one-electron Hamiltonian. As for the previous case, we can build the first approximation to the molecular wave function by considering two $1s$ atomic orbitals $a(\mathbf{r}_1)$ and $b(\mathbf{r}_2)$ centered at $A$ and $B$, respectively, having an overlap $S$. If we neglect the electron-electron repulsion term, $\dfrac{1}{r_{12}}$, the resulting Hartree-Fock equations are exactly the same as in the previous case. The most important difference, though, is that in this case we need to consider the spin of the two electrons. Proceeding similarly to what we have done for the many-electron atom in chapter 26, we can build an antisymmetric wave function for $\text{H}_2$ using a Slater determinant of doubly occupied MOs. For the ground state, we can use the lowest energy orbital obtained from the solution of the Hartree-Fock equations, which we already obtained in Equation 28.1.8. Using a notation that is based on the symmetry of the molecule, this bonding orbital in $\text{H}_2$ is usually called $\sigma_g$, where $\sigma$ refers to the $\sigma$ bond that forms between the two atoms, and the subscript $g$ (from the German gerade, even) indicates that the orbital is symmetric with respect to inversion through the center of the molecule. The Slater determinant for the ground state is therefore:$^1$
$\Psi (\mathbf{x}_{1},\mathbf{x}_{2})= |\sigma_{g}\phi_{\uparrow},\sigma_{g}\phi_{\downarrow}\rangle =\sigma_{g}(\mathbf{r}_1)\sigma_{g}(\mathbf{r}_2) \dfrac{1}{\sqrt{2}} \left[ \phi_{\uparrow}\phi_{\downarrow} - \phi_{\downarrow}\phi_{\uparrow} \right], \label{28.2.2}$
where:
$\sigma_{g}=\varphi_{+}=\dfrac{\left(\psi_{100}\right)_A+\left(\psi_{100}\right)_B}{\sqrt{2+2S}}. \label{28.2.3}$
The energies and the resulting MO diagram is similar to that for $\mathrm{H}_2^+$, with the only difference that two electron will be described by the same $\sigma_g$ MO (figure $2$).
As for the many-electron atoms, the Hartree-Fock method is just an approximation to the exact solution. The accurate theoretical value for the bond energy at the bond distance of $R_e=1.4\;a_0$ is $E= -0.17447\;E_h$. The variational result obtained with the wave function in Equation \ref{28.2.2} is $E= -0.12778\;E_h$, which is $\sim 73 \%$ of the exact value. The variational coefficient (i.e., the orbital exponent, $c_0$, that enters the $1s$ orbital formula $\psi_{100}=\sqrt{\dfrac{c_0^3}{\pi}}\exp[-c_0r]$) is optimized at $c_0=1.1695$, a value that shows how the orbitals significantly contract due to spherical polarization.
If we scan the Born-Oppenheimer energy landscape using the wave function in Equation \ref{28.2.2} as we have done for $\mathrm{H}_2^+$, we obtain the plot in figure $3$.
As we can see, the Hartree-Fock results for $\mathrm{H}_2$ describe the formation of the bond qualitatively around the bond distance (the minimum of the curve), but they fail to describe the molecule at dissociation. This happens because in Equation \ref{28.2.2} both electrons are in the same orbital with opposite spin (the electrons are coupled), and the orbital is shared among both centers. At dissociation, this corresponds to an erroneous ionic dissociation state where both electrons are localized on either one of the two centers (this center is therefore negatively charged), with the other proton left without electrons. This is in contrast with the correct dissociation, where each electron should be localized around one center (and therefore should be uncoupled from the other electron). This error is once again the result of the approximations that are necessary to treat the TISEq of a many-electron system. It is obviously not a failure of quantum mechanics, and it can be easily corrected using more accurate approximations on modern computers.
1. Compare this equation to (10.6) for the helium atom.
The structure in space of polyatomic molecules depends on the stereochemistry of their chemical bonds and can be determined by solving the (approximated) TISEq within the Born-Oppenheimer approximation, using a linear combination of atomic orbitals to form molecular orbitals (LCAO-MO).
• 28.1: The Chemical Bond in the Water Molecule Using a Minimal Basis
For a minimal representation of the two hydrogen atoms, we need two 1s functions, one centered on each atom. Oxygen has electrons in the second principal quantum level, so we will need one 1s , one 2s , and three 2p functions (one each of px , py , and pz ).
• 28.2: Hartree-Fock Calculation for Water
To find the Hartree-Fock (HF) molecular orbitals (MOs) we need to solve the following secular determinant.
• 28.3: Shapes and Energies of Molecular Orbitals
If we analyze the optimized coefficients of the occupied MOs reported in Equation 29.2.10, we observe that the lowest energy orbital (by a lot!) is a nearly pure oxygen 1s orbital since the coefficient of the oxygen 1s basis function is very nearly 1 and all other coefficients are rather close to 0.
28: The Chemical Bond in Polyatomic Molecules
For a minimal representation of the two hydrogen atoms, we need two $1s$ functions, one centered on each atom. Oxygen has electrons in the second principal quantum level, so we will need one $1s$, one $2s$, and three $2p$ functions (one each of $p_x$, $p_y$, and $p_z$). Summarizing, for a minimal representation of the water wave function we need five orbitals on oxygen, plus one each on the hydrogen atoms, for a total of 7 functions. From these atomic functions, we can build a total wave function using the LCAO method of chapter 27, and then we can use the variational principle, in conjunction with the Hartree-Fock (HF) method, to build and solve a secular determinant that is similar to that in Equation 27.3.4, with $m=7$ being the total number of basis functions. The approximated Hamiltonian operator in the HF method is called the Fock operator, and it can be divided into one-electron integrals, comprising the kinetic and potential energy contributions:
\begin{aligned} \displaystyle K_{ij} &= \int \phi_i^* {\hat K} \phi_j\; d\mathbf{\tau}=\int \phi_i^* {\left(-\dfrac{1}{2}\nabla^2\right)} \phi_j\; d\mathbf{\tau} \ \displaystyle V_{ij} &= \int \phi_i^* {\hat V} \phi_j\;d\mathbf{\tau} = \int \phi_i^* {\left(-\sum_k^{\mathrm{nuclei}}\dfrac{Z_k}{r_k}\right)} \phi_j\; d\mathbf{\tau} , \end{aligned} \label{29.1.1}
as well as two-electron integrals describing the coulomb repulsion between electrons:
$V_{ijkl} = \iint \phi_i^* \phi_j^* {\hat r}_{12} \phi_k \phi_l\; d\mathbf{\tau_1}d\mathbf{\tau_2}=\iint \phi_i^* \phi_j^* \left(\dfrac{1}{r_{12}}\right) \phi_k \phi_l\; d\mathbf{\tau_1}d\mathbf{\tau_2}. \label{29.1.2}$
Despite the minimal basis set, the total number of integrals that need to be calculated for water is large, since $i$, $j$, $k$, and $l$ can be any one of the 7 basis functions. Hence there are $7\times7=49$ kinetic energy integrals, and the same number of potential energy integrals for each nucleus, resulting in $7\times 7 \times 3 = 147$. The grand total of one-electron integrals is thus 196. For the two-electron integrals, we have $7 \times 7 \times 7 \times 7 = 2{,}401$ integrals to calculate. Overall for this simple calculation on water, we need almost $2{,}600$ integrals.1
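The bookkeeping above is simple enough to verify directly:

```python
m = 7                 # number of basis functions in the minimal water basis
n_nuclei = 3

kinetic      = m * m                     # 49 kinetic energy integrals
nuclear_attr = m * m * n_nuclei          # 147 nuclear attraction integrals
one_electron = kinetic + nuclear_attr    # 196 one-electron integrals
two_electron = m ** 4                    # 2,401 two-electron integrals

print(one_electron, two_electron, one_electron + two_electron)  # 196 2401 2597
```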
All this to find $5$ occupied molecular orbitals from which to form a final Slater determinant ($10$ electrons, two to an orbital, so $5$ orbitals). The situation sounds horrible, but it should be recognized that the solutions to all of the integrals are known to be analytic formulae involving only interatomic distances, cartesian exponents, and the values of a single exponent in the atomic functions. If we use slightly simpler gaussian functions instead of the more complicated hydrogenic solutions, the total number of floating-point operations needed to solve the integrals is roughly $1{,}000{,}000$. In computer speak that’s one megaflop (megaflop = million FLoating-point OPerations). A modern processor can achieve gigaflop-per-second performance, so the computer can accomplish all these calculations in under one second. An additional way in which things can be improved is to recognize that the molecule has symmetries that can be exploited to reduce the total number of integrals that need to be calculated.
1. The numbers computed here involve the minimum amount of uncontracted “hydrogenic” functions that can be used for calculation on water. In real-life calculations a linear combination of simpler primitive functions (gaussians) is used to describe a single uncontracted function. For example, in the simplest case, the STO-3G basis set, each uncontracted function is composed of 3 primitive gaussian functions. Thus, for any individual one-electron integral, there will be $3 \times 3 = 9$ separate integrals involving the primitives. There are thus $9 \times 196 = 1{,}764$ individual primitive one-electron integrals. As for the two-electron integrals, again, every individual integral will require considering every possible combination of constituent primitives, which is $3 \times 3 \times 3 \times 3 = 81$. Thus, the total number of primitive two-electron integrals is $81 \times 2{,}401 = 194{,}481$ (gulp!). Notice that even for this small molecule the number of two-electron integrals totally dominates the number of one-electron integrals. The disparity only increases with molecular size. Notice: Portions of this section are based on Prof. C.J. Cramer’s lecture notes available [here](http://pollux.chem.umn.edu/4502/3502_lecture_29.pdf).
To find the Hartree-Fock (HF) molecular orbitals (MOs) we need to solve the following secular determinant:
$\begin{vmatrix} F_{11}-ES_{11} & F_{12}-ES_{12} & \cdots & F_{17}-ES_{17}\\ F_{21}-ES_{21} & F_{22}-ES_{22} & \cdots & F_{27}-ES_{27}\\ \vdots & \vdots & \ddots & \vdots\\ F_{71}-ES_{71} & F_{72}-ES_{72} & \cdots & F_{77}-ES_{77} \end{vmatrix}=0 \label{29.2.1}$
with $S_{ij}$ being the overlap integrals of Equation 27.3.2, and $F_{ij}$ the matrix elements of the Fock operator, defined using the one- and two-electron integrals in Equation 29.1.1 and Equation 29.1.2 as:
$F_{ij} = K_{ij} + V_{ij} + \sum_{kl} P_{kl} \left[ V_{ijkl} -\frac{1}{2}V_{ikjl} \right], \label{29.2.2}$
with the density matrix elements $P_{kl}$ defined as:
$P_{kl} = 2 \sum_{i}^{\mathrm{occupied}} a_{ki}a_{li}, \label{29.2.3}$
where the $a$ values are the coefficients of the basis functions in the occupied molecular orbitals. These values are determined using the SCF procedure, which proceeds as follows: at the first step we simply guess the coefficients; we then iterate through solutions of the secular determinant to derive new coefficients, and we continue to do so until self-consistency is reached (i.e., step $N+1$ provides coefficients and energies that are equal to those of step $N$).
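In matrix form, Equation 29.2.3 is a single line of code. The sketch below is generic (the coefficient matrix is whatever the current SCF iteration provides), and the trace check mentioned in the comment is a standard consistency test that is not part of the text:

```python
import numpy as np

def density_matrix(C, n_occ):
    """Equation 29.2.3: P_kl = 2 * sum over occupied MOs i of a_ki * a_li."""
    C_occ = C[:, :n_occ]          # keep only the occupied MO coefficient columns
    return 2.0 * C_occ @ C_occ.T

# Consistency check (assuming C and the overlap matrix S are available):
# np.trace(density_matrix(C, 5) @ S) should equal 10, the number of electrons in water.
```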
We can try to solve the SCF procedure for water using a fixed geometry of the nuclei close to the experimental structure: O-H bond lengths of $0.95\;\text{Å}$ and a valence bond angle at oxygen of $104.5^\circ$. To do so, we can use a minimal basis composed of the following seven orbitals: basis function #1 is an oxygen $1s$ orbital, #2 is an oxygen $2s$ orbital, #3 is an oxygen $2p_x$ orbital, #4 is an oxygen $2p_y$ orbital, #5 is an oxygen $2p_z$ orbital, #6 is one hydrogen $1s$ orbital, and #7 is the other hydrogen $1s$ orbital. The corresponding integrals introduced in the previous section can be calculated using a quantum chemistry code. The calculated overlap matrix elements are:
$\mathbf{S}=\begin{bmatrix} \mathrm{O}\;1s & \mathrm{O}\;2s & \mathrm{O}\;2p_x & \mathrm{O}\;2p_y & \mathrm{O}\;2p_z & \mathrm{H}_a\;1s & \mathrm{H}_b\;1s & \\ 1.000 & & & & & & &\mathrm{O}\;1s \\ 0.237 & 1.000 & & & & & &\mathrm{O}\;2s \\ 0.000 & 0.000 & 1.000 & & & & &\mathrm{O}\;2p_x \\ 0.000 & 0.000 & 0.000 & 1.000 & & & &\mathrm{O}\;2p_y \\ 0.000 & 0.000 & 0.000 & 0.000 & 1.000 & & &\mathrm{O}\;2p_z \\ 0.055 & 0.479 & 0.000 & 0.313 & -0.242 & 1.000 & &\mathrm{H}_a\;1s \\ 0.055 & 0.479 & 0.000 & -0.313 & -0.242 & 0.256 & 1.000&\mathrm{H}_b\;1s \end{bmatrix} \label{29.2.4}$
There are many noteworthy features in $\mathbf{S}$. First, it is shown in a lower packed triangular form because every element $j,i$ is the same as the element $i,j$ by symmetry, and every diagonal element is $1$ because the basis functions are normalized. Note that, again by symmetry, every $p$ orbital on oxygen is orthogonal (overlap = zero) with every $s$ orbital and with each other, but the two $s$ orbitals do overlap (this is due to the fact that they are not pure hydrogenic orbitals—which would indeed be orthogonal—but they have been optimized, so $S_{12} = 0.237$). Note also that the oxygen $1s$ orbital overlaps about an order of magnitude less with any hydrogen $1s$ orbital than does the oxygen $2s$ orbital, reflecting how much more rapidly the first quantum-level orbital decays compared to the second. Note that by symmetry the oxygen $p_x$ cannot overlap with the hydrogen $1s$ functions (positive overlap below the plane exactly cancels negative overlap above the plane) and that the oxygen $p_y$ overlaps with the two hydrogen $1s$ orbitals equally in magnitude but with different sign because the $p$ orbital has different phase at its different ends. Finally, the overlap of the $p_z$ is identical with each H $1s$ because it is not changing which lobe it uses to interact. The kinetic energy matrix (in a.u.) is:
$\mathbf{K}= \begin{bmatrix} 29.003 & & & & & & \\ -0.168 & 0.808 & & & & & \\ 0.000 & 0.000 & 2.529 & & & & \\ 0.000 & 0.000 & 0.000 & 2.529 & & & \\ 0.000 & 0.000 & 0.000 & 0.000 & 2.529 & & \\ -0.002 & 0.132 & 0.000 & 0.229 & -0.177 & 0.760 & \\ -0.002 & 0.132 & 0.000 & -0.229 & -0.177 & 0.009 & 0.760 \end{bmatrix} \label{29.2.5}$
Notice that every diagonal term is much larger than any off-diagonal term. Recall that each kinetic energy integral, Equation 29.1.1, involves the Laplacian operator, $\nabla^2$. The Laplacian reports back the sum of second derivatives in all coordinate directions. That is, it is a measure of how fast the slope of the function is changing in various directions. If we take two atomic orbitals $\mu$ and $\nu$ far apart from each other, then since gaussians go to zero at least exponentially fast with distance, $\nu$ is likely to be very flat where $\mu$ is large. The second derivative of a flat function is zero. So, every point in the integration will be roughly the amplitude of $\mu$ times zero, and not much will accumulate. For the diagonal element, on the other hand, the interesting second derivatives will occur where the function has maximum amplitude (amongst other places) so the accumulation should be much larger. Notice also that off-diagonal terms can be negative. That is because there is no real physical meaning to a kinetic energy expectation value involving two different orbitals. It is just an integral that appears in the complete secular determinant. Symmetry again keeps $p$ orbitals from mixing with $s$ orbitals or with each other. The nuclear attraction matrix is:
$\mathbf{V}= \begin{bmatrix} -61.733 & & & & & & \\ -7.447 & -10.151 & & & & & \\ 0.000 & 0.000 & -9.926 & & & & \\ 0.000 & 0.000 & 0.000 & -10.152 & & & \\ 0.000 & 0.000 & 0.000 & 0.000 & -10.088 & & \\ -1.778 & -3.920 & 0.000 & -0.228 & -0.184 & -5.867 & \\ -1.778 & -3.920 & 0.000 & 0.228 & 0.184 & -1.652 & -5.867 \end{bmatrix} \label{29.2.6}$
Again, diagonal elements are bigger than off-diagonal elements because the $1/r$ operator acting on a basis function $\nu$ will ensure that the largest contribution to the overall integral will come from the nucleus $k$ on which basis function $\nu$ resides. Unless $\mu$ also has significant amplitude around that nucleus, it will multiply the result by roughly zero and the whole integral will be small. Again, positive values can arise when two different functions are involved even though electrons in a single orbital must always be attracted to nuclei and thus diagonal elements must always be negative. Note that the $p$ orbitals all have different nuclear attractions. That is because, although they all have the same attraction to the O nucleus, they have different amplitudes at the H nuclei. The $p_x$ orbital has the smallest amplitude at the H nuclei (zero, since they are in its nodal plane), so it has the smallest nuclear attraction integral. The $p_z$ orbital has somewhat smaller amplitude at the H nuclei than the $p_y$ orbital because the bond angle is greater than $90^\circ$ (it is $104.5^\circ$; if it were $90^\circ$ the O-H bonds would bisect the $p_y$ and $p_z$ orbitals and their amplitudes at the H nuclei would necessarily be the same). Thus, the nuclear attraction integral for the latter orbital is slightly smaller than for the former.
The sum of the kinetic and nuclear attraction integrals is usually called the one-electron or core part of the Fock matrix and abbreviated $\mathbf{h}$ (i.e., $\mathbf{h} = \mathbf{K} + \mathbf{V}$). One then writes $\mathbf{F} = \mathbf{h} + \mathbf{G}$ where $\mathbf{F}$ is the Fock matrix, $\mathbf{h}$ is the one-electron matrix, and $\mathbf{G}$ is the remaining part of the Fock matrix coming from the two-electron four-index integrals (cf Equation \ref{29.2.2}). To compute those two-electron integrals, however, we need the density matrix, which itself comes from the occupied MO coefficients. So, we need an initial guess at those coefficients. We can get such a guess many ways, but ultimately any guess is as good as any other. With these coefficients we can compute the density matrix using Equation \ref{29.2.3}:
$\mathbf{P}=\begin{bmatrix} 2.108 & & & & & & \\ -0.456 & 2.010 & & & & & \\ 0.000 & 0.000 & 2.000 & & & & \\ 0.000 & 0.000 & 0.000 & 0.737 & & & \\ -0.104 & 0.618 & 0.000 & 0.000 & 1.215 & & \\ -0.022 & -0.059 & 0.000 & 0.539 & -0.482 & 0.606 & \\ -0.022 & -0.059 & 0.000 & -0.539 & -0.482 & -0.183 & 0.606 \end{bmatrix} \label{29.2.7}$
With $\mathbf{P}$, we can compute the remaining contribution of $\mathbf{G}$ to the Fock matrix. We will not list all 406 two-electron integrals here. Instead, we will simply write the total Fock matrix:
$\mathbf{F}= \begin{bmatrix} -20.236 & & & & & & \\ -5.163 & -2.453 & & & & & \\ 0.000 & 0.000 & -0.395 & & & & \\ 0.000 & 0.000 & 0.000 & -0.327 & & & \\ 0.029 & 0.130 & 0.000 & 0.000 & -0.353 & & \\ -1.216 & -1.037 & 0.000 & -0.398 & 0.372 & -0.588 & \\ -1.216 & -1.037 & 0.000 & 0.398 & 0.372 & -0.403 & -0.588 \end{bmatrix} \label{29.2.8}$
So, we’re finally ready to solve the secular determinant, since we have $\mathbf{F}$ and $\mathbf{S}$ fully formed. When we do that, and then solve for the MO coefficients for each root $E$, we get new occupied MOs. Then, we iterate again, and again, and again, until we are satisfied that further iterations will not change either our (i) energy, (ii) density matrix, or (iii) MO coefficients (it’s up to the quantum chemist to decide what is considered satisfactory).
In our water calculation, if we monitor the energy at each step we find:
\begin{aligned} E(RHF) &= \; -74.893\,002\,803\qquad\text{a.u. after 1 cycles} \ E(RHF) &= \; -74.961\,289\,145\qquad\text{a.u. after 2 cycles} \ E(RHF) &= \; -74.961\,707\,247\qquad\text{a.u. after 3 cycles} \ E(RHF) &= \; -74.961\,751\,946\qquad\text{a.u. after 4 cycles} \ E(RHF) &= \; -74.961\,753\,962\qquad\text{a.u. after 5 cycles} \ E(RHF) &= \; -74.961\,754\,063\qquad\text{a.u. after 6 cycles} \ E(RHF) &= \; -74.961\,754\,063\qquad\text{a.u. after 7 cycles} \ \end{aligned}\label{29.2.9}
This means that our original guess was really not too bad—off by a bit less than $0.1\text{ a.u.}$ or roughly $60\text{ kcal mol}^{-1}$. Our guess energy is too high, as the variational principle guarantees it must be. Our first iteration through the secular determinant picks up nearly $0.07\text{ a.u.}$, the next iteration an additional $0.000\,42$ or so, and by the end we are converged to within 1 nanohartree ($0.000\,000\,6\text{ kcal mol}^{-1}$).
The final optimized MOs for water are:
$\begin{matrix} & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\ E & -20.24094 & -1.27218 & -.62173 & -.45392 & -.39176 & .61293 & .75095 \\ \\ 1 & .99411 & -.23251 & .00000 & -.10356 & .00000 & -.13340 & .00000 \\ 2 & .02672 & .83085 & .00000 & .53920 & .00000 & .89746 & .00000 \\ 3 & .00000 & .00000 & .00000 & .00000 & 1.0000 & .00000 & .00000 \\ 4 & .00000 & .00000 & .60677 & .00000 & .00000 & .00000 & .99474 \\ 5 & -.00442 & -.13216 & .00000 & .77828 & .00000 & -.74288 & .00000 \\ 6 & -.00605 & .15919 & .44453 & -.27494 & .00000 & -.80246 & -.84542 \\ 7 & -.00605 & .15919 & -.44453 & -.27494 & .00000 & -.80246 & .84542 \\ \end{matrix} \label{29.2.10}$
where the first row reports the eigenvalues of each MO, in $E_h$ (i.e., the energy of one electron in the MO). The sum of all of the occupied MO energies should be an underestimation of the total electronic energy because electron-electron repulsion will have been double counted. So, if we sum the occupied orbital energies (times two, since there are two electrons in each orbital), we get $2(-20.24094{-}1.27218{-}0.62173{-}0.45392{-}0.39176)=-45.961\,060$. If we now subtract the electron-electron repulsion energy $38.265\,406$ we get $-84.226\,466$. If we add the nuclear repulsion energy $9.264\,701$ to this we get a total energy $-74.961\,765$. The difference between this and the converged result above ($-74.961\,754$) can be attributed to rounding in the MO energies, which are truncated after 5 places. Notice that the five occupied MOs all have negative energies. So, their electrons are bound within the molecule. The unoccupied MOs (called “virtual” MOs) all have positive energies, meaning that the molecule will not spontaneously accept an electron from another source.
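The bookkeeping described in this paragraph can be reproduced directly from the quoted numbers:

```python
# Orbital energies of the five doubly occupied MOs (hartree), from Equation 29.2.10
eps_occ = [-20.24094, -1.27218, -0.62173, -0.45392, -0.39176]

sum_occ = 2 * sum(eps_occ)   # -45.961060 (two electrons per orbital)
V_ee    = 38.265406          # electron-electron repulsion, double counted in sum_occ
V_NN    = 9.264701           # nuclear-nuclear repulsion

E_total = sum_occ - V_ee + V_NN
print(E_total)               # -74.961765, vs. the converged -74.961754
```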
If we analyze the optimized coefficients of the occupied MOs reported in Equation 29.2.10, we observe that the lowest energy orbital (by a lot!) is a nearly pure oxygen $1s$ orbital, since the coefficient of the oxygen $1s$ basis function is very nearly 1 and all other coefficients are rather close to 0. Note, however, that the coefficient is not really a percentage measure. That’s because the basis functions are not necessarily orthogonal to one another. Let’s consider the next molecular orbital up, number 2. It has a dominant contribution from the oxygen $2s$ basis function, but non-trivial contributions from many other basis functions as well. In order to understand which kind of orbital it is, it is useful to try to visualize some of its properties. For example, recall that the square of the orbital at a particular point in space represents a probability density. As such, we can map values of the square of each orbital on a grid in 3-dimensional space, and then pick a value of probability density, say $0.04\; a_0^{-3}$, and plot that as a contour surface (remember that a probability density is a 4-dimensional quantity, so we need to take a slice at some constant density to be able to plot it in 3-D). That surface is called an “isodensity” surface. In addition to the square of the function, we can also color regions where the wave function is positive blue and regions where it is negative red. The five occupied and two unoccupied MOs mapped from their one-electron wave functions are plotted in figure $1$.
Going back to the Lewis structure of water as taught in general chemistry courses, it says that there is one pair of electrons in one O–H $\sigma$ bond, one pair in another identical such $\sigma$ bond, and two equivalent pairs that constitute the lone pairs on oxygen. The two lone pairs and the O–H bonds should be pointing towards the apices of a tetrahedron because they are all considered to be $sp^3$ hybridized.
As you can see, the MOs look nothing like the Lewis picture. Instead, amongst other details, there is one lone pair that is pure $p$ (not $sp^3$), another that is, if anything, $sp^2$-like, but also enjoys contribution from hydrogen $1s$ components. There is one orbital that looks like both O–H $\sigma$ bonds are present, but another that has an odd “bonding-all-over” character to it.
Is it really possible that for something as simple as water all the things you’ve ever been told about the Lewis structure are wrong? Water must have two equivalent lone pairs, right?
It turns out that the molecular orbital results can be tested with spectroscopic experiments, and suffice it to say, they agree perfectly.
But the $sp^3$-hybridized picture of water works well, for example, to explain its hydrogen-bonding behavior: in liquid water each water molecule makes two hydrogen bonds to other water molecules and accepts two more from different water molecules, and the final structure has a net lattice-like form that is tetrahedral at each oxygen atom. How can the above MOs explain that? The key point to remember is that another molecule does not see the individual orbitals of water, it just sees the final effect of all of those electrons and nuclei together. To explain the tetrahedral H-bond lattice we can plot some constant level of electron density (e.g., $0.02\;a_0^{-3}$) and map onto this isodensity surface the values of the electrostatic potential. We can find these values by bringing a positive test charge onto that surface and recording how much it would find itself attracted (because of a net negative electrostatic potential) or repelled (because of a net positive electrostatic potential). This is done in figure $2$. Notice how the negative potential is entirely on the oxygen side and the positive potential entirely on the hydrogens side. Moreover, the negative potential splays out to the tetrahedral points and the positive potential does too (those points for the purple region being roughly where the H atoms are).
The primary method of measuring the energy levels of a material is through the use of electromagnetic radiation. Experiments involving electromagnetic radiation—matter interaction are called spectroscopies. Since the energy levels of atoms and molecules are discontinuous, they absorb or emit light only at specific energies. These specific values correspond to the energy level difference between the initial and final states and they can be measured as signals in spectroscopic experiments. The intensity of the experimental signals depends on the population of the initial state involved in the transition.
Depending on the type of radiation, as well as the shape of the molecules and the inner details of the instrument that is used, some transitions might be visible in the experiment (allowed), while others might not be (forbidden). The analysis of allowed and forbidden transitions for each type of spectroscopy results in mathematical formulas that are called selection rules.
To summarize, spectroscopy is mainly the result of the following three effects:
• The energy levels of the atoms or molecules (determining the position of the signals).
• The population of the energy levels (determining the intensity of the signals).
• The selection rules that account for the symmetry and the interaction with the instrument.
Spectroscopy is the most important experimental verification of quantum mechanics, since we can use it to validate its theoretical results on the energy levels of atoms and molecules.
• 29.1: Rotational Spectroscopy
Rotational spectroscopy is concerned with the measurement of the energies of transitions between quantized rotational states of molecules in the gas phase.
• 29.2: Vibrational Spectroscopy
Vibrational spectroscopy is concerned with the measurement of the energies of transitions between quantized vibrational states of molecules in the gas phase.
• 29.3: Electronic Spectroscopy
Electronic spectroscopy is concerned with the measurement of the energies of transitions between quantized electronic states of molecules.
Thumbnail: White light is dispersed by a prism into the colors of the visible spectrum. (CC BY-SA 3.0; D-Kuru).
29: Spectroscopy
Rotational spectroscopy is concerned with the measurement of the energies of transitions between quantized rotational states of molecules in the gas phase. Rotational transitions of molecules are usually measured in the range $1-10\; \text{cm}^{-1}$ (microwave radiation) and rotational spectroscopy is therefore usually referred to as microwave spectroscopy.
Rotational spectroscopy is actively used by astrophysicists to explore the chemical composition of the interstellar medium using radio telescopes.
The rotational energies are derived theoretically by considering the molecules to be rigid rotors and applying the same treatment that we saw in chapter 20. Correction terms might be applied to account for deviations from the ideal rigid rotor case. As we saw in chapter 20, the quantized rotational energy levels of a rigid rotor depend on the moment of inertia, which in turn depends on the masses of the nuclei and the internuclear distance. Reversing the theoretical procedure of obtaining the energy levels from the distances, we can use the experimental energy levels to derive very precise values of molecular bond lengths (and in some complex cases, also of angles). We will discuss below the simplest case of a diatomic molecule. For non-linear molecules, there are multiple moments of inertia, and only a few analytical methods of solving the TISEq are available. For the most complicated cases, numerical methods can be used.
Rotation of diatomic molecules
Transitions between rotational states can be observed in molecules with a permanent electric dipole moment. The rigid rotor is a good starting point from which to construct a model of a rotating molecule. It is assumed that component atoms are point masses connected by rigid bonds. A linear molecule lies on a single axis and each atom moves on the surface of a sphere around the center of mass. The two degrees of rotational freedom correspond to the spherical coordinates, $\theta$ and $\varphi$, which describe the direction of the molecular axis. The quantum state is determined by two quantum numbers $J$ and $M$. $J$ defines the magnitude of the rotational angular momentum, and $M$ its component about an axis fixed in space, such as an external electric or magnetic field. In the absence of external fields, the energy depends only on $J$. Under the rigid rotor model, the rotational energy levels, $F(J)$, of the molecule can be expressed as:
$F\left(J\right)=BJ\left(J+1\right)\qquad J=0,1,2,\ldots \label{30.1.1}$
where $B$ is the rotational constant of the molecule and is related to its moment of inertia. In a diatomic molecule the moment of inertia about an axis perpendicular to the molecular axis is unique, so:
$B=\dfrac{h}{8\pi ^{2}cI}, \label{30.1.2}$
with:
$I=\dfrac{m_1m_2}{m_1 +m_2}d^2, \label{30.1.3}$
where $m_1$ and $m_2$ are the masses of the atoms and $d$ is the distance between them.
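As an illustration, the two equations above can be combined to estimate $B$ for carbon monoxide. The masses and the bond length used below are approximate literature values, assumed here only for the sake of the example:

```python
import numpy as np

# Physical constants (SI, with c in cm/s so that B comes out in cm^-1)
h = 6.62607015e-34       # J s
c = 2.99792458e10        # cm/s
amu = 1.66053907e-27     # kg

# Approximate values for CO, assumed for illustration
m1, m2 = 12.000 * amu, 15.995 * amu   # 12C and 16O masses
d = 1.128e-10                          # bond length in m (~112.8 pm)

I = (m1 * m2) / (m1 + m2) * d**2       # Equation 30.1.3
B = h / (8 * np.pi**2 * c * I)         # Equation 30.1.2, in cm^-1
print(B)                                # ~1.9 cm^-1
```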
The selection rule for rotational spectroscopy dictates that during emission or absorption the rotational quantum number has to change by unity:
$\Delta J = J^{\prime } - J^{\prime \prime } = \pm 1, \label{30.1.4}$
where $J^{\prime \prime }$ denotes the lower level and $J^{\prime }$ denotes the upper level involved in the transition. Thus, the locations of the lines in a rotational spectrum will be given by
${\tilde \nu }_{J^{\prime }\leftarrow J^{\prime \prime }}=F\left(J^{\prime }\right)-F\left(J^{\prime \prime }\right)=2B\left(J^{\prime \prime }+1\right)\qquad J^{\prime \prime }=0,1,2,\ldots \label{30.1.5}$
The diagram in figure $1$ illustrates rotational transitions that obey the $\Delta J=1$ selection rule.$^1$ The dashed lines show how these transitions map onto features that can be observed experimentally. Adjacent $J^{\prime} \leftarrow J^{\prime \prime}$ transitions are separated by $2B$ in the observed spectrum. Frequency or wavenumber units can also be used for the $x$ axis of this plot.
The probability of a transition taking place is the most important factor influencing the intensity of an observed rotational line. This probability is proportional to the population of the initial state involved in the transition. The population of a rotational state depends on two factors. The number of molecules in an excited state with quantum number $J$, relative to the number of molecules in the ground state, $N_J/N_0$ is given by the Boltzmann distribution:
$\dfrac{N_J}{N_0}=e^{-\dfrac{E_J}{kT}} =\exp\left[-\dfrac {BhcJ(J+1)}{kT}\right], \label{30.1.6}$
where $k$ is the Boltzmann constant and $T$ is the absolute temperature. This factor decreases as $J$ increases. The second factor is the degeneracy of the rotational state, which is equal to $2J+1$. This factor increases as $J$ increases. Combining the two factors we obtain:
$\mathrm{population} \propto (2J+1)\exp\left[-\dfrac{E_J}{kT}\right], \label{30.1.7}$
in agreement with the experimental shape of rotational spectra of diatomic molecules.
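A short script reproduces this behavior. The rotational constant is an illustrative value roughly appropriate for $\mathrm{CO}$:

```python
import numpy as np

B = 1.93            # rotational constant in cm^-1 (illustrative, roughly CO)
T = 300.0           # temperature in K
hc_over_k = 1.4388  # second radiation constant hc/k, in cm K

J = np.arange(0, 20)
positions = 2 * B * (J + 1)                                     # Equation 30.1.5, in cm^-1
population = (2 * J + 1) * np.exp(-B * hc_over_k * J * (J + 1) / T)  # Equation 30.1.7

print(J[np.argmax(population)])   # the most populated level is J ~ 7 at room temperature
```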
1. This diagram is taken from Wikipedia by user Nnrw, and distributed under CC BY-SA 3.0 license.
Vibrational spectroscopy is concerned with the measurement of the energies of transitions between quantized vibrational states of molecules in the gas phase. These transitions usually occur in the middle infrared (IR) region of the electromagnetic spectrum, at approximately $4,000-400\;\text{cm}^{-1}$ ($2.5-25\;\mu \text{m}$). In the gas phase, vibrational transitions are almost always accompanied by changes in rotational energy. Transitions involving changes in both vibrational and rotational states are usually abbreviated as rovibrational transitions. Since changes in rotational energy levels are typically much smaller than changes in vibrational energy levels, changes in rotational state are said to give fine structure to the vibrational spectrum. For a given vibrational transition, the same theoretical treatment that we saw in the previous section for pure rotational spectroscopy gives the rotational quantum numbers, energy levels, and selection rules.
As we have done in the previous section, we will discuss below the simplest case of a diatomic molecule. For non-linear molecules the spectra become complicated to calculate, but their interpretation remains an important tool for the analysis of chemical structures.
Vibration of heteronuclear diatomic molecules
Diatomic molecules with the general formula $\mathrm{AB}$ have one normal mode of vibration involving stretching of the $\mathrm{A}-\mathrm{B}$ bond. The vibrational term values, $G(v)$ can be calculated with the harmonic approximation that we discussed in chapter 20. The resulting equidistant energy levels depend on one vibrational quantum number $v$:
$G(v) = \omega_e \left( v + \dfrac{1}{2} \right), \label{30.2.1}$
where $\omega_e$ is the harmonic frequency around equilibrium. When the molecule is in the gas phase, it can rotate about an axis, perpendicular to the molecular axis, passing through the center of mass of the molecule. As we discussed in the previous section, the rotational energy is also quantized and depends on the rotational quantum number $J$. The values of the rovibrational states are found (in wavenumbers) by combining the expressions for vibration and rotation:
$G(v)+F_{v}(J)=\left[\omega_e \left(v + \dfrac{1}{2} \right) +B_{v}J(J+1)\right], \label{30.2.2}$
where $F_{v}(J)$ are the rotational levels at each vibrational state $v$.$^1$
The selection rule for electric-dipole-allowed rovibrational transitions, in the case of a diamagnetic diatomic molecule, is:
$\Delta v=\pm 1\ (\pm 2,\pm 3,\ldots),\; \Delta J=\pm 1. \label{30.2.3}$
The transition with $\Delta v =\pm 1$ is known as the fundamental transition, while the others are called overtones. The selection rule has two consequences:
1. Both the vibrational and rotational quantum numbers must change. The transition $\Delta v=\pm 1,\;\Delta J=0$ (Q-branch) is forbidden.
2. The energy change of rotation can be either subtracted from or added to the energy change of vibration, giving the P- and R- branches of the spectrum, respectively.
A typical rovibrational spectrum is reported in figure $1$ for the $\mathrm{CO}$ molecule.$^2$ The intensity of the signals is—once again—proportional to the initial population of the levels. Notice how the signals in the spectrum are divided among two sides, the P-branch to the left, and the R-branch to the right. These signals correspond to the transitions reported in figure $2$.$^3$ Notice how the transitions corresponding to the Q-branch are forbidden by the selection rules, and therefore not observed in the experimental spectrum. The position of the missing Q-branch, however, can be easily obtained from the experimental spectrum as the missing signal between the P- and R-branches. Since the Q-branch transitions do not involve changes in the rotational energy level, their position is directly related to $\omega_e$, while the spacing between adjacent lines in the P- and R-branches is related to the rotational constant $B$. These facts make rovibrational spectroscopy an important experimental tool for determining both force constants and bond distances of diatomic molecules.
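A minimal sketch of the line positions follows from Equation 30.2.2 if we assume the same rotational constant $B$ in both vibrational states (i.e., we neglect vibration-rotation interaction). The band origin and $B$ below are approximate values for $\mathrm{CO}$, assumed only for illustration:

```python
import numpy as np

omega_e = 2143.0   # band origin in cm^-1 (roughly the CO fundamental; illustrative)
B = 1.93           # rotational constant in cm^-1, assumed equal in both vibrational states

J = np.arange(0, 15)
R_branch = omega_e + 2 * B * (J + 1)   # Delta J = +1 lines, starting from J'' = 0, 1, 2, ...
P_branch = omega_e - 2 * B * J[1:]     # Delta J = -1 lines, starting from J'' = 1, 2, ...

# No line appears at omega_e itself: the Q-branch (Delta J = 0) is forbidden,
# leaving the characteristic gap between the P and R branches.
print(P_branch[:3], R_branch[:3])
```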
Vibration of homonuclear diatomic molecules
The quantum mechanics for homonuclear diatomic molecules is qualitatively the same as for heteronuclear diatomic molecules, but the selection rules governing transitions are different. Since the electric dipole moment of the homonuclear diatomics is zero, the fundamental vibrational transition is electric-dipole-forbidden and the molecules are infrared inactive.
The spectra of these molecules can be observed with a related type of vibrational spectroscopy that is subject to different selection rules. This technique is called Raman spectroscopy, and it allows identification of the rovibrational spectra of homonuclear diatomic molecules because their molecular vibration is Raman-allowed.
1. This is just a first approximation to rovibrational spectroscopy. Corrections for anharmonicity and centrifugal distortion are necessary to closely match experimental spectra.
2. This picture is taken from Wikipedia, posted by an anonymous user, and distributed under the CC BY 3.0 license.
3. This picture is taken from Wikipedia, posted by user David-i98, and released into the public domain.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/29%3A_Spectroscopy/29.02%3A_Vibrational_Spectroscopy.txt
|
Electronic spectroscopy is concerned with the measurement of the energies of transitions between quantized electronic states of molecules. Electronic transitions are always associated with simultaneous changes in vibrational levels. In the gas phase vibronic transitions are also accompanied by changes in rotational energy.
Electronic transitions are typically observed in the visible and ultraviolet regions, in the wavelength range approximately $200-700\; \text{nm }$ ($50,000-14,000\; \text{cm}^{-1}$). When the electronic and vibrational energy changes are drastically different, vibronic coupling (mixing of electronic and vibrational wave functions) can be neglected and the energy of a vibronic level can be taken as the sum of the electronic and vibrational (and rotational) energies; that is, the Born–Oppenheimer approximation applies. The overall molecular energy depends not only on the electronic state but also on the vibrational and rotational quantum numbers, $v$ and $J$. In this context, it is conventional to add a double prime $\left(v^{\prime\prime},J^{\prime\prime}\right)$ for levels of the electronic ground state and a single prime $\left(v^{\prime},J^{\prime}\right)$ for electronically excited states.
Each electronic transition may show vibrational coarse structure, and for molecules in the gas phase, rotational fine structure. This is true even when the molecule has a zero dipole moment and therefore has no vibration-rotation infrared spectrum or pure rotational microwave spectrum.
It is necessary to distinguish between absorption and emission spectra. With absorption, the molecule starts in the ground electronic state, and usually also in the vibrational ground state $v^{\prime\prime}=0$, because at ordinary temperatures the energy necessary for vibrational excitation is large compared to the average thermal energy. The molecule is excited to another electronic state and to many possible vibrational states $v^{\prime}=0,1,2,3,\ldots$. With emission, the molecule can start in various populated vibrational states, and it finishes in one of many vibrational levels of the electronic ground state. The emission spectrum is more complicated than the absorption spectrum of the same molecule because many more combinations of vibrational levels are involved.
As we did for the previous two cases, we will concentrate below on the electronic absorption spectroscopy of diatomic molecules.
Electronic spectroscopy of diatomic molecules
The vibronic spectra of diatomic molecules in the gas phase also show rotational fine structure. Each line in a vibrational progression will show P- and R- branches. For some electronic transitions there will also be a Q-branch. The transition energies of the lines for a particular vibronic transition are given (in wavenumbers) by:
$G(J^{\prime },J^{\prime \prime })={\bar \nu }_{v^{\prime }-v^{\prime \prime }}+B^{\prime }J^{\prime }(J^{\prime }+1)-B^{\prime \prime }J^{\prime \prime }(J^{\prime \prime }+1). \label{30.3.1}$
The values of the rotational constants, $B^{\prime}$ and $B^{\prime\prime}$ may differ appreciably because the bond length in the electronic excited state may be quite different from the bond length in the ground state. The rotational constant is inversely proportional to the square of the bond length. Usually $B^{\prime}<B^{\prime\prime}$, as is true when an electron is promoted from a bonding orbital to an antibonding orbital, causing bond lengthening.
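As a reminder, the rotational constant (in wavenumbers) of a diatomic molecule with reduced mass $\mu$ and bond length $r$ is given by the standard rigid-rotor expression

$\tilde{B}=\dfrac{h}{8\pi ^{2}c\mu r^{2}},\qquad \mu =\dfrac{m_{A}m_{B}}{m_{A}+m_{B}}, \nonumber$

so a value of $B^{\prime}$ or $B^{\prime\prime}$ extracted from the rotational fine structure translates directly into a bond length for the corresponding electronic state.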
The treatment of rotational fine structure of vibronic transitions is similar to the treatment of rotation-vibration transitions and differs principally in the fact that the ground and excited states correspond to two different electronic states as well as to two different vibrational levels. For the P-branch $J^{\prime }=J^{\prime \prime}-1$, so that:
\begin{aligned} \bar{\nu}_P &=\bar{\nu}_{v^{\prime}-v^{\prime \prime}}+B^{\prime}\left(J^{\prime \prime}-1\right) J^{\prime \prime}-B^{\prime \prime} J^{\prime \prime}\left(J^{\prime \prime}+1\right) \ &=\bar{\nu}_{v^{\prime}-v^{\prime \prime}}-\left(B^{\prime}+B^{\prime \prime}\right) J^{\prime \prime}+\left(B^{\prime}-B^{\prime \prime}\right) {J^{\prime \prime}}^{2} \end{aligned} \label{30.3.2}
Similarly, for the R-branch $J^{\prime\prime }=J^{\prime }-1$, and:
\begin{aligned} {\bar \nu }_{R} &={\bar \nu}_{v^{\prime}-v^{\prime\prime}}+B^{\prime}J^{\prime}(J^{\prime}+1)-B^{\prime\prime}J^{\prime}(J^{\prime}-1) \ &={\bar \nu }_{v^{\prime}-v^{\prime\prime}}+(B^{\prime}+B^{\prime\prime})J^{\prime}+(B^{\prime}-B^{\prime\prime}){J^{\prime}}^{2}. \end{aligned} \label{30.3.3}
Thus, the wavenumbers of transitions in both P- and R- branches are given, to a first approximation, by the single formula:
${\bar \nu }_{P,R}={\bar \nu }_{v^{\prime }-v^{\prime \prime }}+(B^{\prime }+B^{\prime \prime })m+(B^{\prime }-B^{\prime \prime })m^{2},\quad m=\pm 1,\pm 2,\ldots. \label{30.3.4}$
Here positive $m$ values refer to the R-branch (with $m=+J^{\prime}=J^{\prime\prime}+1$) and negative values refer to the P-branch (with $m=-J^{\prime\prime}$).
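As a brief aside (a standard consequence of the formula above, not discussed further in this section), treating $m$ as a continuous variable and setting $d{\bar \nu }_{P,R}/dm=0$ locates the band head:

$m_{\text{head}}=-\dfrac{B^{\prime }+B^{\prime \prime }}{2\left(B^{\prime }-B^{\prime \prime }\right)}. \nonumber$

When $B^{\prime}<B^{\prime\prime}$, as is usually the case, $m_{\text{head}}$ is positive: the R-branch lines converge and then turn back on themselves, producing a band head on the high-wavenumber side and a band that is degraded toward lower wavenumbers.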
The intensity of allowed vibronic transitions is governed by the Franck-Condon principle, which states that during an electronic transition, a change from one vibrational energy level to another will be more likely to happen if the two vibrational wave functions overlap more significantly. A diagrammatic representation of electronic spectroscopy and the Franck-Condon principle for a diatomic molecule is presented in figure $1$.$^1$
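Quantitatively, within the approximation that the electronic transition dipole moment varies slowly with the nuclear coordinates, the relative intensity of each vibronic band is proportional to the square of the overlap integral between the two vibrational wave functions (the Franck-Condon factor):

$I_{v^{\prime}\leftarrow v^{\prime\prime}}\propto \left\vert \int \psi_{v^{\prime}}^{*}\,\psi_{v^{\prime\prime}}\,d{\bf r}\right\vert ^{2}. \nonumber$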
1. This picture is taken from Wikipedia by user Samoza, and distributed under CC BY-SA 3.0 license.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/29%3A_Spectroscopy/29.03%3A_Electronic_Spectroscopy.txt
|
Substance: $\Delta_{\text{f}} H^{-\kern-6pt{\ominus}\kern-6pt-}$ $\scriptstyle{\text{[kJ/mol]}}$ $\Delta_{\text{f}} G^{-\kern-6pt{\ominus}\kern-6pt-}$ $\scriptstyle{\text{[kJ/mol]}}$ $S^{-\kern-6pt{\ominus}\kern-6pt-}$ $\scriptstyle{\text{[J/(mol K)]}}$ $C_P$ $\scriptstyle{\text{[J/(mol K)]}}$
Ag(g) 284.9 246 173 20.8
Ag(s) 0 0 42.6 25.4
Ag+(aq) 105.8 77.1 73.5
AgCN(s) 146 156.9 107.2 66.7
Ag2CO3(s) –505.8 –436.8 167.4 112.3
AgNO3(s) –124.4 –33.4 140.9 93.1
Ag2O(s) –31.1 –11.2 121.3 65.9
Ag2S(s) –32.6 –40.7 144 76.5
AgBr(s) –100.4 –96.9 107.1 52.4
AgCl(s) –127.0 –109.8 96.3 50.8
AgF(s) –204.6 –187 84
AgI(s) –61.8 –66.2 115.5 56.8
Al(g) 330 289.4 164.6 21.4
Al(s) 0 0 28.3 24.2
Al2O3(s) –1675.7 –1582.3 50.9 79.0
AlF3(s) –1510.4 –1431.1 66.5 75.1
AlI3(s) –302.9 195.9
AlBr3(s) –527.2 180.2 100.6
AlCl3(s) –704.5 –628.11 112.3 91.1
Al(OH)3(s) –1277
Al(OH)4-(aq) –1490 –1297 117
AlPO4(s) –1733.8 –1617.9 90.8 93.2
Ar(g) 0 154.9 20.8
B(s) 0 0 5.9 11.1
B(g) 565 521.0 153.4 20.8
BH(g) 442.7 412.7 171.8 29.2
BH3(g) 89.2 93.3 188.2 36.0
B2S3(s) –240.6 100.0 111.7
Ba(g) 180 146 170.2
Ba(s) 0 0 62.5 28.1
BaCO3(s) –1213.0 –1134.4 112.1 86.0
BaH2(s) –177 –138.2 63.0
BaBr2(s) –757.3 –736.8 146.0
BaCl2(s) –855 –806.7 123.7 75.1
BaF2(s) –1207.1 –1156.8 96.4 71.2
BaI2(s) –602.1 –597 167.0
BaO(s) –548.0 –520.3 72.1 47.3
BaSO4(s) –1473.2 –1362.2 132.2 101.8
Be(g) 324 286.6 136.3 20.8
Be(s) 0 0 9.5 13.4
BeBr2(s) –353.5 108 69.4
BeCl2(s) –490.4 –445.6 75.8 62.4
BeF2(s) –1026.8 –979.4 53.4 51.8
BeI2(s) –192.5 121 71.1
BeO(s) –609.4 –580.1 13.8 25.6
Be(OH)2(s) –902.5 –815.0 45.5 62.1
BeSO4(s) –1205.2 –1093.8 77.9 85.7
Bi(g) 207.1 168.2 187 20.8
Bi(s) 0 0 56.7 25.5
Bi2O3(s) –573.9 –493.7 151.5 113.5
BiCl3(s) –379.1 –315.0 177.0 105.0
Br–(aq) –121.4 –104.0 82.6
Br(g) 111.9 82.4 175 20.8
Br2(g) 30.9 3.1 245.5 36.0
Br2(l) 0 0 152.2 75.7
BrCl(g) 14.6 –1 240.1 35.0
BrF(g) –93.8 –109.2 229 33.0
BrF3(g) –1136 1119.4 254.4 66.6
C(g) 716.7 671.3 158.1 20.8
C(s,diamond) 1.9 2.9 2.4 6.1
C(s,graphite) 0 0 5.7 8.5
CBr4(g) 83.9 67 358.1
CBr4(s) 29.4 47.7 212.5
CCl2F2(g) –477.4 –439.4 300.8
CCl2O(g) –219.1 –204.9 283.5
CCl4(g) –95.7 –53.6 309.9
CCl4(l) –128.2 –62.6 216.2
CF4(g) –933.6 –888.3 261.6
CS2(g) 116.7 67.1 237.8 45.4
CS2(l) 89 64.6 151.3 76.4
CO(g) –110.5 –137.2 197.7 29.1
CO2(g) –393.5 –394.4 213.8 37.1
Ca(g) 177.8 144 154.9 20.8
Ca(s) 0 0 41.6 25.9
Ca(OH)2(s) –985.2 –897.5 83.4 87.5
CaBr2(s) –682.8 –663.6 130
CaCl2(s) –795.4 –748.8 108.4 72.9
CaCN(s) –184.5
CaCO3(s,arag.) –1207.8 –1128.2 88 82.3
CaCO3(s,calc.) –1207.6 –1129.1 91.7 83.5
CaF2(s) –1228.0 –1175.6 68.5 67.0
CaH2(s) –181.5 –142.5 41.4 41.0
CaI2(s) –533.5 –528.9 142
CaO(s) –634.9 –603.3 38.1 42.0
CaSO4(s) –1434.5 –1322.0 106.5 99.7
Cd(g) 111.8 167.7 20.8
Cd(s) 0 0 51.8 26.0
CdBr2(s) –316.2 –296.3 137.2 76.7
CdCl2(s) –391.5 –343.9 115.3 74.7
CdCO3(s) –750.6 –669.4 92.5
CdF2(s) –700.4 –647.7 77.4
CdS(s) –161.9 –156.5 64.9
CdSO4(s) –933.3 –822.7 123.0 99.6
Cl–(aq) –167.1 –131.2 56.6
Cl(g) 121.3 105.3 165.2 21.8
Cl2(g) 0 0 223.1 33.9
ClF(g) –50.3 –51.8 217.9 32.1
ClF3(g) –163.2 –123.0 281.6 63.9
ClO2(g) 89.1 105 263.7 46.0
Cl2O(g) 80.3 97.9 266.2 45.4
Co(g) 424.7 380.3 179.5 23.0
Co(s) 0 0 30 24.8
CoCl2(s) –312.5 –269.8 109.2 78.5
Cr(g) 396.6 351.8 174.5 20.8
Cr(s) 0 0 23.8 23.4
Cr2O3(s) –1139.7 –1058.1 81.2 118.7
CrCl2(s) –395.4 –356 115.3 71.2
CrCl3(s) –556.5 –486.1 123 91.8
CrO2(g) –598
CrO3(g) –292.9 266.2 56.0
Cs(g) 76.5 49.6 175.6 20.8
Cs(s) 0 0 85.2 32.2
CsCl(s) –443.0 –414.5 101.2 52.5
Cu(g) 337.4 297.7 166.4 20.8
Cu(s) 0 0 33.2 24.2
Cu2O(s) –168.6 –146.0 93.1 63.6
CuO(s) –157.3 –129.7 42.6
Cu2S(s) –79.5 –86.2 120.9 76.3
CuS(s) –53.1 –53.6 66.5 47.8
CuSO4(s) –771.4 –662.2 109.2
CuBr(s) –104.6 –100.8 96.1 54.7
CuBr2(s) –141.8
CuCl(s) –137.2 –119.9 86.2 48.5
CuCl2(s) –220.1 –175.7 108.1 71.9
CuCN(s) 96.2 111.3 84.5
F–(aq) –335.4 –278.8 –13.8
F(g) 79.4 62.3 158.8 22.7
F2(g) 0 0 202.8 32.3
F2O(g) 24.5 41.8 247.5 43.3
FO(g) 109 105.3 216.4 32.0
FB(g) –122.2 –149.8 200.5 58.6
Fe(g) 416.3 370.7 180.5 25.7
Fe(s) 0 0 27.3 25.1
FeO(s) –272.0 –251.4 60.7
Fe2+(aq) –89.1 –78.9 –137.7
Fe2O3(s) –824.2 –742.2 87.4 103.9
Fe3+(aq) –48.5 –4.7 –315.9
Fe3O4(s) –1118.4 –1015.4 146.4 143.4
FeCO3(s) –740.6 –666.7 92.9 82.1
FeS2(s) –178.2 –166.9 52.9 62.2
FeCl2(s) –341.8 –302.3 118 75.7
FeCl3(s) –399.5 –334.0 142.3 96.7
FeBr2(s) –249.8 –238.1 140.6
FeBr3(s) –268.2
Fe3C(s) 25.1 20.1 104.6 105.9
H(g) 218.0 203.3 114.7 20.8
H+(aq) 0 0 0
H2(g) 0 0 130.7 28.8
H2O(g) –241.8 –228.6 188.8 33.6
H2O(l) –285.8 –237.1 70.0 75.3
H2O2(g) –136.3 –105.6 232.7 43.1
H2O2(l) –187.8 –120.4 109.6 89.1
H2S(g) –20.6 –33.4 205.8 34.2
H2Se(g) 29.7 15.9 219 34.7
H2SO4(aq) –909.3 –744.5 20.1
H2SO4(l) –814.0 –690.0 156.9 138.9
H3PO4(l) –1271.7 –1123.6 150.8 145.0
H3PO4(s) –1284.4 –1124.3 110.5 106.1
HBr(aq) –121.6 –104.0 82.4
HBr(g) –36.3 –53.4 198.7 29.1
HCl(aq) –167.2 –131.2 56.5
HCl(g) –92.3 –95.3 186.9 29.1
HCN(g) 135.1 124.7 201.8 35.9
HCN(l) 108.9 125 112.8 70.6
HF(aq) –332.6 –278.8 –13.8
HF(g) –273.3 –275.4 173.8
HI(aq) –55.2 –51.6 111.3
HI(g) 26.5 1.7 206.6 29.2
HNO2(g) –79.5 –46.0 254.1
HNO3(aq) –207.4 –111.3 146.4
HNO3(g) –133.9 –73.5 266.9 54.1
HNO3(l) –174.1 –80.7 155.6 109.9
He(g) 0 0 126.2 20.8
Hg(g) 61.4 31.8 175
Hg(l) 0 0 75.9 28.0
Hg2(g) 108.8 68.2 288.1
HgO(s) –90.8 –58.5 70.3 44.1
HgS(s,red) –58.2 –50.6 82.4 48.4
Hg2SO4(s) –743.1 –625.8 200.7 132.0
HgSO4(s) –707.5
Hg2Cl2(s) –265.4 –210.7 191.6 191.6
HgCl2(s) –224.3 –178.6 146.0 146.0
Hg2Br2(s) –206.9 –181.1 218.0 218.0
HgBr2(s) –170.7 –153.1 172.0 172.0
Hg2I2(s) –121.3 –111 233.5 233.5
HgI2(s) –105.4 –101.7 180.0 180.0
I–(aq) –56.8 –51.6 106.5
I(g) 106.8 70.2 180.8 20.8
I2(g) 62.4 19.3 260.7 36.9
I2(s) 0 0 116.1 54.4
HIO3(s) –230.1
IBr(g) 40.8 3.7 258.8 36.4
ICl(g) 17.8 –5.5 247.6 35.6
IF(g) –95.7 –118.5 236.2 33.4
K(g) 89.0 60.5 160.3 20.8
K(s) 0 0 64.7 29.6
K2CO3(s) –1151.0 –1063.5 155.5 114.4
K2O(s) –361.5 –322.1 94.1
K2O2(s) –494.1 –425.1 102.1
K2SO4(s) –1437.8 –1321.4 175.6 131.5
KBr(s) –393.8 –380.7 95.9 52.3
KCl(s) –436.5 –408.5 82.6 51.3
KF(s) –567.3 –537.8 66.6 49.0
KI(s) –327.9 –324.9 106.3 52.9
KClO3(s) –397.7 –296.3 143.1 100.3
KMnO4(s) –837.2 –737.6 171.7 117.6
KNO2(s) –369.8 –306.6 152.1 107.4
KNO3(s) –494.6 –394.9 133.1 96.4
KSCN(s) –200.2 –178.3 124.3 88.5
Kr(g) 0 0 164.1 20.8
Li(g) 159.3 126.6 138.8 20.8
Li(s) 0 0 29.1 24.9
Li+(aq) –278.5 –293.3 12.4
Li2O(s) –597.9 –561.2 37.6 54.1
LiOH(s) –487.5 –441.5 42.8 49.6
LiNO3(s) –483.1 –381.1 90.0
LiBr(s) –351.2 –342 74.3
LiCl(s) –408.6 –384.4 59.3 48.0
LiF(s) –616 –587.7 35.7 41.6
LiI(s) –270.4 –270.3 86.8 51.0
Mg(g) 147.1 112.5 148.6 20.8
Mg(s) 0 0 32.7 24.9
MgO(s) –601.6 –569.3 27.0 37.2
Mg(OH)2(s) –924.5 –833.5 63.2 77.0
MgS(s) –346.0 –341.8 50.3 45.6
MgSO4(s) –1284.9 –1170.6 91.6 96.5
MgBr2(s) –524.3 –503.8 117.2
MgCl2(s) –641.3 –591.8 89.6 71.4
MgF2(s) –1124.2 –1071.1 57.2 61.6
Mn(g) 280.7 238.5 173.7 20.8
Mn(s) 0 0 32 26.3
MnO(s) –385.2 –362.9 59.7 45.4
MnO2(s) –520.0 –465.1 53.1 54.1
MnO4–(aq) –541.4 –447.2 191.2
MnBr2(s) –384.9
MnCl2(s) –481.3 –440.5 118.2 72.9
Mo(g) 658.1 612.5 182 20.8
Mo(s) 0 0 28.7 24.1
MoO2(s) –588.9 –533.0 46.3 56.0
MoO3(s) –745.1 –668.0 77.7 75.0
MoS2(s) –235.1 –225.9 62.6 63.6
MoS3(s) –364 –354 119
N(g) 472.7 455.5 153.3 20.8
N2(g) 0 0 191.6 29.1
NF3(g) –132.1 –90.6 260.8 53.4
NH3(g) –45.9 –16.4 192.8 35.1
NH4+(aq) –133.3 –79.3 111.2
NH4Cl(s) –314.4 –202.9 94.6 84.1
NH4NO3(s) –365.6 –183.9 151.1 139.3
NH4OH(l) –361.2 –254.0 165.6 154.9
(NH4)2SO4(s) –1180.9 –901.7 220.1 187.5
N2H4(g) 95.4 159.4 238.5
N2H4(l) 50.6 149.3 121.2
NO2(g) 33.2 51.3 240.1 37.2
N2O(g) 81.6 103.7 220 38.6
NO(g) 91.3 87.6 210.8
N2O4(g) 11.1 99.8 304.4 79.2
N2O4(l) –19.5 97.5 209.2 142.7
Na(g) 107.5 77 153.7 20.8
Na(s) 0 0 51.3 28.2
Na+(aq) –240.2 –261.9 58.5
Na2CO3(s) –1130.7 –1044.4 135 112.3
Na2O(s) –414.2 –375.5 75.1 69.1
Na2O2(s) –510.9 –447.7 95 89.2
Na2SO4(s) –1387.1 –1270.2 149.6 128.2
NaBr(aq) –361.7 –365.8 141.4
NaBr(g) –143.1 –177.1 241.2 36.3
NaBr(s) –361.1 –349.0 86.8 51.4
NaCl(aq) –407.3 –393.1 115.5
NaCl(s) –411.2 –384.1 72.1 50.5
NaCN(s) –87.5 –76.4 115.6 70.4
NaF(aq) –572.8 –540.7 45.2
NaF(s) –576.6 –546.3 51.1 46.9
NaN3(s) 21.7 93.8 96.9 76.6
NaNO3(aq) –447.5 –373.2 205.4
NaNO3(s) –467.9 –367.0 116.5 92.9
NaO2(s) –260.2 –218.4 115.9 72.1
NaOH(s) –425.8 –379.7 64.4 59.5
NaH(s) –56.3 –33.6 40 36.4
Ne(g) 0 0 146.3 20.8
Ni(g) 429.7 384.5 182.2 23.4
Ni(s) 0 0 29.9 26.1
Ni2O3(s) –489.5
Ni(OH)2(s) –529.7 –447.2 88
NiBr2(s) –212.1
NiCl2(s) –305.3 –259.0 97.7 71.7
NiF2(s) –651.4 –604.1 73.6 64.1
O(g) 249.2 231.7 161.1 21.9
O2(g) 0 0 205.2 29.4
O3(g) 142.7 163.2 238.9 39.2
OH–(aq) –230.0 –157.2 –10.9
Os(g) 791 745 192.6 20.8
Os(s) 0 0 32.6 24.7
OsO4(g) –337.2 –292.8 293.8 74.1
OsO4(s) –394.1 –304.9 143.9
P(g,white) 316.5 280.1 163.2 20.8
P(s,black) –39.3
P(s,red) –17.6 –12.5 22.8 21.2
P(s,white) 0 0 41.1 23.8
P2(g) 144.0 103.5 218.1
P4(g) 58.9 24.4 280.0
PCl3(g) –287.0 –267.8 311.8 71.8
PCl3(l) –319.7 –272.3 217.1
PCl5(g) –374.9 –305.0 364.6 112.8
PH3(g) 5.4 13.5 210.2 37.1
POCl3(g) –558.5 –512.9 325.5
POCl3(l) –597.1 –520.8 222.5
Pb(g) 195.2 162.2 175.4 20.8
Pb(s) 0 0 64.8 26.8
PbCl2(s) –359.4 –314.1 136
PbCO3(s) –699.1 –625.5 131 87.4
PbO(s,litharge) –219.0 –188.9 66.5 45.8
PbO(s,massic.) –217.3 –187.9 68.7 45.8
PbO2(s) –277.4 –217.3 68.6 64.6
Pb(NO3)2(aq) –416.3 –246.9 303.3
Pb(NO3)2(s) –451.9
PbS(s) –100.4 –98.7 91.2 49.5
PbSO4(s) –920.0 –813.0 148.5 103.2
Rb(g) 80.9 53.1 170.1 20.8
Rb(s) 0 0 76.8 31.1
RbCl(s) –435.4 –407.8 95.9 52.4
S(g,rhombic) 277.2 236.7 167.8 23.7
S(s,rhombic) 0 0 32.1 22.6
SO2(g) –296.8 –300.1 248.2 39.9
SO3(g) –395.7 –371.1 256.8 50.7
SO42–(aq) –909.3 –744.5 18.5
SOCl2(g) –212.5 –198.3 309.8
Se(g,gray) 227.1 187 176.7
Se(s,gray) 0 0 42.4 25.4
Si(g) 450 405.5 168.0 22.3
Si(s) 0 0 18.8 20.0
SiC(s,cubic) –65.3 –62.8 16.6 26.9
SiC(s,hexag.) –62.8 –60.2 16.5 26.7
SiCl4(g) –657.0 –617.0 330.7
SiCl4(l) –687.0 –619.8 239.7
SiH4(g) 34.3 56.9 204.6 42.8
Sn(g,white) 301 266.2 168.5 21.3
Sn(s,gray) –2.1 0.1 44.1 25.8
Sn(s,white) 0 0 51.2 27.0
SnCl4(g) –471.5 –432.2 365.8 98.3
SnCl4(l) –511.3 –440.1 258.6 165.3
SnO2(s) –557.6 –515.8 49 52.6
Ti(g) 473 428.4 180.3 24.4
Ti(s) 0 0 30.7 25.1
TiCl2(s) –513.8 –464.4 87.4 69.8
TiCl3(s) –720.9 –653.5 139.7 97.2
TiCl4(g) –763 –726.3 353 95.4
TiCl4(l) –804.2 –737.2 252.3 145.2
TiO2(s) –944.0 –888.8 50.6 55.0
U(g) 533 488.4 199.8 23.7
U(s) 0 0 50.2 27.7
UF4(g) –1598.7 –1572.7 368 91.2
UF4(s) –1914.2 –1823.3 151.7 116.0
UF6(g) –2147.4 –2063.7 377.9 129.6
UF6(s) –2197.0 –2068.5 227.6 166.8
UO2(g) –465.7 –471.5 274.6 51.4
UO2(s) –1085.0 –1031.8 77.0 63.6
V(g) 514.2 754.4 182.3 26.0
V(s) 0 0 28.9 24.9
V2O5(s) –1550.6 –1419.5 131.0 127.7
VCl3(s) –580.7 –511.2 131.0 93.2
VCl4(g) –525.5 –492.0 362.4 96.2
VCl4(l) –569.4 –503.7 255.0
Xe(g) 0 0 169.7 20.8
Zn(g) 130.4 94.8 161.0 20.8
Zn(s) 0 0 41.6 25.4
ZnBr2(s) –328.7 –312.1 138.5
ZnCl2(s) –415.1 –369.4 111.5 71.3
ZnF2(s) –764.4 –713.3 73.7 65.7
ZnI2(s) –208.0 –209.0 161.1
Zn(NO3)2(s) –483.7
ZnS(s,sphaler.) –206.0 –201.3 57.7 46.0
ZnSO4(s) –982.8 –871.5 110.5 99.2
Zr(g) 608.8 566.5 181.4 26.7
Zr(s) 0 0 39 25.4
ZrCl2(s) –502.0 –386 110
ZrCl4(s) –980.5 –889.9 181.6 119.8
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/30%3A_Appendix/30.01%3A_Thermodynamic_Data_of_Inorganic_Substances_at_298_K.txt
|
Formula: Name: $\Delta_{\text{f}} H^{-\kern-6pt{\ominus}\kern-6pt-}$ $\scriptstyle{\text{[kJ/mol]}}$ $\Delta_{\text{f}} G^{-\kern-6pt{\ominus}\kern-6pt-}$ $\scriptstyle{\text{[kJ/mol]}}$ $S^{-\kern-6pt{\ominus}\kern-6pt-}$ $\scriptstyle{\text{[J/(mol K)]}}$ $C_P$ $\scriptstyle{\text{[J/(mol K)]}}$
CHBr3 Bromoform(g) 25 16 331 71
CHCl3 Chloroform(l) –134.1 –73.7 201.7 114.2
CHCl3 Chloroform(g) –102.7 6.0 295.7 65.7
CH2O2 Formic acid(l) –425 –361.4 129 99
CH2O2 Formic acid(g) –378.7
CH3 Methyl(g) 145.7 47.9 194.2 38.7
CH3Br Bromomethane(l) –59.8
CH3Br Bromomethane(g) –35.4 –26.3 246.4 42.4
CH3Cl Chloromethane(g) –81.9 –63 234.6 40.8
CH3F Fluormethane(g) –234 –210 222.9 37.5
CH3I Iodomethane(l) –13.6 136.2 126
CH3I Iodomethane(g) 14.4 16 254.1 44.1
CH3NO2 Nitromethane(l) –112.6 –14.4 171.8 106.6
CH3NO2 Nitromethane(g) –80.8 –7 282.9 55.5
CH4 Methane(g) –74.6 –50.5 186.3 35.7
CH4O Methanol(l) –239.2 –166.6 126.8 81.1
CH4O Methanol(g) –201 –162.3 239.9 44.1
CH5N Methylamine(l) –47.3 35.7 150.2 102.1
CH5N Methylamine(g) –22.5 32.7 242.9 50.1
C2H2 Ethyne (acetylene)(g) 226.9 209 201 44
C2H4 Ethene(g) 52.5 68.4 219.3 42.9
C2H4O2 Acetic acid(l) –484.3 –389.9 159.8 123.3
C2H4O2 Acetic acid(g) –432.2 –374.2 283.5 63.4
C2H5Br Bromoethane(l) –90.5 –25.8 198.7 100.8
C2H5Br Bromoethane(g) –61.9 –23.9 286.7 64.5
C2H5Cl Chloroethane(l) –136.8 –59.3 190.8 104.3
C2H5Cl Chloroethane(g) –112.1 –60.4 276 62.8
C2H5NO2 Nitroethane(l) –143.9 134.4
C2H5NO2 Nitroethane(g) –103.8 –5 320.5 79
C2H6 Ethane(g) –84 –32 229.2 52.5
C2H6O Ethanol(l) –277.6 –174.8 160.7 112.3
C2H6O Ethanol(g) –234.8 –167.9 281.6 65.6
C2H6O Methoxymethane(l) –203.3
C2H6O Methoxymethane(g) –184.1 –112.6 266.4
C2H7N Ethylamine(l) –74.1 130
C2H7N Ethylamine(g) –47.5 36.3 283.8 71.5
C3H4 Cyclopropene(g) 277.1 286 244 53
C3H4 Propyne(g) 185 194 248 61
C3H6 Cyclopropane(l) 35.2
C3H6 Cyclopropane(g) 53.3 104.5 237.5 55.6
C3H6 Propene(g) 20 62 267 64
C3H6O Acetone(l) –248.4 199.8 126.3
C3H6O Acetone(g) –217.1 –152.7 295.3 74.5
C3H6O2 Propanoic acid(l) –510.7 191 152.8
C3H6O2 Propanoic acid(g) –455.7
C3H8 Propane(l) –120.9
C3H8 Propane(g) –103.8 –23.4 270.3 73.6
C3H8O 1-Propanol(l) –302.6 193.6 143.9
C3H8O 1-Propanol(g) –255.1 322.6 85.6
C3H8O 2-Propanol(l) –318.1 181.1 156.5
C3H8O 2-Propanol(g) –272.6 309.2 89.3
C3H9N 1-Propanamine(g) –72 40 324
C4H6 1-Butyne(l) 141.4
C4H6 1-Butyne(g) 165.2 202 291 81
C4H6 2-Butyne(l) 119.1
C4H6 2-Butyne(g) 145.7 185 283 78
C4H6 Cyclobutene(g) 156.7 175 64 64
C4H8 2-Methyl–1-propene(g) –17 58 294 89
C4H8 1-Butene(l) –20.8 227 118
C4H8 1-Butene(g) 0.1 71 306 86
C4H8 Cyclobutane(l) 3.7
C4H8 Cyclobutane(g) 27.7 110 265
C4H8O2 Butanoic acid(l) –533.8 222.2 178.6
C4H8O2 Butanoic acid(g) –475.9
C4H10 2-Methylpropane(g) –135 –21 295 97
C4H10 Butane(l) –147.3 140.9
C4H10 Butane(g) –125.7 –17 310 98
C4H10O 1-Butanol(l) –327.3 225.8 177.2
C4H10O 1-Butanol(g) –274.9
C4H10O 2-Butanol(l) –342.6 214.9 196.9
C4H10O 2-Butanol(g) –292.8 359.5 112.7
C5H8 1-Pentyne(g) 144 210 330 105
C5H8 2-Pentyne(g) 129 194 332 99
C5H8 Cyclopentene(l) 4.3 201.2 122.4
C5H8 Cyclopentene(g) 34 111 290 75
C5H10 1-Pentene(l) –46.9 262.6 154
C5H10 1-Pentene(g) –21.1 79 346 110
C5H10 2-Methyl–1-butene(g) –35.2 66 340 112
C5H10 2-Methyl–1-butene(l) –61.1 254 157.2
C5H10 Cyclopentane(l) –105.1 204.5 128.8
C5H10 Cyclopentane(g) –76.4 39 293 83
C5H10O2 Pentanoic acid(l) –559.4 259.8 210.3
C5H10O2 Pentanoic acid(g) –491.9
C5H12 2,2-Dimethylpropane(g) –166 –15 306 122
C5H12 2-Methylbutane(g) –155 –15 344 119
C5H12 Pentane(l) –173.5 167.2
C5H12 Pentane(g) –146.9 –8 349 120
C5H12O 1-Pentanol(l) –351.6 208.1
C5H12O 1-Pentanol(g) –294.6
C5H12O 2-Pentanol(l) –365.2
C5H12O 2-Pentanol(g) –311
C5H12O 3-Pentanol(l) –368.9 239.7
C5H12O 3-Pentanol(g) –314.9
C5H12O Methyl tert-butyl ether(l) –313.6 265.3 187.5
C5H12O Methyl tert-butyl ether(g) –283.7
C6H6 Benzene(l) 49.1 124.5 173.4 136
C6H6 Benzene(g) 82.9 129.7 269.2 82.4
C6H7N Aniline(l) 191.9
C6H7N Aniline(g) 87.5 –7 317.9 107.9
C6H10 1-Hexyne(g) 124 219 369 128
C6H10 Cyclohexene(l) –28.5 214.6 148.3
C6H10 Cyclohexene(g) –5 107 311 105
C6H12 1-Hexene(l) –74.2 295.2 183.3
C6H12 1-Hexene(g) –43.5 87 385 132
C6H12 2-Methyl–1-pentene(g) –59.4
C6H12 2-Methyl–1-pentene(l) –90
C6H12 Cyclohexane(l) –156.4 154.9
C6H12 Cyclohexane(g) –123.4 32 298 106
C6H12 Methylcyclopentane(g) –106.2
C6H12 Methylcyclopentane(l) –137.9
C6H12O2 Hexanoic acid(l) –583.8
C6H12O2 Hexanoic acid(g) –511.9
C6H14 2,2-Dimethylbutane(g) –185.9 –10 358 142
C6H14 2,2-Dimethylbutane(l) –213.8 272.5 191.9
C6H14 2-Methylpentane(g) –174.6 –5 381 144
C6H14 2-Methylpentane(l) –204.6 290.6 193.7
C6H14 3-Methylpentane(g) –171.9 –2 380 143
C6H14 3-Methylpentane(l) –202.4 292.5 190.7
C6H14 Hexane(l) –198.7 195.6
C6H14 Hexane(g) –166.9 –0.3 388 143
C6H14O 1-Hexanol(l) –377.5 287.4 240.4
C6H14O 1-Hexanol(g) –315.9
C6H14O 2-Hexanol(l) –392
C6H14O 2-Hexanol(g) –333.5
C7H6O Phenol(s) –165.1 144 127.4
C7H6O Phenol(g) –96.4 –33 316 104
C7H8 Methylbenzene(l) 12.0 220 156
C7H8 Methylbenzene(g) 50.0 122 321 104
C7H14 1-Heptene(l) –97.9 327.6 211.8
C7H14 1-Heptene(g) –62.3 96 424 15
C7H14 Cycloheptane(l) –156.6
C7H14 Cycloheptane(g) –118.1
C7H14 Ethylcyclopentane(l) –163.4 279.9
C7H14 Ethylcyclopentane(g) –126.9
C7H14 Methylcyclohexane(g) –154.7
C7H14 Methylcyclohexane(l) –190.1 184.8
C7H14O2 Heptanoic acid(l) –610.2 265.4
C7H14O2 Heptanoic acid(g) –536.2
C7H16 2,2-Dimethylpentane(g) –205.7
C7H16 2,2-Dimethylpentane(l) –238.3 300.3 221.1
C7H16 2-Methylhexane(g) –194.5
C7H16 2-Methylhexane(l) –229.5 323.3 222.9
C7H16 3-Methylhexane(g) –191.3
C7H16 3-Methylhexane(l) –226.4
C7H16 Heptane(l) –224.2 224.7
C7H16 Heptane(g) –187.6 8 428 166
C7H16O 1-Heptanol(l) –403.3 272.1
C7H16O 1-Heptanol(g) –336.5
C8H10 Ethylbenzene(l) –12.3 183.2
C8H10 Ethylbenzene(g) 29.9 131 361 128
C8H16 1-Octene(l) –124.5 241
C8H16 1-Octene(g) –81.3 104 463 178
C8H16 Cyclooctane(l) –167.7
C8H16 Cyclooctane(g) –124.4
C8H16 Ethylcyclohexane(l) –212.1 280.9 211.8
C8H16 Ethylcyclohexane(g) –171.5
C8H16O2 Octanoic acid(l) –636 297.9
C8H16O2 Octanoic acid(g) –554.3
C8H18 2-Methylheptane(g) –215.3
C8H18 2-Methylheptane(l) –255 356.4 252
C8H18 3-Methylheptane(g) –212.5
C8H18 3-Methylheptane(l) –252.3 362.6 250.2
C8H18 Octane(l) –250.1 254.6
C8H18 Octane(g) –208.5 16 467 189
C8H18O 1-Octanol(l) –426.5 305.2
C8H18O 1-Octanol(g) –355.6
C8H18 2,2-Dimethylhexane(g) –224.5
C8H18 2,2-Dimethylhexane(l) –261.9
C9H18 Propylcyclohexane(g) –192 420 242
C9H12 Propylbenzene(g) 8 137 401 154
C9H16 1-Nonyne(l) 16.3
C9H16 1-Nonyne(g) 62.3
C9H18O2 Nonanoic acid(l) –659.7 362.4
C9H18O2 Nonanoic acid(g) –577.3
C9H20 2,2-Dimethylheptane(g) –246
C9H20 2,2-Dimethylheptane(l) –288.1
C9H20 Nonane(l) –274.7 284.4
C9H20 Nonane(g) –228.2 25 506 212
C9H20O 1-Nonanol(l) –453.4
C9H20O 1-Nonanol(g) –376.5
C10H8 Naphthalene(g) 151 224 336
C10H14 Butylbenzene(l) –63.2 321.2 243.4
C10H14 Butylbenzene(g) –11.8
C10H20 1-Decene(l) –173.8 425 300.8
C10H20 1-Decene(g) –123.3 301
C10H20 Butylcyclohexane(l) –263.1 345 271
C10H20 Butylcyclohexane(g) –213.7
C10H20O2 Decanoic acid(s) –713.7
C10H20O2 Decanoic acid(l) –684.3
C10H20O2 Decanoic acid(g) –594.9
C10H22 2-Methylnonane(g) –260.2
C10H22 2-Methylnonane(l) –309.8 420.1 313.3
C10H22 Decane(l) –300.9 314.4
C10H22 Decane(g) –249.5 33 545 235
C10H22O 1-Decanol(l) –478.1 370.6
C10H22O 1-Decanol(g) –396.6
C11H10 1-Methylnaphthalene(l) 56.3 254.8 224.4
C11H10 2-Methylnaphthalene(s) 44.9 220 196
C11H10 2-Methylnaphthalene(g) 106.7
C11H22 1-Undecene(g) 344.9
C11H24 Undecane(l) –327.2 344.9
C11H24 Undecane(g) –270.8 42 584 257
C11H24O 1-Undecanol(l) –504.8
C12H24 1-Dodecene(l) –226.2 484.8 360.7
C12H24 1-Dodecene(g) –165.4
C12H24O2 Dodecanoic acid(s) –774.6 404.3
C12H24O2 Dodecanoic acid(l) –737.9
C12H24O2 Dodecanoic acid(g) –642
C12H26 Dodecane(l) –350.9 375.8
C12H26 Dodecane(g) –289.4 50 623 280
C12H26O 1-Dodecanol(l) –528.5 438.1
C12H26O 1-Dodecanol(g) –436.6
C14H10 Anthracene(g) 231
C14H10 Phenantrene(g) 207
C15H30 Decylcyclopentane(l) –367.3
C16H26 Decylbenzene(l) –218.3
C16H26 Decylbenzene(g) –138.6
C16H32 1-Hexadecene(l) –328.7 587.9 488.9
C16H32 1-Hexadecene(g) –248.4
C16H32O2 Hexadecanoic acid(s) –891.5 452.4 460.7
C16H32O2 Hexadecanoic acid(l) –838.1
C16H32O2 Hexadecanoic acid(g) –737.1
C16H34 N-hexadecane(l) –456.1 501.6
C16H34 N-hexadecane(g) –374.8
C18H12 Chrysene(s) 145.3
C18H12 Chrysene(g) 269.8
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/30%3A_Appendix/30.02%3A_Thermodynamic_Data_of_Organic_Substances_at_298_K.txt
|
Physical chemistry encompasses a wide variety of ideas that are intimately linked. For the most part, we cannot understand one without having some understanding of many others. We overcome this problem by looking at the same idea from a series of slightly different and increasingly sophisticated perspectives. This book focuses on the theories of physical chemistry that describe and make predictions about chemical equilibrium. We omit many topics that are usually understood to be included in the subject of physical chemistry. In particular, we treat quantum mechanics only briefly and spectroscopy not at all.
The goals of chemistry are to predict molecular structures and chemical reactivity. For a given empirical formula, we want to be able to predict all of the stable, three-dimensional atomic arrangements that can exist. We also want to be able to predict all of the reactions in which each such molecule can participate. And, while we are drawing up our wish list, we want to be able to predict how fast each reaction goes at any set of reaction conditions that may happen to be of interest.
Many different kinds of theories enable us to make useful predictions about chemical reactivity. Sometimes we are able to make predictions based on detailed quantum mechanical calculations. When thermodynamic data are available, we can make precise predictions about the extent to which a particular reaction can occur. Lacking such data about a compound, but given its structure, we can usually make some worthwhile predictions based on generalizations (models) that reflect our accumulated experience with particular classes of compounds and their known reactions. Usually these predictions are qualitative, and they fail to include many noteworthy features that emerge when experimental studies are made.
Physical chemistry is the general theory of the properties of chemical substances, with chemical reactivity being a pivotally important property. This book focuses on the core ideas in the subjects of chemical kinetics, chemical thermodynamics, and statistical thermodynamics. These ideas apply to the characterization, correlation, and prediction of the extent to which any chemical reaction can occur. Because predicting how a system will react is substantially equivalent to predicting its equilibrium position, we direct our efforts to understanding how each of these subjects contributes to our understanding of the equilibrium processes that are important in chemistry. In this chapter, we review the general characteristics of these subjects.
The study of chemical kinetics gives us one way to think about chemical equilibrium that is simple and direct. Classical chemical thermodynamics gives us a way to predict chemical equilibria for a wide range of reactions from experimental observations made under a much smaller range of conditions. That is, once we have measured the thermodynamic functions that characterize a compound, we can use these values to predict the behavior of the compound in a wide variety of reactions. Statistical mechanics gives us a conceptual basis for understanding why the laws of chemical thermodynamics take the form that they do. It also provides a way to obtain accurate values for the thermodynamic properties of many compounds.
01: Introduction - Background and a Look Ahead
The concept of ideal gas behavior plays a pivotal role in the development of science and particularly in the development of thermodynamics. As we shall emphasize, intermolecular forces do not influence the behavior of an ideal gas. Ideal gas molecules are neither attracted to one another nor repelled by one another. For this reason, the properties of an ideal gas are particularly simple. Because ideal gas behavior is so important, we begin by studying ideal gases from both an experimental and a theoretical perspective.
In Chapter 2, we review the experimental observations that we can make on gases and the idealizations that we introduce to extrapolate the behavior of ideal gases from the observations we make on real gases. We also develop Boyle’s law from a very simple model for the interactions between point-mass gas molecules and the walls of their container.
In Chapter 4, we develop a detailed model for the behavior of an ideal gas. The physical model is the one we use in Chapter 2, but the mathematical treatment is much more sophisticated. For this treatment we need to develop a number of ideas about probability, distribution functions, and statistics. Chapter 3 introduces these topics, all of which again play important roles when we turn to the development of statistical thermodynamics in Chapter 19.
1.02: Chemical Kinetics
Chemical kinetics is the study of how fast chemical reactions occur. In Chapter 5, we see that there is a unique way to specify what we mean by “how fast.” We call this specification the reaction rate. Chemical kinetics is the study of the factors that determine the rate of a particular reaction. There are many such factors, among them:
• temperature
• pressure
• concentrations of the reactants and products
• nature and concentrations of “spectator species” like a solvent or dissolved salts
• isotopic substitution
• presence or absence of a catalyst.
We will look briefly at all of these, but the thrust of our development will be to understand how the rate of a reaction depends on the concentrations of the reaction’s reactants and products.
Many reactions that we observe actually occur as a sequence of more simple reactions. Such a sequence of simple reaction steps is called a reaction mechanism. Our principal goal is to understand the relationships among concentrations, reaction rates, reaction mechanisms, and the conditions that must be satisfied when a particular reaction reaches equilibrium. We will find that two related ideas characterize equilibrium from a reaction-rate perspective. One is that concentrations no longer change with time. The other is a fundamental postulate, called the principle of microscopic reversibility, about the relative rates of individual steps in an overall chemical reaction mechanism when the reacting system is at equilibrium.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/01%3A_Introduction_-_Background_and_a_Look_Ahead/1.01%3A_The_Role_of_the_Ideal_Gas.txt
|
One goal of chemical thermodynamics is to predict whether a particular chemical reaction can occur. We say can, not will, because chemical thermodynamics is unable to make predictions about reaction rates. If we learn from our study of chemical thermodynamics that a particular reaction can occur, we still do not know whether it will occur in a millisecond—or so slowly that no change is detectable. The science of thermodynamics builds on the idea that a particular chemical system can be characterized by the values of certain thermodynamic functions. These state functions include such familiar quantities as pressure, temperature, volume, concentrations, and energy, as well as some that are not so well known, notably enthalpy, entropy, Gibbs free energy, Helmholtz free energy, chemical potential, fugacity, and chemical activity. We can think of a state function as a quasi-mathematical function whose argument is a physical system. That is, a state function maps a real system onto a real number. When we insert a thermometer into a mixture, the measurement that we make maps the state of the mixture onto the real number that we call temperature.
The word “thermodynamics” joins roots that convey the ideas of heat and motion. In general, motion involves kinetic energy and mechanical work. The interconversion of heat and mechanical work is the core concern of the science of thermodynamics. We are familiar with the idea that kinetic energy can be converted into work; given a suitable arrangement of ropes and pulleys, a falling object can be used to lift another object. Kinetic energy can also be converted—or, as we often say, degraded—into heat by the effects of friction. We view such processes as the conversion of the kinetic energy of a large object into increased kinetic energy of the atoms and molecules that comprise the warmed objects. We can say that easily visible mechanical motions are converted into invisible mechanical motions. The idea that heating an object increases the kinetic energy of its component atoms is called the kinetic theory of heat.
It is often convenient to use the term microscopic process to refer to an event that occurs at the atomic or molecular level. We call a process that occurs on a larger scale a macroscopic process, although the usual connotation is that a macroscopic process is observable in a quantity of bulk matter. When friction causes the degradation of macroscopic motion to heat, we can say that macroscopic motion is converted to microscopic motion.
While this terminology is convenient, it is not very precise. Changes visible under an optical microscope are macroscopic processes. Of course, all macroscopic changes are ultimately attributable to an accumulation of molecular-level processes. The Brownian motion of a colloidal particle suspended in a liquid medium is noteworthy because this relationship is visible. Viewed with an optical microscope, a suspended, macroscopic, colloidal particle is seen to undergo a rapid and random jiggling motion. Each jiggle is the accumulated effect of otherwise invisible collisions between the particle and the molecules of the liquid. Each collision imparts momentum to the particle. Over long times, the effects average out; momentum transfer is approximately equal in every direction. During the short time of a given jiggle, there is an imbalance of collisions such that more momentum is transferred to the particle in the direction of the jiggle than in any other.
We are also familiar with the idea that heat can be converted into mechanical motion. In an earlier era, steam engines were the dominant means by which heat was converted to work. Steam turbines remain important in large stationary facilities like power plants. For applications we encounter in daily life, the steam engine has been replaced by the internal-combustion engine. When we want to create mechanical motion (do work) with a heat engine, it is important to know how much heat we need in order to produce a given quantity of work. Sadi Carnot was the first to analyze this problem theoretically. In doing so, he discovered the idea that we call the second law of thermodynamics.
The interconversion of heat and work involves an important asymmetry. We readily appreciate that the conversion of kinetic energy to heat can be complete, because we have seen countless examples of objects coming to a complete standstill as the result of frictional forces. However, ordinary experience leaves us less prepared to deal with the question of whether heat can be completely converted into work. Possibly, we remember hearing that it cannot be done and that the reason has something to do with the second law of thermodynamics. If we have heard more of the story, we may remember that it is slightly more complicated. Under idealized circumstances, heat can be converted into work completely. If we confine an ideal gas in a frictionless piston and arrange to add heat to the gas while increasing the volume of the piston in a coordinated way, such that the temperature of the gas remains constant, the expanding piston will do work on some external entity, and the amount of this work will be just equal to the thermal energy added to the gas. We call this process a reversible isothermal expansion. This process does not involve a cycle; the volume of the gas at the end of the process is greater than its volume at the start.
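To state this idealized case quantitatively (quoting the standard ideal-gas result rather than anything derived in this chapter): when \(n\) moles of an ideal gas expand reversibly and isothermally at temperature \(T\) from volume \(V_1\) to \(V_2\), the internal energy of the gas does not change, so the work delivered to the surroundings equals the heat absorbed,

$w_{\text{by gas}}=q=nRT\ln \frac{V_2}{V_1}. \nonumber$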
What Carnot realized is that an engine must operate in a cyclic fashion, and that no device—not even an idealized frictionless device—operating around a cycle can convert heat to work with 100% efficiency. Carnot analyzed the process of converting heat into work in terms of an ideal engine that accepts thermal energy (heat) at a high temperature, uses some of this thermal energy to do work on its surroundings, and rejects the rest of its thermal-energy intake to the surroundings in the form of thermal energy at a lower temperature. Carnot’s analysis preceded the development of our current ideas about the nature of thermal energy. He expressed his ideas using a now-abandoned theory of heat. In this theory, heat is considered to be a fluid-like quantity—called caloric. Transfers of heat comprise the flow of caloric from one object to another. Carnot’s ideas originated as an analogy between the flow of caloric through a steam engine and the flow of water through a water wheel. In this view, the temperature of the steam, entering and leaving the engine, is analogous to the altitude of the water entering and leaving the wheel\({}^{1}\).
Such considerations are obviously relevant if we are interested in building engines, but we are interested in chemical reactivity. How does chemical change relate to engines and the conversion of heat into work? Well, rather directly, actually; after all, a chemical reaction usually liberates or absorbs heat. If we can relate mechanical work to heat, and we can relate the amount of heat liberated to the extent of a chemical reaction, then we can imagine allowing the reaction to go to equilibrium in a machine that converts heat to work. We can expect that the amount of work produced will have some relationship to the extent of the reaction. The nature of this relationship is obscure at this point, but we can reasonably expect that one exists.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/01%3A_Introduction_-_Background_and_a_Look_Ahead/1.03%3A_Classical_Thermodynamics.txt
|
Statistical thermodynamics is a theory that uses molecular properties to predict the behavior of macroscopic quantities of compounds. While the origins of statistical thermodynamics predate the development of quantum mechanics, the modern development of statistical thermodynamics assumes that the quantized energy levels associated with a particular system are known. From these energy-level data, a temperature-dependent quantity called the partition function can be calculated. From the partition function, all of the thermodynamic properties of the system can be calculated. We begin our development of statistical thermodynamics by using the energy levels of an individual molecule to find its molecular partition function and the thermodynamic properties of a system that contains \(N\) non-interacting molecules of that substance. Later, we see that the partition function of a system containing molecules that do interact with one another can be found by very similar arguments.
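As a preview of this machinery (written here in generic notation; the later chapters develop the definitions carefully), the molecular partition function is a Boltzmann-weighted sum over the quantum states of a single molecule, and thermodynamic quantities follow from it. For example, the energy of \(N\) independent molecules is

$q=\sum_{i}e^{-\epsilon _{i}/kT},\qquad E=NkT^{2}\left(\frac{\partial \ln q}{\partial T}\right)_{V}. \nonumber$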
Statistical thermodynamics has also been applied to the general problem of predicting reaction rates. This application is called transition state theory or the theory of absolute reaction rates. In principle, we should be able to predict the rate of any reaction. To do so, we need only to solve the quantum mechanical equations that give the energy levels associated with the reactants and the energy levels associated with a transitory chemical structure called the transition state for the reaction. From the energy levels we calculate partition functions; from partition functions we calculate thermodynamic functions; and from these thermodynamic functions we obtain the reaction rate. There is a big difference between “in principle” and “in practice.” While increases in computer speed make it increasingly feasible to do quantum mechanical calculations to useful degrees of accuracy, the results of such calculations remain too inaccurate to give generally reliable reaction rate predictions. The theory of absolute reaction rates is an important application of statistical thermodynamics. However, it is not included in this book.
Quantum mechanical calculations are not the only way to obtain the energy-level information that is needed to evaluate partition functions. Particularly for small molecules, these energy levels can be deduced from spectroscopic data. In these cases, the theory of statistical thermodynamics enables us to calculate thermodynamic properties from spectroscopic measurements. Excellent agreement is obtained between the values of thermodynamic functions obtained from classical thermodynamic (thermochemical) measurements and those obtained from statistical-thermodynamic calculations based on energy levels derived from spectroscopic measurements. In Chapter 24, we consider a particular example to illustrate this point.
1.05: Heat Transfer in Practical Devices
The amount of heat transferred to or from a system undergoing change is an important thermodynamic variable. In practical devices, the rate at which heat can be transferred to or from a system plays a very important role also. Consider again the work produced by heating a gas that is confined in a cylinder that is closed by a piston. Clearly, the rate at which heat can be transferred from the outside to the gas determines the rate at which the piston moves outward and thus the rate at which work is done on the environment.
Does it matter whether the heat-transfer process is fast or slow? If the heat cost nothing, would we care if our engine produced work only very slowly? After all, if we want more work and the heat is free, we need only build more engines; eventually we will have enough of them to produce any required amount of work. Of course, heat is not free; more significantly for our present considerations, the engines are not free either. Engineers and accountants call the cost of heat an operating cost. There are many other operating costs, like labor, supplies, insurance, and taxes. The cost of the engine is called a capital cost. To find the total cost of a unit of work, we need to add up the various operating costs and a part of the cost of the engine.
$\text{cost of a unit of work } = \text{ fuel cost } + \text{ other operating costs } + \text{ capital cost } \nonumber$
The difference between an operating cost and a capital cost is that an operating cost is incurred at (about) the same time that the product, in this case a unit of work, is created. In contrast, a capital cost is incurred well before the product is created. The purchase of a machine is a typical capital expense. The cost of the machine is incurred long before the machine makes its last product. This occurs because the machine must be paid for when it is acquired, but it continues to function over a useful lifetime that is typically many years. For example, if an engine that costs $1,000,000 can produce a maximum of 1,000,000 units of work before it wears out, the minimum contribution that the cost of the engine makes to the cost of the work it produces is $1 per unit. The life of the engine also enters into the estimation of capital cost. If some of the work done by the engine will be produced ten years in the future, we will be foregoing the interest that we could otherwise have earned on the money that we invested in the engine while we wait around to get the future work. Operating costs are well defined because they are incurred here and now. Capital costs are more problematic, because they depend upon assumptions about things like the life of the machine and the variation in interest rates during that life.
Suppose that we are developing a new engine. All else being equal, we can decrease the capital-cost component of the work our engine produces by decreasing the time it needs to produce a unit of work. The savings occurs because we can get the same amount of work from a smaller and hence less-costly engine. Since each unit of work requires that the same amount of heat be moved, we can make the engine smaller only if we can move heat around more quickly. In internal combustion engines, we get heat into the engine with a combustion reaction (an explosion) and take most of it out again by venting the combustion products (the exhaust gas). So internal combustion engines have the great advantage that both of these steps can be fast. Steam engines are successful because we can get heat into the engine quickly by allowing steam to flow from a boiler into the engine. We can remove heat from the steam engine quickly by venting the spent steam, which is feasible because the working fluid is water. The Stirling engine is a type of external combustion engine that works by alternately heating (expanding) and cooling (compressing) an enclosed working fluid. Stirling engines have theoretical advantages, but they are not economically competitive, essentially because heat transfer to and from the working fluid cannot be made fast enough.
Why does anyone care about capital cost? Well, we can be sure that the owner of an engine will be keenly interested in minimizing the dollars that come out of his pocket. But capital cost is also a measure of the consumption of resources—resources that may have more valuable alternative uses. So if any segment of an economy uses resources inefficiently, other segments of that economy must give up other goals that could have been achieved using the wasted resources. Economic activity benefits many people besides the owners of capital. If capital is used inefficiently, society as a whole is poorer as a result.
Heat transfer has a profound effect also on the design of the machines that manufacture chemicals. This occurs most conspicuously in processes that involve very exothermic reactions. If heat cannot be removed from the reacting material fast enough, the temperature of the material rises. The higher temperature may cause side reactions that decrease the yield of the product. If the temperature rises enough, there may be an explosion. For such reactions, the equipment needed to achieve rapid heat transfer, and to manage the rate of heat production and dissipation, may account for a large fraction of the cost of the whole plant. In some cases, chemical reactions used for the production of chemicals produce enough heat that it is practical to use this “waste heat” for the production of electricity.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/01%3A_Introduction_-_Background_and_a_Look_Ahead/1.04%3A_Statistical_Thermodynamics.txt
|
We are familiar with the idea that a system undergoing change eventually reaches equilibrium. We say that a system is at equilibrium when no further change is possible. When we talk about change, we always have in mind some particular property. We measure the change in the system by the amount of change in this property. When the property stops changing, we infer that the system has stopped changing, and we say that the system has come to equilibrium. Of course, we may interest ourselves in a system in which many properties undergo change. In such cases, we recognize that the system as a whole cannot be at equilibrium until all of these properties stop changing.
On the other hand, we also recognize that the absence of observable change is not enough to establish that a system is at equilibrium with respect to all of the possible changes that it could undergo. We know that hydrogen burns readily in oxygen to form water, but a mixture of hydrogen and oxygen undergoes no change under ordinary conditions. This unchanging mixture is plainly not at equilibrium with respect to the combustion reaction. Only when a catalyst or an ignition source is introduced does reaction begin.
It is also possible, indeed probable, that a system can be at equilibrium with respect to one process and not be at equilibrium with respect to other processes, which, while possible, simply do not occur under the conditions at hand. For example, if an aqueous solution of the oxygen-carrying protein hemoglobin is added to the hydrogen—air system, the protein will add or lose coordinated oxygen molecules until the equilibrium composition is reached. If our investigation is focused on the protein–oxygenation reaction, we do not hesitate to say that the system is at equilibrium. The non-occurrence of the oxygen—hydrogen reaction is not relevant to the phenomenon we are studying.
It is even possible to reach a non-equilibrium state in which the concentrations of the reactants and products are constant. Such a system is said to have reached a steady state. In order for this to occur, the reaction must occur in an open system; that is, one in which materials are being added or removed; there must be continuous addition of reactants, and continuous removal of products. In Chapter 5, we discuss a simple system in which this can be achieved. A closed system is one that can neither gain nor lose material. An isolated system is a closed system that can neither gain nor lose energy; in consequence, its volume is fixed. In an isolated system, change ceases when equilibrium is reached, and conversely.
We will consider several commonly encountered kinds of change, including mechanical motions, heat transfers, phase changes, partitioning of a solute between two phases, and chemical reactions. Here we review briefly what occurs in each of these kinds of change. In Chapter 6, we review the characteristics that each of these kinds of change exhibits at equilibrium.
A system in mechanical equilibrium is stationary because the net force acting on any macroscopic portion of the system is zero. Another way of describing such a situation is to say that the system does not move because of the presence of constraints that prevent movement.
Two macroscopic objects are in thermal equilibrium if they are at the same temperature. We take this to be equivalent to saying that, if the two objects are in contact with one another, no heat flows between them. Moreover, if object A is in thermal equilibrium with each of two other objects, B and C, then we invariably find that objects B and C are in thermal equilibrium with one another. This observation is sometimes called the zeroth law of thermodynamics. It justifies the concept of temperature and the use of a standard system—a thermometer—to measure temperature.
For an isolated system to be in phase equilibrium, it must contain macroscopic quantities of two or more phases, and the amount of each phase present must be unchanging. For example, at 273.15 K and 1 bar, and in the presence of one atmosphere of air, liquid water and ice are in equilibrium; the amounts of water and ice remain unchanged so long as the system remains isolated. Similarly, a saturated aqueous solution of copper sulfate is in equilibrium with solid copper sulfate; if the system is isolated, the amounts of solid and dissolved copper sulfate remain constant.
If a system is in phase equilibrium, we can remove a portion of any phase without causing any change in the other phases. At equilibrium, the concentrations of species present in the various phases are independent of the absolute amount of each phase present. It is only necessary that some amount of each phase be present. To describe this property, we say that the condition for equilibrium is the same irrespective of the amounts of the phases present in the particular system. For example, if one of the species is present in both a gas phase and a condensed phase, we can specify the equilibrium state by specifying the pressure and temperature of the system. However, we can change the relative amounts of the phases present in this equilibrium state by changing the volume of the system. (If its volume can change, the system is not isolated.)
Partitioning of a solute between two immiscible condensed phases is important in many chemical systems. If we add water and chloroform to the same vessel, two immiscible liquid phases are formed. Elemental iodine is very sparingly soluble in water and substantially more soluble in chloroform. If we add a small amount of iodine to the water–chloroform system, some of the iodine dissolves in the water and the remainder dissolves in the chloroform layer. We say that the iodine is distributed between the two phases. When the iodine concentrations become constant, we say that the system has reached distribution equilibrium.
In a chemical reaction, one or more chemical substances (reactants) undergo a change to produce one or more new chemical substances (products). We are accustomed to representing chemical substances by symbols and representing their reactions by chemical equations. Thus, for the hydrolysis of ethyl acetate, we write
$CH_3CO_2CH_2CH_3+H_2O\rightleftharpoons CH_3CO_2H+CH_3CH_2OH \nonumber$
A chemical equation like this expresses a stoichiometric relationship between reactants and products. Often we invoke it as a symbol for various distinctly different physical situations. For example:
1. We may view the equation as a symbolic representation of a single solution that contains the four compounds ethyl acetate, water, acetic acid, and ethanol—and possibly other substances.
2. We may view the equation as a symbolic representation of a relationship between two systems whose proportions are arbitrary. The first system comprises ethyl acetate and water. The second system comprises acetic acid and ethanol. The equation represents the idea that the first system can be converted into the second.
3. We may view the symbols on each side of the equation as representing mixtures of the indicated chemical substances in the specified stoichiometric proportions.
4. We may view the equation as representing the specified stoichiometric proportions of pure, unmixed chemical substances. When we are discussing changes in “standard” thermodynamic properties that accompany a chemical reaction, this is the interpretation that we have in mind.
When we discuss a chemical equation, the intended interpretation is normally evident from the context. Indeed, we often skip back and forth among these interpretations in the course of a single discussion. Nevertheless, it is important to avoid confusing them.
By doing experiments, we can discover that there is an equation that uniquely defines the position of a chemical reaction at equilibrium, an equation that we usually think of as the definition of the equilibrium constant. If our measurements are not too accurate, or we confine our study to a limited range of concentrations, or the system is particularly well behaved, we can express the equilibrium constant as a function of concentrations${}^{2}$. For the hydrolysis of ethyl acetate, we find
$K=\frac{\left[CH_3CO_2H\right][HOCH_2CH_3]}{\left[CH_3CO_2CH_2CH_3\right][H_2O]} \nonumber$
In general, for the reaction
$a\ A+b\ B\ \rightleftharpoons c\ C + d\ D \nonumber$
we find $K=\frac{{[C]}^c{[D]}^d}{{[A]}^a{[B]}^b} \nonumber$
That is, at equilibrium the indicated function of reactant concentrations always computes to approximately the same numerical value.
When our concentration measurements are more accurate, we find that we must introduce new quantities that we call chemical activities. We can think of an activity as a corrected concentration. The correction compensates for the effects of intermolecular attraction and repulsion. Denoting the activity of substance $X$ as ${\tilde{a}}_x$, we find that $K_a=\frac{{\tilde{a}}^c_C{\tilde{a}}^d_D}{{\tilde{a}}^a_A{\tilde{a}}^b_B} \nonumber$
gives a fully satisfactory characterization of the equilibrium states that are possible for systems in which this reaction occurs. $K_a$, the equilibrium constant computed as a function of reactant activities, always has exactly the same numerical value.
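As a rough numerical illustration of computing such an equilibrium quotient, we can evaluate the concentration-based expression for the hydrolysis of ethyl acetate in a few lines of Python. The concentrations below are hypothetical values chosen only to show the arithmetic; replacing each concentration by the corresponding activity would give $K_a$ instead.

```python
# Hypothetical equilibrium concentrations (mol/L) for the hydrolysis of
# ethyl acetate; illustrative numbers, not experimental data.
conc = {
    "CH3CO2CH2CH3": 0.20,   # ethyl acetate
    "H2O":          10.0,   # water, treated here as just another solute
    "CH3CO2H":      0.50,   # acetic acid
    "CH3CH2OH":     0.50,   # ethanol
}

# K = [CH3CO2H][CH3CH2OH] / ([CH3CO2CH2CH3][H2O])
K = (conc["CH3CO2H"] * conc["CH3CH2OH"]) / (conc["CH3CO2CH2CH3"] * conc["H2O"])
print(f"K = {K:.3f}")   # 0.125 for these made-up concentrations
```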
We can develop the equilibrium constant expression from three distinctly different theoretical treatments. We develop it first from some basic ideas about the rates of chemical reactions. Then we obtain the same result from both the macroscopic-behavior considerations of classical thermodynamics and the molecular-property considerations of statistical thermodynamics.
Our most basic concept of equilibrium is based on the observation that change in an isolated system eventually ceases; once change ceases, it never resumes. In this book, we call such a static state of an isolated system a primitive equilibrium. We also observe that change eventually ceases in a closed system that is not isolated but whose temperature, pressure, and volume are kept constant. Conversely, if a system is at equilibrium, its temperature, pressure, and volume are necessarily constant; all interactions between such a system and its surroundings can be severed without changing any of the properties of the system. We can view any particular equilibrium state as a primitive equilibrium state.
A system whose temperature, pressure, or volume is established by interactions between the system and its surroundings is inherently more variable than an isolated system. For a given isolated system, only one equilibrium state exists; for a system that interacts with its surroundings, many different equilibrium states may be possible. In chemical thermodynamics, our goal is to develop mathematical models that specify the equilibrium states available to a system; we seek models in which the independent variables include pressure, temperature, volume, and other conditions that can be imposed on the system by its surroundings. In this conception, an equilibrium system is characterized by a set of points in a variable space. We can think of this set of points as a surface or a manifold in the variable space; every point in the set is a different primitive-equilibrium state of the system. By imposing particular changes on some variables, a particular equilibrium system can be made to pass continuously through a series of primitive-equilibrium states.
For reasons that become apparent as we proceed, we use the name Gibbsian equilibrium to denote this more general conception. When we talk about equilibrium in thermodynamics, we usually mean Gibbsian equilibrium. In Chapter 6, we see that the idea of (Gibbsian) equilibrium is closely related to the idea of a reversible process. We also introduce Gibbs’ phase rule, which amounts to a more precise definition, from the perspective of classical thermodynamics, of what we mean by (Gibbsian) equilibrium in chemical systems.
1.07: Chemical Equilibrium and Predicting Chemical Change
When we talk about predicting chemical reactions, we imagine taking quantities of various pure compounds and mixing them under some set of conditions. We suppose that they react until they reach a position of equilibrium in which one or more new compounds are present. We want to predict what these new compounds are and how much of each will be produced.
For any given set of reactants, we can accomplish this predictive program in two steps. First we find all of the sets of products that can be obtained from the given reactants. Each such set represents a possible reaction. We suppose that, for each set of possible products, we are able to predict the equilibrium composition. Predicting which reaction will occur is equivalent to finding the reaction whose position of equilibrium lies farthest in the direction of its products. From this perspective, being able to predict the position of equilibrium for the reactants and any stoichiometrically consistent set of products is the same thing as being able to predict what reaction will occur. (If there is no single reaction whose position of equilibrium is much further in the direction of its products than that of any other reaction, multiple reactions can occur simultaneously.)
This two-step procedure corresponds to the sense in which chemical thermodynamics enables us to predict reaction products. We measure values for certain characteristic thermodynamic functions for all relevant compounds. Given the values of these functions for all of the compounds involved in a hypothesized reaction, we calculate the position of equilibrium. That we must begin by guessing the products makes this approach cumbersome and uncertain. We can never be positive that the true products are among the possibilities that we consider. Nevertheless, as a practical matter for most combinations of reactants, the number of plausible product sets is reasonably small.
1.08: Equilibrium and Classical Thermodynamics
We develop classical thermodynamics by reasoning about reversible processes—processes in which a system passes through a series of equilibrium states. Any such process corresponds to a path on one or more of the Gibbsian manifolds that are available to the system. The resulting theory consists of equations that relate the changes in the values of the system’s state functions as the system undergoes a reversible change. For this reason, the body of theory that we are calling classical thermodynamics is often called equilibrium thermodynamics or reversible thermodynamics.
As we discuss further in Chapter 6, any change that we can actually observe in a real system must be the result of a spontaneous process. In a reversible process, both the initial and the final states are equilibrium states. In a spontaneous process, the initial state of the system is not an equilibrium state. A spontaneous process begins with the system in a non-equilibrium state and proceeds until an equilibrium state is reached.
The domain of classical thermodynamics—reversible processes—is distinct from the domain of real observations, because real observations can be made only for spontaneous processes. We bridge this gap by careful selection of real-world systems to serve as models for the reversible systems that inhabit our theory. That is, we find that we can make measurements on non-equilibrium systems and irreversible processes from which we can estimate the properties of equilibrium systems and reversible processes. Saying almost the same thing from another perspective, we find that the classical thermodynamic equations that apply to equilibrium states can also be approximately valid for non-equilibrium states. For many non-equilibrium states, notably those whose individual phases are homogeneous, the approximations can be very good. For other non-equilibrium states, notably those whose individual phases are markedly inhomogeneous, these approximations may be very poor.
1.09: A Few Ideas from the Philosophy of Science
The goal of science is to create theories that accurately describe physical reality. In this book, we explore some of the most useful scientific theories that exist. They have been tested extensively. We know that there are limits to their applicability. We expect that further thought and experimentation will expand their scope. We expect that some elements of these theories will need to be modified in ways that we cannot anticipate, but we do not expect that the core concepts will be invalidated.
We love theories because they rationalize our environment. This is true not only of scientific theories but also of the panoply of conceptual frameworks that we use to organize our views about—and responses to— all of life’s issues. We are addicted to theories. We find nothing more disconcerting than information that we cannot put into a coherent context. Indeed, the term cognitive dissonance has entered the language to describe the feeling of disorientation that we experience when “things just don’t add up.”
Logically, confronted with a fact that contradicts one of our theories, we are compelled to give up the theory. We are not always logical. A fact that contradicts a pet theory is unlikely to be accepted at face value. We challenge it, as indeed we should. We scrutinize the offending fact and try to convince ourselves that it is no fact at all, merely a spurious artifact. Often, of course, this proves to be the case. Sometimes we conclude that the offending fact is spurious when it is not. We get stuffy about our theories. When we find one that suits us, we resist giving it up. It has been observed that a revolutionary scientific theory often achieves universal acceptance only after all those who grew up with the predecessor theory have died.
Science is ultimately a social enterprise. To develop and test theories about physical reality, participants in this enterprise must be in general agreement about the criteria that are to be applied. These criteria are frequently called “the philosophy of science.” To summarize the philosophy of science, we begin by observing that the goal of science is to explain the world that we experience through our sensory perceptions. It is easy to generate putative explanations that have little or no real value. Unfortunate experiences with past explanations have led to a broad consensus that scientific theories must have the following properties:
• operational definitions,
• logical structure,
• predictive capability and testability,
• internal consistency, and
• consistency with any and all experimental observations.
The theory must be about the properties of some set of things. By operational definitions, we mean that the subjects of the theory must be measurable, and the theory must specify a set of operations for making each of these measurements. By logical structure, we mean that a satisfactory theory must include well-defined rules to specify how the subjects of the theory relate to one another. By predictive capability we mean that a satisfactory theory must be capable of predicting the results of experiments that have not been performed. By internal consistency we mean that a satisfactory theory must not allow us to logically derive contradictory conclusions, which also means that it must not predict more than one outcome for any particular experiment. Because a satisfactory theory makes predictions, it is also testable. It is possible to check whether the predictions correspond to reality. We require that the theory’s predictions be consistent with the results that we observe when we do the experiment.
The first four of these requirements really detail the characteristics that a theory must have in order to be considered a proper subject for scientific investigation. Only the last requirement speaks to the all-important issue of whether the theory accurately mirrors physical reality. We can never prove that any theory is true. What we can prove is that a theory fails to meet one of our criteria. Science progresses when we discover a fatal flaw in a currently accepted theory.
Let us think further about what we mean when we say that a theory must be a logical structure. Consider a simple classical syllogism.
Major premise: All dogs are cats.
Minor premise: All cats are white.
Conclusion: All dogs are white.
As a logical structure, this seems to be satisfactory. We can represent the whole of its content in a simple diagram (See Figure 1), so if we want to view this syllogism as a theory about nature, its internal consistency is more or less self-evident. Moreover, viewed as a theory, it makes a prediction: All dogs are white. If we have operational definitions for “dog”, “cat”, and “white” that conform to customary usage, we can say that this syllogism meets our criteria for a proper subject for scientific investigation. Of course, as a mirror of reality, it fails.
To see the issue of logical structure from another perspective, let us consider the theory of evolution. Some people summarize the theory of evolution as teaching that the fittest individuals survive and defining survivors as those individuals who are most fit. They then point out that these are circular statements and proceed from this observation to the conclusion that the theory of evolution is devoid of real content. So it can be dismissed. Now, if we are not closed-minded about evolution, this analysis looks like a case of throwing out the baby with the bath water. Even so, we are likely to be troubled, because the circularity is undeniable. Does this circularity mean that the theory of evolution is bad science?
A tautology is a statement that must be true. Our analysis attempts to recast the entire content of the theory of evolution as one tautologous statement. If the whole of a theory is a single statement, and that statement must be true, then the theory cannot be tested. Our rules require that we reject it. However, our tautologous summary fails to capture the whole of the theory of evolution. If a theory that contains tautologous statements also makes predictions that are not tautologous, then it can be tested. In the present example, we can predict from the theory of evolution that selection of a particular trait through any process will cause increased expression of that trait in succeeding generations. Evolution is based on natural selection, but it postulates a mechanism that must be valid for any consistent selection process. It predicts that a farmer who selects for cows that produce more milk will eventually get cows that produce more milk. Thus, attempts to apply selective breeding are tests of the central element of the theory of evolution, and the success of selective breeding in every aspect of agriculture verifies a prediction of the theory.
There is no reason to object to a theory that has tautological elements so long as the content of the theory has real substance. What we require is that a theory’s predictions be non-trivial. We object when substantially all of a theory’s purported predictions are merely restatements of its premises, so that the whole of the theory is an exercise in verbiage.
We require scientific theories to be internally consistent. (Normally, we do not expect to be able to prove internal consistency. What we really mean is that we will discard any theory that we can show to be internally inconsistent.) The presence of tautologous statements cannot make a theory internally inconsistent. Indeed, we can expect any internally consistent theory to have tautological elements. After all, if we try to define all of the subjects of a theory, at least some of our definitions will inevitably be circular.
Another way to describe the logical structure of a physical theory is to say that a theory is a model for some observable part of the world. We want the model to include things and rules. The rules should specify how the things of the model change. When we talk about comparing predictions of the theory to the results of experiment, we mean that the changes that occur in the model when we apply the rules of the theory should parallel the changes that occur in the real world when we do the corresponding experiment. The idea is analogous to the mathematical concept of a homomorphism. A dictionary definition of a homomorphism\({}^{3}\) is “a mapping of a mathematical group, ring, or vector space onto another in such a way that the result obtained by applying an operation to elements of the domain is mapped onto the result of applying the same or a corresponding operation to their images in the range.” Stated more picturesquely, the idea of a homomorphism is that two mathematical structures have the same form: If one is “laid on top of the other”, there is a perfect correspondence between them. (See Figure 2.)
The idea of a one-to-one correspondence between two structures can be extended to the case where one structure is a logical structure and the other is a physical structure. Consider the logical structure, usually called a truth table, associated with the truth of the proposition “sentence A is true and sentence B is true.” The proposition is false unless both A and B are true. Now consider the performance of an electrical circuit that consists of a battery, switches A and B, and a light bulb, all connected in series. The light is off unless both A and B are on. There is a perfect correspondence between the elements of the logical structure and the performance of the circuit. (See Figure 3.) In fact, this is a special case of a much more extensive parallelism. For any sentence in propositional logic, there is a corresponding circuit—involving batteries, switches, and a light bulb—such that the bulb is on if the corresponding sentence is true and off if the sentence is false. Moreover, there is a mathematical structure, called Boolean algebra, which is homomorphic to propositional logic and exhibits the same parallelism to circuits. Digital computers carry this parallelism to structures of great complexity.
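A minimal sketch of this correspondence, with arbitrary names chosen for illustration: the truth table for the proposition “A and B” coincides, row by row, with the on/off behavior of the series circuit.

```python
from itertools import product

def bulb_is_on(switch_a: bool, switch_b: bool) -> bool:
    """Series circuit: current flows only when both switches are closed."""
    return switch_a and switch_b

# The truth table of the proposition "A is true and B is true" matches
# the behavior of the series circuit row by row.
print("A      B      'A and B'   bulb")
for a, b in product([True, False], repeat=2):
    proposition = a and b
    bulb = "on" if bulb_is_on(a, b) else "off"
    print(f"{a!s:<6} {b!s:<6} {proposition!s:<11} {bulb}")
```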
If we stretch our definition of a homomorphism to include comparisons of logical constructs with physical things, we can view a scientific theory as a logical construct that we are trying to make homomorphic with physical reality. We want our theory to map onto reality in such a way that changes in the logical construct map onto changes in physical reality. Of course, what we mean by this is the same thing we meant previously when we said that the predictions of our theory should agree with the results of experiment.
It is implicit in this view that any scientific theory describes an abstraction from reality. If, for example, we say that a physical system is at a particular temperature, this statement represents an approximation in at least two respects. First, we believe that we can measure temperature in continually more refined ways. If we know the temperature to six significant figures, this represents a very good approximation, but we believe that the “true” temperature can only be expressed using indeterminably many significant figures. To say that the temperature of the system has some value expressed to six significant figures is an approximation, albeit a very good one. Second, it is unlikely that all parts of any real system are actually at the same temperature. When we say that a physical system has a particular temperature, we mean that any differences in temperature between different parts of the system are too small to affect the behavior in which we are currently interested. In our discussions and deliberations about the physical system, we replace its actual properties by our best measurement. In doing so, we abstract something that we believe to be an essential feature from the reality we actually observe.
Implicit also is the idea that our theories about reality are subject to inevitable limitations. In the example above, any theory that, however accurately, predicts the single temperature that we use to describe the real system is inadequate to describe whatever small variations there may be from point to point within it. If we expand our theory to encompass variations over, say, distances of millimeters, then the expanded theory will be inadequate to describe variations over some smaller distance. Any “perfect” theory must exactly describe the motions of all of the system’s constituent particles. This is impossible not only because it conflicts with the basic premises of quantum mechanics, but also because it requires the theory to contain information at a level of detail equal to that in the physical system itself.
One way to express all of the same information at exactly the same level of detail is to have an exact replica of the system. This idea has been expressed by saying that an absolutely accurate map of Oklahoma City would have to be as large as Oklahoma City itself. For use as a map, such a thing would be useless. Similarly, a theory that predicts temperature is useful only if it predicts the temperature we measure (measurement being a part of the process by which we effect abstraction) experimentally. We can make use of multiple theories of the same phenomenon, if each of them has advantages and limitations that we recognize and respect. We see however, that in the end, there can be no single, all-encompassing theory. Any theory must model an approximation to reality. In the final analysis, reality is reality.
1.10: A Few Ideas from Formal Logic
Formal logic deals with relationships among propositions, where a proposition is any statement of (alleged) fact. Any proposition can be expressed as an ordinary English sentence, although it may be more convenient to use mathematical symbols or some other notation. The following are all propositions:
• Albert Einstein is deceased.
• Tulsa is in Oklahoma.
• Two plus two equals four.
• $2+2=4$
• $\int x^2\,dx = x+1$
A proposition need not be true. The last of these examples is a false proposition. We represent an arbitrary proposition by any convenient symbol, usually a letter of the alphabet. Thus, we could stipulate that “$p$” represents any of the propositions above. Once we have associated a symbol with a particular proposition, the symbol itself is taken to represent an assertion that the proposition is true. It is an axiom of ordinary logic that any proposition must be either true or false. If we associate the symbol “$p$” with a particular proposition, we write “$\sim p$” to represent the statement: “The proposition represented by the symbol ‘$p$’ is false.” $\sim p$ is called the negation of p. We can use the negation of $p$, $\sim p$, to state the axiom that a proposition must be either true or false. To do so, we write: Either $p$ or $\sim p$ is true. We can write this as the proposition “$p$ or $\sim p$”. The negation of the negation of $p$ is an assertion that $p$ is true; that is, $\sim \ \sim \ p\ =\ p$.
Logic is concerned with relationships among propositions. One important relationship is that of implication. If a proposition, $q$, follows logically from another proposition, $p$, we say that $q$ is implied by $p$. Equivalently, we say that proposition $p$ implies proposition $q$. The double-shafted arrow, $\mathrm{\Rightarrow }$, is used to symbolize this relationship. We write “$p\Rightarrow q$” to mean, “That proposition $p$ is true implies that proposition $q$ is true.” We usually read this more tersely, saying, “$p$ implies $q$.” Of course, “$p\Rightarrow q$” is itself a proposition; it asserts the truth of a particular logical relationship between propositions $p$ and $q$.
For example, let $p$ be the proposition, “Figure A is a square.” Let $q$ be the proposition, “Figure A is a rectangle.” Then, writing out the proposition, $p\Rightarrow q$, we have: Figure A is a square implies figure A is a rectangle. This is, of course, a valid implication; for this example, the proposition $p\Rightarrow q$ is true. For reasons that will become clear shortly, $p\Rightarrow q$ is called the conditional of $p$ and $q$. Proposition $p$ is often called a sufficient condition, while proposition $q$ is called a necessary condition. That is, the truth of $p$ is sufficient to establish the truth of $q$.
$\text{sufficient condition}\, \mathrm{\Rightarrow} \, \text{necessary condition} \nonumber$
Now, if proposition $p\Rightarrow q$ is true, and proposition $q$ is also true, can we infer that proposition $p$ is true? We most certainly cannot! In the example we just considered, the fact that figure A is a rectangle does not prove that figure A is a square. We call $q\Rightarrow p$ the converse of $p\Rightarrow q$. The conditional of $p$ and $q$ can be true while the converse is false. Of course, it can happen that both $p\Rightarrow q$ and $q\Rightarrow p$ are true. We often write “$p\Leftrightarrow q$” to express this relationship of mutual implication. We say that, “$p$ implies $q$ and conversely.”
What if $p\Rightarrow q$, and $q$ is false? That is, $\sim q$ is true. In this case, $p$ must be false! If $\sim q$ is true, it must also be that $\sim p$ is true. Using our notation, we can express this fact as
$\left( (p\Rightarrow q)\ \text{and}\ \sim q \right) \Rightarrow\ \sim p \nonumber$
Equivalently, we can write
$(p\Rightarrow q) \mathrm{\Leftrightarrow } (\sim q\Rightarrow \sim p) \nonumber$
That is, $p\Rightarrow q$ and $\sim q\Rightarrow \sim p$ are equivalent propositions; if one is true, the other must be true. $\sim q\Rightarrow \sim p$ is called the contrapositive of $p\Rightarrow q$. The equivalence of the conditional and its contrapositive is a theorem that can be proved rigorously in an axiomatic formulation of logic. In our later reasoning about thermodynamic principles, we use the equivalence of the conditional and the contrapositive of $p$ and $q$.
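We can check this equivalence mechanically by enumerating every truth assignment. The sketch below uses the usual truth-functional reading of $p\Rightarrow q$ as “not $p$, or $q$”; the function names are arbitrary.

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    """Material implication: p => q is false only when p is true and q is false."""
    return (not p) or q

# (p => q) and (~q => ~p) agree for every assignment of truth values ...
conditional_equals_contrapositive = all(
    implies(p, q) == implies(not q, not p)
    for p, q in product([True, False], repeat=2)
)
print(conditional_equals_contrapositive)   # True

# ... but the converse (q => p) is not equivalent: it differs when p is false and q is true.
p, q = False, True
print(implies(p, q), implies(q, p))        # True False
```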
The equivalence of the conditional, $p\Rightarrow q$, and the contrapositive, $\sim q\Rightarrow \sim p$, is the reason that $q$ is called a necessary condition. If $p\Rightarrow q$, it is necessary that $q$ be true for $p$ to be true. (If figure A is to be a square, it must be a rectangle.)
It is also intimately related to proof by contradiction. Suppose that we know $p$ to be true. If, by assuming that $q$ is false ($\sim q$ is true), we can validly demonstrate that $p$ must also be false $(\sim q\mathrm{\Rightarrow }\sim p$, so that $\sim p$ is true), we have the contradiction that $p$ is both true and false ($p$ and $\sim p$). Since $p$ cannot be both true and false, it must be false that q is false ($\sim \sim q=q$). Otherwise stated, the equivalence of the conditional and the contrapositive leads not only to ($p$ and $\sim p$) but also to ($q$ and $\sim q$).
$\sim q \mathrm{\Rightarrow } \sim p \nonumber$
implies
$p \mathrm{\Rightarrow } q. \nonumber$
In summary, since we know p to be true, our assumption that $q$ is false, together with the valid implication $\sim q\mathrm{\Rightarrow }\sim p$, leads to the conclusion that $q$ is true, which contradicts our original assumption, so that the assumption is false, and $q$ is true.
1.11: Problems
1. Philosophers argue about the feasibility of a private language, a language that is known by only one individual. The case against the possibility of a private language is based on the assumption that language comes into existence as a tool for communication in a society comprising two or more individuals. Solipsism is a philosophical conception in which your sensory perceptions are internally generated; they are not the result of your interactions with the world. Solipsism assumes that the world you think you perceive does not in fact exist. Since a solipsistic individual is the only being that exists, any language he uses is necessarily a private language. Evidently, the existence of a solipsistic individual who uses some language to think and the impossibility of a private language are mutually exclusive: If a private language is impossible, a solipsistic individual cannot use any language. Since you are reading this, you are using a language, and therefore you cannot be a solipsistic individual. Does this argument convince you that you are not a solipsistic individual; that is, does this argument convince you of the existence of a physical reality that is external to yourself? Why or why not? Can a solipsistic individual engage in scientific inquiry?
2. Many people find the theory of evolution deeply repugnant. Some argue that the theory of evolution has not been proved and that creationism is an alternative scientific theory, where creationism is the Biblical description of God’s creation of the world in six days.
(a) Is it valid to say, “The theory of evolution is unproved.”?
(b) Comment very briefly on whether or not the theory of evolution meets each of our criteria for a scientific theory.
(c) Comment very briefly on whether or not creationism meets each of our criteria for a scientific theory.
3. Use an ordinary English sentence to state the meaning of propositions (a) and (b):
(a) $\sim (p\ \mathrm{and}\ q) \Rightarrow (\sim p\ \mathrm{or}\ \sim q)$
(b) $\sim (p\ \mathrm{or}\ q) \Rightarrow (\sim p\ \mathrm{and}\ \sim q)$
Are propositions (a) and (b) true or false?
Using propositions (a) and (b), prove propositions (c) and (d):
(c) $\left[(p\ \mathrm{and}\ q) \Rightarrow r\right] \Rightarrow \left[\sim r \Rightarrow (\sim p\ \mathrm{or}\ \sim q)\right]$
(d) $\left[(p\ \mathrm{or}\ q) \Rightarrow r\right] \Rightarrow \left[\sim r \Rightarrow (\sim p\ \mathrm{and}\ \sim q)\right]$
Notes
${}^{1}$${}^{\ }$R. Clausius, The Mechanical Theory of Heat, translated by Walter R. Browne, Macmillan and Co., London, 1879, p. 76.
${}^{2}$${}^{\ }$We use square brackets around the symbol for a chemical substance to denote the concentration of that substance in molarity (moles per liter of solution) units.
${}^{3}$${}^{\ }$Webster’s Ninth New Collegiate Dictionary, Merriam-Webster, Inc., Springfield, Massachusetts, 1988.
02: Gas Laws
Early experimenters discovered that the pressure, volume, and temperature of a gas are related by simple equations. The classical gas laws include Boyle’s law, Charles’ law, Avogadro’s hypothesis, Dalton’s law of partial pressures, and Amagat’s law of partial volumes. These laws were inferred from experiments done at relatively low pressures and at temperatures well above those at which the gases could be liquefied. We begin our discussion of gas laws by reviewing the experimental results that are obtained under such conditions. As we extend our experiments to conditions in which gas densities are greater, we find that the accuracy of the classical gas laws decreases.
• 2.1: Boyle's Law
Robert Boyle discovered Boyle’s law in 1662. Boyle’s discovery was that the pressure, P, and volume, V, of a gas are inversely proportional to one another if the temperature, T, is held constant. We can imagine rediscovering Boyle’s law by trapping a sample of gas in a tube and then measuring its volume as we change the pressure.
• 2.2: Charles' Law
Charles’ law relates the volume and temperature of a gas when measurements are made at constant pressure. We can imagine rediscovering Charles’ law by trapping a sample of gas in a tube and measuring its volume as we change the temperature, while keeping the pressure constant. This presumes that we have a way to measure temperature, perhaps by defining it in terms of the volume of a fixed quantity of some other fluid—like liquid mercury.
• 2.3: Avogadro's Hypothesis
Avogadro’s hypothesis is another classical gas law. It can be stated: At the same temperature and pressure, equal volumes of different gases contain the same number of molecules. When the mass, in grams, of an ideal gas sample is equal to the gram molar mass (traditionally called the molecular weight) of the gas, the number of molecules in the sample is equal to Avogadro’s number.
• 2.4: Finding Avogadro's Number
There are numerous ways to measure Avogadro’s number. One such method is to divide the charge of one mole of electrons by the charge of a single electron. We can obtain the charge of a mole of electrons from electrolysis experiments. The charge of one electron can be determined in a famous experiment devised by Robert Millikan, the “Millikan oil-drop experiment”.
• 2.5: The Kelvin Temperature Scale
• 2.6: Deriving the Ideal Gas Law from Boyle's and Charles' Laws
• 2.7: The Ideal Gas Constant and Boltzmann's Constant
• 2.8: Real Gases Versus Ideal Gases
We imagine that the results of a large number of experiments are available for our analysis. Our characterization of these results has been that all gases obey the same equations—Boyle’s law, Charles’ law, and the ideal gas equation—and do so exactly. This is an oversimplification. In fact they are always approximations. They are approximately true for all gases under all “reasonable” conditions, but they are not exactly true for any real gas under any condition.
• 2.9: Temperature and the Ideal Gas Thermometer
We can define temperature in terms of the expansion of any constant-pressure gas that behaves ideally. In principle, we can measure the same temperature using any gas, so long as the constant operating pressure is low enough. When we do so, our device is called the ideal gas thermometer. In so far as any gas behaves as an ideal gas at a sufficiently low pressure, any real gas can be used in an ideal gas thermometer and to measure any temperature accurately.
• 2.10: Deriving Boyle's Law from Newtonian Mechanics
We can derive Boyle’s law from Newtonian mechanics. This derivation assumes that gas molecules behave like point masses that do not interact with one another. The pressure of the gas results from collisions of the gas molecules with the walls of the container. The contribution of one collision to the force on the wall is equal to the change in the molecule’s momentum divided by the time between collisions. The magnitude of this force depends on the molecule’s speed and angle it strikes the wall.
• 2.11: The Barometric Formula
We can measure the pressure of the atmosphere at any location by using a barometer. A mercury barometer is a sealed tube that contains a vertical column of liquid mercury. The space in the tube above the liquid mercury is occupied by mercury vapor. Since the vapor pressure of liquid mercury at ordinary temperatures is very low, the pressure at the top of the mercury column is very low and can usually be ignored.
• 2.12: Van der Waals' Equation
We often assume that gas molecules do not interact with one another, but simple arguments show that this can be only approximately true. Real gas molecules must interact with one another. At short distances they repel one another. At somewhat longer distances, they attract one another. Van der Waals’ equation fits pressure-volume-temperature data for a real gas better than the ideal gas equation does. The improved fit is obtained by introducing two experimentally determined parameters.
• 2.13: Virial Equations
Expanding the compressibility factor to a polynomial in the pressure results in a better description of real gas behavior. The values for the parameters of this expansion are often tabulated for each gas independently.
• 2.14: Gas Mixtures - Dalton's Law of Partial Pressures
Thus far, our discussion of the properties of a gas has implicitly assumed that the gas is pure. We turn our attention now to mixtures of gases—gas samples that contain molecules of more than one compound. Mixtures of gases are common, and it is important to understand their behavior in terms of the properties of the individual gases that make it up. The ideal-gas laws we have for mixtures are approximations. Fortunately, these approximations are often very good.
• 2.15: Gas Mixtures - Amagat's Law of Partial Volumes
Amagat’s law of partial volumes asserts that the volume of a mixture is equal to the sum of the partial volumes of its components.
• 2.16: Problems
${}^{1}$${}^{\ }$We use the over-bar to indicate that the quantity is per mole of substance. Thus, we write $\overline{N}$ to indicate the number of particles per mole. We write $\overline{M}$ to represent the gram molar mass. In Chapter 14, we introduce the use of the over-bar to denote a partial molar quantity; this is consistent with the usage introduced here, but carries the further qualification that temperature and pressure are constant at specified values. We also use the over-bar to indicate the arithmetic average; such instances will be clear from the context.
${}^{2}$${}^{\ }$The unit of temperature is named the kelvin, which is abbreviated as K.
${}^{3}$${}^{\ }$A redefinition of the size of the unit of temperature, the kelvin, is under consideration. The practical effect will be inconsequential for any but the most exacting of measurements.
${}^{4}$${}^{\ }$For a thorough discussion of the development of the concept of temperature, the evolution of our means to measure it, and the philosophical considerations involved, see Hasok Chang, Inventing Temperature, Oxford University Press, 2004.
${}^{5}$${}^{\ }$See T. L. Hill, An Introduction to Statistical Thermodynamics, Addison-Wesley Publishing Company, 1960, p 286.
${}^{6}$${}^{\ }$See S. M. Blinder, Advanced Physical Chemistry, The Macmillan Company, Collier-Macmillan Canada, Ltd., Toronto, 1969, pp 185-189.
2.01: Boyle's Law
Robert Boyle discovered Boyle’s law in 1662. Boyle’s discovery was that the pressure, P, and volume, V, of a gas are inversely proportional to one another if the temperature, T, is held constant. We can imagine rediscovering Boyle’s law by trapping a sample of gas in a tube and then measuring its volume as we change the pressure. We would observe behavior like that in Figure 1. We can represent this behavior mathematically as
$PV={\alpha }^*(n,T) \nonumber$
where we recognize that the “constant”, ${\alpha }^*$, is actually a function of the temperature and of the number of moles, $n$, of gas in the sample. That is, the product of pressure and volume is constant for a fixed quantity of gas at a fixed temperature.
A little thought convinces us that we can be more specific about the dependence on the quantity of gas. Suppose that we have a volume of gas at a fixed pressure and temperature, and imagine that we introduce a very thin barrier that divides the volume into exactly equal halves, without changing anything else. In this case, the pressure and temperature of the gas in each of the new containers will be the same as they were originally. But the volume is half as great, and the number of moles in each of the half-size containers must also be half of the original number. That is, the pressure–volume product must be directly proportional to the number of moles of gas in the sample:
$PV=n\alpha (T) \nonumber$
where $\alpha (T)$ is now a function only of temperature. When we repeat this experiment using different gaseous substances, we discover a further remarkable fact: Not only do they all obey Boyle’s law, but also the value of $\alpha (T)$ is the same for any gas.
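A small numerical sketch of “rediscovering” Boyle's law from made-up pressure-volume data shows the constancy of the product $PV$; the numbers below are hypothetical, not measured values.

```python
# Hypothetical pressure-volume data (bar, L) for a fixed quantity of gas
# held at a fixed temperature.
pressures = [0.50, 1.00, 2.00, 4.00]
volumes   = [48.9, 24.5, 12.2, 6.11]

# Boyle's law: the product P*V should be (nearly) the same for every pair.
products = [P * V for P, V in zip(pressures, volumes)]
print(products)                           # all close to about 24.5 bar L
print(max(products) - min(products))      # small spread, consistent with PV = constant
```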
2.02: Charles' Law
Quantitative experiments establishing the law were first published in 1802 by Gay-Lussac, who credited Jacques Charles with having discovered the law earlier. Charles’ law relates the volume and temperature of a gas when measurements are made at constant pressure. We can imagine rediscovering Charles’ law by trapping a sample of gas in a tube and measuring its volume as we change the temperature, while keeping the pressure constant. This presumes that we have a way to measure temperature, perhaps by defining it in terms of the volume of a fixed quantity of some other fluid—like liquid mercury. At a fixed pressure, $P_1$, we observe a linear relationship between the volume of a sample of gas and its temperature, like that in Figure 2. If we repeat this experiment with the same gas sample at a higher pressure, $P_2$, we observe a second linear relationship between the volume and the temperature of the gas. If we extend these lines to their intersection with the temperature axis at zero volume, we make a further important discovery: Both lines intersect the temperature axis at the same point.
We can represent this behavior mathematically as
$V={\beta }^*\left(n,P\right)T^*+{\gamma }^*(n,P) \nonumber$
where we recognize that both the slope and the V-axis intercept of the graph depend on the pressure of the gas and on the number of moles of gas in the sample. A little reflection shows that here too the slope and intercept must be directly proportional to the number of moles of gas, so that we can rewrite our equation as
$V=n\beta \left(P\right)T^*+n\gamma (P) \nonumber$
When we repeat these experiments with different gaseous substances, we discover an additional important fact: $\beta (P)$ and $\gamma (P)$ are the same for any gas. This means that the temperature at which the volume extrapolates to zero is the same for any gas and is independent of the constant pressure we maintain as we vary the temperature (Figure 2).
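The extrapolation described above is easy to imitate numerically. The sketch below fits a straight line to hypothetical volume-temperature data for one mole of gas near 1 bar and finds the Celsius temperature at which the fitted volume reaches zero; for nearly ideal data this lands near -273 C whatever constant pressure we choose.

```python
import numpy as np

# Hypothetical volumes (L) of one mole of gas at a constant pressure near 1 bar,
# measured at several Celsius temperatures; values mimic nearly ideal behavior.
t_celsius = np.array([0.0, 25.0, 50.0, 75.0, 100.0])
volume    = np.array([22.7, 24.8, 26.9, 28.9, 31.0])

slope, intercept = np.polyfit(t_celsius, volume, 1)   # fit V = slope*t + intercept
t_zero_volume = -intercept / slope                     # temperature where V extrapolates to zero
print(f"extrapolated zero-volume temperature: {t_zero_volume:.0f} C")   # close to -273 C
```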
2.03: Avogadro's Hypothesis
Avogadro’s hypothesis is another classical gas law. It can be stated: At the same temperature and pressure, equal volumes of different gases contain the same number of molecules.
When the mass, in grams, of an ideal gas sample is equal to the gram molar mass (traditionally called the molecular weight) of the gas, the number of molecules in the sample is equal to Avogadro’s number, $\overline{N}$${}^{1}$. Avogadro’s number is the number of molecules in a mole. In the modern definition, one mole is the number of atoms of $C^{12}$ in exactly 12 g of $C^{12}$. That is, the number of atoms of $C^{12}$ in exactly 12 g of $C^{12}$ is Avogadro’s number. The currently accepted value is $\mathrm{6.02214199\times }{\mathrm{10}}^{\mathrm{23}}$ molecules per mole. We can find the gram atomic mass of any other element by finding the mass of that element that combines with exactly 12 g of $C^{12}$ in a compound whose molecular formula is known.
The validity of Avogadro’s hypothesis follows immediately either from the fact that the Boyle’s law constant, $\alpha (T)$, is the same for any gas or from the fact that the Charles’ law constants, $\beta (P)$ and $\gamma \left(P\right)$, are the same for any gas. However, this entails a significant circularity; these experiments can show that $\alpha (T)$, $\beta (P)$, and $\gamma \left(P\right)$ are the same for any gas only if we know how to find the number of moles of each gas that we use. To do so, we must know the molar mass of each gas. Avogadro’s hypothesis is crucially important in the history of chemistry: Avogadro’s hypothesis made it possible to determine relative molar masses. This made it possible to determine molecular formulas for gaseous substances and to create the atomic mass scale.
2.04: Finding Avogadro's Number
This use of Avogadro’s number raises the question of how we know its value. There are numerous ways to measure Avogadro’s number. One such method is to divide the charge of one mole of electrons by the charge of a single electron. We can obtain the charge of a mole of electrons from electrolysis experiments. The charge of one electron can be determined in a famous experiment devised by Robert Millikan, the “Millikan oil-drop experiment”. The charge on a mole of electrons is called the faraday. Experimentally, it has the value $96,485\ \mathrm{C\ }{\mathrm{mol}}^{\mathrm{-1}}$${}^{\ }$(coulombs per mole). As determined by Millikan’s experiment, the charge on one electron is $1.6022\times {10}^{-19}\ \mathrm{C}$. Then
$\left(\frac{96,485\ C}{\mathrm{mole\ electrons}}\right)\left(\frac{1\ \mathrm{electron}}{1.6022\times {10}^{-19\ }\ C}\right) =6.022\times {10}^{23\ \ }\frac{\mathrm{electrons}}{\mathrm{mole\ electrons}} \nonumber$
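The arithmetic is easy to reproduce; the values below are the ones quoted in the text.

```python
faraday = 96485.0              # charge of one mole of electrons, C/mol (from electrolysis)
electron_charge = 1.6022e-19   # charge of one electron, C (Millikan oil-drop experiment)

avogadro = faraday / electron_charge
print(f"{avogadro:.4e} electrons per mole")   # about 6.022e+23
```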
2.05: The Kelvin Temperature Scale
Thus far, we have assumed nothing about the value of the temperature corresponding to any particular volume of our standard fluid. We could define one unit of temperature to be any particular change in the volume of our standard fluid. Historically, Fahrenheit defined one unit (degree) of temperature to be one one-hundredth of the increase in volume of a fixed quantity of standard fluid as he warmed it from the lowest temperature he could achieve, which he elected to call 0 degrees, to the temperature of his body, which he elected to call 100 degrees. Fahrenheit’s zero of temperature was achieved by mixing salt with ice and water. This is not a very reproducible condition, so the temperature of melting ice (with no salt present) soon became the calibration standard. Fahrenheit’s experiments put the melting point of ice at 32 F. The normal temperature for a healthy person is now taken to be 98.6 F; possibly Fahrenheit had a slight fever when he was doing his calibration experiments. In any case, human temperatures vary enough so that Fahrenheit’s 100-degree point was not very practical either. The boiling point of water, which Fahrenheit’s experiments put at 212 F, became the calibration standard. Later, the centigrade scale was developed with fixed points at 0 degrees and 100 degrees at the melting point of ice and the boiling point of water, respectively. The centigrade scale is now called the Celsius scale after Anders Celsius, a Swedish astronomer. In 1742, Celsius proposed a scale on which the temperature interval between the boiling point and the freezing point of water was divided into 100 degrees; however, a more positive number corresponded to a colder condition.
Further reflection convinces us that the Charles’ law equation can be simplified by defining a new temperature scale. When we extend the straight line in any of our volume-versus-temperature plots, it always intersects the zero-volume horizontal line at the same temperature. Since we cannot associate any meaning with a negative volume, we infer that the temperature at zero volume represents a natural minimum point for our temperature scale. Let the value of $T^*$ at this intersection be $T^*_0$. Substituting into our volume-temperature relationship, we have
$0=n\beta \left(P\right)T^*_0+n\gamma (P) \nonumber$
or
$\gamma \left(P\right)=-\beta (P)T^*_0 \nonumber$
So that
\begin{align} V &= n\beta \left(P\right)T^*-n\beta (P)T^*_0 \\[4pt] &=n\beta \left(P\right)[T^*-T^*_0] \\[4pt] &=n\beta \left(P\right)T \end{align} \nonumber
where we have created a new temperature scale. Temperature values on our new temperature scale, T, are related to temperature values on the old temperature scale, $T^*$, by the equation
$T=T^*-T^*_0 \nonumber$
When the size of one unit of temperature is defined using the Celsius scale (i.e., $T^*$ is the temperature in degrees Celsius), this is the origin of the Kelvin temperature scale ${}^{2}$. Then, on the Kelvin temperature scale, $T^*_0$ is -273.15 degrees. (That is, $T=0$ when $T^*=-273.15$; 0 K is $-273.15$ degrees Celsius.) The temperature at which the volume extrapolates to zero is called the absolute zero of temperature. When the size of one unit of temperature is defined using the Fahrenheit scale and the zero of temperature is set at absolute zero, the resulting temperature scale is called the Rankine scale, after William Rankine, a Scottish engineer who proposed it in 1859.
2.06: Deriving the Ideal Gas Law from Boyle's and Charles' Laws
We can solve Boyle’s law and Charles’ law for the volume. Equating the two, we have
$\dfrac{n\alpha (T)}{P}=n\beta \left(P\right)T \nonumber$
The number of moles, $n$, cancels. Rearranging gives
$\dfrac{\alpha (T)}{T}=P\beta (P) \nonumber$
In this equation, the left side is a function only of temperature, the right side only of pressure. Since pressure and temperature are independent of one another, this can be true only if each side is in fact constant. If we let this constant be $R$, we have
$\alpha \left(T\right)=RT \nonumber$
and
$\beta \left(P\right)={R}/{P} \nonumber$
Since the values of $\alpha (T)$ and $\beta (P)$ are independent of the gas being studied, the value of $R$ is also the same for any gas. $R$ is called the gas constant, the ideal gas constant, or the universal gas constant. Substituting the appropriate relationship into either Boyle’s law or Charles’ law gives the ideal gas equation
$PV=nRT \nonumber$
The product of pressure and volume has the units of work or energy, so the gas constant has units of energy per mole per degree. (Remember that we simplified the form of Charles' law by defining the Kelvin temperature scale; temperature in the ideal gas equation is in kelvins.)
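As a quick numerical sketch of the ideal gas equation, we can compute the molar volume of an ideal gas at a few conditions; the function below is illustrative, not part of any particular library.

```python
R = 0.08314  # gas constant, L bar K^-1 mol^-1

def ideal_gas_volume(n_mol: float, T_kelvin: float, P_bar: float) -> float:
    """Volume in liters from the ideal gas equation, PV = nRT."""
    return n_mol * R * T_kelvin / P_bar

print(ideal_gas_volume(1.0, 273.15, 1.0))   # about 22.7 L per mole at 273.15 K and 1 bar
print(ideal_gas_volume(1.0, 298.15, 1.0))   # about 24.8 L per mole at 298.15 K and 1 bar
```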
2.07: The Ideal Gas Constant and Boltzmann's Constant
Having developed the ideal gas equation and analyzed experimental results for a variety of gases, we will have found the value of R. It is useful to have R expressed using a number of different energy units. Frequently useful values are
\begin{aligned} R &= 8.314 \text{ Pa m}^{3} \text{ K}^{-1} \text{ mol}^{-1} \\ &= 8.314 \text{ J K}^{-1} \text{ mol}^{-1} \\ &= 0.08314 \text{ L bar K}^{-1} \text{ mol}^{-1} \\ &= 1.987 \text{ cal K}^{-1} \text{ mol}^{-1} \\ &= 0.08205 \text{ L atm K}^{-1} \text{ mol}^{-1} \end{aligned} \nonumber
We also need the gas constant expressed per molecule rather than per mole. Since there is Avogadro’s number of molecules per mole, we can divide any of the values above by $\overline{N}$ to get $R$ on a per-molecule basis. Traditionally, however, this constant is given a different name; it is Boltzmann’s constant, usually given the symbol $k$.
$k={R}/{\overline{N}}=1.381\times {10}^{-23}\ \mathrm{J}\ {\mathrm{K}}^{-1}\ {\mathrm{molecule}}^{-1} \nonumber$
This means that we can also write the ideal gas equation as $PV=nRT=n\overline{N}kT$. Because the number of molecules in the sample, $N$, is $N=n\overline{N}$, we have
$PV=NkT. \nonumber$
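The per-molecule form is just a rescaling of the per-mole form. A short sketch, using rounded values of $R$ and $\overline{N}$, checks that $nRT$ and $NkT$ give the same pressure for the same sample.

```python
R = 8.314           # J K^-1 mol^-1
N_bar = 6.022e23    # Avogadro's number, molecules per mole

k = R / N_bar
print(f"k = {k:.4e} J/K per molecule")   # about 1.381e-23

# PV = nRT and PV = NkT give the same pressure for the same sample.
n, T, V = 1.0, 298.15, 0.0248            # mol, K, m^3 (about 24.8 L)
N = n * N_bar                            # number of molecules in the sample
print(n * R * T / V, N * k * T / V)      # identical, about 1.0e5 Pa
```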
2.08: Real Gases Versus Ideal Gases
Now, we need to expand on the qualifications with which we begin this chapter. We imagine that the results of a large number of experiments are available for our analysis. Our characterization of these results has been that all gases obey the same equations—Boyle’s law, Charles’ law, and the ideal gas equation—and do so exactly. This is an oversimplification. In fact they are always approximations. They are approximately true for all gases under all “reasonable” conditions, but they are not exactly true for any real gas under any condition. It is useful to introduce the idea of hypothetical gases that obey the classical gas equations exactly. In the previous section, we call the combination of Boyle’s law and Charles’ law the ideal gas equation. We call the hypothetical substances that obey this equation ideal gases. Sometimes we refer to the classical gas laws collectively as the ideal gas laws.
At very high gas densities, the classical gas laws can be very poor approximations. As we have noted, they are better approximations the lower the density of the gas. In fact, experiments show that the pressure–volume–temperature behavior of any real gas becomes arbitrarily close to that predicted by the ideal gas equation in the limit as the pressure goes to zero. This is an important observation that we use extensively.
At any given pressure and temperature, the ideal gas laws are better approximations for a compound that has a lower boiling point than they are for a compound with a higher boiling point. Another way of saying this is that they are better approximations for molecules that are weakly attracted to one another than they are for molecules that are strongly attracted to one another.
Forces between molecules cause them to both attract and repel one another. The net effect depends on the distance between them. If we assume that there are no intermolecular forces acting between gas molecules, we can develop exact theories for the behavior of macroscopic amounts of the gas. In particular, we can show that such substances obey the ideal gas equation. (We shall see that a complete absence of repulsive forces implies that the molecules behave as point masses.) Evidently, the difference between the behavior of a real gas and the behavior it would exhibit if it were an ideal gas is just a measure of the effects of intermolecular forces.
The ideal gas equation is not the only equation that gives a useful representation for the interrelation of gas pressure–volume–temperature data. There are many such equations of state. They are all approximations, but each can be a particularly useful approximation in particular circumstances. We discuss van der Waals' equation and the virial equations later in this chapter. Nevertheless, we use the ideal gas equation extensively.
We will see that much of chemical thermodynamics is based on the behavior of ideal gases. Since there are no ideal gases, this may seem odd, at best. If there are no ideal gases, why do we waste time talking about them? After all, we don’t want to slog through tedious, long-winded, pointless digressions. We want to understand how real stuff behaves! Unfortunately, this is more difficult. The charm of ideal gases is that we can understand their behavior; the ideal gas equation expresses this understanding in a mathematical model. Real gases are another story. We can reasonably say that we can best understand the behavior of a real gas by understanding how and why it is different from the behavior of a (hypothetical) ideal gas that has the same molecular structure.
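To make concrete the idea that departures from ideality measure intermolecular forces, we can compare the pressure predicted by the ideal gas equation with that predicted by van der Waals' equation (introduced in Section 2.12) for one mole of carbon dioxide. The van der Waals constants below are approximate literature values, used here only for illustration; the deviation grows as the molar volume shrinks.

```python
R = 0.08314   # L bar K^-1 mol^-1

# Approximate van der Waals constants for CO2 (literature values, for illustration only).
a = 3.64      # L^2 bar mol^-2, measures intermolecular attraction
b = 0.0427    # L mol^-1, measures molecular size (repulsion at short range)

def p_ideal(Vm: float, T: float) -> float:
    """Pressure of one mole of an ideal gas, bar."""
    return R * T / Vm

def p_van_der_waals(Vm: float, T: float) -> float:
    """Pressure of one mole of a van der Waals gas, bar."""
    return R * T / (Vm - b) - a / Vm**2

T = 300.0                       # K
for Vm in (10.0, 1.0, 0.2):     # molar volume, L; smaller volume = higher density
    print(f"Vm = {Vm:5.1f} L: ideal {p_ideal(Vm, T):7.2f} bar, "
          f"van der Waals {p_van_der_waals(Vm, T):7.2f} bar")
```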
2.09: Temperature and the Ideal Gas Thermometer
In Section 2.2 we suppose that we have a thermometer that we can use to measure the temperature of a gas. We suppose that this thermometer uses a liquid, and we define an increase in temperature by the increase in the volume of this liquid. Our statement of Charles’ law asserts that the volume of a gas is a linear function of the volume of the liquid in our thermometer, and that the same linear function is observed for any gas. As we note in Section 2.8, there is a problem with this statement. Careful experiments with such thermometers produce results that deviate from Charles’ law. With sufficiently accurate volume measurements, this occurs to some extent for any choice of the liquid in the thermometer. If we make sufficiently accurate measurements, the volume of a gas is not exactly proportional to the volume of any liquid (or solid) that we might choose as the working substance in our thermometer. That is, if we base our temperature scale on a liquid or solid substance, we observe deviations from Charles’ law. There is a further difficulty with using a liquid as the standard fluid on which to base our temperature measurements: temperatures outside the liquid range of the chosen substance have to be measured in some other way.
Evidently, we can choose to use a gas as the working fluid in our thermometer. That is, our gas-volume measuring device is itself a thermometer. This fact proves to be very useful because of a further experimental observation. To a very good approximation, we find: If we keep the pressures in the thermometer and in some other gaseous system constant at low enough values, both gases behave as ideal gases, and we find that the volumes of the two gases are proportional to each other over any range of temperature. Moreover, this proportionality is observed for any choice of either gas. This means that we can define temperature in terms of the expansion of any constant-pressure gas that behaves ideally. In principle, we can measure the same temperature using any gas, so long as the constant operating pressure is low enough. When we do so, our device is called the ideal gas thermometer. In so far as any gas behaves as an ideal gas at a sufficiently low pressure, any real gas can be used in an ideal gas thermometer and to measure any temperature accurately. Of course, practical problems emerge when we attempt to make such measurements at very high and very low temperatures.
The (very nearly) direct proportionality of two low-pressure real gas volumes contrasts with what we observe for liquids and solids. In general, the volume of a given liquid (or solid) substance is not exactly proportional to the volume of a second liquid (or solid) substance over a wide range of temperatures.
In practice, the ideal-gas thermometer is not as convenient to use as other thermometers—like the mercury-in-glass thermometer. However, the ideal-gas thermometer is used to calibrate other thermometers. Of course, we have to calibrate the ideal-gas thermometer itself before we can use it.
We do this by assigning a temperature of 273.16 K to the triple point of water. (It turns out that the melting point of ice isn’t sufficiently reproducible for the most precise work. Recall that the triple point is the temperature and pressure at which all three phases of water are at equilibrium with one another, with no air or other substances present. The triple-point pressure is 611 Pa or $6.03\times {10}^{-3}$ atm. See Section 6.3.) From both theoretical considerations and experimental observations, we are confident that no system can attain a temperature below absolute zero. Thus, the size${}^{3}$ of the kelvin (one degree on the Kelvin scale) is fixed by the difference in temperature between a system at the triple point of water and one at absolute zero. If our ideal gas thermometer has volume $V$ at thermal equilibrium with some other constant-temperature system, the proportionality of $V$ and $T$ means that
$\frac{T}{V}=\frac{273.16}{V_{273.16}} \nonumber$
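A short numerical sketch of this proportionality, assuming strictly ideal behavior; the function name and the bulb volumes below are illustrative only:

```python
# Sketch: temperature from an ideal-gas thermometer reading, assuming the
# gas behaves ideally.  V_tp is the bulb volume measured at the triple
# point of water; V is the volume measured at the unknown temperature.

T_TRIPLE = 273.16          # K, assigned by definition

def ideal_gas_temperature(V, V_tp):
    """Return T from T / V = 273.16 / V_tp (constant, low pressure)."""
    return T_TRIPLE * V / V_tp

# Illustrative volumes: 110.0 mL at the unknown temperature, 100.0 mL at
# the triple point of water.
print(ideal_gas_temperature(110.0, 100.0))   # 300.476 K
```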
With the triple point fixed at 273.16 K, experiments find the freezing point of air-saturated water to be 273.15 K when the system pressure is 1 atmosphere. (So the melting point of ice is 273.15 K, and the triple point is 0.01 C. We will find two reasons for the fact that the melting point is lower than the triple point: In Section 6.3 we find that the melting point of ice decreases as the pressure increases. In Section 16.10 we find that solutes usually decrease the temperature at which the liquid and solid states of a substance are in phase equilibrium.)
If we could use an ideal gas in our ideal-gas thermometer, we could be confident that we had a rigorous operational definition of temperature. However, we note in Section 2.8 that any real gas will exhibit departures from ideal gas behavior if we make sufficiently accurate measurements. For extremely accurate work, we need a way to correct the temperature value that we associate with a given real-gas volume. The issue here is the value of the partial derivative
${\left(\frac{\partial V}{\partial T}\right)}_P \nonumber$
For one mole of an ideal gas,
${\left(\frac{\partial V}{\partial T}\right)}_P=\frac{R}{P}=\frac{V}{T} \nonumber$
is a constant. For a real gas, it is a function of temperature. Let us assume that we know this function. Let the molar volume of the real gas at the triple point of water be $V_{273.16}$ and its volume at thermal equilibrium with a system whose true temperature is $T$ be $V_T$. We have
$\int_{273.16}^T \left( \frac{ \partial V}{ \partial T} \right)_P dT = \int_{V_{273.16}}^{V_T} dV = V_T - V_{273.16} \nonumber$
When we know the integrand on the left as a function of temperature, we can do the integration and find the temperature corresponding to any measured volume, $V_T$.
When the working fluid in our thermometer is a real gas we make measurements to find ${\left({\partial V}/{\partial T}\right)}_P$ as a function of temperature. Here we encounter a circularity: To find ${\left({\partial V}/{\partial T}\right)}_P$ from pressure-volume-temperature data we must have a way to measure temperature; however, this is the very thing that we are trying to find.
In principle, we can surmount this difficulty by iteratively correcting the temperature that we associate with a given real-gas volume. As a first approximation, we use the temperatures that we measure with an uncorrected real-gas thermometer. These temperatures are a first approximation to the ideal-gas temperature scale. Using this scale, we make non-pressure-volume-temperature measurements that establish ${\left({\partial V}/{\partial T}\right)}_P$ as a function of temperature for the real gas. [This function is
${\left(\frac{\partial V}{\partial T}\right)}_P=\frac{V+{\mu }_{JT}C_P}{T} \nonumber$
where $C_P$ is the constant-pressure heat capacity and ${\mu }_{JT}$ is the Joule-Thomson coefficient. Both are functions of temperature. We introduce $C_P$ in Section 7.9. We discuss the Joule-Thomson coefficient further in Section 2.10 below, and in detail in Section 10.14. Typically $V\gg {\mu }_{JT}C_P$, and the value of ${\left({\partial V}/{\partial T}\right)}_P$ is well approximated by ${V}/{T}={R}/{P}$.] With ${\left({\partial V}/{\partial T}\right)}_P$ established using this scale, integration yields a second approximation to the ideal-gas temperatures. We could repeat this process until successive temperature scales converge at the number of significant figures that our experimental accuracy can support.
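The correction procedure just described can be sketched numerically. The sketch below assumes that $(\partial V/\partial T)_P$ is already available as a function of the provisional temperature scale (supplied here as a hypothetical `dV_dT` callable); it solves the integral equation above for $T$ by simple quadrature and bisection. The ideal-gas check at the end, where $(\partial V/\partial T)_P = R/P$, is only a consistency test; an iteration would repeat the whole calculation with an improved `dV_dT`.

```python
# Sketch: recover T from a measured molar volume V_T by solving
#   integral from 273.16 K to T of (dV/dT)_P dT  =  V_T - V_tp
# with trapezoidal quadrature and bisection.  dV_dT is assumed known as a
# function of the provisional temperature scale (hypothetical input).

def temperature_from_volume(V_T, V_tp, dV_dT, T_lo=100.0, T_hi=1000.0, tol=1e-6):
    def residual(T):
        n = 1000                              # quadrature steps
        h = (T - 273.16) / n
        s = 0.5 * (dV_dT(273.16) + dV_dT(T))
        for i in range(1, n):
            s += dV_dT(273.16 + i * h)
        return s * h - (V_T - V_tp)           # zero at the sought temperature

    lo, hi = T_lo, T_hi                       # bracket assumed to contain the root
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if residual(lo) * residual(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Consistency check with an ideal gas at 1 bar, where (dV/dT)_P = R/P:
R, P = 8.314, 1.0e5                           # J mol-1 K-1, Pa
V_tp = R * 273.16 / P                         # molar volume at the triple point
print(temperature_from_volume(R * 300.0 / P, V_tp, lambda T: R / P))   # ~300.0 K
```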
In practice, there are several kinds of ideal-gas thermometers, and numerous corrections are required for very accurate measurements. There are also numerous other ways to measure temperature, each of which has its own complications. Our development has considered some of the ideas that have given rise to the concept${}^{4}$ that temperature is a fundamental property of nature that can be measured using a thermodynamic-temperature scale on which values begin at zero and increase to arbitrarily high values. This thermodynamic temperature scale is a creature of theory, whose real-world counterpart would be the scale established by an ideal-gas thermometer whose gas actually obeyed $PV=nRT$ at all conditions. We have seen that such an ideal-gas thermometer is itself a creature of theory.
The current real-world standard temperature scale is the International Temperature Scale of 1990 (ITS-90). This defines temperature over a wide range in terms of the pressure-volume relationships of helium isotopes and the triple points and freezing points of several selected substances. These fixed points anchor the temperature at each of several conditions up to 1357.77 K (the freezing point of copper). Needless to say, the temperatures assigned at the fixed points are the results of painstaking experiments designed to give the closest possible match to the thermodynamic scale. A variety of measuring devices—thermometers—can be used to interpolate temperature values between different pairs of fixed points.
We can derive Boyle’s law from Newtonian mechanics. This derivation assumes that gas molecules behave like point masses that do not interact with one another. The pressure of the gas results from collisions of the gas molecules with the walls of the container. The contribution of one collision to the force on the wall is equal to the change in the molecule’s momentum divided by the time between collisions. The magnitude of this force depends on the molecule’s speed and the angle at which it strikes the wall. Each such collision makes a contribution to the pressure that is equal to the force divided by the area of the wall. To find the pressure from this model, it is necessary to average over all possible molecular speeds and all possible collision angles. In Chapter 4, we derive Boyle’s law in this way.
We can do a simplified derivation by making a number of assumptions. We assume that all of the molecules in a sample of gas have the same speed. Let us call it $u$. As sketched in Figure 3, we assume that the container is a cubic box whose edge length is $d$. If we consider all of the collisions between molecules and walls, it is clear that each wall will experience ${1}/{6}$ of the collisions; or, each pair of opposing walls will experience ${\mathrm{1}}/{\mathrm{3}}$ of the collisions. Instead of averaging over all of the possible angles at which a molecule could strike a wall and all of the possible times between collisions, we assume that the molecules travel at constant speed back and forth between opposite faces of the box. Since they are point masses, they never collide with one another. If we suppose that ${\mathrm{1}}/{\mathrm{3}}$ of the molecules go back and forth between each pair of opposite walls, we can expect to accomplish the same kind of averaging in setting up our artificial model that we achieve by averaging over the real distribution of angles and speeds. In fact, this turns out to be the case; the derivation below gets the same result as the rigorous treatment we develop in Chapter 4.
Since each molecule goes back and forth between opposite walls, it collides with each wall once during each round trip. At each collision, the molecule’s speed remains constant, but its direction changes by 180${}^{o}$; that is, the molecule’s velocity changes from $\mathop{u}\limits^{\rightharpoonup}$ to $-\mathop{u}\limits^{\rightharpoonup}$. Letting $\Delta t$ be the time required for a round trip, the distance traversed in a round trip is
\begin{align*} 2d &=\left|\mathop{u}\limits^{\rightharpoonup}\right|\Delta t \[4pt] &=u\Delta t \end{align*}
The magnitude of the momentum change for a molecule in one collision is
\begin{align*} \left|\Delta (m\mathop{u}\limits^{\rightharpoonup})\right| &=\left|m{\mathop{u}\limits^{\rightharpoonup}}_{final}-m{\mathop{u}\limits^{\rightharpoonup}}_{initial}\right| \[4pt] &=\left|m{\mathop{u}\limits^{\rightharpoonup}}_{final}-\left({-m\mathop{u}\limits^{\rightharpoonup}}_{final}\right)\right| \[4pt] &=2mu \end{align*}
The magnitude of the force on the wall from one collision is
$F=\frac{\left|\Delta \left(m\mathop{u}\limits^{\rightharpoonup}\right)\right|}{\Delta t}=\frac{2mu}{\left({2d}/{u}\right)}=\frac{mu^2}{d} \nonumber$
and the pressure contribution from one collision on the wall, of area $d^2$, is
$P=\frac{F}{A}=\frac{mu^2}{d\bullet d^2}=\frac{{mu}^2}{d^3}=\frac{{mu}^2}{V} \nonumber$
so that we have
$PV=mu^2 \nonumber$
from the collision of one molecule with one wall.
If the number of molecules in the box is $N$, $N/3$ of them make collisions with this wall, so that the total pressure on one wall attributable to all $N$ molecules in the box is
$P=\frac{mu^2}{V}\frac{N}{3} \nonumber$
or
$PV=\frac{Nmu^2}{3} \nonumber$
Since the ideal gas equation can be written as $PV=NkT$ we see that ${Nmu^2}/{3}=NkT$ so that $mu^2=3kT$ and
$u=\sqrt{\frac{3kT}{m}} \nonumber$
Thus we have found a relationship between the molecular speed and the temperature of the gas. (The actual speed of a molecule, $v$, can have any value between zero and—for present purposes—infinity. When we average the values of $v^2$ for many molecules, we find the average value of the squared speeds, $\overline{v^2}$. In Chapter 4, we find that $u^2=\overline{v^2}$. That is, the average speed we use in our derivation turns out to be a quantity called the root-mean-square speed, $v_{rms}=u=\sqrt{\overline{v^2}}$.) This result also gives us the (average) kinetic energy of a single gas molecule:
$KE=\frac{mu^2}{2}=\frac{3kT}{2} \nonumber$
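A quick numerical illustration of these last two results, using nitrogen at 300 K; the constants are standard values, rounded to a few significant figures:

```python
# Sketch: rms speed u = sqrt(3kT/m) and average translational kinetic
# energy (3/2)kT for one N2 molecule at 300 K.
from math import sqrt

k = 1.380649e-23          # J K-1, Boltzmann constant
N_A = 6.022e23            # mol-1, Avogadro's number
M_N2 = 28.014e-3          # kg mol-1, molar mass of N2

T = 300.0                 # K
m = M_N2 / N_A            # mass of one molecule, kg

u = sqrt(3 * k * T / m)   # about 517 m/s
ke = 1.5 * k * T          # about 6.2e-21 J
print(f"u = {u:.0f} m/s,  KE = {ke:.2e} J")
```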
From this derivation, we have a simple mechanical model that explains Boyle’s law as the logical consequence of point-mass molecules colliding with the walls of their container. By combining this result with the ideal gas equation, we find that the average speed of ideal gas molecules depends only on the temperature. From this we have the very important result that the translational kinetic energy of an ideal gas depends only on temperature.
Since our non-interacting point-mass molecules have no potential energy arising from their interactions with one another, their translational kinetic energy is the whole of their energy. (Because two such molecules neither attract nor repel one another, no work is required to change the distance between them. The work associated with changing the volume of a confined sample of an ideal gas arises because of the pressure the molecules exert on the walls of the container; the pressure arises because of the molecules’ kinetic energy.) The energy of one mole of monatomic ideal gas molecules is
$KE=\left({3}/{2}\right)RT \nonumber$
When we expand our concept of ideal gases to include molecules that have rotational or vibrational energy, but which neither attract nor repel one another, it remains true that the energy of a macroscopic sample depends only on temperature. However, the molar energy of such a gas is greater than $\left({3}/{2}\right)RT$, because of the energy associated with these additional motions.
We make extensive use of the conclusion that the energy of an ideal gas depends only on temperature. As it turns out, this conclusion follows rigorously from the second law of thermodynamics. In Chapter 10, we show that
${\left(\frac{\partial E}{\partial V}\right)}_T={\left(\frac{\partial E}{\partial P}\right)}_T=0 \nonumber$
for a substance that obeys the ideal gas equation; at constant temperature, the energy of an ideal gas is independent of the volume and independent of the pressure. So long as pressure, volume, and temperature are the only variables needed to specify its state, the laws of thermodynamics imply that the energy of an ideal gas depends only on temperature.
While the energy of an ideal gas is independent of pressure, the energy of a real gas is a function of pressure at a given temperature. At ordinary pressures and temperatures, this dependence is weak and can often be neglected. The first experimental investigation of this issue was made by James Prescott Joule, for whom the SI unit of energy is named. Beginning in 1838, Joule did a long series of careful measurements of the mechanical equivalent of heat. These measurements formed the original experimental basis for the kinetic theory of heat. Among Joule’s early experiments was an attempt to measure the heat absorbed by a gas as it expanded into an evacuated container, a process known as a free expansion. No absorption of heat was observed, which implied that the energy of the gas was unaffected by the volume change. However, it is difficult to do this experiment with meaningful accuracy.
Subsequently, Joule collaborated with William Thomson (Lord Kelvin) on a somewhat different experimental approach to essentially the same question. The Joule-Thomson experiment provides a much more sensitive measure of the effects of intermolecular forces of attraction and repulsion on the energy of a gas during its expansion. Since our definition of an ideal gas includes the stipulation that there are no intermolecular forces, the Joule-Thomson experiment is consistent with the conclusion that the energy of an ideal gas depends only on temperature. However, since intermolecular forces are not zero for any real gas, our analysis reaches this conclusion in a somewhat indirect way. The complication arises because the Joule-Thomson results are not entirely consistent with the idea that all properties of a real gas approach those of an ideal gas at a sufficiently low pressure. (The best of models can have limitations.) We discuss the Joule-Thomson experiment in Section 10.14.
2.11: The Barometric Formula
We can measure the pressure of the atmosphere at any location by using a barometer. A mercury barometer is a sealed tube that contains a vertical column of liquid mercury. The space in the tube above the liquid mercury is occupied by mercury vapor. Since the vapor pressure of liquid mercury at ordinary temperatures is very low, the pressure at the top of the mercury column is very low and can usually be ignored. The pressure at the bottom of the column of mercury is equal to the pressure of a column of air extending from the elevation of the barometer all the way to the top of the earth’s atmosphere. As we take the barometer to higher altitudes, we find that the height of the mercury column decreases, because less and less of the atmosphere is above the barometer.
If we assume that the atmosphere is composed of an ideal gas and that its temperature is constant, we can derive an equation for atmospheric pressure as a function of altitude. Imagine a cylindrical column of air extending from the earth’s surface to the top of the atmosphere (Figure 4). The force exerted by this column at its base is the weight of the air in the column; the pressure is this weight divided by the cross-sectional area of the column. Let the cross-sectional area of the column be $A$.
Consider a short section of this column. Let the bottom of this section be a distance $h$ from the earth’s surface, while its top is a distance $h+\Delta h$ from the earth’s surface. The volume of this cylindrical section is then $V_S=A\Delta h$. Let the mass of the gas in this section be $M_S$. The pressure at $h+\Delta h$ is less than the pressure at $h$ by the weight of this gas divided by the cross-sectional area. The weight of the gas is $M_Sg$. The pressure difference is $\Delta P=-{M_Sg}/{A}$. We have
$\frac{P\left(h+\Delta h\right)-P\left(h\right)}{\Delta h}=\frac{\Delta P}{\Delta h}=\frac{-M_Sg}{A\Delta h}=\frac{-M_Sg}{V_S} \nonumber$
Since we are assuming that the sample of gas in the cylindrical section behaves ideally, we have $V_S={n_SRT}/{P}$. Substituting for $V_S$ and taking the limit as $\Delta h\to 0$, we find
$\frac{dP}{dh}=\left(\frac{{-M}_Sg}{n_SRT}\right)P=\left(\frac{{-n}_S\overline{M}g}{n_SRT}\right)P=\left(\frac{-mg}{kT}\right)P \nonumber$
where we introduce $n_S$ as the number of moles of gas in the sample, $\overline{M}$ as the molar mass of this gas, and $m$ as the mass of an individual molecule of this gas. The last equality on the right makes use of the identities $\overline{M}=m\overline{N}$ and $R=\overline{N}k$. Separating variables and integrating between limits $P\left(0\right)=P_0$ and $P\left(h\right)=P$, we find
$\int^P_{P_0}{\frac{dP}{P}}=\left(\frac{-mg}{kT}\right)\int^h_0{dh} \nonumber$
so that $\ln \left(\frac{P}{P_0}\right)=\frac{-mgh}{kT} \nonumber$
and
$P=P_0\mathrm{exp}\left(\frac{-mgh}{kT}\right) \nonumber$
Either of the latter relationships is frequently called the barometric formula.
If we let $\eta$ be the number of molecules per unit volume, $\eta ={N}/{V}$, we can write $P={NkT}/{V}=\eta kT$ and $P_0={\eta }_0kT$ so that the barometric formula can be expressed in terms of these number densities as
$\eta ={\eta }_0\mathrm{exp}\left(\frac{-mgh}{kT}\right) \nonumber$
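A short numerical sketch of the barometric formula, assuming (as in the derivation) an isothermal ideal-gas atmosphere with a mean molar mass of 29 g mol$^{-1}$; since $\overline{M}=m\overline{N}$ and $R=\overline{N}k$, the exponent $-mgh/kT$ can equally be written $-\overline{M}gh/RT$:

```python
# Sketch: P = P0 * exp(-M g h / (R T)) for an isothermal ideal-gas atmosphere.
from math import exp

R = 8.314        # J mol-1 K-1
g = 9.807        # m s-2
M = 0.029        # kg mol-1, approximate mean molar mass of air
T = 300.0        # K
P0 = 1.000       # bar, pressure at h = 0

for h in (0.0, 1000.0, 2000.0, 5000.0):           # altitude, m
    print(f"h = {h:6.0f} m   P = {P0 * exp(-M * g * h / (R * T)):.3f} bar")
# At 1000 m the ratio P/P0 is about 0.89 under these assumptions.
```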
An equation due to van der Waals extends the ideal gas equation in a straightforward way. Van der Waals’ equation is
$\left(P+\frac{an^2}{V^2}\right)\left(V-nb\right)=nRT \nonumber$
It fits pressure-volume-temperature data for a real gas better than the ideal gas equation does. The improved fit is obtained by introducing two parameters (designated “$a$” and “$b$”) that must be determined experimentally for each gas. Van der Waals’ equation is particularly useful in our effort to understand the behavior of real gases, because it embodies a simple physical picture for the difference between a real gas and an ideal gas.
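To see what van der Waals' equation predicts numerically, it can be solved for the molar volume at a specified pressure and temperature; written out for one mole it is a cubic in $\overline{V}$, which is conveniently handled by Newton's method starting from the ideal-gas estimate. The sketch below uses the water parameters quoted later in Problem 13 and is illustrative only; the function name and iteration count are arbitrary choices.

```python
# Sketch: gas-phase molar volume from van der Waals' equation for one mole,
#   (P + a/V^2)(V - b) = R T,
# solved by Newton's method from the ideal-gas starting estimate.
# Units: bar, L, K;  R = 0.083145 L bar mol-1 K-1.

R = 0.083145

def vdw_molar_volume(P, T, a, b, n_iter=50):
    V = R * T / P                              # ideal-gas starting estimate, L/mol
    for _ in range(n_iter):
        f = (P + a / V**2) * (V - b) - R * T   # residual of van der Waals' equation
        df = P - a / V**2 + 2 * a * b / V**3   # d f / d V
        V -= f / df                            # Newton step
    return V

# Water vapor near its normal boiling point, with a and b from Problem 13.
V_vdw = vdw_molar_volume(P=1.0, T=372.78, a=5.537, b=0.0305)
print(f"van der Waals: {V_vdw:.2f} L/mol   ideal gas: {R * 372.78 / 1.0:.2f} L/mol")
```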
In deriving Boyle’s law from Newton’s laws, we assume that the gas molecules do not interact with one another. Simple arguments show that this can be only approximately true. Real gas molecules must interact with one another. At short distances they repel one another. At somewhat longer distances, they attract one another. The ideal gas equation can also be derived from the basic assumptions that we make in Section 2.10 by an application of the theory of statistical thermodynamics. By making different assumptions about molecular properties, we can apply statistical thermodynamics to derive${}^{5}$ van der Waals’ equation. The required assumptions are that the molecules occupy a finite volume and that they attract one another with a force that varies as the inverse of a power of the distance between them. (The attractive force is usually assumed to be proportional to $r^{-6}$.)
To recognize that real gas molecules both attract and repel one another, we need only remember that any gas can be liquefied by reducing its temperature and increasing the pressure applied to it. If we cool the liquid further, it freezes to a solid. Now, two distinguishing features of a solid are that it retains its shape and that it is almost incompressible. We attribute the incompressibility of a solid to repulsive forces between its constituent molecules; they have come so close to one another that repulsive forces between them have become important. To compress the solid, the molecules must be pushed still closer together, which requires inordinate force. On the other hand, if we throw an ice cube across the room, all of its constituent water molecules fly across the room together. Evidently, the water molecules in the solid are attracted to one another, otherwise they would all go their separate ways—throwing the ice cube would be like throwing a handful of dry sand. But water molecules are the same molecules whatever the temperature or pressure, so if there are forces of attraction and repulsion between them in the solid, these forces must be present in the liquid and gas phases also.
In the gas phase, molecules are far apart; in the liquid or the solid phase, they are packed together. At its boiling point, the volume of a liquid is much less than the volume of the gas from which it is condensed. At the freezing point, the volume of a solid is only slightly different from the volume of the liquid from which it is frozen, and it is certainly greater than zero. These commonplace observations are readily explained by supposing that any molecule has a characteristic volume. We can understand this, in turn, to be a consequence of the nature of the intermolecular forces; evidently, these forces become stronger as the distance between a pair of molecules decreases. Since a liquid or a solid occupies a definite volume, the repulsive force must increase more rapidly than the attractive force when the intermolecular distance is small. Often it is useful to talk about the molar volume of a condensed phase. By molar volume, we mean the volume of one mole of a pure substance. The molar volume of a condensed phase is determined by the intermolecular distance at which there is a balance between intermolecular forces of attraction and repulsion.
Evidently molecules are very close to one another in condensed phases. If we suppose that the empty spaces between molecules are negligible, the volume of a condensed phase is approximately equal to the number of molecules in the sample multiplied by the volume of a single molecule. Then the molar volume is Avogadro’s number times the volume occupied by one molecule. If we know the density, D, and the molar mass, $\overline{M}$, we can find the molar volume, $\overline{V}$, as
$\overline{V}=\frac{\overline{M}}{D} \nonumber$
The volume occupied by a molecule, V${}_{molecule}$, becomes
$V_{molecule}=\frac{\overline{V}}{\overline{N}} \nonumber$
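A brief numerical sketch of these two relationships for liquid water ($\overline{M}\approx 18.02\ \mathrm{g\ mol^{-1}}$, $D\approx 1.00\ \mathrm{g\ mL^{-1}}$); the cube-edge estimate anticipates the reasoning asked for in Problem 10:

```python
# Sketch: molar volume from density, volume per molecule, and the edge of an
# equivalent cube, for liquid water near room temperature.
N_A = 6.022e23                         # mol-1, Avogadro's number

M = 18.02                              # g mol-1, molar mass of water
D = 1.00                               # g mL-1, density of liquid water

V_molar = M / D                        # mL mol-1 (about 18 mL/mol)
V_molecule = V_molar / N_A * 1e-6      # m^3 per molecule
edge = V_molecule ** (1 / 3)           # m, edge of a cube of the same volume

print(f"V_molar = {V_molar:.2f} mL/mol")
print(f"V_molecule = {V_molecule:.2e} m^3, cube edge ~ {edge * 1e10:.1f} Angstrom")
```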
The pressure and volume appearing in van der Waals’ equation are the pressure and volume of the real gas. We can relate the terms in van der Waals’ equation to the ideal gas equation: It is useful to think of the terms $\left(P+{{an}^2}/{V^2}\right)$ and $\left(V-nb\right)$ as the pressure and volume of a hypothetical ideal gas. That is
\begin{align*} P_{ideal\ gas}V_{ideal\ gas} &=\left(P_{real\ gas}+\frac{an^2}{V^2_{real\ gas}}\right)\left(V_{real\ gas}-nb\right) \[4pt] &=nRT \end{align*}
Then we have
$V_{real\ gas}=V_{ideal\ gas}+nb \nonumber$
We derive the ideal gas equation from a model in which the molecules are non-interacting point masses. So the volume of an ideal gas is the volume occupied by a gas whose individual molecules have zero volume. If the individual molecules of a real gas effectively occupy a volume ${b}/{\overline{N}}$, then $n$ moles of them effectively occupy a volume
$\left({b}/{\overline{N}}\right)\left(n\overline{N}\right)=nb. \nonumber$
Van der Waals’ equation says that the volume of a real gas is the volume that would be occupied by non-interacting point masses, $V_{ideal\ gas}$, plus the effective volume of the gas molecules themselves. (When data for real gas molecules are fit to the van der Waals’ equation, the value of $b$ is usually somewhat greater than the volume estimated from the liquid density and molecular weight. See problem 24.)
Similarly, we have
$P_{\text{real gas}}=P_{\text{ideal gas}}-\frac{an^2}{V^2_{\text{real gas}}} \nonumber$
We can understand this as a logical consequence of attractive interactions between the molecules of the real gas. With $a>0$, it says that the pressure of the real gas is less than the pressure of the hypothetical ideal gas, by an amount that is proportional to ${\left({n}/{V}\right)}^2$. The proportionality constant is $a$. Since ${n}/{V}$ is the molar density (moles per unit volume) of the gas molecules, it is a measure of concentration. The number of collisions between molecules of the same kind is proportional to the square of their concentration. (We consider this point in more detail in Chapters 4 and 5.) So ${\left({n}/{V}\right)}^2$ is a measure of the frequency with which the real gas molecules come into close contact with one another. If they attract one another when they come close to one another, the effect of this attraction should be proportional to ${\left({n}/{V}\right)}^2$. So van der Waals’ equation is consistent with the idea that the pressure of a real gas is different from the pressure of the hypothetical ideal gas by an amount that is proportional to the frequency and strength of attractive interactions.
But why should attractive interactions have this effect; why should the pressure of the real gas be less than that of the hypothetical ideal gas? Perhaps the best way to develop a qualitative picture is to recognize that attractive intermolecular forces tend to cause the gas molecules to clump up. After all, it is these attractive forces that cause the molecules to aggregate to a liquid at low temperatures. Above the boiling point, the ability of gas molecules to go their separate ways limits the effects of this tendency; however, even in the gas, the attractive forces must act in a way that tends to reduce the volume occupied by the molecules. Since the volume occupied by the gas is dictated by the size of the container—not by the properties of the gas itself—this clumping-up tendency finds expression as a decrease in pressure.
It is frequently useful to describe the interaction between particles or chemical moieties in terms of a potential energy versus distance diagram. The van der Waals’ equation corresponds to the case that the repulsive interaction between molecules is non-existent until the molecules come into contact. Once they come into contact, the energy required to move them still closer together becomes arbitrarily large. Often this is described by saying that they behave like “hard spheres”. The attractive force between two molecules decreases as the distance between them increases. When they are very far apart the attractive interaction is very small. We say that the energy of interaction is zero when the molecules are infinitely far apart. If we initially have two widely separated, stationary, mutually attracting molecules, they will spontaneously move toward one another, gaining kinetic energy as they go. Their potential energy decreases as they approach one another, reaching its smallest value when the molecules come into contact. Thus, van der Waals’ equation implies the potential energy versus distance diagram sketched in Figure 5.
It is often useful to fit accurate pressure-volume-temperature data to polynomial equations. The experimental data can be used to compute a quantity called the compressibility factor, $Z$, which is defined as the pressure–volume product for the real gas divided by the pressure–volume product for an ideal gas at the same temperature.
We have
${\left(PV\right)}_{ideal\ gas}=nRT \nonumber$
Letting P and V represent the pressure and volume of the real gas, and introducing the molar volume, $\overline{V}={V}/{n}$, we have
$Z=\frac{\left(PV\right)_{real\ gas}}{\left(PV\right)_{ideal\ gas}}=\frac{PV}{nRT}=\frac{P\overline{V}}{RT} \nonumber$
Since $Z=1$ if the real gas behaves exactly like an ideal gas, experimental values of Z will tend toward unity under conditions in which the density of the real gas becomes low and its behavior approaches that of an ideal gas. At a given temperature, we can conveniently ensure that this condition is met by fitting the Z values to a polynomial in P or a polynomial in ${\overline{V}}^{-1}$. The coefficients are functions of temperature. If the data are fit to a polynomial in the pressure, the equation is
$Z=1+B^*\left(T\right)P+C^*\left(T\right)P^2+D^*\left(T\right)P^3+\dots \nonumber$
For a polynomial in ${\overline{V}}^{-1}$, the equation is
$Z=1+\frac{B\left(T\right)}{\overline{V}}+\frac{C\left(T\right)}{\overline{V}^2}+\frac{D\left(T\right)}{\overline{V}^3}+\dots \nonumber$
These empirical equations are called virial equations. As indicated, the parameters are functions of temperature. The values of $B^*\left(T\right)$, $C^*\left(T\right)$, $D^*\left(T\right)$, $\dots$, and $B\left(T\right)$, $C\left(T\right)$, $D\left(T\right)$, $\dots$, must be determined for each real gas at every temperature. (Note also that $B^*\left(T\right)\neq B\left(T\right)$, $C^*\left(T\right)\neq C\left(T\right)$, $D^*\left(T\right)\neq D\left(T\right)$, etc. However, it is true that $B^*={B}/{RT}$.) Values for these parameters are tabulated in various compilations of physical data. In these tabulations, $B\left(T\right)$ and $C\left(T\right)$ are called the second virial coefficient and third virial coefficient, respectively.
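As a numerical sketch of how the pressure-series form is used, truncating it after the second term gives the molar volume directly; the $B^{*}(T)$ value below is the one quoted later in Problem 12 for water at 372.78 K, and the comparison with the ideal-gas value is illustrative only.

```python
# Sketch: molar volume from the pressure virial equation truncated after the
# second term,  Z = P Vbar / (R T) = 1 + B* P,  so  Vbar = (1 + B* P) R T / P.
R = 8.314                   # J mol-1 K-1

def molar_volume_virial(P, T, Bstar):
    return (1.0 + Bstar * P) * R * T / P      # m^3 mol-1

P = 1.0e5                   # Pa (1 bar)
T = 372.78                  # K
Bstar = -1.487e-7           # Pa-1, B*(T) for water at this temperature (Problem 12)

print(f"ideal gas: {R * T / P * 1e3:.2f} L/mol")
print(f"virial:    {molar_volume_virial(P, T, Bstar) * 1e3:.2f} L/mol")
```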
2.14: Gas Mixtures - Dalton's Law of Partial Pressures
Thus far, our discussion of the properties of a gas has implicitly assumed that the gas is pure. We turn our attention now to mixtures of gases—gas samples that contain molecules of more than one compound. Mixtures of gases are common, and it is important to understand their behavior in terms of the properties of the individual gases that make them up. The ideal-gas laws we have for mixtures are approximations. Fortunately, these approximations are often very good. When we think about it, this is not surprising. After all, the distinguishing feature of a gas is that its molecules do not interact with one another very much. Even if the gas is composed of molecules of different kinds, the unimportance of molecule-molecule interactions means that the properties of one kind of molecules should be nearly independent of the properties of the other kinds.
Consider a sample of gas that contains a fixed number of moles of each of two or more compounds. This sample has a pressure, a volume, a temperature, and a specified composition. Evidently, the challenge here is to describe the pressure, volume, and temperature of the mixture in terms of measurable properties of the component compounds.
There is no ambiguity about what we mean by the pressure, volume, and temperature of the mixture; we can measure these properties without difficulty. Given the nature of temperature, it is both reasonable and unambiguous to say that the temperature of the sample and the temperature of its components are the same. However, we cannot measure the pressure or volume of an individual component in the mixture. If we hope to describe the properties of the mixture in terms of properties of the components, we must first define some related quantities that we can measure. The concepts of a component partial pressure and a component partial volume meet this need.
We define the partial pressure of a component of a gas mixture as the pressure exerted by the same number of moles of the pure component when present in the volume occupied by the mixture, $V_{mixture}$, at the temperature of the mixture. In a mixture of $n_A$ moles of component $A$, $n_B$ moles of component $B$, etc., it is customary to designate the partial pressure of component $A$ as $P_A$. It is important to appreciate that the partial pressure of a real gas can only be determined by experiment.
We define the partial volume of a component of a gas mixture as the volume occupied by the same number of moles of the pure component when the pressure is the same as the pressure of the mixture, $P_{mixture}$, at the temperature of the mixture. In a mixture of components $A$, $B$, etc., it is customary to designate the partial volume of component $A$ as $V_A$. The partial volume of a real gas can only be determined by experiment.
Dalton’s law of partial pressures asserts that the pressure of a mixture is equal to the sum of the partial pressures of its components. That is, for a mixture of components A, B, C, etc., the pressure of the mixture is
$P_{mixture}=P_A+P_B+P_C+\dots \label{Dalton}$
Under conditions in which the ideal gas law is a good approximation to the behavior of the individual components, Dalton’s law is usually a good approximation to the behavior of real gas mixtures. For mixtures of ideal gases, it is exact. To see this, we recognize that, for an ideal gas, the definition of partial pressure becomes
$P_A=\frac{n_ART}{V_{mixture}} \nonumber$
The ideal-gas mixture contains $n_{mixture}=n_A+n_B+n_C+\dots \text{moles}$, so that
\begin{align*} P_{mixture} &=\frac{n_{mixture}RT}{V_{mixture}} \[4pt] &=\frac{\left(n_A+n_B+n_C+\dots \right)RT}{V_{mixture}} \[4pt] &=\frac{n_ART}{V_{mixture}}+\frac{n_BRT}{V_{mixture}}+\frac{n_CRT}{V_{mixture}}+\dots \[4pt] &=P_A+P_B+P_C+\dots \end{align*}
Applied to the mixture, the ideal-gas equation yields Dalton’s law (Equation \ref{Dalton}). When $x_A$ is the mole fraction of A in a mixture of ideal gases,
$P_A=x_AP_{mixture}. \nonumber$
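A minimal sketch of these relationships for an ideal-gas mixture; the composition, temperature, and volume below are made-up values chosen only to show that the component pressures sum to the mixture pressure and that $P_A=x_AP_{mixture}$:

```python
# Sketch: partial pressures of an ideal-gas mixture and their sum.
R = 8.314                                       # J mol-1 K-1

moles = {"N2": 0.78, "O2": 0.21, "Ar": 0.01}    # mol, illustrative composition
T, V = 300.0, 0.025                             # K, m^3

n_total = sum(moles.values())
P_total = n_total * R * T / V                   # Pa, ideal-gas mixture pressure

for name, n in moles.items():
    P_i = n * R * T / V                         # partial pressure of component i
    x_i = n / n_total                           # mole fraction
    print(f"{name}: P_i = {P_i/1e3:6.2f} kPa   x_i * P_total = {x_i*P_total/1e3:6.2f} kPa")
print(f"sum of partial pressures = {sum(n * R * T / V for n in moles.values())/1e3:.2f} kPa")
print(f"mixture pressure         = {P_total/1e3:.2f} kPa")
```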
2.15: Gas Mixtures - Amagat's Law of Partial Volumes
Amagat’s law of partial volumes asserts that the volume of a mixture is equal to the sum of the partial volumes of its components. For a mixture of components $A$, $B$, $C$, etc., Amagat’s law gives the volume as
$V_{mixture}=V_A+V_B+V_C+\dots \nonumber$
For real gases, Amagat’s law is usually an even better approximation than Dalton’s law${}^{6}$. Again, for mixtures of ideal gases, it is exact. For an ideal gas, the partial volume is
$V_A=\frac{n_ART}{P_{mixture}} \nonumber$
Since $n_{mixture}=n_A+n_B+n_C+\dots$, we have, for a mixture of ideal gases,
\begin{align*} V_{mixture}&=\frac{n_{mixture}RT}{P_{mixture}} \[4pt] &=\frac{\left(n_A+n_B+n_C+\dots \right)RT}{P_{mixture}} \[4pt] &=V_A+V_B+V_C+\dots \end{align*}
Applied to the mixture, the ideal-gas equation yields Amagat’s law. Also, we have $V_A=x_AV_{mixture}$.
1. If $A$ is an ideal gas in a mixture of ideal gases, prove that its partial pressure, $P_A$, is given by $P_A=x_AP_{mixture}$.
2. If $A$ is an ideal gas in a mixture of ideal gases, prove that its partial volume, $V_A$, is given by $V_A=x_AV_{mixture}$.
3. A sample of hydrogen chloride gas, $HCl$, occupies 0.932 L at a pressure of 1.44 bar and a temperature of 50 C. The sample is dissolved in 1 L of water. What is the resulting hydronium ion, $H_3O^+$, concentration?
4. Ammonia gas, $NH_3$, also dissolves quantitatively in water. If it is measured at 0.720 bar and 50 C, what volume of $NH_3$ gas is required to neutralize the solution prepared in problem 3? For present purposes, assume that the neutralization reaction occurs quantitatively.
5. Two pressure vessels are separated by a closed valve. One contains 10.0 moles of helium, $He$, at 5.00 bar. The other contains 5.00 moles of neon, $Ne$, at 20.0 bar. Both vessels are at the same temperature. The valve is opened and the gases are allowed to mix. The temperature remains constant. What is the final pressure?
6. What is the average velocity of a molecule of nitrogen, $N_2$, at 300 K? Of a molecule of hydrogen, $H_2$, at the same temperature?
7. The Homestake gold mine near Lead, South Dakota, is excavated to 8000 feet below the surface. Lead is nearly a mile high; the bottom of the Homestake is about 900 m below sea level. Nearby Custer Peak is about 2100 m above sea level. What is the ratio of the barometric pressure on top of Custer Peak to the barometric pressure at the bottom of the Homestake? Assume that the entire atmosphere is at 300 K and that it behaves as a single ideal gas whose molar mass is 29.
8. On the sidewalk in front of a tall building, the barometric pressure is 740 torr and the temperature is 25 C. On the roof of this building, the barometric pressure is 732 torr. Assuming that the entire atmosphere behaves as an ideal gas of molecular weight 29 at 25 C, estimate the height of the building. Comment on the likely accuracy of this estimate.
9. At 1 bar, the boiling point of water is 372.78 K. At this temperature and pressure, the density of liquid water is 958.66 kg m${}^{-3}$ and that of gaseous water is 0.59021 kg m${}^{-3}$. What are the molar volumes, in ${\mathrm{m}}^3\ {\mathrm{mol}}^{-1}$, of liquid and gaseous water at this temperature and pressure? In $\mathrm{L}\ {\mathrm{mol}}^{-1}$?
10. Refer to your results in Problem 9. Assuming that a water molecule excludes other water molecules from a cubic region centered on itself, estimate the average distance between nearest-neighbor water molecules in the liquid and in the gas.
11. Calculate the molar volume of gaseous water at 1 bar and 372.78 K from the ideal gas equation. What is the error, expressed as a percentage of the value you calculated in Problem 9?
12. At 372.78 K, the virial coefficient B* for water is $-1.487\times {10}^{-7}$ ${\mathrm{Pa}}^{-1}$. Calculate the molar volume of gaseous water at 1 bar and 372.78 K from the virial equation: $Z={P\overline{V}}/{RT}=1+B^*P$. What is the error, expressed as a percentage of the value you calculated in Problem 9?
13. Calculate the molar volume of gaseous water at 1 bar and 372.78 K from van der Waals’ equation. The van der Waals’ parameters for water are $a=5.537\ \mathrm{bar}\ {\mathrm{L}}^2\ {\mathrm{mol}}^{-2}$ and $b=0.0305\ \mathrm{L}\mathrm{\ }{\mathrm{mol}}^{-1}$. What is the error, expressed as a percentage of the value you calculated in Problem 9?
14. Comment on the results in Problems 11 – 13. At this temperature, would you expect the accuracy to increase or decrease at lower pressures?
15. The critical temperature for water is 647.1 K. At ${10}^3$ bar and 700 K, the density of supercritical water is 651.37 $\mathrm{kg}\ {\mathrm{m}}^{-3}$. Note that this is about 68% of the value for liquid water at the boiling point at 1 bar. What is the molar volume, in ${\mathrm{m}}^3\mathrm{\ }{\mathrm{mol}}^{-1}$, of water at this temperature and pressure? In $\mathrm{L}\mathrm{\ }{\mathrm{mol}}^{-1}$?
16. Refer to your results in Problem 15. Assuming that a water molecule excludes other water molecules from a cubic region centered on itself, estimate the average distance between nearest-neighbor water molecules in supercritical water at ${10}^3$ bar and 700 K.
17. Calculate the molar volume of supercritical water at ${10}^3$ bar and 700 K from the ideal gas equation. What is the error, expressed as a percentage of the value you calculated in Problem 15?
18. At 700 K, the virial coefficient B* for water is $-1.1512\times {10}^{-8}\ {\mathrm{Pa}}^{-1}$. Calculate the molar volume of supercritical water at ${10}^3$ bar and 700 K from the virial equation. (See Problem 12.) What is the error, expressed as a percentage of the value you calculated in Problem 15?
19. Calculate the molar volume of supercritical water at ${10}^3$ bar and 700 K from van der Waals’ equation. (See Problem 13.) What is the error, expressed as a percentage of the value you calculated in Problem 15?
20. Comment on the results in Problems 16 – 19.
21. Comment on the results in Problems 10 – 13 versus the results in Problems 16 – 19.
22. A 1.000 L combustion bomb is filled with natural gas at 2.00 bar and 300 K. Pure oxygen is then pressured into the bomb until the pressure reaches 7.00 bar, at 300 K. Combustion is initiated. When reaction is complete, the bomb is thermostatted at 500 K, and the pressure is measured to be 12.08 bar. Thereafter, the bomb is cooled to 260 K, so that all of the water freezes. The pressure is then found to be 2.812 bar. The natural gas is a mixture of helium, methane, and ethane. How many moles of each gas are in the original sample?
23. An unknown liquid compound boils at 124 C. A classical method is used to find the approximate molecular weight of this compound. This method uses a glass bulb whose only opening is a long thin capillary tube, so that a gas sample inside the bulb can mix with the air outside only slowly. Filled with water, the bulb weighs 102.7535 grams. Empty, it weighs 50.0230 grams. A quantity of the unknown liquid is put into the bulb, and the body of the bulb is immersed in an oil bath at 150 C. The end of the capillary tube extends out of the oil bath. The liquid vaporizes filling the bulb with its gas. The total amount of vapor generated is large compared to the volume of the bulb, so the escaping vapor effectively sweeps all of the air out of the bulb, leaving the bulb filled with just the vapor of the unknown compound. When the last drop of liquid has just vaporized, the bulb is filled with the vapor of the unknown substance at the ambient atmosphere pressure, which is 0.980 bar, and a temperature of 150 C. The bulb is then removed from the oil bath and allowed to cool quickly so that the vapor condenses to a liquid film on the inside of the bulb. The oil is cleaned from the outside of the bulb, and the bulb is reweighed. The bulb and the liquid inside weigh 50.1879 grams. What is the approximate molecular weight of the liquid?
24. From the data below, calculate the molar volume, in liters, of each substance. For each substance, divide van der Waals’ $b$ by the molar volume you calculate. Comment.
$\begin{array}{|c|c|c|c|} \hline \text{ Compound } & \text{ Mol Mass, g mol}^{–1} ~ & \text{ Density, g mL}^{–1} & \text{ Van der Waals } b, \text{ L mol}^{–1} \ \hline \text{Acetic acid} & 60.05 & 1.0491 & 0.10680 \ \hline \text{Acetone} & 58.08 & 0.7908 & 0.09940 \ \hline \text{Acetonitrile} & 41.05 & 0.7856 & 0.11680 \ \hline \text{Ammonia} & 17.03 & 0.7710 & 0.03707 \ \hline \text{Aniline} & 93.13 & 1.0216 & 0.13690 \ \hline \text{Benzene} & 78.11 & 0.8787 & 0.11540 \ \hline \text{Benzonitrile} & 103.12 & 1.0102 & 0.17240 \ \hline \text{iso-Butylbenzene } & 134.21 & 0.8621 & 0.21440 \ \hline \text{Chlorine} & 70.91 & 3.2140 & 0.05622 \ \hline \text{Durene} & 134.21 & 0.8380 & 0.24240 \ \hline \text{Ethane} & 30.07 & 0.5720 & 0.06380 \ \hline \text{Hydrogen chloride} & 36.46 & 1.1870 & 0.04081 \ \hline \text{Mercury} & 200.59 & 13.5939 & 0.01696 \ \hline \text{Methane} & 16.04 & 0.4150 & 0.04278 \ \hline \text{Nitrogen dioxide} & 46.01 & 1.4494 & 0.04424 \ \hline \text{Silicon tetrafluoride} & 104.08 & 1.6600 & 0.05571 \ \hline \text{Water} & 18.02 & 1.0000 & 0.03049 \ \hline \end{array} \nonumber$
Notes
1We use the over-bar to indicate that the quantity is per mole of substance. Thus, we write $\overline{N}$ to indicate the number of particles per mole. We write $\overline{M}$ to represent the gram molar mass. In Chapter 14, we introduce the use of the over-bar to denote a partial molar quantity; this is consistent with the usage introduced here, but carries the further qualification that temperature and pressure are constant at specified values. We also use the over-bar to indicate the arithmetic average; such instances will be clear from the context.
2The unit of temperature is named the kelvin, which is abbreviated as K.
3A redefinition of the size of the unit of temperature, the kelvin, is under consideration. The practical effect will be inconsequential for any but the most exacting of measurements.
4For a thorough discussion of the development of the concept of temperature, the evolution of our means to measure it, and the philosophical considerations involved, see Hasok Chang, Inventing Temperature, Oxford University Press, 2004.
5See T. L. Hill, An Introduction to Statistical Thermodynamics, Addison-Wesley Publishing Company, 1960, p 286.
6See S. M. Blinder, Advanced Physical Chemistry, The Macmillan Company, Collier-Macmillan Canada, Ltd., Toronto, 1969, pp 185-189
In Section 2.10, we derive Boyle’s law from Newton’s laws using the assumption that all gas molecules move at the same speed at a given temperature. This is a poor assumption. Individual gas molecules actually have a wide range of velocities. In Chapter 4, we derive the Maxwell–Boltzmann distribution law for the distribution of molecular velocities. This law gives the fraction of gas molecules having velocities in any range of velocities. Before developing the Maxwell–Boltzmann distribution law, we need to develop some ideas about distribution functions. Most of these ideas are mathematical. We discuss them in a non-rigorous way, focusing on understanding what they mean rather than on proving them.
The overriding idea is that we have a real-world source of data. We call this source of data the distribution. We can collect data from this source to whatever extent we please. The datum that we collect is called the distribution’s random variable. We call each possible value of the random variable an outcome. The process of gathering a set of particular values of the random variable from a distribution is often called sampling or drawing a sample. The set of values that is collected is called the sample. The set of values that comprise the sample is often called “the data.” In scientific applications, the random variable is usually a number that results from making a measurement on a physical system. Calling this process “drawing a sample” can be inappropriate. Often we call the process of getting a value for the random variable “doing an experiment”, “doing a test”, or “making a trial”.
As we collect increasing amounts of data, the accumulation quickly becomes unwieldy unless we can reduce it to a mathematical model. We call the mathematical model we develop a distribution function, because it is a function that expresses what we are able to learn about the data source—the distribution. A distribution function is an equation that summarizes the results of many measurements; it is a mathematical model for a real-world source of data. Specifically, it models the frequency with which we obtain a particular outcome. We usually believe that we can make our mathematical model behave as much like the real-world data source as we want if we use enough experimental data in developing it.
Often we talk about statistics. By a statistic, we mean any mathematical entity that we can calculate from data. Broadly speaking a distribution function is a statistic, because it is obtained by fitting a mathematical function to data that we collect. Two other statistics are often used to characterize experimental data: the mean and the variance. The mean and variance are defined for any distribution. We want to see how to estimate the mean and variance from a set of experimental data collected from a particular distribution.
We distinguish between discrete and continuous distributions. A discrete distribution is a real-world source of data that can produce only particular data values. A coin toss is a good example. It can produce only two outcomes—heads or tails. A continuous distribution is a real-world source of data that can produce data values in a continuous range. The speed of an automobile is a good example. An automobile can have any speed within a rather wide range of speeds. For this distribution, the random variable is automobile speed. Of course we can generate a discrete distribution by aggregating the results of sampling a continuous distribution; if we lump all automobile speeds between 20 mph and 30 mph together, we lose the detailed information about the speed of each automobile and retain only the total number of automobiles with speeds in this interval.
We also need to introduce the idea that a function that successfully models the results of past experiments can be used to predict some of the characteristics of future results.
We reason as follows: We have results from drawing many samples of a random variable from some distribution. We suppose that a mathematical representation has been found that adequately summarizes the results of these experiences. If the underlying distribution—the physical system in scientific applications—remains the same, we expect that a long series of future results would give rise to essentially the same mathematical representation. If 25% of many previous results have had a particular characteristic, we expect that 25% of a large number of future trials will have the same characteristic. We also say that there is one chance in four that the next individual result will have this characteristic; when we say this, we mean that 25% of a large number of future trials will have this characteristic, and the next trial has as good a chance as any other to be among those that do. The probability that an outcome will occur in the future is equal to the frequency with which that outcome has occurred in the past.
Given a distribution, the possible outcomes must be mutually exclusive; in any given trial, the random variable can have only one of its possible values. Consequently, a discrete distribution is completely described when the probability of each of its outcomes is specified. Many distributions are comprised of a finite set of N mutually exclusive possible outcomes. If each of these outcomes is equally likely, the probability that we will observe any particular outcome in the next trial is $1/N$.
We often find it convenient to group the set of possible outcomes into subsets in such a way that each outcome is in one and only one of the subsets. We say that such assignments of outcomes to subsets are exhaustive, because every possible outcome is assigned to some subset; we say that such assignments are mutually exclusive, because no outcome belongs to more than one subset. We call each such subset an event. When we partition the possible outcomes into exhaustive and mutually exclusive events, we can say the same things about the probabilities of events that we can say about the probabilities of outcomes. In our discussions, the term “events” will always refer to an exhaustive and mutually exclusive partitioning of the possible outcomes. Distinguishing between outcomes and events just gives us some language conventions that enable us to create alternative groupings of the same set of real world observations.
Suppose that we define a particular event to be a subset of outcomes that we denote as U. If in a large number of trials, the fraction of outcomes that belong to this subset is F, we say that the probability is F that the outcome of the next trial will belong to this event. To express this in more mathematical notation, we write $P\left(U\right)=F$. When we do so, we mean that the fraction of a large number of future trials that belong to this subset will be F, and the next trial has as good a chance as any other to be among those that do. In a sample comprising M observations, the best forecast we can make of the number of occurrences of U is $M\times P(U)$, and we call this the expected number of occurrences of U in a sample of size M.
The idea of grouping real world observations into either outcomes or events is easy to remember if we keep in mind the example of tossing a die. The die has six faces, which are labeled with 1, 2, 3, 4, 5, or 6 dots. The dots distinguish one face from another. On any given toss, one face of the die must land on top. Therefore, there are six possible outcomes. Since each face has as good a chance as any other of landing on top, the six possible outcomes are equally probable. The probability of any given outcome is ${1}/{6}$. If we ask about the probability that the next toss will result in one of the even-numbered faces landing on top, we are asking about the probability of an event—the event that the next toss will have the characteristic that an even-numbered face lands on top. Let us call this event $X$. That is, event $X$ occurs if the outcome is a 2, a 4, or a 6. These are three of the six equally likely outcomes. Evidently, the probability of this event is ${3}/{6}={1}/{2}$.
Having defined event $X$ as the probability of an even-number outcome, we still have several alternative ways to assign the odd-number outcomes to events. One assignment would be to say that all of the odd-number outcomes belong to a second event—the event that the outcome is odd. The events “even outcome” and “odd outcome” are exhaustive and mutually exclusive. We could create another set of events by assigning the outcomes 1 and 3 to event $Y$, and the outcome 5 to event $Z$. Events $X$, $Y$, and $Z$ are also exhaustive and mutually exclusive.
We have a great deal of latitude in the way we assign the possible outcomes to events. If it suits our purposes, we can create many different exhaustive and mutually exclusive partitionings of the outcomes of a given distribution. We require that each partitioning of outcomes into events be exhaustive and mutually exclusive, because we want to apply the laws of probability to events.
If we know the probabilities of the possible outcomes of a trial, we can calculate the probabilities for combinations of outcomes. These calculations are based on two rules, which we call the laws of probability. If we partition the outcomes into exhaustive and mutually exclusive events, the laws of probability also apply to events. Since, as we define them, “events” is a more general term than “outcomes,” we call them the law of the probability of alternative events and the law of the probability of compound events. These laws are valid so long as three conditions are satisfied. We have already discussed the first two of these conditions, which are that the outcomes possible in any individual trial must be exhaustive and mutually exclusive. The third condition is that, if we make more than one trial, the outcomes must be independent; that is, the outcome of one trial must not be influenced by the outcomes of the others.
We can view the laws of probability as rules for inferring information about combinations of events. The law of the probability of alternative events applies to events that belong to the same distribution. The law of the probability of compound events applies to events that can come from one or more distributions. An important special case occurs when the compound events are $N$ successive samplings of a given distribution that we identify as the parent distribution. If the random variable is a number, and we average the numbers that we obtain from $N$ successive samplings of the parent distribution, these “averages-of-$N$” themselves constitute a distribution. If we know certain properties of the parent distribution, we can calculate corresponding properties of the “distribution of averages-of-$N$ values obtained by sampling the parent distribution.” These calculations are specified by the central limit theorem, which we discuss in Section 3.11.
In general, when we combine events from two distributions, we can view the result as an event that belongs to a third distribution. At first encounter, the idea of combining events and distributions may seem esoteric. A few examples serve to show that what we have in mind is very simple.
Since an event is a set of outcomes, an event occurs whenever any of the outcomes in the set occurs. Partitioning the outcomes of tossing a die into “even outcomes” and “odd outcomes” illustrates this idea. The event “even outcome” occurs whenever the outcome of a trial is $2$, $4,$ or $6$. The probability of an event can be calculated from the probabilities of the underlying outcomes. We call the rule for this calculation the law of the probabilities of alternative events. (We create the opportunity for confusion here because we are illustrating the idea of alternative events by using an example in which we call the alternatives “alternative outcomes” rather than “alternative events.” We need to remember that “event” is a more general term than “outcome.” One possible partitioning is that which assigns every outcome to its own event.) We discuss the probabilities of alternative events further below.
To illustrate the idea of compound events, let us consider a first distribution that comprises “tossing a coin” and a second distribution that comprises “drawing a card from a poker deck.” The first distribution has two possible outcomes; the second distribution has $52$ possible outcomes. If we combine these distributions, we create a third distribution that comprises “tossing a coin and drawing a card from a poker deck.” The third distribution has $104$ possible outcomes. If we know the probabilities of the outcomes of the first distribution and the probabilities of the outcomes of the second distribution, and these probabilities are independent of one another, we can calculate the probability of any outcome that belongs to the third distribution. We call the rule for this calculation the law of the probability of compound events. We discuss it further below.
A similar situation occurs when we consider the outcomes of tossing two coins. We assume that we can tell the two coins apart. Call them coin $1$ and coin $2$. We designate heads and tails for coins $1$ and $2$ as $H_1$, $T_1$, $H_2$, and $T_2$, respectively. There are four possible outcomes in the distribution we call “tossing two coins:” $H_1H_2$, $H_1T_2$, $T_1H_2$, and $T_1T_2$. (If we could not tell the coins apart, $H_1T_2$ would be the same thing as $T_1H_2$; there would be only three possible outcomes.) We can view the distribution “tossing two coins” as being a combination of the two distributions that we can call “tossing coin $1$” and “tossing coin $2$.” We can also view the distribution “tossing two coins” as a combination of two distributions that we call “tossing a coin a first time” and “tossing a coin a second time.” We view the distribution “tossing two coins” as being equivalent to the distribution “tossing one coin twice.” This is an example of repeated trials, which is a frequently encountered type of distribution. In general, we call such a distribution a “distribution of events from a trial repeated $N$ times,” and we view this distribution as being completely equivalent to $N$ simultaneous trials of the same kind. Chapter 19 considers the distribution of outcomes when a trial is repeated many times. Understanding the properties of such distributions is the single most essential element in understanding the theory of statistical thermodynamics. The central limit theorem relates properties of the repeated-trials distribution to properties of the parent distribution.
The Probability of Alternative Events
If we know the probability of each of two mutually exclusive events that belong to an exhaustive set, the probability that one or the other of them will occur in a single trial is equal to the sum of the individual probabilities. Let us call these mutually exclusive events A and B, and represent their probabilities as $P(A)$ and $P(B)$, respectively. The probability that one of these events occurs is the same thing as the probability that either A occurs or B occurs. We can represent this probability as $P(A\ or\ B)$. The probability of this combination of events is the sum: $P(A)+P(B)$. That is,
$P\left(A\ or\ B\right)=P\left(A\right)+P(B) \nonumber$
Above we define Y as the event that a single toss of a die comes up either $1$ or $3$. Because each of these outcomes is one of six, mutually-exclusive, equally-likely outcomes, the probability of either of them is ${1}/{6}$: $P\left(tossing\ a\ 1\right)=P\left(tossing\ a\ 3\right)={1}/{6}$. From the law of the probability of alternative events, we have
\begin{align*} P\left(event\ Y\right) &=P\left(tossing\ a\ 1\ or\ tossing\ a\ 3\right) \\[4pt] &=P\left(tossing\ a\ 1\right)+P\left(tossing\ a\ 3\right) \\[4pt] &= {1}/{6}+{1}/{6} \\[4pt] &={2}/{6} \end{align*}
We define $X$ as the event that a single toss of a die comes up even. From the law of the probability of alternative events, we have
\begin{align*} P\left(event\ X\right) &=P\left(tossing\ 2\ or\ 4\ or\ 6\right) \\[4pt] &=P\left(tossing\ a\ 2\right)+P\left(tossing\ a\ 4\right)+P\left(tossing\ a\ 6\right) \\[4pt] &={3}/{6} \end{align*}
We define $Z$ as the event that a single toss comes up $5$.
$P\left(event\ Z\right)=P\left(tossing\ a\ 5\right)=1/6 \nonumber$
If there are $\omega$ mutually exclusive events (denoted $E_1,E_2,\dots ,E_i,\dots ,E_{\omega }$), the law of the probability of alternative events asserts that the probability that one of these events will occur in a single trial is
\begin{align*} P\left(E_1\ or\ E_2\ or\dots E_i\dots or\ E_{\omega }\right) &=P\left(E_1\right)+P\left(E_2\right)+\dots +P\left(E_i\right)+\dots +P\left(E_{\omega }\right) \\[4pt] &=\sum^{\omega }_{i=1} P\left(E_i\right) \end{align*}
If these $\omega$ mutually exclusive events encompass all of the possible outcomes, the sum of their individual probabilities must be unity.
The Probability of Compound Events
Let us now suppose that we make two trials in circumstances where event $A$ is possible in the first trial and event $B$ is possible in the second trial. We represent the probabilities of these events by $P\left(A\right)$ and $P(B)$ and stipulate that they are independent of one another; that is, the probability that $B$ occurs in the second trial is independent of the outcome of the first trial. Then, the probability that $A$ occurs in the first trial and $B$ occurs in the second trial, $P(A\ and\ B)$, is equal to the product of the individual probabilities.
$P\left(A\ and\ B\right)=P\left(A\right)\times P(B) \nonumber$
To illustrate this using outcomes from die-tossing, let us suppose that event $A$ is tossing a $1$ and event $B$ is tossing a $3$. Then, $P\left(A\right)={1}/{6}$ and $P\left(B\right)={1}/{6}$. The probability of tossing a 1 in a first trial and tossing a $3$ in a second trial is then
\begin{align*} P\left( \text{tossing a 1 first and tossing a 3 second}\right) &=P\left(\text{tossing a 1}\right)\times P\left(\text{tossing a 3}\right) \\[4pt] &={1}/{6}\times {1}/{6} \\[4pt] &={1}/{36} \end{align*}
If we want the probability of getting one $1$ and one $3$ in two tosses, we must add to this the probability of tossing a $3$ first and a $1$ second.
If there are $\omega$ independent events (denoted $E_1,E_2,\dots ,E_i,\dots ,E_{\omega }$), the law of the probability of compound events asserts that the probability that $E_1$ will occur in a first trial, and $E_2$ will occur in a second trial, etc., is
\begin{align*} P\left(E_1\ and\ E_2\ and\dots E_i\dots and\ E_{\omega }\right) &=P\left(E_1\right)\times P\left(E_2\right)\times \dots \times P\left(E_i\right)\times \dots \times P\left(E_{\omega }\right) \\[4pt] &=\prod^{\omega }_{i=1}{P(E_i)} \end{align*}
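Both laws are easy to check by direct enumeration. The following minimal Python sketch (the coin and card labels, and the use of the standard-library `fractions` and `itertools` modules, are illustrative choices) verifies the coin-and-card example above: the probability of drawing an ace follows from the law of alternative events, and the probability of "heads and an ace" follows from the law of compound events.

```python
from fractions import Fraction
from itertools import product

# Outcomes of two separate distributions, each set assumed equally likely.
coin = ["H", "T"]
ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
suits = ["clubs", "diamonds", "hearts", "spades"]
deck = [(r, s) for r in ranks for s in suits]            # 52 outcomes

# Alternative events: P(ace) is the sum of the four ace-outcome probabilities.
p_ace = sum(Fraction(1, len(deck)) for (r, s) in deck if r == "A")
print(p_ace)                                             # 1/13

# Compound events: P(heads and ace) = P(heads) x P(ace).
p_heads = Fraction(1, len(coin))
print(p_heads * p_ace)                                   # 1/26

# The same number from direct enumeration of the 104 outcomes of the combined distribution.
combined = list(product(coin, deck))
favorable = [1 for (c, (r, s)) in combined if c == "H" and r == "A"]
print(Fraction(len(favorable), len(combined)))           # 1/26
```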
3.04: Applying the Laws of Probability
The laws of probability apply to events that are independent. If the result of one trial depends on the result of another trial, we may still be able to use the laws of probability. However, to do so, we must know the nature of the interdependence.
If the activity associated with event C precedes the activity associated with event D, the probability of D may depend on whether C occurs. Suppose that the first activity is tossing a coin and that the second activity is drawing a card from a deck; however, the deck we use depends on whether the coin comes up heads or tails. If the coin is heads, we draw a card from an ordinary deck; if the coin is tails, we draw a card from a deck with the face cards removed. Now we ask about the probability of drawing an ace. If the coin is heads, the probability of drawing an ace is ${4}/{52}={1}/{13}$. If the coin is tails, the probability of drawing an ace is ${4}/{40}={1}/{10}$. The combination coin is heads and card is ace has probability: $\left({1}/{2}\right)\left({1}/{13}\right)={1}/{26}$. The combination coin is tails and card is ace has probability $\left({1}/{2}\right)\left({1}/{10}\right)={1}/{20}$. In this case, the probability of drawing an ace depends on the modification we make to the deck based on the outcome of the coin toss.
Applying the laws of probability is straightforward. An example that illustrates the application of these laws in a transparent way is provided by villages First, Second, Third, and Fourth, which are separated by rivers. (See Figure 1.) Bridges $1$, $2$, and $3$ span the river between First and Second. Bridges $a$ and $b$ span the river between Second and Third. Bridges $A$, $B$, $C$, and $D$ span the river between Third and Fourth. A traveler from First to Fourth who is free to take any route he pleases has a choice from among $3\times 2\times 4=24$ possible combinations. Let us consider the probabilities associated with various events (a short enumeration in code follows the list):
• There are 24 possible routes. If a traveler chooses his route at random, the probability that he will take any particular route is ${1}/{24}$. This illustrates our assumption that each event in a set of $N$ exhaustive and mutually exclusive events occurs with probability ${1}/{N}$.
• If he chooses a route at random, the probability that he goes from First to Second by either bridge $1$ or bridge $2$ is $P\left(1\right)+P\left(2\right)=\ {1}/{3}+{1}/{3}={2}/{3}$. This illustrates the calculation of the probability of alternative events.
• The probability of the particular route $2\to a\to C$ is $P\left(2\right)\times P\left(a\right)\times P\left(C\right)=\left({1}/{3}\right)\left({1}/{2}\right)\left({1}/{4}\right)={1}/{24}$, and we calculate the same probability for any other route from First to Fourth. This illustrates the calculation of the probability of a compound event.
• If he crosses bridge $1$, the probability that his route will be $2\to a\to C$ is zero, of course. The probability of an event that has already occurred is 1, and the probability of any alternative is zero. If he crosses bridge $1,$ $P\left(1\right)=1$, and $P\left(2\right)=P\left(3\right)=0$.
• Given that a traveler has used bridge $1$, the probability of the route $1\to a\to C$ becomes the probability of path $a\to C$, which is $P\left(a\right)\times P\left(C\right)=\left({1}/{2}\right)\left({1}/{4}\right)={1}/{8}$. Since $P\left(1\right)=1$, the probability of the compound event $1\to a\to C$ is the probability of the compound event $a\to C$.
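The route probabilities quoted in the list above can be confirmed by listing every route. A minimal Python sketch (the bridge labels and the use of the standard-library `fractions` and `itertools` modules are illustrative choices) might look like this:

```python
from fractions import Fraction
from itertools import product

first_second = ["1", "2", "3"]        # bridges between First and Second
second_third = ["a", "b"]             # bridges between Second and Third
third_fourth = ["A", "B", "C", "D"]   # bridges between Third and Fourth

routes = list(product(first_second, second_third, third_fourth))
print(len(routes))                                       # 24 possible routes

# Probability of any particular route, assuming each route is equally likely.
print(Fraction(1, len(routes)))                          # 1/24

# Alternative events: crossing the first river by bridge 1 or bridge 2.
print(Fraction(sum(1 for r in routes if r[0] in ("1", "2")), len(routes)))    # 2/3

# Compound event: the particular route 2 -> a -> C.
print(Fraction(sum(1 for r in routes if r == ("2", "a", "C")), len(routes)))  # 1/24

# Given that bridge 1 was used, the route 1 -> a -> C has probability 1/8.
used_1 = [r for r in routes if r[0] == "1"]
print(Fraction(sum(1 for r in used_1 if r == ("1", "a", "C")), len(used_1)))  # 1/8
```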
The outcomes of rolling dice provide more illustrations. If we roll two dice, we can classify the possible outcomes according to the sums of the outcomes for the individual dice. There are thirty-six possible outcomes. They are displayed in Table 1.
Table 1: Outcomes from tossing two dice
Columns give the outcome for the first die, rows give the outcome for the second die, and each entry is the resulting score (the sum of the two outcomes).

| Second die \ First die | 1 | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|---|---|---|
| 1 | 2 | 3 | 4 | 5 | 6 | 7 |
| 2 | 3 | 4 | 5 | 6 | 7 | 8 |
| 3 | 4 | 5 | 6 | 7 | 8 | 9 |
| 4 | 5 | 6 | 7 | 8 | 9 | 10 |
| 5 | 6 | 7 | 8 | 9 | 10 | 11 |
| 6 | 7 | 8 | 9 | 10 | 11 | 12 |
Let us consider the probabilities associated with various dice-throwing events:
• The probability of any given outcome, say the first die shows $2$ and the second die shows $3$, is ${1}/{36}$.
• Since the probability that the first die shows $3$ while the second die shows $2$ is also ${1}/{36}$, the probability that one die shows $2$ and the other shows $3$ is $P\left(3\right)\times P\left(2\right)+P\left(2\right)\times P\left(3\right) =\left({1}/{36}\right)+\left({1}/{36}\right) ={1}/{18}. \nonumber$
• Four different outcomes correspond to the event that the score is $5$. Therefore, the probability of rolling $5$ is $P\left(1\right)\times P\left(4\right)+P\left(2\right)\times P\left(3\right) +P\left(3\right)\times P\left(2\right)+P\left(4\right)\times P\left(1\right) ={1}/{9} \nonumber$
• The probability of rolling a score of three or less is the probability of rolling $2$, plus the probability of rolling $3$ which is $\left({1}/{36}\right)+\left({2}/{36}\right)={3}/{36}={1}/{12}$
• Suppose we roll the dice one at a time and that the first die shows $2$. The probability of rolling $7$ when the second die is thrown is now ${1}/{6}$, because only rolling a $5$ can make the score 7, and there is a probability of ${1}/{6}$ that a $5$ will come up when the second die is thrown.
• Suppose the first die is red and the second die is green. The probability that the red die comes up $2$ and the green die comes up $3$ is $\left({1}/{6}\right)\left({1}/{6}\right)={1}/{36}$.
Above we looked at the number of outcomes associated with a score of $3$ to find that the probability of this event is ${1}/{18}$. We can use another argument to get this result. The probability that two dice roll a score of three is equal to the probability that the first die shows $1$ or $2$ times the probability that the second die shows whatever score is necessary to make the total equal to three. This is:
\begin{align*} P\left(first\ die\ shows\ 1\ or\ 2\right)\times \left({1}/{6}\right) &= \left[\left({1}/{6}\right)+\left({1}/{6}\right)\right]\times {1}/{6} \\[4pt] &={2}/{36} \\[4pt] &={1}/{18} \end{align*}
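All of these dice results can be confirmed by enumerating the thirty-six equally likely outcomes. A minimal Python sketch (the helper function `p` and the use of the standard-library `fractions` and `itertools` modules are illustrative choices) is:

```python
from fractions import Fraction
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))   # the 36 equally likely outcomes

def p(event):
    """Probability of an event, given as a predicate on the pair (first, second)."""
    return Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

print(p(lambda o: o == (2, 3)))                   # 1/36
print(p(lambda o: set(o) == {2, 3}))              # 1/18, one die shows 2 and the other 3
print(p(lambda o: sum(o) == 5))                   # 1/9
print(p(lambda o: sum(o) <= 3))                   # 1/12
print(p(lambda o: sum(o) == 3))                   # 1/18, the score-of-three result above
```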
Application of the laws of probability is frequently made easier by recognizing a simple restatement of the requirement that events be mutually exclusive. In a given trial, either an event occurs or it does not. Let the probability that an event A occurs be $P\left(A\right)$. Let the probability that event A does not occur be $P\left(\sim A\right)$. Since in any given trial, the outcome must belong either to event A or to event $\sim A$, we have
$P\left(A\right)+P\left(\sim A\right)=1 \nonumber$
For example, if the probability of success in a single trial is ${2}/{3}$, the probability of failure is ${1}/{3}$. If we consider the outcomes of two successive trials, we can group them into four events.
• Event SS: First trial is a success; second trial is a success.
• Event SF: First trial is a success; second trial is a failure.
• Event FS: First trial is a failure; second trial is a success.
• Event FF: First trial is a failure; second trial is a failure.
Using the laws of probability, we have
\begin{align*} 1 &=P\left(Event\ SS\right)+P\left(Event\ SF\right)+P\left(Event\ FS\right)+P(Event\ FF) \\[4pt] &=P_1\left(S\right)\times P_2\left(S\right)+P_1\left(S\right)\times P_2\left(F\right) +P_1(F)\times P_2(S)+P_1(F)\times P_2(F) \end{align*}
where $P_1\left(X\right)$ and $P_2\left(X\right)$ are the probability of event $X$ in the first and second trials, respectively.
This situation can be mapped onto a simple diagram. We represent the possible outcomes of the first trial by line segments on one side of a unit square, so that $P_1\left(S\right)+P_1\left(F\right)=1$. We represent the outcomes of the second trial by line segments along an adjoining side of the unit square. The four possible events are now represented by the areas of four mutually exclusive and exhaustive portions of the unit square as shown in Figure 2.
3.05: Bar Graphs and Histograms
Since a discrete distribution is completely specified by the probabilities of each of its events, we can represent it by a bar graph. The probability of each event is represented by the height of one bar. We can generalize this graphical representation to represent continuous distributions. To see what we have in mind, let us consider a particular example.
Let us suppose that we have a radar gun and that we decide to interest ourselves in the typical speeds of cars on a highway just outside of town. As we think about this project, we recognize that speeds might vary with the time of day and the day of the week. Random variations in many other factors might also be important; these include weather conditions and accidents in the vicinity. To eliminate as many atypical factors as possible, we might decide that typical speeds are those of cars going north between 1:00 pm and 4:00 pm on weekdays when the road surface is dry and there are no disabled vehicles in view. If we have a lot of time and the road is busy, we could collect a lot of data. Let us suppose that we record the speeds of $10,000$ cars. Each datum would be the speed of a car on the road at a time when the selected conditions are satisfied.
To use this data, we want to summarize it in a form that is easy to visualize. One way to do this is to aggregate the data to give the number of cars in each $20$ mph range; the results might look something like the data in Table 2. Figure 3 is a five-channel bar graph that displays the number of cars in each $20$ mph range. A great deal of information is lost in the aggregating process. In particular, nothing on the graph represents the number of automobiles in narrower speed intervals.
Table 2. Vehicle speed data.
| Speed (mph) | Number of cars | Fraction of cars | Height for bar area to equal fraction |
|---|---|---|---|
| −10 to 10 | 200 | 0.020 | 0.020/20 = 0.0010 |
| 10 to 30 | 800 | 0.08 | 0.08/20 = 0.0040 |
| 30 to 50 | 2500 | 0.25 | 0.25/20 = 0.0125 |
| 50 to 70 | 5500 | 0.55 | 0.55/20 = 0.0275 |
| 70 to 90 | 1000 | 0.10 | 0.10/20 = 0.0050 |
Now, suppose that we repeat this task, but that we do not have enough time to collect data on as many as $10,000$ more cars. We will be curious about the extent to which our two samples agree with one another. Since the total number of vehicles will be different, the appropriate way to go about this is obviously to compare the fraction of cars in each speed range. In fact, using fractions enables us to compare any number of such studies. To the extent that these studies measure the same thing—typical speeds under the specified conditions—the fraction of automobiles in any particular speed interval should be approximately constant. Dividing the number of automobiles in each speed interval by the total number of automobiles gives a representation that focuses attention on the proportion of automobiles with various speeds. The shape of the bar graph remains the same; all that changes is the scale we use to label the ordinate. (See Figure 4.)
Insofar as any repetition of this experiment gives nearly the same results, this is a useful change. However, the fundamental limitations of the graph remain. For example, if we want to use the graph to estimate how speeds are distributed in any other set of intervals, we have to read values off the ordinate and manipulate them in ways that may not be very satisfactory. To estimate the fraction with speeds between $20$ mph and $40$ mph, we might assign half of the automobiles in the $10-30$ mph interval and half of those in the $30-50$ mph interval to the new interval. This enables us to estimate that the fraction in the $20-40$ mph interval is $0.165$. This estimate is much less reliable than one that could be made by going back to the raw data for all $10,000$ automobiles.
The data can also be represented as a histogram. In a histogram, the information is represented by the area rather than the height of the bar. In the present case, the only visible change to the graph is another change in the numerical values on the ordinate. In Figure 5, the area of a bar represents the fraction of automobiles with speeds in the given interval. As the speed interval is made smaller, any of these bar graphs looks increasingly like a continuous curve. (See Figure 6.) The histogram has the advantage that, as the curve becomes continuous, the interpretation remains constant: the area under the curve between any two speeds always represents the fraction of automobiles with speeds in this interval. It turns out that we are adept at visually estimating the relative areas of different parts of a histogram. That is, from a quick glance at a histogram, we are able to obtain a good semi-quantitative appreciation of the significance of the underlying data.
If the histogram captures our experience, and we expect future events to have the same characteristics, the histogram becomes an expression of probability. All that is necessary is that we construct the histogram so that the total area under the graph is unity. If we let $f\left(u\right)$ be the area under the graph from $u=-\infty$ to $u=u$, then $f\left(u\right)$ represents the probability that the speed of a randomly selected automobile will lie between $-\infty$ and $u$. For any $a$ and $b$, the probability that $u$ lies in the interval $a<u<b$ is $f\left(b\right)-f\left(a\right)$. The function $f\left(u\right)$ is called the cumulative probability distribution function, because its value for any $u$ is the fraction of automobiles that have a speed less than $u$. $f\left(a\right)$ is the frequency with which we observe values of the random variable, $u$, that are less than $a$. Equivalently, we can say that $f\left(u\right)$ is the probability that any randomly selected automobile will have a speed less than $u$. If we let the width of every interval go to zero, the bar graph representation of the histogram becomes a curve, and the histogram becomes a continuous function of the random variable, $u$. (See Figure 7.) Note that the curve, the enclosing envelope, is not $f\left(u\right)$; $f\left(u\right)$ is the area under the enclosing envelope curve.
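As an illustration of how such a histogram is normalized, the sketch below (Python with `numpy`; the synthetic speed data and the random seed are assumptions made only so the example runs, and are not the data of Table 2) chooses bar heights so that each bar's area equals the fraction of cars in that channel:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "speeds" standing in for the radar-gun data.
speeds = rng.normal(loc=55.0, scale=15.0, size=10_000)

edges = np.array([-10, 10, 30, 50, 70, 90], dtype=float)   # the same 20 mph channels as Table 2
counts, _ = np.histogram(speeds, bins=edges)

fractions = counts / counts.sum()   # fraction of cars in each channel (of those inside the channels)
widths = np.diff(edges)
heights = fractions / widths        # bar height chosen so that each bar's area equals its fraction

print(fractions)                    # analogous to the "Fraction of cars" column
print(heights)                      # analogous to the "Height for bar area..." column
print(np.sum(heights * widths))     # total area under the histogram is 1
```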
3.06: Continuous Distribution Functions
When we can represent the envelope curve as a continuous function, the envelope curve is the derivative of the cumulative probability distribution function: The cumulative distribution function is $f\left(u\right)$; the envelope function is ${df\left(u\right)}/{du}$. The envelope function is a probability density, and we will refer to the envelope function, ${df\left(u\right)}/{du}$, as the probability density function. The probability density function is the derivative, with respect to the random variable, of the cumulative distribution function. This is an immediate consequence of the fundamental theorem of calculus.
If $H\left(u\right)$ is the anti-derivative of a function $h\left(u\right)$, we have ${dH\left(u\right)}/{du}=h\left(u\right)$, and the fundamental theorem of calculus asserts that the area under $h\left(u\right)$, from $u=a$ to $u=b$ is
\begin{aligned} \int_{a}^{b} h(u) d u &=\int_{a}^{b}\left(\frac{d H(u)}{d u}\right) d u \\[4pt] &=H(b)-H(a) \end{aligned}\nonumber
In the present instance, $H(u) = f(u)$, so that
$\int_{a}^{b}\left(\frac{d f(u)}{d u}\right) d u=f(b)-f(a)\nonumber$
and
$h(u)=\frac{d f(u)}{du}\nonumber$
The envelope function, $h(u)$, and ${df\left(u\right)}/{du}$ are the same function.
This point is also apparent if we consider the incremental change in the area, $dA$, under a histogram as the variable increases from $u$ to $u + du$. If we let the envelope function be $h(u)$, we have
$dA=h(u)du\nonumber$
or
$h(u) = \dfrac{dA}{du}\nonumber$
That is, the envelope function is the derivative of the area with respect to the random variable, $u$. The area is $f(u)$, so the envelope function is $h(u)=df(u)/du$.
Calling the envelope curve the probability density function emphasizes that it is analogous to a function that expresses the density of matter. That is, for an incremental change in $u$, the incremental change in probability is
$Δ(probability)= \dfrac{df}{du} Δu\nonumber$
analogous to the incremental change in mass accompanying an incremental change in volume
$Δ(mass)= density \times Δ(volume)\nonumber$
where
$density=\dfrac{d\left(mass\right)}{d\left(volume\right)}.\nonumber$
In this analogy, we suppose that mass is distributed in space with a density that varies from point to point in the space. The mass enclosed in any particular volume is given by the integral of the density function over the volume enclosed; that is,
$mass=\int_V{\left(density\right)dV}.\nonumber$
Conversely, the density at any given point is the limit, as the enclosing volume shrinks to zero, of the enclosed mass divided by the magnitude of the enclosing volume.
Similarly, for any value of the random variable, the probability density is the limit, as an interval spanning the value of the random variable shrinks to zero, of the probability that the random variable is in the interval, divided by the magnitude of the interval.
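This relationship is easy to demonstrate numerically. In the sketch below (pure Python; the choice of a normal cumulative distribution function built from `math.erf`, and the parameters `MU` and `SIGMA`, are illustrative assumptions), a central finite difference of the cumulative function reproduces the analytic probability density:

```python
import math

MU, SIGMA = 0.0, 1.0

def f(u):
    """Cumulative probability distribution function of a normal random variable."""
    return 0.5 * (1.0 + math.erf((u - MU) / (SIGMA * math.sqrt(2.0))))

def density(u):
    """Analytic probability density function, df/du."""
    return math.exp(-(u - MU) ** 2 / (2.0 * SIGMA ** 2)) / (SIGMA * math.sqrt(2.0 * math.pi))

# The envelope (density) is the derivative of the cumulative function:
# a central finite difference of f(u) reproduces the analytic density.
h = 1.0e-5
for u in (-1.0, 0.0, 0.5, 2.0):
    numerical = (f(u + h) - f(u - h)) / (2.0 * h)
    print(u, numerical, density(u))
```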
3.07: A Heuristic View of the Probability Density Function
Suppose that we have a probability density function like that sketched in Figure 8 and that the area under the curve in the interval $a<u<b$ is 0.25. If we draw a large number of samples from the distribution, our definitions of probability and the probability density function mean that about 25% of the values we draw will lie in the interval $a<u<b$. We expect the percentage to become closer and closer to 25% as the total number of samples drawn becomes very large. The same would be true of any other interval, $c<u<d$, where the area under the curve in the interval $c<u<d$ is 0.25.
If we draw exactly four samples from this distribution, the values can be anywhere in the domain of $u$. However, if we ask what arrangement of four values best approximates the result of drawing a large number of samples, it is clear that this arrangement must have a value in each of the four, mutually-exclusive, 25% probability zones. We can extend this conclusion to any number of representative points. If we ask what arrangement of $N$ points would best represent the arrangement of a large number of points drawn from the distribution, the answer is clearly that one of the $N$ representative points should lie within each of $N$, mutually-exclusive, equal-area segments that span the domain of $u$.
We can turn this idea around. In the absence of information to the contrary, the best assumption we can make about a set of $N$ values of a random variable is that each represents an equally probable outcome. If our entire store of information about a distribution consists of four data points drawn from the distribution, the best description that we can give of the probability density function is that one-fourth of the area under the curve lies above a segment of the domain that is associated with each point. If we have $N$ points, the best estimate we can make of the distribution from which the $N$ points are drawn is that ${\left({1}/{N}\right)}^{th}$ of the area lies above each of them.
This view tells us to associate a probability of ${1}/{N}$ with an interval around each data point, but it does not tell us where to begin or end the interval. If we could decide where the interval about each data point began and ended, we could estimate the shape of the probability density function. For a small number of points, we could not expect this estimate to be very accurate, but it would be the best possible estimate based on the given data.
Now, instead of trying to find the best interval to associate with each data point, let us think about the intervals into which the data points divide the domain. This small change of perspective leads us to a logical way to divide the domain of $u$ into specific intervals of equal probability. If we put $N$ points on any line, these points divide the line into $N+1$ segments. There is a segment to the left of every point; there are $N$ such segments. There is one final segment to the right of the right-most point, and so there are $N+1$ segments in all.
In the absence of information to the contrary, the best assumption we can make is that $N$ data points divide their domain into $N+1$ segments, each of which is associated with equal probability. The fraction of the area above each of these segments is ${1}/{\left(N+1\right)}$; also, the probability associated with each segment is ${1}/{\left(N+1\right)}$. If, as in the example above, there are four data points, the best assumption we can make about the probability density function is that 20% of its area lies between the left boundary and the left-most data point, and 20% lies between the right-most data point and the right boundary. The three intervals between the four data points each represent an additional 20% of the area. Figure 9 indicates the $N$ data points that best approximate the distribution sketched in Figure 8.
The sketches in Figure 10 describe the probability density functions implied by the indicated sets of data points.
3.08: A Heuristic View of the Cumulative Distribution Function
We can use these ideas to create a plot that approximates the cumulative probability distribution function given any set of $N$ measurements of a random variable $u$. To do so, we put the $u_i$ values found in our $N$ measurements in order from smallest to largest. We label the ordered values $u_1$, $u_2$, ..., $u_i$, ..., $u_N$, where $u_1$ is the smallest. By the argument that we develop in the previous section, the probability of observing a value less than $u_1$ is about ${1}/{\left(N+1\right)}$. If we were to make a large number of additional measurements, a fraction of about ${1}/{\left(N+1\right)}$ of this large number of additional measurements would be less than $u_1$. This fraction is just $f\left(u_1\right)$, so we reason that $f\left(u_1\right)\approx {1}/{\left(N+1\right)}$. The probability of observing a value between $u_1$ and $u_2$ is also about ${1}/{\left(N+1\right)}$; so the probability of observing a value less than $u_2$ is about ${2}/{\left(N+1\right)}$, and we expect $f\left(u_2\right)\approx {2}/{\left(N+1\right)}$. In general, the probability of observing a value between $u_{i-1}$ and $u_i$ is also about ${1}/{\left(N+1\right)}$, and the probability of observing a value less than $u_i$ is about ${i}/{\left(N+1\right)}$. In other words, we expect the cumulative probability distribution function for $u_i$ to be such that the $i^{th}$ smallest observation corresponds to $f\left(u_i\right)\approx {i}/{\left(N+1\right)}$. The quantity ${i}/{\left(N+1\right)}$ is often called the rank probability of the $i^{th}$ data point.
Figure 11 is a sketch of the sigmoid shape that we usually expect to find when we plot ${i}/{\left(N+1\right)}$ versus the $i^{th}$ value of $u$. This plot approximates the cumulative probability distribution function, $f\left(u\right)$. We expect the sigmoid shape because we expect the observed values of $u$ to bunch up around their average value. (If, within some domain of $u$ values, all possible values of $u$ were equally likely, we would expect the difference between successive observed values of $u$ to be roughly constant, which would make the plot look approximately linear.) At any value of $u$, the slope of the curve is just the probability-density function, ${df\left(u\right)}/{du}$.
These ideas mean that we can test whether the experimental data are described by any particular mathematical model, say $F\left(u\right)$. To do so, we use the mathematical model to predict each of the $N$ rank probability values: ${1}/{\left(N+1\right)}$, ${2}/{\left(N+1\right)}$, ..., ${i}/{\left(N+1\right)}$, ..., ${N}/{\left(N+1\right)}$. That is to say, we calculate $F\left(u_1\right)$, $F\left(u_2\right)$, ..., $F\left(u_i\right)$, ..., $F\left(u_N\right)$; if $F\left(u\right)$ describes the data well, we will find, for all $i$, $F\left(u_i\right)\approx {i}/{\left(N+1\right)}$. Graphically, we can test the validity of the relationship by plotting ${i}/{\left(N+1\right)}$ versus $F\left(u_i\right)$. If $F\left(u\right)$ describes the data well, this plot will be approximately linear, with a slope of one.
In Section 3.12, we introduce the normal distribution, which is a mathematical model that describes a great many sources of experimental observations. The normal distribution is a distribution function that involves two parameters, the mean, $\mu$, and the standard deviation, $\sigma$. The ideas we have discussed can be used to develop a particular graph paper—usually called normal probability paper. If the data are normally distributed, plotting them on this paper produces an approximately straight line.
We can do essentially the same test without benefit of special graph paper, by calculating the average, $\overline{u}\approx \mu$, and the estimated standard deviation, $s\approx \sigma$, from the experimental data. (Calculating $\overline{u}$ and $s$ is discussed below.) Using $\overline{u}$ and $s$ as estimates of $\mu$ and $\sigma$, we can find the model-predicted probability of observing a value of the random variable that is less than $u_i$. This value is $f\left(u_i\right)$ for a normal distribution whose mean is $\overline{u}$ and whose standard deviation is $s$. We can find $f\left(u_i\right)$ by using standard tables (usually called the normal curve of error in mathematical compilations), by numerically integrating the normal distribution’s probability density function, or by using a function embedded in a spreadsheet program, like Excel${}^{\circledR }$. If the data are described by the normal distribution function, this value must be approximately equal to the rank probability; that is, we expect $f\left(u_i\right)\approx {i}/{\left(N+1\right)}$. A plot of ${i}/{\left(N+1\right)}$ versus $f\left(u_i\right)$ will be approximately linear with a slope of about one.
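A rough version of this test is easy to script. The sketch below (pure Python; the simulated data, the seed, and the use of `math.erf` for the normal cumulative distribution function are illustrative assumptions) computes the rank probabilities ${i}/{\left(N+1\right)}$ and compares them with the model values $f\left(u_i\right)$ obtained from a normal distribution whose mean and standard deviation are estimated from the data:

```python
import math
import random

random.seed(1)
data = [random.gauss(10.0, 2.0) for _ in range(25)]   # N measurements of a random variable u

N = len(data)
u_sorted = sorted(data)
rank_p = [i / (N + 1) for i in range(1, N + 1)]       # rank probabilities i/(N+1)

# Estimate mu and sigma from the data (see the discussion of the mean and variance below).
u_bar = sum(data) / N
s = math.sqrt(sum((u - u_bar) ** 2 for u in data) / (N - 1))

def normal_cdf(u, mu, sigma):
    return 0.5 * (1.0 + math.erf((u - mu) / (sigma * math.sqrt(2.0))))

# If the data are normally distributed, f(u_i) should be close to i/(N+1), so a plot of one
# against the other should be approximately linear with a slope of one.
for ui, pi in zip(u_sorted, rank_p):
    print(f"{ui:7.3f}  rank p = {pi:5.3f}  model f(u_i) = {normal_cdf(ui, u_bar, s):5.3f}")
```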
3.09: Random Variables, Expected Values, and Population Sets
When we sample a particular distribution, the value that we obtain depends on chance and on the nature of the distribution described by the function $f\left(u\right)$. The probability that any given trial will produce $u$ in the interval $a<u<b$ is equal to $f\left(b\right)-f\left(a\right)$. We often find situations in which a second function of $u$, call it $g\left(u\right)$, is also of interest. If we sample the distribution and obtain a value of the random variable, $u_k$, then the value of $g$ associated with that trial is $g\left(u_k\right)$. The question arises: Given $g(u)$ and the distribution function $f(u)$, what should we expect the value of $g\left(u_k\right)$ to be? That is, if we get a value of $u$ from the distribution and then find $g\left(u\right)$, what value should we expect to find for $g\left(u\right)$? While this seems like a reasonable question, it is obvious that we can give a meaningful answer only when we can define more precisely just what we mean by “expect.”
To understand our definition of the expected value (sometimes called the expectation value) of $g\left(u\right)$, let us consider a game of chance. Suppose that we have a needle that rotates freely on a central axis. When spun, the needle describes a circular path, and its point eventually comes to rest at some point on this path. The location at which the needle stops is completely random. Imagine that we divide the circular path into six equal segments, which we number from one to six. When we spin the needle, it is equally likely to stop over any of these segments. Now, let us suppose that we conduct a lottery by selling six tickets, also numbered from one to six. We decide the winner of the lottery by spinning the needle. The holder of the ticket whose number matches the number on which the needle stops receives a payoff of \$6000. After the spin, one ticket is worth \$6000, and the other five are valueless. We ask: Before the spin, what is any one of the lottery tickets worth?
In this context, it is reasonable to define the expected value of a ticket as the amount that we should be willing to pay to buy a ticket. If we buy them all, we receive \$6000 when the winning ticket is selected. If we pay \$1000 per ticket to buy them all, we get our money back. If we buy all the tickets, the expected value of each ticket is \$1000. What if we buy only one ticket? Is it reasonable to continue to say that its expected value is \$1000? We argue that it is. One argument is that the expected value of a ticket should not depend on who owns the ticket; so, it should not depend on whether we buy one, two, or all of them. A more general argument supposes that repeated lotteries are held under the same rules. If we spend \$1000 to buy one ticket in each of a very large number of such lotteries, we expect that we will eventually “break even.” Since the needle comes to rest at each number with equal probability, we reason that

$\begin{array}{l} \text{Expected value of a ticket} \\ =\$6000\left(fraction\ of\ times\ our\ ticket\ would\ be\ selected\right) \\ =\$6000\left({1}/{6}\right) \\ =\$1000 \end{array} \nonumber$

Since we assume that the fraction of times our ticket would be selected in a long series of identical lotteries is the same thing as the probability that our ticket will be selected in any given drawing, we can also express the expected value as

$\begin{array}{l} \text{Expected value of a ticket} \\ =\$6000\left(probability\ that\ our\ ticket\ will\ be\ selected\right) \\ =\$6000\left({1}/{6}\right) \\ =\$1000 \end{array} \nonumber$

Clearly, the ticket is superfluous. The game depends on obtaining a value of a random variable from a distribution. The distribution is a spin of the needle. The random variable is the location at which the needle comes to rest. We can conduct essentially the same game by allowing any number of participants to bet that the needle will come to rest on any of the six equally probable segments of the circle. If an individual repeatedly bets on the same segment in many repetitions of this game, the total of his winnings eventually matches the total amount that he has wagered. (More precisely, the total of his winnings divided by the total amount he has wagered becomes arbitrarily close to one.)

Suppose now that we change the rules. Under the new rules, we designate segment $1$ of the circle as the payoff segment. Participants pay a fixed sum to be eligible for the payoff for a particular game. Each game is decided by a spin of the needle. If the needle lands in segment $1$, everyone who paid to participate in that game receives \$6000. Evidently, the new rules have no effect on the value of participation. Over the long haul, a participant in a large number of games wins \$6000 in one-sixth of these games. We take this to be equivalent to saying that he has a probability of one-sixth of winning \$6000 in a given game in which he participates. His expected payoff is
$\begin{array}{l} \text{Expected value of game} \\ =\$6000\left(probability\ of\ winning\ \$6000\right) \\ =\$6000\left({1}/{6}\right) \\ =\$1000 \end{array} \nonumber$
Let us change the game again. We sub-divide segment $2$ into equal-size segments $2A$ and $2B$. The probability that the needle lands in $2A$ or $2B$ is ${1}/{12}$. In this new game, the payoff is \$6000 when the needle lands in either segment $1$ or segment $2A$. We can use any of the arguments that we have made previously to see that the expected value of this game is now $\$6000\left({1}/{4}\right)=\$1500$. However, the analysis that is most readily generalized recognizes that the payoff from this game is just the sum of the payout from the previous game plus the payout from a game in which the sole payout is \$6000 whenever the needle lands in segment $2A$. For the new game, we have
$\begin{array}{l} \text{Expected value of a game} \\ =\$6000\times P\left(segment\ 1\right)+\$6000\times P\left(segment\ 2A\right) \\ =\$6000\left({1}/{6}\right)+\$6000\left({1}/{12}\right) \\ =\$1500 \end{array} \nonumber$
We can devise any number of new games by dividing the needle’s circular path into $\mathit{\Omega}$ non-overlapping segments. Each segment is a possible outcome. We number the possible outcomes $1$, $2$, ..., $i$, ..., $\mathit{\Omega}$, label these outcomes $u_1$, $u_2$, ..., $u_i$, ..., $u_{\textrm{Ω}}$, and denote their probabilities as $P\left(u_1\right)$, $P\left(u_2\right)$, ..., $P\left(u_i\right)$, ..., $P\left(u_{\textrm{Ω}}\right)$. We say that the probability of outcome $u_i$, $P\left(u_i\right)$, is the expected frequency of outcome $u_i$. We denote the respective payoffs as $g\left(u_1\right)$, $g\left(u_2\right)$, ..., $g\left(u_i\right)$, ..., $g\left(u_{\textrm{Ω}}\right)$. Straightforward generalization of our last analysis shows that the expected value for participation in any game of this type is
$\sum^{\textrm{Ω}}_{i=1}{g\left(u_i\right)\times P\left(u_i\right)} \nonumber$
Moreover, the spinner is representative of any distribution, so it is reasonable to generalize further. We can say that the expected value of the outcome of a single trial is always the probability-weighted sum, over all possible outcomes, of the value of each outcome. A common notation uses angular brackets to denote the expected value for a function of the random variable; the expected value of $g\left(u\right)$ is $\left\langle g\left(u\right)\right\rangle$. For a discrete distribution with $\textrm{Ω}$ exhaustive mutually-exclusive outcomes $u_i$, probabilities $P\left(u_i\right)$, and outcome values (payoffs) $g\left(u_i\right)$, we define the expected value of $g\left(u\right)$ to be
$\left\langle g\left(u\right)\right\rangle \ =\sum^{\textrm{Ω}}_{i=1}{g\left(u_i\right)}\times P\left(u_i\right) \nonumber$
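The spinner games above provide a direct check of this sum. A minimal Python sketch (the helper `expected_value` and the use of the standard-library `fractions` module are illustrative choices) reproduces the \$1000 and \$1500 results:

```python
from fractions import Fraction

def expected_value(payoffs_and_probabilities):
    """Probability-weighted sum over all outcomes: sum of g(u_i) * P(u_i)."""
    return sum(g * p for g, p in payoffs_and_probabilities)

# Original game: payoff $6000 on segment 1 only, six equally likely segments.
game_1 = [(6000, Fraction(1, 6))] + [(0, Fraction(1, 6))] * 5
print(expected_value(game_1))          # 1000

# Modified game: segment 2 is split into 2A and 2B; payoff on segment 1 or 2A.
game_2 = [(6000, Fraction(1, 6)), (6000, Fraction(1, 12)), (0, Fraction(1, 12))] \
         + [(0, Fraction(1, 6))] * 4
print(expected_value(game_2))          # 1500
```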
Now, let us examine the expected value of $g\left(u\right)$ from a slightly different perspective. Let the number of times that each of the various outcomes is observed in a particular sample of $N$ observations be $N_1,\ N_2,\dots ,N_i,\dots ,N_{\textrm{Ω}}$. We have $N=N_1+\ N_2+\dots +N_i+\dots +N_{\textrm{Ω}}$. The set $\{N_1,\ N_2,\dots ,N_i,\dots ,N_{\textrm{Ω}}\}$ specifies the way that the possible outcomes are populated in this particular series of $N$ observations. We call $\{N_1,\ N_2,\dots ,N_i,\dots ,N_{\textrm{Ω}}\}$ a population set. If we make a second series of $N$ observations, we obtain a second population set. We infer that the best forecast we can make for the number of occurrences of outcome $u_i$ in any future series of $N$ observations is $N\times P\left(u_i\right)$. We call $N\times P\left(u_i\right)$ the expected number of observations of outcome $u_i$ in a sample of size $N$.
In a particular series of $N$ trials, the number of occurrences of outcome $u_i$, and hence of $g\left(u_i\right)$, is $N_i$. For the set of outcomes $\{N_1,\ N_2,\dots ,N_i,\dots ,N_{\textrm{Ω}}\}$, the average value of $g\left(u\right)$ is
$\overline{g\left(u\right)}=\frac{1}{N}\sum^{\textrm{Ω}}_{i=1}{g\left(u_i\right)\times N_i} \nonumber$
Collecting a second sample of $N$ observations produces a second estimate of $\overline{g\left(u\right)}$. If $N$ is small, successive estimates of $\overline{g\left(u\right)}$ may differ significantly from one another. If we make a series of $N$ observations multiple times, we obtain multiple population sets. In general, the population set from one series of $N$ observations is different from the population set for a second series of $N$ observations. If $N\gg \mathit{\Omega}$, collecting such samples of $N$ a sufficiently large number of times must produce some population sets more than once, and among those that are observed more than once, one must occur more often than any other. We call it the most probable population set. Let the elements of the most probable population set be $\{N_1,N_2,\dots ,N_i,\dots ,N_{\textrm{Ω}}\}$. We infer that the most probable population set is the best forecast we can make about the outcomes of any future sample of $N$ from this distribution. Moreover, we infer that the best estimate we can make of $N_i$ is that it equals the expected number of observations of outcome $u_i$; that is,
$N_i\approx N\times P\left(u_i\right) \nonumber$
Now, $N_i$ must be a natural number, while $N\times P\left(u_i\right)$ need only be real. In particular, we can have $0<N\times P\left(u_i\right)<1$, but $N_i$ must be $0$ or $1$ (or some higher integer). This is a situation of practical importance, because circumstances may limit the sample size to a number, $N$, that is much less than the number of possible outcomes, $\mathit{\Omega}$. (We encounter this situation in our discussion of statistical thermodynamics in Chapter 21. We find that the number of molecules in a system can be much smaller than the number of outcomes—observable energy levels—available to any given molecule.)
If many more than $N$ outcomes have about the same probability, repeated collection of samples of $N$ observations can produce a series of population sets (each population set different from all of the others) in each of which every element is either zero or one. When this occurs, it may be that no single population set is significantly more probable than any of many others. Nevertheless, every outcome occurs with a well-defined probability. We infer that the set $\left\{N\times P\left(u_1\right),N\times P\left(u_2\right),\dots ,N\times P\left(u_i\right),\dots ,N\times P\left(u_{\textrm{Ω}}\right)\right\}$ is always an adequate proxy for calculating the expected value for the most probable population set.
To illustrate this kind of distribution, suppose that there are $3000$ possible outcomes, of which the first and last thousand have probabilities that are so low that they can be taken as zero, while the middle $1000$ outcomes have approximately equal probabilities. Then $P\left(u_i\right)\approx 0$ for $1\le i\le 1000$ and for $2001\le i\le 3000$, while $P\left(u_i\right)\approx {10}^{-3}$ for $1001\le i\le 2000$. We are illustrating the situation in which the number of outcomes we can observe, $N$, is much less than the number of outcomes that have appreciable probability, which is $1000$. So let us take the number of trials to be $N=4$. If the value of $g\left(u\right)$ for each of the $1000$ middle outcomes is the same, say $g\left(u_i\right)=100$ for $1001\le i\le 2000$, then our calculation of the expected value of $g\left(u\right)$ will be
$\left\langle g\left(u\right)\right\rangle =\frac{1}{4}\sum^{3000}_{i=1}{g\left(u_i\right)\times N_i}=\frac{1}{4}\sum^{2000}_{i=1001}{100\times N_i}=\frac{400}{4}=100 \nonumber$
regardless of which population set results from the four trials. That is, because all of the population sets that have a significant chance to be observed have $N_i=1$ and $g\left(u_i\right)=100$ for exactly four values of $i$ in the range $1001\le i\le 2000$, all of the population sets that have a significant chance to be observed give rise to the same expected value.
Let us compute the arithmetic average, $\overline{g\left(u\right)}$, using the most probable population set for a sample of N trials. In this case, the number of observations of the outcome $u_i$ is $N_i=N\times P\left(u_i\right).$
$\overline{g\left(u\right)}=\frac{1}{N}\sum^{\textrm{Ω}}_{i=1}{g\left(u_i\right)\times N_i}=\frac{1}{N}\sum^{\textrm{Ω}}_{i=1}{g\left(u_i\right)\times N\times P\left(u_i\right)}=\sum^{\textrm{Ω}}_{i=1}{g\left(u_i\right)\times P\left(u_i\right)}=\left\langle g\left(u\right)\right\rangle \nonumber$
For a discrete distribution, $\left\langle g\left(u\right)\right\rangle$ is the value of $\overline{g\left(u\right)}$ that we calculate from the most probable population set, $\left\{N_1,N_2,\dots ,N_i,\dots ,N_{\textrm{Ω}}\right\}$, or its proxy $\left\{N\times P\left(u_1\right),N\times P\left(u_2\right),\dots ,N\times P\left(u_i\right),\dots ,N\times P\left(u_{\textrm{Ω}}\right)\right\}$.
We can extend the definition of the expected value, $\left\langle g\left(u\right)\right\rangle$, to cases in which the cumulative probability distribution function, $f\left(u\right)$, and the outcome-value function, $g\left(u\right)$, are continuous in the domain of the random variable, $u_{min}<u<u_{max}$. To do so, we divide this domain into a finite number, $\mathit{\Omega}$, of intervals, $\Delta u_i$. We let $u_i$ be the lower limit of $u$ in the interval $\Delta u_i$. Then the probability that a given trial yields a value of the random variable in the interval $\Delta u_i$ is $P\left({\Delta u}_i\right)=f\left(u_i+\Delta u_i\right)-f\left(u_i\right)$, and we can approximate the expected value of $g\left(u\right)$ for the continuous distribution by the finite sum
$\left\langle g\left(u\right)\right\rangle \ =\sum^{\textrm{Ω}}_{i=1}{g\left(u_i\right)\times P\left(\Delta u_i\right)}=\sum^{\textrm{Ω}}_{i=1}{g\left(u_i\right)}\times \left[f\left(u_i+\Delta u_i\right)-f\left(u_i\right)\right]=\sum^{\textrm{Ω}}_{i=1}{g\left(u_i\right)\times \left[\frac{f\left(u_i+\Delta u_i\right)-f\left(u_i\right)}{\Delta u_i}\right]}\times \Delta u_i \nonumber$
In the limit as $\mathit{\Omega}$ becomes arbitrarily large and all of the intervals $\Delta u_i$ become arbitrarily small, the expected value of $g\left(u\right)$ for a continuous distribution becomes
$\left\langle g\left(u\right)\right\rangle \ =\int^{\infty }_{-\infty }{g\left(u\right)\left[\frac{df\left(u\right)}{du}\right]du} \nonumber$
This integral is the value of $\left\langle g\left(u\right)\right\rangle$, where ${df\left(u\right)}/{du}$ is the probability density function for the distribution. If c is a constant, we have
$\left\langle cg\left(u\right)\right\rangle =c\left\langle g\left(u\right)\right\rangle \nonumber$
If $h\left(u\right)$ is a second function of the random variable, we have
$\left\langle g\left(u\right)+h\left(u\right)\right\rangle =\left\langle g\left(u\right)\right\rangle +\left\langle h\left(u\right)\right\rangle \nonumber$
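For a continuous distribution, this integral is easy to evaluate numerically. The sketch below (Python with `numpy`; the normal density and the parameters `MU` and `SIGMA` are illustrative assumptions) approximates $\left\langle g\left(u\right)\right\rangle$ by a simple Riemann sum and checks the two properties just stated:

```python
import numpy as np

MU, SIGMA = 3.0, 0.5

def density(x):
    """Probability density function df/du, here taken to be normal."""
    return np.exp(-(x - MU) ** 2 / (2.0 * SIGMA ** 2)) / (SIGMA * np.sqrt(2.0 * np.pi))

u = np.linspace(MU - 8.0 * SIGMA, MU + 8.0 * SIGMA, 20_001)
du = u[1] - u[0]

def expected(g):
    """Approximate <g(u)> = integral of g(u) (df/du) du by a simple Riemann sum."""
    return float(np.sum(g(u) * density(u)) * du)

print(expected(lambda x: x))                       # <u>, approximately MU = 3.0
print(expected(lambda x: (x - MU) ** 2))           # <(u - MU)^2>, approximately SIGMA^2 = 0.25
print(expected(lambda x: 2.0 * x), 2.0 * expected(lambda x: x))   # <c g(u)> = c <g(u)>
print(expected(lambda x: x + x ** 2),
      expected(lambda x: x) + expected(lambda x: x ** 2))         # <g(u) + h(u)> = <g(u)> + <h(u)>
```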
3.10: Statistics - the Mean and the Variance of a Distribution
There are two important statistics associated with any probability distribution, the mean of a distribution and the variance of a distribution. The mean is defined as the expected value of the random variable itself. The Greek letter $\mu$ is usually used to represent the mean. If $f\left(u\right)$ is the cumulative probability distribution, the mean is the expected value for $g\left(u\right)=u$. From our definition of expected value, the mean is
$\mu =\int^{\infty }_{-\infty }{u\left(\frac{df}{du}\right)du} \nonumber$
The variance is defined as the expected value of ${\left(u-\mu \right)}^2$. The variance measures how dispersed the data are. If the variance is large, the data are—on average—farther from the mean than they are if the variance is small. The standard deviation is the square root of the variance. The Greek letter $\sigma$ is usually used to denote the standard deviation. Then, $\sigma^2$ denotes the variance, and
$\sigma^2=\int^{\infty }_{-\infty }{{\left(u-\mu \right)}^2\left(\frac{df}{du}\right)du} \nonumber$
If we have a small number of points from a distribution, we can estimate $\mu$ and $\sigma$ by approximating these integrals as sums over the domain of the random variable. To do this, we need to estimate the probability associated with each interval for which we have a sample point. By the argument we make in Section 3.7, the best estimate of this probability is simply ${1}/{N}$, where $N$ is the number of sample points. We have therefore
$\mu =\int^{\infty }_{-\infty }{u\left(\frac{df}{du}\right)du}\approx \sum^N_{i=1}{u_i\left(\frac{1}{N}\right)}=\overline{u} \nonumber$
That is, the best estimate we can make of the mean from $N$ data points is $\overline{u}$, where $\overline{u}$ is the ordinary arithmetic average. Similarly, the best estimate we can make of the variance is
$\sigma^2 = \int_{- \infty}^{ \infty} (u - \mu )^2 \left( \frac{df}{du} \right) du \approx \sum_{i=1}^N (u_i - \mu )^2 \left( \frac{1}{N} \right) \nonumber$
Now a complication arises in that we usually do not know the value of $\mu$. The best we can do is to estimate its value as $\mu \approx \overline{u}$. It turns out that using this approximation in the equation we deduce for the variance gives an estimate of the variance that is too small. A more detailed argument (see Section 3.14) shows that, if we use $\overline{u}$ to approximate the mean, the best estimate of $\sigma^2$, usually denoted $s^2$, is
$estimated\ \sigma^2=s^2=\sum^N_{i=1}{{\left(u_i-\overline{u}\right)}^2\left(\frac{1}{N-1}\right)} \nonumber$
Dividing by $N-1$, rather than $N$, compensates exactly for the error introduced by using $\overline{u}$ rather than $\mu$.
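In practice these estimates are one line each. A minimal sketch (Python with `numpy`; the seven measurement values are made up for illustration) computes $\overline{u}$ and $s^2$ with the $N-1$ divisor:

```python
import numpy as np

data = np.array([9.7, 10.4, 10.1, 9.9, 10.6, 9.8, 10.3])   # hypothetical measurements u_i

u_bar = data.mean()                    # best estimate of the mean: the arithmetic average
s_squared = data.var(ddof=1)           # divides by N - 1, as discussed above
s_squared_by_hand = np.sum((data - u_bar) ** 2) / (len(data) - 1)

print(u_bar, s_squared, s_squared_by_hand)
```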
The mean is analogous to a center of mass. The variance is analogous to a moment of inertia. For this reason, the variance is also called the second moment about the mean. To show these analogies, let us imagine that we draw the probability density function on a uniformly thick steel plate and then cut along the curve and the $u$-axis (Figure $1$). Let $M$ be the mass of the cutout piece of plate; $M$ is the mass below the probability density curve. Let $dA$ and $dm$ be the increments of area and mass in the thin slice of the cutout that lies above a small increment, $du$, of $u$. Let $\rho$ be the density of the plate, expressed as mass per unit area. Since the plate is uniform, $\rho$ is constant. We have $dA=\left({df}/{du}\right)du$ and $dm=\rho dA$ so that
$dm=\rho \left(\frac{df}{du}\right)du \nonumber$
The mean of the distribution corresponds to a vertical line on this cutout at $u=\mu$. If the cutout is supported on a knife-edge along the line $u=\mu$, gravity induces no torque; the cutout is balanced. Since the torque is zero, we have
$0=\int^M_{m=0}{\left(u-\mu \right)dm}=\int^{\infty }_{-\infty }{\left(u-\mu \right)\rho \left(\frac{df}{du}\right)du} \nonumber$
Since $\mu$ is a constant property of the cut-out, it follows that
$\mu =\int^{\infty }_{-\infty }{u\left(\frac{df}{du}\right)}du \nonumber$
The cutout’s moment of inertia about the line $u=\mu$ is
\begin{aligned} I & =\int^M_{m=0}{{\left(u-\mu \right)}^2dm} \\ & =\int^{\infty }_{-\infty }{{\left(u-\mu \right)}^2\rho \left(\frac{df}{du}\right)du} \\ & =\rho \sigma^2 \end{aligned} \nonumber
The moment of inertia about the line $u=\mu$ is simply the mass per unit area, $\rho$, times the variance of the distribution. If we let $\rho =1$, we have $I=\sigma^2$.
We define the mean of $f\left(u\right)$ as the expected value of $u$. It is the value of $u$ we should “expect” to get the next time we sample the distribution. Alternatively, we can say that the mean is the best prediction we can make about the value of a future sample from the distribution. If we know $\mu$, the best prediction we can make is $u_{predicted}=\mu$. If we have only the estimated mean, $\overline{u}$, then $\overline{u}$ is the best prediction we can make. Choosing $u_{predicted}=\overline{u}$ makes the expected value of the squared difference, ${\left(u-u_{predicted}\right)}^2$, as small as possible.
These ideas relate to another interpretation of the mean. We saw that the variance is the second moment about the mean. The first moment about the mean is
\begin{aligned} 1^{st}\ moment & =\int^{\infty }_{-\infty }{\left(u-\mu \right)}\left(\frac{df}{du}\right)du \\ & =\int^{\infty }_{-\infty }{u\left(\frac{df}{du}\right)du}-\mu \int^{\infty }_{-\infty }{\left(\frac{df}{du}\right)du} \\ & =\mu -\mu \\ & =0 \end{aligned} \nonumber
Since the last two integrals are $\mu$ and 1, respectively, the first moment about the mean is zero. We could have defined the mean as the value, $\mu$, for which the first moment of $u$ about $\mu$ is zero.
The first moment about the mean is zero. The second moment about the mean is the variance. We can define third, fourth, and higher moments about the mean. Some of these higher moments have useful applications.
3.11: The Variance of the Average - The Central Limit Theorem
The central limit theorem establishes very important relationships between the statistics for two distributions that are related in a particular way. It enables us to understand some important features of physical systems.
The central limit theorem concerns the distribution of averages. If we have some original distribution and sample it three times, we can calculate the average of these three data points. Call this average $A_{3,1}$. We could repeat this activity and obtain a second average of three values, $A_{3,2}$. We can do this repeatedly, generating averages $A_{3,3}$, ..., $A_{3,n}$. Several things will be true about these averages:
• The set of all of the possible averages-of-three, $\{{\boldsymbol{A}}_{\boldsymbol{3},\boldsymbol{i}}\}$, is itself a distribution. This averages-of-three distribution is different from the original distribution. Each average-of-three is a value of the random variable associated with the averages-of-three distribution.
• Each of the ${\boldsymbol{A}}_{\boldsymbol{3},\boldsymbol{i}}$ is an estimate of the mean of the original distribution.
• The distribution of the ${\boldsymbol{A}}_{\boldsymbol{3},\boldsymbol{i}}$ will be less spread out than the original distribution.
There is nothing unique about averaging three values. We could sample the original distribution seven times and compute the average of these seven values, calling the result $A_{7,1}$. Repeating, we could generate averages $A_{7,2}$, ..., $A_{7,m}$. All of the things we say about the averages-of-three are also true of these averages-of-seven. However, we can now say something more: The distribution of the ${\mathrm{A}}_{\mathrm{7,i}}$ will be less spread out than the distribution of the ${\mathrm{A}}_{\mathrm{3,i}}$. The corresponding probability density functions are sketched in Figure 13.
The central limit theorem relates the mean and variance of the distribution of averages to the mean and variance of the original distribution:
If random samples of $\boldsymbol{N}$ values are taken from a distribution whose mean is $\boldsymbol{\mu }$ and whose variance is ${\boldsymbol{\sigma }}^{\boldsymbol{2}}$, the averages of these $\boldsymbol{N}$ values, ${\boldsymbol{A}}_{\boldsymbol{N}}$, are approximately normally distributed with a mean of $\boldsymbol{\mu }$ and a variance of ${\boldsymbol{\sigma }}^{\boldsymbol{2}}/\boldsymbol{N}$. The approximation to the normal distribution becomes better as $\boldsymbol{N}$ becomes larger.
It turns out that the number, $N$, of trials that is needed to get a good estimate of the variance is substantially larger than the number required to get a good estimate of the mean.
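A short simulation makes the theorem concrete. The sketch below (assuming Python with numpy, which the text does not use) draws averages-of-$N$ from a uniform parent distribution, whose mean is $1/2$ and whose variance is $1/12$; the mean of the averages stays near $\mu$ while their variance tracks $\sigma^2/N$.

```python
# Sketch: illustrate the central limit theorem by simulation.
# Parent distribution: uniform on [0, 1), so mu = 1/2 and sigma^2 = 1/12.
import numpy as np

rng = np.random.default_rng(0)
mu, var = 0.5, 1.0 / 12.0

for N in (3, 7, 50):
    averages = rng.random((100_000, N)).mean(axis=1)   # 100000 averages-of-N
    print(f"N = {N:3d}  mean = {averages.mean():.4f}  "
          f"variance = {averages.var():.5f}  sigma^2/N = {var/N:.5f}")
```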
3.12: The Normal Distribution
The normal distribution is very important. The central limit theorem says that if we average enough values from any distribution, the distribution of the averages we calculate will approximate the normal distribution. The probability density function for the normal distribution is
$\frac{df}{du} =\frac{1}{\sigma \sqrt{2\pi }}\mathrm{exp}\left[\frac{{-\left(u-\mu \right)}^2}{2{\sigma }^2}\right] \nonumber$
The integral of the normal distribution from $u=-\infty$ to $u=\infty$ is unity. However, the definite integral between arbitrary limits cannot be expressed as an analytical function. This turns out to be true for some other important distributions also; this is one reason for working with probability density functions rather than the corresponding cumulative probability functions. Of course, the definite integral can be calculated to any desired accuracy by numerical methods, and readily available tables give values for the definite integral from $-\infty$ up to any chosen value of $u$. (We mention normal curve of error tables in Section 3.8, where we introduce a method for testing whether a given set of data conforms to the normal distribution equation.)
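As an illustration of the numerical route, the sketch below (assuming Python with scipy; the values $\mu =0$, $\sigma =1$, and $u=1.5$ are arbitrary) evaluates the area under the normal density from $-\infty$ to $u$ both by direct quadrature and from the error function, to which the cumulative probability is related by $f\left(u\right)=\tfrac{1}{2}\left[1+\mathrm{erf}\left(\left(u-\mu \right)/\sigma \sqrt{2}\right)\right]$.

```python
# Sketch: numerical evaluation of the cumulative normal probability,
# f(u) = integral of the normal pdf from -infinity to u.
# mu = 0, sigma = 1, u = 1.5 are arbitrary example values.
import math
from scipy.integrate import quad

mu, sigma, u = 0.0, 1.0, 1.5

def normal_pdf(x):
    return math.exp(-(x - mu)**2 / (2.0*sigma**2)) / (sigma*math.sqrt(2.0*math.pi))

by_quadrature, _ = quad(normal_pdf, -math.inf, u)
by_erf = 0.5 * (1.0 + math.erf((u - mu) / (sigma*math.sqrt(2.0))))

print(by_quadrature, by_erf)   # both ~0.9332
```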
3.13: The Expected Value of a Function of Several Variables and the Central Limit Theorem
We can extend the idea of an expected value to a function of multiple random variables. Let U and V be distributions whose random variables are $u$ and $v$, respectively. Let the probability density functions for these distributions be ${df_u\left(u\right)}/{du}$ and ${df_v\left(v\right)}/{dv}$. In general, these probability density functions are different functions; that is, $U$ and $V$ are different distributions. Let $g\left(u,v\right)$ be some function of these random variables. The probability that an observation made on $U$ produces a value of $u$ in the range $u^*<u<u^*+du$ is
$P\left(u^*<u<u^*+du\right)\ =\frac{df_u\left(u^*\right)}{du}du\nonumber$
and the probability that an observation made on $V$ produces a value of $v$ in the range $v^*<v<v^*+dv$ is
$P\left(v^*<v<v^*+dv\right)=\frac{df_v\left(v^*\right)}{dv}dv\nonumber$
The probability that making one observation on each of these distributions produces a value of $u$ that lies in the range $u^*<u<u^*+du$ and a value of $v$ that lies in the range $v^*<v<v^*+dv$ is
$\frac{df_u\left(u^*\right)}{du}\frac{df_v\left(v^*\right)}{dv}du\,dv\nonumber$
In a straightforward generalization, we define the expected value of $g\left(u,v\right)$, $\left\langle g\left(u,v\right)\ \right\rangle$, as $\left\langle g\left(u,v\right)\ \right\rangle =\int^{\infty }_{v=-\infty }{\int^{\infty }_{u=-\infty }{g\left(u,v\right)}}\frac{df_u\left(u\right)}{du}\frac{df_v\left(v\right)}{dv}dudv\nonumber$
If $g\left(u,v\right)$ is a sum of functions of independent variables, $g\left(u,v\right)=h\left(u\right)+k\left(v\right)$, we have
$\left\langle g\left(u,v\right)\right\rangle =\int^{\infty }_{-\infty }{\int^{\infty }_{-\infty }{\left[h\left(u\right)+k\left(v\right)\right]\frac{df_u\left(u\right)}{du}\frac{df_v\left(v\right)}{dv}}dudv}=\int^{\infty }_{-\infty }{h\left(u\right)\frac{df_u\left(u\right)}{du}}du+\int^{\infty }_{-\infty }{k\left(v\right)\frac{df_v\left(v\right)}{dv}}dv=\ \left\langle h\left(u\right)\ \right\rangle +\left\langle k\left(v\right)\ \right\rangle\nonumber$
If $g\left(u,v\right)$ is a product of independent functions, $g\left(u,v\right)=h\left(u\right)k\left(v\right)$, we have
$\left\langle g\left(u,v\right)\right\rangle =\int^{\infty }_{-\infty }{\int^{\infty }_{-\infty }{h\left(u\right)k\left(v\right)\frac{df_u\left(u\right)}{du}\frac{df_v\left(v\right)}{dv}}dudv}=\int^{\infty }_{-\infty }{h\left(u\right)\frac{df_u\left(u\right)}{du}}du\times \int^{\infty }_{-\infty }{k\left(v\right)\frac{df_v\left(v\right)}{dv}}dv=\left\langle h\left(u\right)\right\rangle \left\langle k\left(v\right)\right\rangle\nonumber$
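Both results are easy to confirm by simulation. The sketch below (assuming Python with numpy; the distributions and the functions $h$ and $k$ are arbitrary illustrations, not part of the text) estimates the expected values of a sum and of a product of functions of two independent random variables.

```python
# Sketch: Monte Carlo check that, for independent u and v,
# <h(u) + k(v)> = <h(u)> + <k(v)>  and  <h(u) k(v)> = <h(u)> <k(v)>.
import numpy as np

rng = np.random.default_rng(1)
u = rng.normal(2.0, 1.0, size=1_000_000)    # U: normal, mean 2, sigma 1
v = rng.uniform(0.0, 3.0, size=1_000_000)   # V: uniform on [0, 3]

h = u**2          # h(u) = u^2
k = np.sin(v)     # k(v) = sin(v)

print((h + k).mean(), h.mean() + k.mean())   # nearly equal
print((h * k).mean(), h.mean() * k.mean())   # nearly equal
```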
We can extend these conclusions to functions of the random variables of any number of distributions. If $u_i$ is the random variable of distribution $U_i$ whose probability density function is ${df_i\left(u_i\right)}/{du_i}$, the expected value of
$g\left(u_1,\dots ,u_i,\dots ,u_N\right)=h_1\left(u_1\right)+\dots +h_i\left(u_i\right)+\dots +h_N\left(u_N\right)\nonumber$
becomes
$\left\langle g\left(u_1,\dots ,u_i,\dots ,u_N\right)\right\rangle =\sum^N_{i=1}{\left\langle h_i\left(u_i\right)\right\rangle }\nonumber$
and the expected value of
$g\left(u_1,\dots ,u_i,\dots ,u_N\right)=h_1\left(u_1\right)\dots h_i\left(u_i\right)\dots h_N\left(u_N\right)\nonumber$
becomes $\left\langle g\left(u_1,\dots ,u_i,\dots ,u_N\right)\right\rangle =\ \ \prod^N_{i=1}{\left\langle h_i\left(u_i\right)\right\rangle }\nonumber$
We are particularly interested in expected values for repeated trials made on the same distribution. We consider distributions for which the outcome of one trial is independent of the outcome of any other trial. The probability density function is the same for every trial, so we have $f\left(u\right)=f_1\left(u_1\right)=\dots =f_i\left(u_i\right)=\dots =f_N\left(u_N\right)$. Let the values obtained for the random variable in a series of trials on the same distribution be $u_1$, $\dots$, $u_i$, $\dots$, $u_N$. For each trial, we have
$\left\langle h_i\left(u_i\right)\right\rangle \ =\ \ \int^{\infty }_{-\infty }{h_i\left(u_i\right)\frac{df_i\left(u_i\right)}{du_i}}du_i\nonumber$
If we consider the special case of repeated trials in which the functions $h_i\left(u_i\right)$ are all the same function, so that $h\left(u\right)=h_1\left(u_1\right)=\dots =h_i\left(u_i\right)=\dots =h_N\left(u_N\right)$, the expected value of
$g\left(u_1,\dots ,u_i,\dots ,u_N\right)=h_1\left(u_1\right)+\dots +h_i\left(u_i\right)+\dots +h_N\left(u_N\right)\nonumber$
becomes
$\left\langle g\left(u_1,\dots ,u_i,\dots ,u_N\right)\right\rangle =\ \sum^N_{i=1}{\left\langle h_i\left(u_i\right)\right\rangle \ }=N\left\langle h\left(u\right)\right\rangle\nonumber$
and the expected value of
$g\left(u_1,\dots ,u_i,\dots ,u_N\right)=h_1\left(u_1\right)\dots h_i\left(u_i\right)\dots h_N\left(u_N\right)\nonumber$
becomes
$\left\langle g\left(u_1,\dots ,u_i,\dots ,u_N\right)\right\rangle =\ \prod^N_{i=1}{\left\langle h_i\left(u_i\right)\right\rangle }={\left\langle h\left(u\right)\right\rangle }^N\nonumber$
Now let us consider $N$ independent trials on the same distribution and let $h_i\left(u_i\right)=h\left(u_i\right)=u_i$. Then, the expected value of
$g\left(u_1,\dots ,u_i,\dots ,u_N\right)= h_1\left(u_1\right)+\dots +h_i\left(u_i\right)+\dots +h_N\left(u_N\right)\nonumber$
becomes $\ \left\langle u_1+\dots +u_i+\dots +u_N\right\rangle =\sum^N_{i=1}{\left\langle u_i\right\rangle }=N\left\langle u\right\rangle =N\mu\nonumber$
By definition, the average of $N$ repeated trials is
${\overline{u}}_N={\left(u_1+\dots +u_i+\dots +u_N\right)}/{N}$, so that the expected value of the average of $N$ repeated trials is
$\left\langle {\overline{u}}_N\right\rangle =\frac{\left\langle u_1+\dots +u_i+\dots +u_N\right\rangle }{N}=\mu\nonumber$
This proves one element of the central limit theorem: The mean of a distribution of averages-of- $N$ values of a random variable drawn from a parent distribution is equal to the mean of the parent distribution.
The variance of these averages-of- $N$ is
\begin{aligned} \sigma^2_N & =\left\langle {\left({\overline{u}}_N-\mu \right)}^2\right\rangle =\left\langle {\left[\left(\frac{1}{N}\sum^N_{i=1}{u_i}\right)-\mu \right]}^2\right\rangle =\frac{1}{N^2}\left\langle {\left[\sum^N_{i=1}{\left(u_i-\mu \right)}\right]}^2\right\rangle \ ~ & =\frac{1}{N^2}\left\langle \sum^N_{i=1}{{\left(u_i-\mu \right)}^2}\right\rangle +\frac{2}{N^2}\left\langle \sum^{N-1}_{i=1}{\sum^N_{j=i+1}{\left(u_i-\mu \right)\left(u_j-\mu \right)}}\right\rangle \ ~ & =\frac{1}{N^2}\sum^N_{i=1}{\left\langle {\left(u_i-\mu \right)}^2\right\rangle }+\frac{2}{N^2}\sum^{N-1}_{i=1}{\sum^N_{j=i+1}{\left\langle u_i-\mu \right\rangle \left\langle u_j-\mu \right\rangle }} \end{aligned} \nonumber
The last step uses the independence of separate trials to factor each cross-term expectation, $\left\langle \left(u_i-\mu \right)\left(u_j-\mu \right)\right\rangle =\left\langle u_i-\mu \right\rangle \left\langle u_j-\mu \right\rangle$. Every such factor vanishes, because for any single trial
$\left\langle u_i-\mu \right\rangle =\left\langle u_i\right\rangle -\mu =0 \nonumber$
By definition, $\sigma^2=\ \left\langle \ {\left(u_i-\mu \right)}^2\ \right\rangle$, so that we have
$\sigma^2_N=\frac{N\sigma^2}{N^2}=\frac{\sigma^2}{N}\nonumber$
This proves a second element of the central limit theorem: The variance of an average of $N$ values of a random variable drawn from a parent distribution is equal to the variance of the parent distribution divided by $N$.
3.14: Where Does the N - 1 Come from
If we know $\mu$ and we have a set of $N$ data points, the best estimate we can make of the variance is
$\sigma^2=\int^{u_{max}}_{u_{min}}{\left(u-\mu \right)}^2\left(\frac{df}{du}\right)du \approx \sum^N_{i=1}{\left(u_i-\mu \right)}^2\left(\frac{1}{N}\right) \nonumber$
We have said that if we must use $\overline{u}$ to approximate the mean, the best estimate of $\sigma^2$, usually denoted $s^2$, is
$estimated\ \sigma^2=s^2 =\sum^N_{i=1}{\left(u_i-\overline{u}\right)}^2\left(\frac{1}{N-1}\right) \nonumber$
The use of $N-1$, rather than $N$, in the denominator is distinctly non-intuitive; so much so that this equation often causes great irritation. Let us see how this equation comes about.
Suppose that we have a distribution whose mean is $\mu$ and variance is $\sigma^2$. Suppose that we draw $N$ values of the random variable, $u$, from the distribution. We want to think about the expected value of ${\left(u-\mu \right)}^2$. Let us write $\left(u-\mu \right)$ as
$\left(u-\mu \right)=\left(u-\overline{u}\right)+\left(\overline{u}-\mu \right). \nonumber$
Squaring this gives
${\left(u-\mu \right)}^2={\left(u-\overline{u}\right)}^2+{\left(\overline{u}-\mu \right)}^2+2\left(u-\overline{u}\right)\left(\overline{u}-\mu \right). \nonumber$
From our definition of expected value, we can write:
$\begin{array}{rl} \text{expected value of } {\left(u-\mu \right)}^2\ = & \text{expected value of } {\left(u-\overline{u}\right)}^2 \ ~ & +\ \text{expected value of } {\left(\overline{u}-\mu \right)}^2 \ ~ & +\ \text{expected value of } 2\left(u-\overline{u}\right)\left(\overline{u}-\mu \right) \end{array} \nonumber$
From our discussion above, we can recognize each of these expected values:
• The expected value of ${\left(u-\mu \right)}^2$ is the variance of the original distribution, which is $\sigma^2$. Since this is a definition, it is exact.
• The best possible estimate of the expected value of ${\left(u-\overline{u}\right)}^2$ is $\sum^N_{i=1}{{\left(u_i-\overline{u}\right)}^2\left(\frac{1}{N}\right)} \nonumber$
• The expected value of ${\left(\overline{u}-\mu \right)}^2$ is the expected value of the variance of averages of $N$ random variables drawn from the original distribution. That is, the expected value of ${\left(\overline{u}-\mu \right)}^2$ is what we would get if we repeatedly drew $N$ values from the original distribution, computed the average of each set of $N$ values, and then found the variance of this new distribution of average values. By the central limit theorem, this variance is ${\sigma^2}/{N}$. Thus, the expected value of ${\left(\overline{u}-\mu \right)}^2$ is exactly ${\sigma^2}/{N}$.
• Since $\left(\overline{u}-\mu \right)$ is constant, the expected value of $2\left(u-\overline{u}\right)\left(\overline{u}-\mu \right)$ is $2\left(\overline{u}-\mu \right)\left[\frac{1}{N}\sum^N_{i=1}{\left(u_i-\overline{u}\right)}\right] \nonumber$ which is equal to zero, because $\sum^N_{i=1}{\left(u_i-\overline{u}\right)} = \left(\sum^N_{i=1}{u_i}\right)-N\overline{u}=0 \nonumber$ by the definition of $\overline{u}$.
Substituting, our expression for the expected value of ${\left(u-\mu \right)}^2$ becomes:
$\sigma^2\approx \sum^N_{i=1} \left(u_i-\overline{u}\right)^2\left(\frac{1}{N}\right)+\frac{\sigma^2}{N} \nonumber$
so that
$\sigma^2\left(1-\frac{1}{N}\right)=\sigma^2\left(\frac{N-1}{N}\right)\approx \sum^N_{i=1} \frac{\left(u_i-\overline{u}\right)^2}{N} \nonumber$
and
$\sigma^2 \approx \sum^N_{i=1} \frac{\left(u_i-\overline{u}\right)^2}{N-1} \nonumber$
That is, as originally stated, when we must use $\overline{u}$ rather than the true mean, $\mu$, in the sum of squared differences, the best possible estimate of $\sigma^2$, usually denoted $s^2$, is obtained by dividing by $N-1$, rather than by $N$.
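A simulation shows the effect directly. The sketch below (assuming Python with numpy; the normal parent distribution with $\sigma^2=4$ and the sample size $N=5$ are arbitrary choices) averages the two estimators over many samples: dividing by $N$ underestimates $\sigma^2$ by the factor $\left(N-1\right)/N$, while dividing by $N-1$ recovers it.

```python
# Sketch: compare dividing by N and by N-1 when the sample mean replaces
# the true mean.  Parent distribution: normal with sigma^2 = 4.
import numpy as np

rng = np.random.default_rng(2)
N, trials = 5, 200_000

samples = rng.normal(0.0, 2.0, size=(trials, N))
dev2 = (samples - samples.mean(axis=1, keepdims=True))**2   # (u_i - u_bar)^2

print(dev2.sum(axis=1).mean() / N)        # ~3.2 = sigma^2 (N-1)/N  (biased)
print(dev2.sum(axis=1).mean() / (N - 1))  # ~4.0 = sigma^2          (unbiased)
```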
Problems
1. At each toss of a die, the die lands with one face on top. This face is distinguished from the other five faces by the number of dots that appear on it. Tossing a die produces data. What is the distribution? What is the random variable of this distribution? What outcomes are possible for this distribution? How would we collect a sample of ten values of the random variable of this distribution?
2. Suppose that we toss a die three times and average the results observed. How would you describe the distribution from which this average is derived? What is the random variable of this distribution? What outcomes are possible for this distribution? What would we do to collect a sample of ten values of the random variable of this distribution?
3. Suppose that we toss three dice simultaneously and average the results observed. How would you describe the distribution from which this average is derived? What is the random variable of this distribution? What outcomes are possible for this distribution? What would we do to collect a sample of ten values of the random variable of this distribution? Suppose that some third party collects a set, call it A, of ten values from this distribution and a second set, call it B, of values from the distribution in problem 2. If we are given the data in each set but are not told which label goes with which set of data, can we analyze the data to determine which set is A and which is B?
4. The manufacturing process for an electronic component produces 3 bad components in every 1000 components produced. The bad components appear randomly. What is the probability that
(a) a randomly selected component is bad?
(b) a randomly selected component is good?
(c) 2 bad components are produced in succession?
(d) 100 good components are produced in succession?
5. A product incorporates two of the components in the previous problem. What is the probability that
(a) both components are good?
(b) both components are bad?
(c) one component is good and one component is bad?
(d) at least one component is good?
6. A card is selected at random from a well-shuffled deck. A second card is then selected at random from among the remaining 51 cards. What is the probability that
(a) the first card is a heart?
(b) the second card is a heart?
(c) neither card is a heart?
(d) both cards are hearts?
(e) at least one card is a heart?
7. A graduating class has 70 men and 77 women. How many combinations of homecoming king and queen are possible?
8. After the queen is selected from the graduating class of problem 7, one woman is selected to “first attendant” to the homecoming queen. Thereafter, another woman is selected to be “second attendant.” After the queen is selected, how many ways can two attendants be selected?
9. A red die and a green die are rolled. What is the probability that
(a) both come up 3?
(b) both come up the same?
(c) they come up different?
(d) the red die comes up less than the green die?
(e) the red die comes up exactly two less than the green die?
(f) together they show 5?
10. A television game show offers a contestant a new car as the prize for correctly guessing which of three doors the car is behind. After the contestant selects a door, the game-show host opens an incorrect door. The host then gives the contestant the option of switching from the door he originally chose to the other door that remains unopened. Should the contestant change his selection?
[Hint: Consider the final set of outcomes to result from a sequence of three choices. First, the game-show producer selects a door and places the car behind this door. Diagram the possibilities. What is the probability of each? Second, the contestant selects a door. There are now nine possible outcomes. Diagram them. What is the probability of each? Third, the host opens a door. There are now twelve possible outcomes. Diagram them. What is the probability of each? Note that these twelve possibilities are not all equally probable.]
11. For a particular distribution, possible values of the random variable, $x$, range from zero to one. The probability density function for this distribution is ${df}/{dx}=1$.
(a) Show that the probability of finding $x$ in the range $0\le x\le 1$ is one.
(b) What is the mean of this distribution?
(c) What is the variance of this distribution? The standard deviation?
(d) A quantity, $g$, is a function of x: $g\left(x\right)=x^2$. What is the expected value of $g$?
12. For a particular distribution, possible values of the random variable,$\ x$, range from one to three. The probability density function for this distribution is ${df}/{dx}=cx$, where c is a constant.
(a) What is the value of the constant, c?
(b) What is the mean of this distribution?
(c) What is the variance of this distribution? The standard deviation?
(d) If $g\left(x\right)=x^2$, what is the expected value of $g$?
13. For a particular distribution, possible values of the random variable, $x$, range from two to four. The probability density function for this distribution is ${df}/{dx}=cx^3$, where c is a constant.
(a) What is the value of the constant, c?
(b) What is the mean of this distribution?
(c) What is the variance of this distribution? The standard deviation?
(d) If $g\left(x\right)=x^2$, what is the expected value of $g$?
14. For a particular distribution, possible values of the random variable, $x$, range from zero to four. For $0\le x\le 1$, the probability density function is ${df}/{dx}={x}/{2}$. For $1<x\le 4$, the probability density function is ${df}/{dx}={\left(4-x\right)}/{6}$.
(a) Show that the area under this probability distribution function is one.
(b) What is the mean of this distribution?
(c) What is the variance of this distribution? The standard deviation?
(d) If $g\left(x\right)=x^2$, what is the expected value of $g$?
15. The following values, $x_i$, of the random variable, $x$, are drawn from a distribution: $9.63$, $9.00$, $11.87$, $10.13$, $10.83$, $9.50$, $10.40$, $9.83$, and $10.09$.
(a) Arrange these values in increasing order and calculate the “rank probability,” ${i}/{\left(N+1\right)}$, associated with each of the $x_i$ values.
(b) Plot the rank probability (on the ordinate) versus the random-variable value (on the abscissa). Sketch a smooth curve through the points on this plot.
(c) What function is approximated by the curve sketched in part b?
(d) Plot the data points along a horizontal axis. Then create a bar graph (histogram) by erecting bars of equal area between each pair of data points.
(e) What function is approximated by the tops of the bars erected in part d?
16. For a particular distribution, possible values of the random variable range from zero to four. The following values of the random variable are drawn from this distribution: $0.1$, $1.0$, $1.1$, $1.5$, $2.1$. Sketch an approximate probability density function for this distribution.
17. The possible values for the random variable of a particular distribution lie in the range $0\le x\le 10$. In six trials, the following values are obtained: $1.0$, $1.9$, $2.3$, $2.7$, $3.0$, $3.8$.
(a) Sketch an approximate probability density function for this distribution.
(b) What is the best estimate we can make of the mean of this distribution?
(c) What is the best estimate we can make of the variance of this distribution?
(d) What is the best estimate we can make of the variance of averages-of-six drawn from this distribution?
(e) What is the best estimate we can make of the variance of averages-of-sixteen drawn from this distribution?
18. A computer program generates numbers from a normal distribution with a mean of zero and a standard deviation of $10$. Also, for any integer $N$, the program will generate and average $N$ values from this distribution. It will repeat this operation until it has produced 100 such averages. It will then compute the estimated standard deviation of these $100$ average values. The table below gives various values of $N$ and the estimated standard deviation, $s$, that was found for $100$ averages of that $N$. Plot these data in a way that tests the validity of the central limit theorem.
$N$ $s$
4 5.182
9 2.794
16 2.206
25 2.152
36 1.689
49 1.092
64 1.001
81 1.004
100 1.074
144 0.601
196 0.546
256 0.690
324 0.545
19. If $f\left(u\right)$ is the cumulative probability distribution function for a distribution, what is the expected value of $f\left(u\right)$? What interpretation can you place on this result?
20. Five replications of a volumetric analysis yield concentration estimates of $0.3000$, $0.3008$, $0.3012$, $0.3014$, and $0.3020$ mol ${\mathrm{L}}^{-1}$. Calculate the rank probability of each of these results. Sketch, over the concentration range from $0.3000$ to $0.3020$ mol ${\mathrm{L}}^{-1}$, an approximation of the cumulative probability distribution function for the distribution that yielded these data.
21. The Louisville Mudhens play on a square baseball field that measures $100$ meters on a side. Casey’s hits always fall on the field. (He never hits a foul ball or hits one out of the park.) The probability density function for the distance that a Casey hit goes parallel to the first-base line is ${df_x\left(x\right)}/{dx}=\left(2\times {10}^{-4}\right)x$. (That is, we take the first-base line as our $x$-axis; the third-base line as our $y$-axis; and home plate is at the origin. ${df_x\left(x\right)}/{dx}$ is independent of the distance that the hit goes parallel to the third-base line, our $y$-axis.) The probability density function for the distance that a Casey hit goes parallel to the third-base line is ${df_y\left(y\right)}/{dy}=\left(3\times {10}^{-6}\right)y^2$. (${df_y\left(y\right)}/{dy}$ is independent of the distance that the hit goes parallel to the first-base line, our $x$-axis.)
(a) What is the probability that a Casey hit lands at a point $\left(x,y\right)$ such that $x^*<x<x^*+dx$ and $y^*<y<y^*+dy$?
(b) What is the two-dimensional probability density function that describes Casey’s hits, expressed in this Cartesian coordinate system?
(c) Recall that polar coordinates transform to Cartesian coordinates according to $x=r{\mathrm{cos} \theta \ }$ and $y=r{\mathrm{sin} \theta \ }$. What is the probability density function for Casey’s hits expressed using polar coordinates?
(d) Recall that the differential element of area in polar coordinates is $rdrd\theta$. Find the probability that a Casey hit lands within the pie-shaped area bounded by $0<r<50$ m and $0<\theta <{\pi }/{4}$.
22. In Chapter 2, we derived the Barometric Formula, $\eta \left(h\right)=\eta \left(0\right)\mathrm{exp}\left({-mgh}/{kT}\right)$ for molecules of mass $m$ in an isothermal atmosphere at a height $h$ above the surface of the earth. $\eta \left(h\right)$ is the number of molecules per unit volume at height $h$; $\eta \left(0\right)$ is the number of molecules per unit volume at the earth’s surface, where $h=0$. Consider a vertical cylinder of unit cross-sectional area, extending from the earth’s surface to an infinite height. Let $f\left(h\right)$ be the fraction of the molecules in this cylinder that is at a height less than $h$. Prove that the probability density function is ${df}/{dh}=\left({mg}/{kT}\right)\mathrm{exp}\left({-mgh}/{kT}\right)$.
23. A particular distribution has six outcomes. These outcomes and their probabilities are $a$ (0.1); $b$ (0.2); $c$ (0.3); $d$ (0.2); $e$ (0.1); and $f$ (0.1).
(a) Partitioning I assigns these outcomes to a set of three events: Event $A$ $=\ a$ or $b$ or $c$; Event $B$ = $d$; and Event $C\ =\ e$ or $f.$ What are the probabilities of Events $A$, $B$, and $C$?
(b) Partitioning II assigns the outcomes to two events: Event $D\ =\ a$ or $b$ or $c$; and Event $E\ =\ d$ or $e$ or $f$. What are the probabilities of Events $D$ and $E$? Express the probabilities of Events $D$ and $E$ in terms of the probabilities of Events $A$, $B$, and $C$.
(c) Partitioning III assigns the outcomes to three events: Event $F\ =\ a$ or $b$; Event $G\ =\ c$ or $d$; and Event $H\ =\ e$ or $f$. What are the probabilities of Events $F$, $G$, and $H$? Can the probabilities of Events $F$, $G$, and $H$ be expressed in terms of the probabilities of Events $A$, $B$, and $C$?
24. Consider a partitioning of outcomes into events that is not exhaustive; that is, not every outcome is assigned to an event. What problem arises when we want to describe the probabilities of these events?
25. Consider a partitioning of outcomes into events that is not mutually exclusive; that is, one (or more) outcome is assigned to two (or more) events. What problem arises when we want to describe the probabilities of these events?
26. For integer values of $p$ $\left(p\neq -1\right)$, we find
$\int{x^p{ \ln \left(x\right)\ }dx}=\left(\frac{x^{p+1}}{p+1}\right){ \ln \left(x\right)\ }-\frac{x^{p+1}}{{\left(p+1\right)}^2} \nonumber$
(a) Sketch the function, $h\left(x\right)={df\left(x\right)}/{dx}=-4x{ \ln \left(x\right)\ }$, over the interval $0\le x\le 1$.
(b) Show that we can consider $h\left(x\right)={df\left(x\right)}/{dx}=-4x{ \ln \left(x\right)\ }$ to be a probability density function over this interval; that is, show $f\left(1\right)-f\left(0\right)=1$. Let us name the corresponding distribution “Sam.”
(c) What is the mean, $\mu$, of Sam?
(d) What is the variance, ${\sigma }^2$, of Sam?
(e) What is the standard deviation, $\sigma$, of Sam?
(f) What is the variance of averages-of-four samples taken from Sam?
(g) The following four values are obtained in random sampling of an unknown distribution: 0.050; 0.010; 0.020; and 0.040. Estimate the mean, $\mu$, variance (${\sigma }^2$or $s^2$), and the standard deviation ($\sigma$ or s) for this unknown distribution.
(h) What is the probability that a single sample drawn from Sam will lie in the interval $0\le x\le 0.10$ ? Note: The upper limit of this interval is 0.10, not 1.0 as in part (a).
(i) Is it likely that the unknown distribution sampled in part g is in fact the distribution we named Sam? Why, or why not?
27. We define the mean, $\mu$, as the expected value of the random variable: $\mu =\int^{\infty }_{-\infty }{u\left({df}/{du}\right)}du$. Define $\overline{u}=\sum^N_{i=1}{\left({u_i}/{N}\right)}$, where the $u_i$ are N independent values of the random variable. Show that the expected value of $\overline{u}$ is $\mu$.
28. A box contains a large number of plastic balls. An integer, $W,$ in the range $1\le W\le 20$ is printed on each ball. There are many balls printed with each integer. The integer specifies the mass of the ball in grams. Six random samples of three balls each are drawn from the box. The balls are replaced and the box is shaken between drawings. The numbers on the balls in drawings I through VI are:
I: 3, 4, 9
II: 1, 6, 17
III: 2, 5, 8
IV: 2, 6, 7
V: 3, 5, 6
VI: 2, 3, 10
(a) What are the population sets represented by the samples I through VI?
(b) Sketch the probability density function as estimated from sample I.
(c) Sketch the probability density function as estimated from sample II
(d) Using the data from samples I through VI, estimate the probability of drawing a ball of each mass in a single trial.
(e) Sketch the probability density function as estimated from the probability values in part (d).
(f) From the data in sample I, estimate the average mass of a ball in the box.
(g) From the data in sample II, estimate the average mass of a ball in the box.
(h) From the probability values calculated in part (d), estimate the average mass of a ball in the box.
4.1: Distribution Functions for Gas-velocity Components
In Chapter 2, we assume that all of the molecules in a gas move with the same speed and use a simplified argument to conclude that this speed depends only on temperature. We now recognize that the individual molecules in a gas sample have a wide range of speeds; the velocities of gas molecules must be described by a distribution function. It is true, however, that the average speed depends only on temperature.
James Clerk Maxwell was the first to derive the distribution function for gas velocities. He did so in about 1860. We follow Maxwell’s argument. For a molecule moving in three dimensions, there are three velocity components. Maxwell’s argument uses only one assumption: the speed of a gas molecule is independent of the direction in which it is moving. Equivalently, we can say that the components of the velocity of a gas molecule are independent of one another; knowing the value of one component of a molecule’s velocity does not enable us to infer anything about the values of the other two components. When we use Cartesian coordinates, Maxwell’s assumption also means that the same mathematical model must describe the distribution of each of the velocity components.
Since the velocity of a gas molecule has three components, we must treat the velocity distribution as a function of three random variables. To understand how this can be done, let us consider how we might find probability distribution functions for velocity components. We need to consider both spherical and Cartesian coordinate systems.
Let us suppose that we are able to measure the Cartesian-coordinate components $v_x$, $v_y$, and $v_z$ of the velocities of a large number of randomly selected gas molecules in a particular constant-temperature sample. Then we can transform each set of Cartesian components to spherical-coordinate velocity components $v$, $\theta$, and $\varphi$. We imagine accumulating the results of these measurements in a table like Table 1. As a practical matter, of course, we cannot make the measurements to complete such a table. However, there is no doubt that, at every instant, every gas molecule can be characterized by a set of such velocity components; the values exist, even if we cannot measure them. We imagine that we have such data only as a way to clarify the properties of the distribution functions that we need.
Table 1. Molecular Velocity Components
Molecule Number ${\boldsymbol{v}}_{\boldsymbol{x}}$ ${\boldsymbol{v}}_{\boldsymbol{y}}$ ${\boldsymbol{v}}_{\boldsymbol{z}}$ $\boldsymbol{v}$ $\boldsymbol{\theta }$ $\boldsymbol{\varphi }$
1 $v_x\left(1\right)$ $v_y\left(1\right)$ $v_z\left(1\right)$ $v\left(1\right)$ $\theta \left(1\right)$ $\varphi \left(1\right)$
2 $v_x\left(2\right)$ $v_y\left(2\right)$ $v_z\left(2\right)$ $v\left(2\right)$ $\theta \left(2\right)$ $\varphi \left(2\right)$
3 $v_x\left(3\right)$ $v_y\left(3\right)$ $v_z\left(3\right)$ $v\left(3\right)$ $\theta \left(3\right)$ $\varphi \left(3\right)$
4 $v_x\left(4\right)$ $v_y\left(4\right)$ $v_z\left(4\right)$ $v\left(4\right)$ $\theta \left(4\right)$ $\varphi \left(4\right)$
$N$ $v_x\left(N\right)$ $v_y\left(N\right)$ $v_z\left(N\right)$ $v\left(N\right)$ $\theta \left(N\right)$ $\varphi \left(N\right)$
These data have several important features. The scalar velocity, $v$, ranges from 0 to $+\infty$; $v_x$, $v_y$, and $v_z$ range from $-\infty$ to $+\infty$. In §2, we see that $\theta$ varies from 0 to $\pi$; and $\varphi$ ranges from 0 to $2\pi$. Each column represents data sampled from the distribution of the corresponding random variable. In Chapter 3, we find that we can use such data to find mathematical models for such distributions. Here, we can find mathematical models for the cumulative distribution functions $f_x\left(v_x\right)$, $f_y\left(v_y\right)$, and $f_z\left(v_z\right)$. We can approximate the graph of $f_x\left(v_x\right)$ by plotting the rank probability of $v_x$ versus $v_x$. We expect this plot to be sigmoid; at any $v_x$, the slope of this plot is the probability-density function, ${df_x\left(v_x\right)}/{dv_x}$. The probability density function for $v_x$ depends only on $v_x$, because the value measured for $v_x$ is independent of the values measured for $v_y$ and $v_z$. However, by Maxwell’s assumption, the functions describing the distribution of $v_y$ and $v_z$ are the same as those describing the distribution of $v_x$. While redundant, it is convenient to introduce additional symbols to represent these probability density functions. We define ${\rho }_x\left(v_x\right)={df_x\left(v_x\right)}/{dv_x}$, ${\rho }_y\left(v_y\right)={df_y\left(v_y\right)}/{dv_y}$, and ${\rho }_z\left(v_z\right)={df_z\left(v_z\right)}/{dv_z}$.
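The rank-probability construction is easy to carry out on simulated data. The sketch below (assuming Python with numpy; the normal distribution used for the simulated $v_x$ values is only a stand-in, since the actual functional form is derived later in this chapter) sorts the sampled components and pairs each with its rank probability $i/\left(N+1\right)$, giving a point-wise estimate of $f_x\left(v_x\right)$.

```python
# Sketch: estimate the cumulative distribution function f_x(v_x) from
# sampled velocity components via rank probabilities i/(N+1).
# The simulated components (normal, sigma = 300 m/s) are only a stand-in.
import numpy as np

rng = np.random.default_rng(3)
vx = np.sort(rng.normal(0.0, 300.0, size=200))
rank_prob = np.arange(1, vx.size + 1) / (vx.size + 1)

for value, p in zip(vx[::40], rank_prob[::40]):
    print(f"v_x = {value:8.1f}   estimated f_x(v_x) = {p:.3f}")
```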
When we find these one-dimensional distribution functions by modeling the experimental data in this way, each $v_x$ datum that we use in our analysis comes from an observation on a molecule and is associated with particular $v_y$ and $v_z$ values. These values of $v_y$ and $v_z$ can be anything from $-\infty$ to $+\infty$. This is a significant point. The functions $f_x\left(v_x\right)$ and ${df_x\left(v_x\right)}/{dv_x}$ are independent of $v_y$ and $v_z$. We can also say that ${df_x\left(v_x\right)}/{dv_x}$ describes the distribution of $v_x$ when $v_y$ and $v_z$ are averaged over all the values it is possible for them to have.
To clarify this, let us consider another cumulative probability distribution function, $f_{xyz}\left(v_x,v_y,v_z\right)$, which is just the fraction of all molecules whose respective Cartesian velocity components are less than $v_x$, $v_y$, $v_z$. Since $f_x\left(v_x\right)$, $f_y\left(v_y\right)$, and $f_z\left(v_z\right)$ are the fractions whose components are less than $v_x$, $v_y$, and $v_z$, respectively, their product is equal to $f_{xyz}\left(v_x,v_y,v_z\right)$: we have $f_{xyz}\left(v_x,v_y,v_z\right)=f_x\left(v_x\right)f_y\left(v_y\right)f_z\left(v_z\right)$. For the velocity of a randomly selected molecule, $\left(v^*_x,v^*_y,v^*_z\right)$, to be included in the fraction represented by $f_{xyz}\left(v_x,v_y,v_z\right)$, the velocity must be in the particular range $-\infty <v^{*}_x<v_x$, $-\infty <v^{*}_y<v_y$, and $-\infty <v^{*}_z<v_z$.
However, for a velocity $v^*_x$ to be included in $f_x\left(v_x\right)$, we must have $v^*_x<v_x$, $v^*_y<\infty$, and $v^*_z<\infty$; that is, the components $v^*_y$ and $v^*_z$ can have any values. Since the probability that $v_x$, $v_y$, and $v_z$ satisfy $v^*_x<v_x$, $v^*_y<v_y$, and $v^*_z<v_z$ is
\begin{aligned} P\left(v^*_x<v_x,v^*_y<v_y,v^*_z<v_z\right) & = f_{xyz} \left(v_x,v_y,v_z \right) \ ~ & = f_x (v_x)f_y(v_y)f_z(v_z) \end{aligned} \nonumber
the probability that $v^*_x$ is included in $f_x\left(v_x\right)$ becomes
\begin{aligned} P\left(v^*_x<v_x,v^*_y< \infty, v_z^* < \infty \right) & = f_{xyz} \left( v_x, \infty, \infty \right) \ ~ & = f_x \left( v_x \right) f_y \left( \infty \right) f_z \left( \infty \right) \ ~ & = f_x \left( v_x \right) \end{aligned} \nonumber
For our purposes, we need to be able to express the probability that the velocity lies within any range of velocities. Let us use $\mathrm{\textrm{ʋ}}$ to designate a particular “volume” region in velocity space and use $P\left(\mathrm{\textrm{ʋ}}\right)$ to designate the probability that the velocity of a randomly selected molecule is in this region. When we let ʋ be the region in velocity space in which $x$-components lie between $v_x$
and $v_x+dv_x$, $y$-components lie between $v_y$, and $v_y+dv_y$, and $z$-components lie between $v_z$ and $v_z+dv_z$, $dP\left(\textrm{ʋ}\right)$ denotes the probability that the velocity of a randomly chosen molecule,$\ \left(v^*_x,v^*_y,v^*_z\right)$, satisfies the conditions $v_x<v^*_x<v_x+dv_x$, $v_y<v^*_y<v_y+dv_y$, and $v_z<v^*_z<v_z+dv_z$.
$dP\left(\textrm{ʋ}\right)$ is an increment of probability. The dependence of $dP\left(\textrm{ʋ}\right)$ on $v_x$, $v_y$, $v_z$, ${dv}_x$, $dv_y$, and $dv_z$ can be made explicit by introducing a new function, $\rho \left(v_x,v_y,v_z\right)$, defined by
$dP\left(\textrm{ʋ}\right)=\rho \left(v_x,v_y,v_z\right)dv_xdv_ydv_z \nonumber$
Since $dv_xdv_ydv_z$ is the volume available in velocity space for velocities whose $x$-components are between $v_x$ and $v_x+dv_x$, whose $y$-components are between $v_y$ and $v_y+dv_y$, and whose $z$-components are between $v_z$ and $v_z+dv_z$, we see that $\rho \left(v_x,v_y,v_z\right)$ is a probability density function in three dimensions. The value of $\rho \left(v_x,v_y,v_z\right)$ is the probability, per unit volume in velocity space, that a molecule has the velocity $\left(v_x,v_y,v_z\right)$. For any velocity, $\left(v_x,v_y,v_z\right)$, there is a value of $\rho \left(v_x,v_y,v_z\right)$; this value is just a number. If we want the probability of finding a velocity within some small volume of velocity space around $\left(v_x,v_y,v_z\right)$, we can find it by multiplying $\rho \left(v_x,v_y,v_z\right)$ by this volume.
From the one-dimensional probability-density functions, the probability that the $x$-component of a molecular velocity lies between $v_x$ and $v_x+dv_x$, is just $\left({{df}_x\left(v_x\right)}/{dv_x}\right)dv_x$, whatever the values of $v_y$ and $v_z$. The probability that the $y$-component lies between $v_y$ and $v_y+dv_y$, is just $\left({{df}_y\left(v_y\right)}/{dv_y}\right)dv_y$, whatever the values of $v_x$ and $v_z$. The probability that the $z$-component lies between $v_z$ and $v_z+dv_z$, is just $\left({df_z\left(v_z\right)}/{dv_z}\right)dv_z$, whatever the values of $v_x$ and $v_y$. When we interpret Maxwell’s assumption to mean that these are independent probabilities, the probability that all three conditions are realized simultaneously is
$dP\left(\textrm{ʋ}\right)=\left(\frac{df_x\left(v_x\right)}{dv_x}\right)\left(\frac{df_y\left(v_y\right)}{{dv}_y}\right)\left(\frac{{df}_z\left(v_z\right)}{{dv}_z}\right)dv_xdv_ydv_z=\rho \left(v_x,v_y,v_z\right)dv_xdv_ydv_z \nonumber$
Evidently, the product of these three one-dimensional probability densities is the three-dimensional probability density. We have $\rho \left(v_x,v_y,v_z\right)=\left(\frac{df_x\left(v_x\right)}{dv_x}\right)\left(\frac{df_y\left(v_y\right)}{{dv}_y}\right)\left(\frac{{df}_z\left(v_z\right)}{{dv}_z}\right)={\rho }_x\left(v_x\right){\rho }_y\left(v_y\right){\rho }_z\left(v_z\right) \nonumber$
From Maxwell’s assumption, we have derived the conclusion that $\rho \left(v_x,v_y,v_z\right)$ can be expressed as a product of the one-dimensional probability densities ${df_x\left(v_x\right)}/{dv_x}$, ${df_y\left(v_y\right)}/{dv_y}$, and ${df_z\left(v_z\right)}/{dv_z}$. Since these are probability densities, we have
$\int^{\infty }_{-\infty }{\left(\frac{{df}_x\left(v_x\right)}{dv_x}\right)}dv_x=\int^{\infty }_{-\infty }{\left(\frac{{df}_y\left(v_y\right)}{dv_y}\right)}dv_y=\int^{\infty }_{-\infty }{\left(\frac{{df}_z\left(v_z\right)}{dv_z}\right)}dv_z=1 \nonumber$ and $\mathop{\int\!\!\!\!\int\!\!\!\!\int}\nolimits^{\infty }_{-\infty }{\rho \left(v_x,v_y,v_z\right){dv}_xdv_y}dv_z=1 \nonumber$
Moreover, because the Cartesian coordinates differ from one another only in orientation, ${df_x\left(v_x\right)}/{dv_x}$, ${df_y\left(v_y\right)}/{dv_y}$, and ${df_z\left(v_z\right)}/{dv_z}$ must all be the same function.
To summarize the development above, we define $\rho \left(v_x,v_y,v_z\right)$ independently of ${df_x\left(v_x\right)}/{dv_x}$, ${df_y\left(v_y\right)}/{dv_y}$, and ${df_z\left(v_z\right)}/{dv_z}$. Then, from Maxwell’s assumption that the three one-dimensional probabilities are independent, we find $\rho \left(v_x,v_y,v_z\right)=\left(\frac{df_x\left(v_x\right)}{dv_x}\right)\left(\frac{df_y\left(v_y\right)}{{dv}_y}\right)\left(\frac{{df}_z\left(v_z\right)}{{dv}_z}\right) \nonumber$ $={\rho }_x\left(v_x\right){\rho }_y\left(v_y\right){\rho }_z\left(v_z\right) \nonumber$
Alternatively, we could take Maxwell’s assumption to be that the three-dimensional probability density function is expressible as a product of three one-dimensional probability densities:
$\rho \left(v_x,v_y,v_z\right) ={\rho }_x\left(v_x\right){\rho }_y\left(v_y\right){\rho }_z\left(v_z\right) \nonumber$
In this case, the relationships of ${\rho }_x\left(v_x\right)$, ${\rho }_y\left(v_y\right)$, and ${\rho }_z\left(v_z\right)$, to the one-dimensional cumulative probabilities ($f_x\left(v_x\right)$, etc.) must be deduced from the properties of $\rho \left(v_x,v_y,v_z\right)$. As emphasized above, our deduction of $f_x\left(v_x\right)$ from experimental data uses $v_x$ values that are associated with all possible values of $v_y$ and $v_z$. That is, what we determine in our (hypothetical) experiment is
\begin{aligned} f_x\left(v_x\right) & =\int^{v_x}_{v_x=-\infty}{\mathop{\int\!\!\!\!\int}\nolimits^{\infty}_{v_{y,z} = -\infty }{\rho \left(v_x,v_y,v_z\right){dv}_xdv_ydv_z}} \ & =\int^{v_x}_{-\infty} \rho_x \left(v_x\right)dv_x\int^{\infty}_{-\infty} \rho_y \left(v_y\right) dv_y\int^{\infty}_{-\infty} \rho_z \left(v_z\right) dv_z \ & =\int^{v_x}_{-\infty} \rho_x \left(v_x\right) dv_x \end{aligned} \nonumber
from which it follows that
$\frac{{df}_x\left(v_x\right)}{{dv}_x}={\rho }_x\left(v_x\right) \nonumber$
4.2: Probability Density Functions for Velocity Components in Spherical Coordinates
We introduce the idea of a three-dimensional probability-density function by showing how to find it from data referred to a Cartesian coordinates system. The probability density associated with a particular molecular velocity is just a number—a number that depends only on the velocity. Given a velocity, the probability density associated with that velocity must be independent of our choice of coordinate system. We can express the three-dimensional probability density using any coordinate system. We turn now to expressing velocities and probability density functions using spherical coordinates.
Just as we did for the Cartesian velocity components, we deduce the cumulative probability functions $f_v\left(v\right)$, $f_{\theta }\left(\theta \right)$, and $f_{\varphi }\left(\varphi \right)$ for the spherical-coordinate components. Our deduction of $f_v\left(v\right)$ from the experimental data uses $v$-values that are associated with all possible values of $\theta$ and $\varphi$. Corresponding statements apply to our deductions of $f_{\theta }\left(\theta \right)$, and $f_{\varphi }\left(\varphi \right)$. We also obtain their derivatives, the probability-density functions ${df_v\left(v\right)}/{dv}$, ${df_{\theta }\left(\theta \right)}/{d\theta }$, and ${{df}_{\varphi }\left(\varphi \right)}/{d\varphi }$. From the properties of probability-density functions, we have
$\int^{\infty }_0{\left(\frac{{df}_v\left(v\right)}{dv}\right)}dv=\int^{\pi }_0{\left(\frac{{df}_{\theta }\left(\theta \right)}{d\theta }\right)}d\theta =\int^{2\pi }_0{\left(\frac{{df}_{\varphi }\left(\varphi \right)}{d\varphi }\right)}d\varphi =1 \nonumber$
Let $\textrm{ʋ}\prime$ be the arbitrarily small increment of volume in velocity space in which the $v$-, $\theta$-, and $\varphi$-components of velocity lie between $v$ and $v+dv$, $\theta$ and $\theta +d\theta$, and $\varphi$ and $\varphi +d\varphi$. Then the probability that the velocity of a randomly selected molecule lies within $\textrm{ʋ}\prime$ is
$dP\left(\textrm{ʋ}\prime \right)=\left(\frac{df_v\left(v\right)}{dv}\right)\left(\frac{df_{\theta }\left(\theta \right)}{d\theta }\right)\left(\frac{{df}_{\varphi }\left(\varphi \right)}{d\varphi }\right)dvd\theta d\varphi \nonumber$
Note that the product
$\left(\frac{df_v\left(v\right)}{dv}\right)\left(\frac{df_{\theta }\left(\theta \right)}{d\theta }\right)\left(\frac{{df}_{\varphi }\left(\varphi \right)}{d\varphi }\right) \nonumber$
is not a three-dimensional probability density function. This is most immediately appreciated by recognizing that $dvd\theta d\varphi$ is not an incremental “volume” in velocity space. That is, $\textrm{ʋ}\prime \neq \ dvd\theta d\varphi$
We let $\rho \left(v,\ \theta ,\varphi \right)$ be the probability-density function for the velocity vector in spherical coordinates. When $v$, $\theta$, and $\varphi$ specify the velocity, $\rho \left(v,\ \theta ,\varphi \right)$ is the probability per unit volume at that velocity. We want to use $\rho \left(v,\ \theta ,\varphi \right)$ to express the probability that an arbitrarily selected molecule has a velocity vector whose magnitude lies between$\ v$ and $v+dv$, while its $\theta$-component lies between $\theta$ and $\theta +d\theta$, and its $\varphi$-component lies between $\varphi$ and $\varphi +d\varphi$. This is just $\rho \left(v,\ \theta ,\varphi \right)$ times the velocity-space “volume” included by these ranges of $v$, $\theta$, and $\varphi$.
When we change from Cartesian coordinates, $\mathop{v}\limits^{\rightharpoonup}=\left(v_x,v_y,v_z\right)$, to spherical coordinates, $\mathop{v}\limits^{\rightharpoonup}=\left(v,\theta ,\varphi \right)$, the transformation is $v_x=v{\mathrm{sin} \theta \ }{\mathrm{cos} \varphi \ }$, $v_y=v{\mathrm{sin} \theta \ }{\mathrm{sin} \varphi \ }$, $v_z=v{\mathrm{cos} \theta \ }$. (See Figure 1.) As sketched in Figure 2, an incremental increase in each of the coordinates of the point specified by the vector $\left(v,\ \theta ,\varphi \right)$ advances the vector to the point $\left(v+dv,\theta +d\theta ,\varphi +d\varphi \right)$. When $dv$, $d\theta$, and $d\varphi$ are arbitrarily small, these two points specify the diagonally opposite corners of a rectangular parallelepiped, whose edges have the lengths $dv$, $vd\theta$, and $v{\mathrm{sin} \theta \ }d\varphi$. The volume of this parallelepiped is $v^2{\mathrm{sin} \theta \ }dvd\theta d\varphi$. Hence, the differential volume element in Cartesian coordinates, $dv_xdv_ydv_z$, becomes $v^2{\mathrm{sin} \theta \ }dvd\theta d\varphi$ in spherical coordinates.
Mathematically, this conversion is obtained using the absolute value of the Jacobian, $J\left(\frac{v_x,v_y,v_z}{v,\theta ,\varphi }\right)$, of the transformation. That is,
$dv_xdv_ydv_z=\left|J\left(\frac{v_x,v_y,v_z}{v,\theta ,\varphi }\right)\right|dvd\theta d\varphi \nonumber$
where the Jacobian is a determinant of partial derivatives
$J\left(\frac{v_x,v_y,v_z}{v,\theta ,\varphi }\right)=\left| \begin{array}{ccc} {\partial v_x}/{\partial v} & {\partial v_x}/{\partial \theta } & {\partial v_x}/{\partial \varphi } \ {\partial v_y}/{\partial v} & {\partial v_y}/{\partial \theta } & {\partial v_y}/{\partial \varphi } \ {\partial v_z}/{\partial v} & {\partial v_z}/{\partial \theta } & {\partial v_z}/{\partial \varphi } \end{array} \right| \nonumber$ ${=v}^2{\mathrm{sin} \theta \ } \nonumber$
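The algebra of this determinant is routine but tedious; a symbolic check (a sketch assuming Python with sympy, which the text does not use) confirms the result.

```python
# Sketch: symbolic evaluation of the Jacobian of the transformation from
# spherical to Cartesian velocity coordinates; the determinant is v^2 sin(theta).
import sympy as sp

v, theta, phi = sp.symbols('v theta varphi', positive=True)
vx = v*sp.sin(theta)*sp.cos(phi)
vy = v*sp.sin(theta)*sp.sin(phi)
vz = v*sp.cos(theta)

J = sp.Matrix([vx, vy, vz]).jacobian([v, theta, phi])
print(sp.simplify(J.det()))   # v**2*sin(theta)
```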
Since the differential unit of volume in spherical coordinates is $v^2{\mathrm{sin} \theta \ }$$dvd\theta d\varphi$, the probability that the velocity components lie within the indicated ranges is
$dP\left(\textrm{ʋ}\prime \right)=\rho \left(v,\theta ,\varphi \right)v^2{\mathrm{sin} \theta \ }dvd\theta d\varphi \nonumber$
We can develop the next step in Maxwell’s argument by taking his assumption to mean that the three-dimensional probability density function is expressible as a product of three one-dimensional functions. That is, we take Maxwell’s assumption to assert the existence of independent functions ${\rho }_v\left(v\right)$, ${\rho }_{\theta }\left(\theta \right)$, and ${\rho }_{\varphi }\left(\varphi \right)$ such that $\rho \left(v,\ \theta ,\varphi \right)={\rho }_v\left(v\right){\rho }_{\theta }\left(\theta \right){\rho }_{\varphi }\left(\varphi \right)$. The probability that the $v$-, $\theta$-, and $\varphi$-components of velocity lie between $v$ and $v+dv$, $\theta$ and $\theta +d\theta$, and $\varphi$ and $\varphi +d\varphi$ becomes
\begin{aligned} dP\left(\textrm{ʋ}\prime \right) & =\left(\frac{df_v\left(v\right)}{dv}\right)\left(\frac{df_{\theta }\left(\theta \right)}{d\theta }\right)\left(\frac{{df}_{\varphi }\left(\varphi \right)}{d\varphi }\right)dvd\theta d\varphi \ ~ & =\rho \left(v,\theta ,\varphi \right)v^2{\mathrm{sin} \theta dvd\theta d\varphi \ } \ ~ & ={\rho }_v\left(v\right){\rho }_{\theta }\left(\theta \right){\rho }_{\varphi }\left(\varphi \right)v^2{\mathrm{sin} \theta dvd\theta d\varphi \ } \end{aligned} \nonumber
Since $v$, $\theta$, and $\varphi$ are independent, it follows that
$\frac{df_v\left(v\right)}{dv}=v^2{\rho }_v\left(v\right) \nonumber$
$\frac{df_{\theta }\left(\theta \right)}{d\theta }={\rho }_{\theta }\left(\theta \right){\mathrm{sin} \theta \ } \nonumber$
$\frac{df_{\varphi }\left(\varphi \right)}{d\varphi }={\rho }_{\varphi }\left(\varphi \right) \nonumber$
Moreover, the assumption that velocity is independent of direction means that ${\rho }_{\theta }\left(\theta \right)$ must actually be independent of $\theta$; that is, ${\rho }_{\theta }\left(\theta \right)$ must be a constant. We let this constant be ${\alpha }_{\theta }$; so$\ {\rho }_{\theta }\left(\theta \right)={\alpha }_{\theta }$. By the same argument, we set ${\rho }_{\varphi }\left(\varphi \right)={\alpha }_{\varphi }$. Each of these probability-density functions must be normalized. This means that
$1=\int^{\infty }_0{v^2{\rho }_v\left(v\right)}dv \nonumber$
$1=\int^{\pi }_0{{\alpha }_{\theta }{\mathrm{sin} \theta \ }d\theta }=2{\alpha }_{\theta } \nonumber$
$1=\int^{2\pi }_0{{\alpha }_{\varphi }d\varphi }=2\pi {\alpha }_{\varphi } \nonumber$
from which we see that ${\rho }_{\theta }\left(\theta \right)={\alpha }_{\theta }={1}/{2}$ and ${\rho }_{\varphi }\left(\varphi \right)={\alpha }_{\varphi }={1}/{2\pi }$. It is important to recognize that, while ${\rho }_x\left(v_x\right)$, ${\rho }_y\left(v_y\right)$, and ${\rho }_z\left(v_z\right)$ are probability density functions, ${\rho }_{\theta }\left(\theta \right)$ and ${\rho }_v\left(v\right)$ are not. (However, ${\rho }_{\varphi }\left(\varphi \right)$ is a probability density function.) We can see this by noting that, if ${\rho }_{\theta }\left(\theta \right)$ were a probability density, its integral over all possible values of $\theta$ $\left(0<\theta <\pi \right)$ would be one. Instead, we find
$\int^{\pi }_0 \rho_{\theta} \left(\theta \right)d\theta =\int^{\pi }_0 d\theta /2= \pi /2 \nonumber$
Similarly, when we find ${\rho }_v\left(v\right)$, we can show explicitly that
$\int^{\infty }_0{{\rho }_v\left(v\right)dv\neq 1} \nonumber$
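These angular integrals are simple enough to verify numerically. The sketch below (assuming Python with scipy, not part of the text) checks that $\left({1}/{2}\right){\mathrm{sin} \theta \ }$ and ${1}/{2\pi }$ each integrate to one over their ranges, while ${\rho }_{\theta }\left(\theta \right)={1}/{2}$ alone integrates to ${\pi }/{2}$.

```python
# Sketch: quadrature checks of the angular factors.
import math
from scipy.integrate import quad

print(quad(lambda t: 0.5*math.sin(t), 0.0, math.pi)[0])        # 1.0  (df_theta/dtheta)
print(quad(lambda p: 1.0/(2.0*math.pi), 0.0, 2.0*math.pi)[0])  # 1.0  (df_phi/dphi)
print(quad(lambda t: 0.5, 0.0, math.pi)[0])                    # pi/2, so rho_theta is not a pdf
```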
Our notation now allows us to express the probability that an arbitrarily selected molecule has a velocity vector whose magnitude lies between $v$ and $v+dv$, while its $\theta$-component lies between $\theta$ and$\ \theta +d\theta$, and its $\varphi$-component lies between $\varphi$ and $\varphi +d\varphi$ using three equivalent representations of the probability density function:
$dP\left(\textrm{ʋ}\prime \right)=\rho \left(v,\theta ,\varphi \right)v^2{\mathrm{sin} \theta dvd\theta d\varphi \ }={\rho }_v\left(v\right){\rho }_{\theta }\left(\theta \right){\rho }_{\varphi }\left(\varphi \right)v^2{\mathrm{sin} \theta dvd\theta d\varphi \ }=\left(\frac{1}{4\pi }\right){\rho }_v\left(v\right)v^2{\mathrm{sin} \theta \ }dvd\theta d\varphi \nonumber$
The three-dimensional probability-density function in spherical coordinates is
$\rho \left(v,\ \theta ,\varphi \right)={\rho }_v\left(v\right){\rho }_{\theta }\left(\theta \right){\rho }_{\varphi }\left(\varphi \right)=\frac{{\rho }_v\left(v\right)}{4\pi } \nonumber$ This shows explicitly that $\rho \left(v,\ \theta ,\varphi \right)$ is independent of $\theta$ and $\varphi$; if the speed is independent of direction, the probability density function that describes velocity must be independent of the coordinates, $\theta$ and $\varphi$, that specify its direction.
4.3: Maxwell’s Derivation of the Gas-velocity Probability-density Function
To this point, we have been developing our ability to characterize the gas-velocity distribution functions. We now want to use Maxwell’s argument to find them. We have already introduced the first step, which is the recognition that three-dimensional probability-density functions can be expressed as products of independent one-dimensional functions, and that ${\rho }_{\theta }\left(\theta \right)$ and ${\rho }_{\varphi }\left(\varphi \right)$ are the constants ${1}/{2}$ and ${1}/{2\pi }$. Now, because the probability density associated with any given velocity is just a number that is independent of the coordinate system, we can equate the three-dimensional probability-density functions for Cartesian and spherical coordinates: $\rho \left(v_x,v_y,v_z\right)=\rho \left(v,\ \theta ,\varphi \right)$ so that
${\rho }_x\left(v_x\right){\rho }_y\left(v_y\right){\rho }_z\left(v_z\right)=\frac{{\rho }_v\left(v\right)}{4\pi } \nonumber$
We take the partial derivative of this last equation with respect to $v_x$. The probability densities ${\rho }_y\left(v_y\right)$ and ${\rho }_z\left(v_z\right)$ are independent of $v_x$. However, $v$ is a function of $v_x$, because $v^2=v^2_x+v^2_y+v^2_z$. We find
$\frac{{d\rho }_x\left(v_x\right)}{dv_x}{\rho }_y\left(v_y\right){\rho }_z\left(v_z\right)=\frac{1}{4\pi }{\left(\frac{{\partial \rho }_v\left(v\right)}{\partial v_x}\right)}_{v_yv_z}=\frac{1}{4\pi }\left(\frac{d{\rho }_v\left(v\right)}{dv}\right){\left(\frac{\partial v}{\partial v_x}\right)}_{v_yv_z} \nonumber$ Since $v^2=v^2_x+v^2_y+v^2_z$, $2v{\left({\partial v}/{\partial v_x}\right)}_{v_yv_z}=2v_x$ and ${\left(\frac{\partial v}{\partial v_x}\right)}_{v_yv_z}=\frac{v_x}{v} \nonumber$
Making this substitution and dividing by the original equation gives
$\frac{{d\rho }_x\left(v_x\right)}{dv_x}\frac{{\rho }_y\left(v_y\right){\rho }_z\left(v_z\right)}{{\rho }_x\left(v_x\right){\rho }_y\left(v_y\right){\rho }_z\left(v_z\right)}=\frac{v_x}{v}\frac{1}{{\rho }_v\left(v\right)}\frac{d{\rho }_v\left(v\right)}{dv} \nonumber$
Cancellation and rearrangement of the result leads to an equation in which the independent variables $v_x$ and $v$ are separated. This means that each term must be equal to a constant, which we take to be $-\lambda$. We find
$\left(\frac{1}{v_x \rho_x\left(v_x\right)}\right) \frac{d \rho_x \left(v_x\right)}{dv_x}=\left(\frac{1}{v\rho_v \left(v\right)}\right)\frac{d \rho_v\left(v\right)}{dv}=-\lambda \nonumber$
so that
$\frac{d \rho_x\left(v_x\right)}{ \rho_x\left(v_x\right)}=-\lambda v_x dv_x \nonumber$
and
$\frac{d\rho_v\left(v\right)}{\rho_v\left(v\right)}=-\lambda vdv \nonumber$
From the first of these equations, we obtain the probability density function for the distributions of one-dimensional velocities. (See Section 4.4.) The three-dimensional probability density function can be deduced from the one-dimensional function. (See Section 4.5.)
From the second equation, we obtain the three-dimensional probability-density function directly. Integrating from $v=0$, where ${\rho }_v\left(0\right)$ has a fixed value, to an arbitrary scalar velocity, $v$, where the scalar-velocity function is ${\rho }_v\left(v\right)$, we have
$\int^{\rho_v\left(v\right)}_{\rho_v\left(0\right)} \frac{d \rho_v\left(v\right)}{ \rho_v\left(v\right)}=-\lambda \int^v_0 vdv \nonumber$
or
${\rho }_v\left(v\right)={\rho }_v\left(0\right)exp\left(\frac{-\lambda v^2}{2}\right) \nonumber$
The probability-density function for the scalar velocity becomes
$\frac{df_v\left(v\right)}{dv}=v^2{\rho }_v\left(v\right)={\rho }_v\left(0\right)v^2exp\left(\frac{-\lambda v^2}{2}\right) \nonumber$
This is the result we want, except that it contains the unknown parameters ${\rho }_v\left(0\right)$ and $\lambda$. The value of ${\ \rho }_v\left(0\right)$ must be such as to make the integral over all velocities equal to unity. We require
\begin{aligned} 1 & =\int^{\infty }_0 \left(\frac{df_v\left(v\right)}{dv}\right) dv \ ~ & = \rho_v\left(0\right)\int^{\infty }_0 v^2\mathrm{exp}\left(\frac{-\lambda v^2}{2}\right)dv \ ~ & =\frac{\rho_v\left(0\right)}{4\pi } \left(\frac{2\pi }{\lambda }\right)^{3/2} \end{aligned} \nonumber
so that
$\rho_v\left(0\right)=4\pi \left(\frac{\lambda }{2\pi }\right)^{3/2} \nonumber$
where we use the definite integral $\int^{\infty }_0 x^2 \mathrm{exp}\left(-ax^2\right)dx=\left(1/4\right)\sqrt{\pi /a^3}$. (See Appendix D.) The scalar-velocity function in the three-dimensional probability-density function becomes
$\rho_v\left(v\right)=4\pi \left(\frac{\lambda }{2\pi }\right)^{3/2}\mathrm{exp}\left(\frac{-\lambda v^2}{2}\right) \nonumber$
The probability-density function for the scalar velocity becomes
\begin{aligned} \frac{df_v\left(v\right)}{dv} & =v^2 \rho_v \left(v\right) \ ~ & =4\pi \left(\frac{\lambda }{2\pi }\right)^{3/2}v^2\mathrm{exp}\left(\frac{-\lambda v^2}{2}\right) \end{aligned} \nonumber
The three-dimensional probability density in spherical coordinates becomes
\begin{aligned} \rho \left(v,\ \theta ,\varphi \right) & = \rho_v\left(v\right)\rho_{\theta}\left(\theta \right) \rho_{\varphi }\left(\varphi \right) \ ~ & =\left(\frac{\lambda }{2\pi }\right)^{3/2}\mathrm{exp}\left(\frac{-\lambda v^2}{2}\right) \end{aligned} \nonumber
The probability that an arbitrarily selected molecule has a velocity vector whose magnitude lies between $v$ and $v+dv$, while its $\theta$-component lies between $\theta$ and$\ \theta +d\theta$, and its $\varphi$-component lies between $\varphi$ and $\varphi +d\varphi$ becomes
\begin{aligned} dP\left(\mathop{v}\limits^{\rightharpoonup}\right) & =\left(\frac{df_v\left(v\right)}{dv}\right)\left(\frac{df_{\theta }\left(\theta \right)}{d\theta }\right)\left(\frac{df_{\varphi }\left(\varphi \right)}{d\varphi }\right)dvd\theta d\varphi \ ~ & =\rho \left(v,\theta ,\varphi \right)v^2 \mathrm{sin} \theta dvd\theta d\varphi \ ~ & =\left(\frac{1}{4\pi }\right) \rho_v \left(v\right)v^2 \mathrm{sin} \theta dvd\theta d\varphi \ ~ & = \left(\frac{\lambda }{2\pi }\right)^{3/2}v^2exp\left(\frac{-\lambda v^2}{2}\right) \mathrm{sin} \theta dvd\theta d\varphi \end{aligned} \nonumber
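Before putting numbers into these results, a quick numerical sanity check may help. The short sketch below (our addition, not part of the original text) picks an arbitrary positive value of $\lambda$ and confirms that the choice $\rho_v\left(0\right)=4\pi \left(\lambda /2\pi \right)^{3/2}$ makes the speed distribution integrate to one.

```python
# A minimal sketch (assumed, illustrative value of lambda): verify that
# rho_v(0) = 4*pi*(lambda/(2*pi))**1.5 normalizes df_v/dv = rho_v(0) v^2 exp(-lambda v^2/2).
import numpy as np
from scipy.integrate import quad

lam = 2.7  # arbitrary positive value of the separation constant
rho_v0 = 4 * np.pi * (lam / (2 * np.pi)) ** 1.5

# integral of v^2 exp(-lambda v^2 / 2) from 0 to infinity
integral, _ = quad(lambda v: v**2 * np.exp(-lam * v**2 / 2), 0, np.inf)

print(rho_v0 * integral)                        # ~1.0: the speed distribution is normalized
print(0.25 * np.sqrt(np.pi / (lam / 2) ** 3))   # the closed form of the integral (Appendix D)
```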
In Section 4.6, we again derive Boyle’s law and use the ideal gas equation to show that $\lambda ={m}/{kT}$.
4.04: The Probability-density Function for Gas Velocities in One Dimension
In Section 4.3, we find a differential equation in the function ${\rho }_x\left(v_x\right)$. Unlike the velocity, which takes values from zero to infinity, the $x$-component, $v_x$, takes values from minus infinity to plus infinity. The probability density at an infinite velocity, in either direction, is necessarily zero. Therefore, we cannot evaluate the integral of $d \rho_x\left(v_x\right)/ \rho_x\left(v_x\right)$ from $v_x=-\infty$ to an arbitrary velocity, $v_x$. However, we know from Maxwell’s assumption that the probability density for $v_x$ must be independent of whether the molecule is traveling in the direction of the positive $x$-axis or the negative $x$-axis. That is, $\rho_x\left(v_x\right)$ must be an even function; the probability density function must be symmetric around $v_x=0$; $\rho_x\left(v_x\right)=\rho_x\left(-v_x\right)$. Hence, we can express $\rho_x\left(v_x\right)$ relative to its fixed value,$\rho_x\left(0\right)$, at $v_x=0$. We integrate $d \rho_x\left(v_x\right)/ \rho_x\left(v_x\right)$ from $\rho_x\left(0\right)$ to $\rho_x\left(v_x\right)$ as $v_x$ goes from zero to an arbitrary velocity, $v_x$, to find
$\int^{\rho_x\left(v_x\right)}_{\rho_x\left(0\right)} \frac{d \rho_x\left(v_x\right)}{\rho_x\left(v_x\right)}=-\lambda \int^{v_x}_0 v_xdv_x \nonumber$
or
$\rho_x\left(v_x\right)=\frac{df_x\left(v_x\right)}{dv_x}= \rho_x\left(0\right)\mathrm{exp}\left(\frac{-\lambda v^2_x}{2}\right) \nonumber$
The value of $\rho_x\left(0\right)$ must be such as to make the integral of $\rho _x\left(v_x\right)$ over all possible values of $v_x$, $-\infty < v_x <\infty$, equal to unity. That is, we must have
\begin{aligned} 1 & =\int^{\infty}_{-\infty} \rho_x\left(v_x\right) dv_x \ ~ & =\int^{\infty}_{-\infty} \frac{df_x\left(v_x\right)}{dv_x}dv_x \ ~ & = \rho_x \left(0\right)\int^{\infty}_{-\infty} \mathrm{exp} \left(\frac{-\lambda v^2_x}{2}\right) dv_x \ ~ & =\rho_x\left(0\right)\sqrt{\frac{2\pi }{\lambda }} \end{aligned} \nonumber
where we use the definite integral $\int^{\infty }_{-\infty } \mathrm{exp} \left(-ax^2\right) dx=\sqrt{ \pi /a}$. (See Appendix D.) It follows that $\rho_x\left(0\right)= \left( \lambda /2 \pi \right)^{1/2}$. The one-dimensional probability-density function becomes
\begin{aligned} \rho_x\left(v_x\right) & =\frac{df_x\left(v_x\right)}{dv_x} \ ~ & = \left(\frac{\lambda }{2\pi }\right)^{1/2}\mathrm{exp}\left(\frac{-\lambda v^2_x}{2}\right) \end{aligned} \nonumber
Note that this is the normal distribution with $\mu =0$ and ${\sigma }^2={\lambda }^{-1}$. So ${\lambda }^{-1}$ is the variance of the normal one-dimensional probability-density function. As noted above, in Section 4.6 we find that $\lambda ={m}/{kT}$.
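The following sketch (our addition) simply compares the one-dimensional density above with the normal probability density of mean zero and variance $1/\lambda$; the value chosen for $\lambda$ is arbitrary.

```python
# Sketch: the one-dimensional density (lambda/(2*pi))**0.5 * exp(-lambda*vx**2/2)
# is the normal pdf with mean 0 and standard deviation 1/sqrt(lambda).
import numpy as np
from scipy.stats import norm

lam = 3.0                      # arbitrary positive separation constant
vx = np.linspace(-3, 3, 7)     # a few test velocities (arbitrary units)

rho_x = np.sqrt(lam / (2 * np.pi)) * np.exp(-lam * vx**2 / 2)
gauss = norm.pdf(vx, loc=0.0, scale=1 / np.sqrt(lam))

print(np.allclose(rho_x, gauss))   # True
```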
4.05: Combining the One-dimensional Probability Density Functions
In Section 4.4, we derive the probability density function for one Cartesian component of the velocity of a gas molecule. The probability density functions for the other two Cartesian components are the same function. For $\mathop{v}\limits^{\rightharpoonup}=\left(v_x,v_y,v_z\right)$, we have $v^2=v^2_x+v^2_y+v^2_z$, and
\begin{aligned} \frac{df_x\left(v_x\right)}{dv_x}= \left(\frac{\lambda }{2\pi }\right)^{1/2}\mathrm{exp}\left(\frac{-\lambda v^2_x}{2}\right) \ \frac{df_y\left(v_y\right)}{dv_y}= \left(\frac{\lambda }{2\pi} \right)^{1/2}\mathrm{exp}\left(\frac{-\lambda v^2_y}{2}\right) \ \frac{df_z\left(v_z\right)}{dv_z}= \left(\frac{\lambda }{2\pi }\right)^{1/2}\mathrm{exp}\left(\frac{-\lambda v^2_z}{2}\right) \end{aligned} \nonumber
We now want to derive the three-dimensional probability density function from these relationships. Given these probability density functions for the Cartesian components of $\mathop{v}\limits^{\rightharpoonup}$, we can find the probability density function in spherical coordinates
$\begin{array}{l} \left(\frac{df_x\left(v_x\right)}{dv_x}\right)\left(\frac{df_y\left(v_y\right)}{dv_y}\right)\left(\frac{df_z\left(v_z\right)}{dv_z}\right) \ = \left(\frac{\lambda }{2\pi }\right)^{3/2}\mathrm{exp}\left(\frac{-\lambda v^2_x}{2}\right)exp\left(\frac{-\lambda v^2_y}{2}\right)exp\left(\frac{-\lambda v^2_z}{2}\right) \ = \left(\frac{\lambda }{2\pi }\right)^{3/2}\mathrm{exp}\left(\frac{-\lambda v^2}{2}\right) \ =\rho \left(v,\theta ,\varphi \right) \end{array} \nonumber$
Since the differential volume element in spherical coordinates is $v^2 \mathrm{sin} \theta ~ dvd\theta d\varphi$, the probability that a molecule has a velocity vector whose magnitude lies between $v$ and $v+dv$, while its $\theta$-component lies between $\theta$ and$\ \theta +d\theta$, and its $\varphi$-component lies between $\varphi$ and $\varphi +d\varphi$ becomes
$\begin{array}{l} \left(\frac{df_v\left(v\right)}{dv}\right)\left(\frac{df_{\theta }\left(\theta \right)}{d\theta }\right)\left(\frac{df_{\varphi }\left(\varphi \right)}{d\varphi }\right)dvd\theta d\varphi \ ~~ =\rho \left(v,\theta ,\varphi \right)v^2 \mathrm{sin} \theta dvd\theta d\varphi \ ~~ =\left(\frac{\lambda }{2\pi }\right)^{3/2}v^2\mathrm{exp}\left(\frac{-\lambda v^2}{2}\right) \mathrm{sin} \theta dvd\theta d\varphi \end{array} \nonumber$
(We found the same result in Section 4.3, of course.) We can find the probability-density function for the scalar velocity by eliminating the dependence on the angular components. To do this, we need only sum up, at a given value of $v$, the contributions from all possible values of $\theta$ and $\varphi$, recalling that $0\le \theta <\pi$ and $0\le \varphi <2\pi$. This sum is just
\begin{aligned} \frac{df_v\left(v\right)}{dv}\int^{\pi }_{\theta =0} \left(\frac{df_{\theta }\left(\theta \right)}{d\theta }\right) d\theta \int^{2\pi }_{\varphi =0} \left(\frac{df_{\varphi }\left(\varphi \right)}{d\varphi }\right)d\varphi \ ~ & =\left(\frac{\lambda }{2\pi }\right)^{3/2}v^2exp\left(\frac{-\lambda v^2}{2}\right)\int^{\pi }_{\theta =0} \mathrm{sin} \theta d\theta \int^{2\pi }_{\varphi =0} d\varphi \end{aligned} \nonumber
Since $\int^{\pi }_{\theta =0}{\left(\frac{df_{\theta }\left(\theta \right)}{d\theta }\right)}d\theta =\int^{2\pi }_{\varphi =0}{\left(\frac{{df}_{\varphi }\left(\varphi \right)}{d\varphi }\right)d\varphi }=1$, $\int^{\pi }_0 \mathrm{sin} \theta d\theta =2$, and $\int^{2\pi }_0 d\varphi =2\pi$, we again obtain the Maxwell-Boltzmann probability-density function for the scalar velocity:
$\frac{df_v\left(v\right)}{dv}=4\pi \left(\frac{\lambda }{2\pi }\right)^{3/2}v^2exp\left(\frac{-\lambda v^2}{2}\right) \nonumber$
Unlike the distribution function for the Cartesian components of velocity, the Maxwell-Boltzmann distribution for scalar velocities is not a normal distribution. Possible speeds lie in the interval $0\le v<\infty$. Because of the $v^2$ term, the Maxwell-Boltzmann equation is asymmetric; it has a pronounced tail at high velocities.
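A Monte Carlo sketch (our addition) makes the same point numerically: sampling the three Cartesian components from independent normals with variance $1/\lambda$ and histogramming the resulting speeds reproduces the Maxwell-Boltzmann density, including its high-speed tail.

```python
# Sketch: draw Cartesian velocity components from independent normals and
# compare the resulting speed histogram with the Maxwell-Boltzmann density
# df_v/dv = 4*pi*(lambda/(2*pi))**1.5 * v**2 * exp(-lambda*v**2/2).
import numpy as np

rng = np.random.default_rng(0)
lam = 1.0          # arbitrary units
n = 200_000

v_xyz = rng.normal(0.0, 1.0 / np.sqrt(lam), size=(n, 3))
speeds = np.linalg.norm(v_xyz, axis=1)

edges = np.linspace(0, 5, 51)
hist, _ = np.histogram(speeds, bins=edges, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
mb = 4 * np.pi * (lam / (2 * np.pi)) ** 1.5 * centers**2 * np.exp(-lam * centers**2 / 2)

print(np.max(np.abs(hist - mb)))   # small; only sampling noise remains
```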
4.06: Boyle's Law from the Maxwell-Boltzmann Probability Density
In Chapter 2, we derive Boyle’s law using simplifying assumptions. We are now able to do this derivation much more rigorously. We consider the collisions of gas molecules with a small portion of the wall of their container. We suppose that the wall is smooth, so that we can select a small and compact segment of it that is arbitrarily close to being planar. We denote both the segment of the wall and its area as $A$. $A$ can have any shape so long as it is a smooth, flat surface enclosed by a smooth curve.
Let the volume of the container be $V$ and the number of gas molecules in the container be $N$. We imagine that we follow the trajectory of one particular molecule as it moves to hit the wall somewhere within $A$. We begin our observations at time $t=0$ and suppose that the collision occurs at time $t$.
As sketched in Figure 3, we erect a Cartesian coordinate system with its origin at the location in space of the molecule at time $t=0$. We orient the axes of this coordinate system so that the $xy$-plane is parallel to the plane of $A$, and the $z$-axis is pointed toward the wall. Then the unit vector along the $z$-axis and a vector perpendicular to $A$ are parallel to one another. It is convenient to express the velocity of the selected molecule in spherical coordinates. We suppose that, referred to the Cartesian coordinate system we have erected, the velocity vector of the selected molecule is $\left(v^*,{\theta }^*,{\varphi }^*\right)$. The vector $\mathop{v^*}\limits^{\rightharpoonup}t$, drawn from the origin of our Cartesian system to the point of impact on the wall, follows the trajectory of the molecule from time zero to time $t$. The $z$-component of the molecular velocity vector is normal to the plane of $A$ at the point of impact; the magnitude of the $z$-component is $v^*{\mathrm{cos} {\theta }^*\ }$. The perpendicular distance from the plane of $A$ to the $xy$-plane of the Cartesian system is $v^*t{\mathrm{cos} {\theta }^*\ }$.
We assume that the collision is perfectly elastic. Before collision, the velocity component perpendicular to the wall is $v_z=v^*{\mathrm{cos} {\theta }^*\ }$. Afterward, it is $v_z={-v}^*{\mathrm{cos} {\theta }^*\ }$. Only this change in the $v_z$ component contributes to the force on the wall within $A$. (The $v_x$ and $v_y$ components are not changed by the collision.) During the collision, the molecule’s momentum change is ${-2mv}^*{\mathrm{cos} {\theta }^*\ }$. During our period of observation, the average force on the molecule is thus ${\left({-2mv}^*{\mathrm{cos} {\theta }^*\ }\right)}/{t}$. The force that the molecule exerts on the wall is ${\left({2mv}^*{\mathrm{cos} {\theta }^*\ }\right)}/{t}$, and hence the contribution that this particular collision—by one molecule traveling at velocity $v^*$— makes to the pressure on the wall is
$P_1\left(v^*\right)=\frac{2mv^* \mathrm{cos} {\theta }^*\ }{At} \nonumber$
We want to find the pressure on segment $A$ of the wall that results from all possible impacts. To do so, we recognize that any other molecule whose velocity components are $v^*$, ${\theta }^*$, and ${\varphi }^*$, and whose location at time $t=0$ enables it to reach $A$ within time $t$, makes the same contribution to the pressure as the selected molecule does. Let us begin by assuming that the velocities of all N of the molecules in the volume, $V$, are the same as that of the selected molecule. In this case, we can find the number of the molecules in the container that can reach $A$ within time $t$ by considering a tubular segment of the interior of the container. The long axis of this tube is parallel to the velocity vector of the selected molecule. The sides of this tube cut the container wall along the perimeter of $A$. This tube also cuts the $xy$-plane (the $z=0$ plane) of our coordinate system in such a way as to make an exact replica of $A$ in this plane. Call this replica $A^o$.
The area of $A^o$ is $A$; the plane of $A^o$ is parallel to the plane of $A$; and the perpendicular distance between the plane of $A$ and the plane of $A^o$ is $v^*t{\mathrm{cos} {\theta }^*\ }$. The volume of this tube is therefore ${Av}^*t{\mathrm{cos} {\theta }^*\ }$. Since there are ${N}/{V}$ molecules per unit volume, the total number of molecules in the tube is ${\left(ANv^*t{\mathrm{cos} {\theta }^*\ }\right)}/{V}$. When we assume that every molecule has velocity components $v^*$, ${\theta }^*$, and ${\varphi }^*$, all of the molecules in the tube reach $A$ within time $t$, because each of them travels parallel to the selected molecule, and each of them is initially at least as close to $A$ as is the selected molecule. Therefore, each molecule in the tube contributes $P_1\left(v^*\right)={2mv^*{\mathrm{cos} {\theta }^*\ }}/{At}$ to the pressure at $A$. The total pressure is the pressure per molecule multiplied by the number of molecules:
$\left(\frac{2mv^* \mathrm{cos} {\theta }^*\ }{At}\right)\left(\frac{ANv^*t \mathrm{cos} {\theta }^*\ }{V}\right)=\frac{2mN \left(v^* \mathrm{cos} {\theta }^*\right)^2}{V} \nonumber$
However, the molecular velocities are not all the same, and the pressure contribution ${2mN{\left(v^*{\mathrm{cos} {\theta }^*\ }\right)}^2}/{V}$ is made only by that fraction of the molecules whose velocity components lie in the intervals $v^*<v<v^*+dv$, ${\theta }^*<\theta <{\theta }^*+d\theta$, and ${\varphi }^*<\varphi <{\varphi }^*+d\varphi$. This fraction is
$\rho \left(v^*,{\theta }^*,{\varphi }^*\right) \left(v^*\right)^2 \mathrm{sin} {\theta }^*dvd\theta d\varphi = \left(\frac{\lambda }{2\pi }\right)^{3/2} \left(v^*\right)^2\mathrm{exp}\left(\frac{-\lambda \left(v^*\right)^2}{2}\right)\mathrm{sin} {\theta }^*dvd\theta d\varphi \nonumber$
so that the pressure contribution from molecules whose velocity components lie in these ranges is
$dP=\frac{2mN\left(v^* \mathrm{cos} {\theta }^* \right)^2}{V}\times \left(\frac{\lambda }{2\pi }\right)^{3/2} \left(v^*\right)^2\mathrm{exp}\left(\frac{-\lambda \left(v^*\right)^2}{2}\right) \mathrm{sin} {\theta }^*dvd\theta d\varphi \nonumber$
The total pressure at $A$ is just the sum of the contributions from molecules with all possible combinations of velocities $v^*$, ${\theta }^*$, and ${\varphi }^*$. To find this sum, we integrate over all possible velocity vectors. The allowed values of $v$ are $0\le v<\infty$. There are no constraints on the values of $\varphi$; we have $0\le \varphi <2\pi$. However, since all of the impacting molecules must have a velocity component in the positive $z$-direction, the possible values of $\theta$ lie in the interval $0\le \theta <{\pi }/{2}$. We designate the velocity of the original molecule as $\left(v^*,{\theta }^*,{\varphi }^*\right)$ and retain this notation to be as specific as possible in describing the tube bounded by $A$ and $A^o$. However, the velocity components of an arbitrary molecule can have any of the allowed values. To integrate over the allowed values (see Appendix D), we drop the superscripts. The pressure on the wall at $A$ becomes
$P=\frac{2mN}{V} \left(\frac{\lambda }{2\pi }\right)^{3/2}\times \nonumber$
$\int^{\infty }_0 v^4exp\left(\frac{-\lambda v^2}{2}\right)dv \int^{\pi /2}_0 \mathrm{cos}^2 \theta \ \mathrm{sin} \theta \ d\theta \int^{2\pi }_0 d\varphi \nonumber$
$=\frac{2mN}{V} \left(\frac{\lambda }{2\pi }\right)^{3/2}\left[ \frac{3}{8} \left(\frac{2}{\lambda }\right)^2 \left(\frac{2\pi }{\lambda }\right)^{1/2}\right]\left[\frac{1}{3}\right]\left[2\pi \right]=\frac{mN}{V\lambda } \nonumber$
and the pressure–volume product becomes
$PV=\frac{mN}{\lambda } \nonumber$
Since $m,$ $N$, and $\lambda$ are constants, this is Boyle’s law. Equating this pressure–volume product to that given by the ideal gas equation, we have ${mN}/{\lambda }=NkT$ so that
$\lambda =\frac{m}{kT} \nonumber$
Finally, the Maxwell-Boltzmann equation becomes
$\frac{df_v\left(v\right)}{dv}=4\pi \left(\frac{m}{2\pi kT}\right)^{3/2}v^2\mathrm{exp}\left(\frac{-mv^2}{2kT}\right) \nonumber$
and the probability density becomes
$\rho \left(v,\theta ,\varphi \right)= \left(\frac{m}{2\pi kT}\right)^{3/2}\mathrm{exp}\left(\frac{-mv^2}{2kT}\right) \nonumber$
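As a numerical cross-check (our addition), the sketch below evaluates the three integrals in the pressure calculation with $\lambda ={m}/{kT}$ and confirms that they reproduce the ideal-gas pressure $NkT/V$. The values used for the gas (nitrogen at 300 K, $10^{23}$ molecules in one cubic metre) are illustrative assumptions only.

```python
# Sketch: with lambda = m/kT, the triple integral for the pressure reduces to N*k*T/V.
import math
from scipy.integrate import quad

k = 1.380649e-23              # Boltzmann constant, J/K
T = 300.0
m = 28.014 * 1.66054e-27      # mass of N2, kg (illustrative)
N, V = 1.0e23, 1.0            # number of molecules and volume, m^3 (illustrative)
lam = m / (k * T)

v_upper = 20 / math.sqrt(lam)   # integrate far into the exponential tail
I_v, _ = quad(lambda v: v**4 * math.exp(-lam * v**2 / 2), 0, v_upper)
I_theta, _ = quad(lambda t: math.cos(t) ** 2 * math.sin(t), 0, math.pi / 2)
I_phi = 2 * math.pi

P = (2 * m * N / V) * (lam / (2 * math.pi)) ** 1.5 * I_v * I_theta * I_phi
print(P, N * k * T / V)       # both ~4.1e2 Pa: the integral reproduces the ideal-gas pressure
```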
This derivation can be recast as a computation of the expected value of the pressure. To do so, we rephrase our description of the system: A molecule whose velocity components are $\left(v^*,{\theta }^*,{\varphi }^*\right)$ creates a pressure ${2mv^*{\mathrm{cos} {\theta }^*\ }}/{At}$ on the area $A$ with a probability of $Av^*t\mathrm{cos} {\theta }^*/{V}$. (The latter term is the probability that a molecule, whose velocity is $\left(v^*,{\theta }^*,{\varphi }^*\right)$, is, at time $t=0$, in a location from which it can reach $A$ within time $t.$ If the molecule is to hit the wall within time $t$, at time $t=0$ the molecule must be within the tubular segment whose volume is $Av^*t\mathrm{cos} {\theta }^*$. The probability that the molecule is within this tubular segment is equal to the fraction of the total volume that this segment occupies.) Therefore, the product
$\left(\frac{2mv^*\mathrm{cos} {\theta }^*}{At}\right)\left(\frac{Av^*t \mathrm{cos} {\theta }^*}{V}\right)=\frac{2m}{V} \left(v^* \mathrm{cos} {\theta }^* \right)^2 \nonumber$
is the pressure contribution of a molecule with velocity $\left(v^*,{\theta }^*,{\varphi }^*\right)$, when ${\theta }^*$ is in the interval $0\le {\theta }^*<{\pi }/{2}$. The total pressure per molecule is the expected value of this pressure contribution; the expected value is the integral, over the entire volume of velocity space, of the pressure contribution times the probability density function for velocities.
It is useful to view the Maxwell-Boltzmann equation as the product of a term
$\mathrm{exp}\left({-mv^2}/{2kT}\right) \nonumber$
—called the Boltzmann factor—and a pre-exponential term that is proportional to the number of ways that a molecule can have a given velocity, $v$. If there were no constraints on a molecule’s speed, we would expect that the number of molecules with speeds between $v$ and $v+dv$ would increase as $v$ increases, because the probability that a molecule has a speed between $v$ and $v+dv$ is proportional to the volume in velocity space of a spherical shell of thickness $dv$. The volume of a spherical shell of thickness $dv$ is $4\pi v^2dv$, which increases as the square of $v$. However, the number of molecules with large values of $v$ is constrained by the conservation of energy. Since the total energy of a collection of molecules is limited, only a small proportion of the molecules can have very large velocities. The Boltzmann factor introduces this constraint. A molecule whose mass is $m$ and whose scalar velocity is $v$ has kinetic energy $\epsilon ={mv^2}/{2}$. The Boltzmann factor is often written as $\mathrm{exp}\left({-\epsilon }/{kT}\right)$.
4.07: Experimental Test of the Maxwell-Boltzmann Probability Density
There are numerous applications of the Maxwell-Boltzmann equation. These include predictions of collision frequencies, mean-free paths, effusion and diffusion rates, the thermal conductivity of gases, and gas viscosities. These applications are important, but none of them is a direct test of the validity of the Maxwell-Boltzmann equation.
The validity of the equation has been demonstrated directly in experiments in which a gas of metal atoms is produced in an oven at a very high temperature. As sketched in Figure 4, the gas is allowed to escape into a vacuum chamber through a very small hole in the side of the oven. The escaping atoms impinge on one or more metal plates. Narrow slits cut in these plates stop any metal atoms whose flight paths do not pass through the slits. This produces a beam of metal atoms whose velocity distribution is the same as that of the metal-atom gas inside the oven. The rate at which metal atoms arrive at a detector is measured. Various methods are used to translate the atom-arrival rate into a measurement of their speed.
One device uses a solid cylindrical drum, which rotates on its cylindrical axis. As sketched in Figure 5, a spiral groove is cut into the cylindrical face of this drum. This groove is cut with a constant pitch. When the drum rotates at a constant rate, an atom traveling at a constant velocity parallel to the cylindrical axis can traverse the length of the drum while remaining within the groove. That is, for a given rotation rate, there is one critical velocity at which an atom can travel in a straight line while remaining in the middle of the groove all the way from one end of the drum to the other. If the atom moves significantly faster or slower than this critical velocity, it collides with—and sticks to—one side or the other of the groove.
Since the groove has a finite width, atoms whose velocities lie in a narrow range about the critical velocity can traverse the groove without hitting one of the sides.
Let us assume that the groove is cut so that the spiral travels halfway around the cylinder. That is, if we project the spiral onto one of the circular faces of the drum, the projection traverses an angle of ${180}^o$ on the face. In order to remain in the middle of this groove all the way from one end of the drum to the other, the atom must travel the length of the cylindrical drum in exactly the same time that it takes the drum to make a half-rotation. Let the critical velocity be $v_{critical}$. Then the time required for the atom to traverse the length, $d$, of the drum is ${d}/{v_{critical}}$. If the drum rotates at $u$ cycles per second, the time required for the drum to make one-half rotation is ${1}/{2u}$. Thus, the atom will remain in the middle of the groove all the way through the drum if
$v_{critical}=2ud \nonumber$
By varying the rotation rate, we can vary the critical velocity.
Because the groove has a finite width, atoms whose velocities are in a range $v_{min}<v<v_{max}$ can successfully traverse the groove. Whether or not a particular atom can do so depends on its velocity, where it enters the groove, and the width of the groove. Let the width of the groove be $w$ and the radius of the drum be $r$, where the drum is constructed with $r\gg w$. A slower atom that enters the groove at the earliest possible time—when the leading edge of the groove first encounters the beam of atoms—can traverse the length of the groove in a longer time, $t_{max}$. A point on the circumference of the drum travels with speed $2\pi ru$. The slowest atom traverses the length of the drum while a point on the circumference of the drum travels a distance $\pi r+w$. (To intercept the slowest atom, the trailing edge of the groove must travel a distance equal to half the circumference of the drum, $\pi r$, plus the width of the groove, $w$.) The time required for this rotation is the maximum time a particle can take to traverse the length, so
$t_{max}=\left(\pi r+w\right)/\left(2\pi ru\right) \nonumber$ and $v_{min}=d/t_{max}=2\pi rud/\left(\pi r+w\right) \nonumber$
A fast atom that enters the groove at the last possible moment—when the trailing edge of the grove just leaves the beam of atoms—can still traverse the groove if it does so in the time, $t_{min}$ that it takes the trailing edge of the groove to travel a distance $\pi r-w$. So,
$t_{min}=\left(\pi r-w\right)/\left(2\pi ru\right) \nonumber$ and $v_{max}=d/t_{min}=2\pi rud/\left(\pi r-w\right) \nonumber$
At a given rotation rate, the drum will pass atoms whose speeds are in the range
$\Delta v=v_{max}-v_{min}=2ud\left(\frac{\pi r}{\pi r-w}-\frac{\pi r}{\pi r+w}\right)=2ud\left(\frac{2\pi rw}{\left(\pi r\right)^2-w^2}\right)\approx v_{critical}\left(\frac{2w}{\pi r}\right) \nonumber$
so that $\frac{\Delta v}{v_{critical}}\approx \frac{2w}{\pi r} \nonumber$
The fraction of the incident atoms that successfully traverse the groove is equal to the fraction that have velocities in the interval $\Delta v$ centered on the critical velocity, $v_{critical}=2ud$.
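To make the selector concrete, the sketch below (our addition) works out $v_{critical}$ and the pass band for one assumed set of drum dimensions and rotation rate; none of these numbers comes from the text.

```python
# Sketch with assumed apparatus dimensions: selected speed and pass band of the drum.
import math

d = 0.20       # drum length, m            (assumed)
r = 0.05       # drum radius, m            (assumed)
w = 0.001      # groove width, m           (assumed)
u = 1200.0     # rotation rate, cycles/s   (assumed)

v_crit = 2 * u * d
v_min = 2 * math.pi * r * u * d / (math.pi * r + w)
v_max = 2 * math.pi * r * u * d / (math.pi * r - w)

print(v_crit)                      # 480 m/s
print(v_max - v_min)               # width of the pass band, ~6 m/s
print((v_max - v_min) / v_crit)    # ~2*w/(pi*r) ~ 0.013
```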
4.08: Statistics for Molecular Speeds
Expected values for several quantities can be calculated from the Maxwell-Boltzmann probability density function. The required definite integrals are tabulated in Appendix D.
The most probable speed, $v_{mp}$, is the speed at which the Maxwell-Boltzmann equation takes on its maximum value. At this speed, we have
\begin{align*} 0&=\frac{d}{dv}\left(\frac{df\left(v\right)}{dv}\right)=\frac{d}{dv}\left[4\pi {\left(\frac{m}{2\pi kT}\right)}^{3/2}v^2\exp\left(\frac{-mv^2}{2kT}\right)\right] \[4pt] &=\left[4\pi {\left(\frac{m}{2\pi kT}\right)}^{3/2}\exp\left(\frac{-mv^2}{2kT}\right)\right]\left[2v-\frac{mv^3}{kT}\right] \end{align*}
from which
$v_{mp}=\sqrt{\frac{2kT}{m}}\approx 1.414\sqrt{\frac{kT}{m}} \nonumber$
The average speed, $\overline{v}$ or $\left\langle v\right\rangle$, is the expected value of the scalar velocity ($g\left(v\right)=v$). We find
$\overline{v}=\left\langle v\right\rangle =\int^{\infty }_0{4\pi {\left(\frac{m}{2\pi kT}\right)}^{3/2}v^3exp\left(\frac{-mv^2}{2kT}\right)}dv=\sqrt{\frac{8kT}{\pi m}}\approx 1.596\sqrt{\frac{kT}{m}} \nonumber$
The mean-square speed, $\overline{v^2}$ or $\left\langle v^2\right\rangle$, is the expected value of the velocity squared ($g\left(v\right)=v^2$):
$\overline{v^2}=\left\langle v^2\right\rangle =\int^{\infty }_0{4\pi {\left(\frac{m}{2\pi kT}\right)}^{3/2}v^4exp\left(\frac{-mv^2}{2kT}\right)}dv=\frac{3kT}{m} \nonumber$
and the root mean-square speed, $v_{rms}$, is
$v_{rms}=\sqrt{\left\langle v^2\right\rangle }=\sqrt{\frac{3kT}{m}}\approx 1.732\sqrt{\frac{kT}{m}} \nonumber$
Figure 6 shows the distribution of molecular speeds for nitrogen molecules at 300 K.
Finally, let us find the variance of the velocity; that is, the expected value of $\left(v-\left\langle v\right\rangle \right)^2$:
${\text { variance }(v)=\sigma_{v}^{2}}$
\begin{align*} &=\int_{0}^{\infty}(v-\langle v\rangle)^{2}\left(\frac{d f(v)}{d v}\right) d v \ &=\int_{0}^{\infty} v^{2}\left(\frac{d f}{d v}\right) d v-2\langle v\rangle \int_{0}^{\infty} v\left(\frac{d f}{d v}\right) d v+\langle v\rangle^{2} \int_{0}^{\infty}\left(\frac{d f}{d v}\right) d v \ &=\left\langle v^{2}\right\rangle- 2\langle v\rangle\langle v\rangle+\langle v\rangle^{2} \ &=\left\langle v^{2}\right\rangle-\langle v\rangle^{2} \end{align*}
For $N_2$ at $300$ K, we calculate:
$v_{mp}\ =422\ \mathrm{m\ }{\mathrm{s}}^{\mathrm{-1}} \nonumber$
$\left\langle v\right\rangle =\overline{v}=476\ \mathrm{m\ }{\mathrm{s}}^{\mathrm{-1}} \nonumber$
$v_{rms}=517\ \mathrm{m\ }{\mathrm{s}}^{\mathrm{-1}} \nonumber$
$\text{Variance} \left(v\right)=\sigma^2_v=4.04\times {10}^{4}\ {\mathrm{m}}^{\mathrm{2}}\ {\mathrm{s}}^{\mathrm{-2}} \nonumber$
$\sigma_v=201\ \mathrm{m\ }{\mathrm{s}}^{\mathrm{-1}} \nonumber$
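These numbers are easy to reproduce; the sketch below (our addition) evaluates the formulas of this section for nitrogen at 300 K.

```python
# Sketch: most probable, mean, and root-mean-square speeds, and the variance, for N2 at 300 K.
import math

k = 1.380649e-23          # Boltzmann constant, J/K
m = 28.014 * 1.66054e-27  # mass of N2, kg
T = 300.0

v_mp  = math.sqrt(2 * k * T / m)
v_avg = math.sqrt(8 * k * T / (math.pi * m))
v_rms = math.sqrt(3 * k * T / m)
var_v = 3 * k * T / m - 8 * k * T / (math.pi * m)   # <v^2> - <v>^2

print(round(v_mp), round(v_avg), round(v_rms))      # 422 476 517 (m/s)
print(var_v, math.sqrt(var_v))                      # ~4.0e4 m^2/s^2 and ~201 m/s
```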
4.09: Pressure Variations for Macroscopic Samples
At $300$ K, the standard deviation of $N_2$ speeds is about 40% of the average speed. Clearly the relative variation among molecular speeds in a sample of ordinary gas is very large. Why do we not observe macroscopic effects from this variation? In particular, if we measure the pressure at a small area of the container wall, why do we not observe pressure variations that reflect the wide variety of speeds with which molecules strike the wall?
Qualitatively, the answer is obvious. A single molecule whose scalar velocity is $v$ contributes $P_1\left(v\right)={mv^2}/{3V}$ to the pressure on the walls of its container. (See problem 20.) When we measure pressure, we measure an average squared velocity. Even if we measure the pressure over a very small area and a very short time, the number of molecules striking the wall during the time of the measurement is very large. Consequently, the average speed of the molecules hitting the wall during any one such measurement is very close to the average speed in any other such measurement.
We are now able to treat this question quantitatively. For $N_2$ gas at $300$ K and $1$ bar, roughly $\mathrm{3\times }{\mathrm{10}}^{\mathrm{15}}$ molecules collide with a square millimeter of wall every microsecond. (See problem 12.) The standard deviation of the velocity of an $N_2$ molecule is $201\ \mathrm{m\ }{\mathrm{s}}^{\mathrm{-1}}$. Using the central limit theorem, the standard deviation of the average of $3\times {10}^{15}$ molecular speeds is
$\frac{201\ \mathrm{m\ s^{-1}}}{\sqrt{3 \times 10^{15}}} \approx 4 \times 10^{-6}\ \mathrm{m\ s^{-1}} \nonumber$
The distribution of the average of $3\times {10}^{15}$ molecular speeds is very narrow indeed.
Similarly, when molecular velocities follow the Maxwell-Boltzmann distribution function, we can show that the expected value of the pressure for a single-molecule collision is $\left\langle P_1\left(v\right)\right\rangle =kT/V$. (See problem 21.) The variance of the distribution of these individual pressure measurements is $\sigma^2_{P_1\left(v\right)}={2k^2T^2}/{3V^2}$, so that the magnitude of the standard deviation is comparable to that of the average:
$\sigma_{P_{1}(v)} /\left\langle P_{1}(v)\right\rangle=\sqrt{2 / 3} \nonumber$
For the distribution of averages of $3\times {10}^{15}$ pressure contributions, we find
\begin{aligned} P_{a v g} &=\left\langle P_{1}(v)\right\rangle \ &=\sqrt{3 / 2} \sigma_{P_{1}(v)} \end{aligned} \nonumber
$\sigma_{avg}=\dfrac{\sigma_{P_1\left(v\right)}}{\sqrt{3\times 10^{15}}} \nonumber$
and
$\frac{\sigma_{avg}}{P_{avg}}\approx 1.5\times {10}^{-8} \nonumber$
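The arithmetic behind this estimate is short; the sketch below (our addition) repeats it.

```python
# Sketch: relative fluctuation in the average of n single-molecule pressure
# contributions, using sigma_P1/<P1> = sqrt(2/3) and the central limit theorem.
import math

n = 3e15                         # collisions per measurement (from the text)
rel_single = math.sqrt(2 / 3)    # sigma_P1 / <P1>
rel_avg = rel_single / math.sqrt(n)

print(rel_avg)                   # ~1.5e-8, matching the estimate above
```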
4.10: Collisions between Gas Molecules; Relative Velocity Coordinates
The pressure of a gas depends on the frequency with which molecules collide with the wall of their container. The rate at which gas molecules escape through a very small opening in their container is called the effusion rate. The effusion rate also depends on the frequency of collisions with the wall. (See problem 4.12.) Other gas properties depend not on the rate of collision with the wall, but on the rate with which gas molecules collide with one another. We turn now to some of these properties. For these considerations, we need to describe the motion of one molecule relative to another. We need the probability density function for the relative velocity of two particles.
To describe the relative velocity of two particles, we introduce relative velocity coordinates. Let us begin by considering a Cartesian coordinate frame, with $x$-, $y$-, and $z$-axes, whose origin is at a point $O$; we will use $Oxyz$ to designate this set of axes. We specify the location of particle $1$ by the vector ${\mathop{r}\limits^{\rightharpoonup}}_1=\left(x_1,y_1,z_1\right)=x_1\mathop{i}\limits^{\rightharpoonup}+y_1\mathop{j}\limits^{\rightharpoonup}+z_1\mathop{k}\limits^{\rightharpoonup}$ and that of particle $2$ by ${\mathop{r}\limits^{\rightharpoonup}}_2=\left(x_2,y_2,z_2\right)$. We let the location of the center of mass of this two-particle system be specified by ${\mathop{r}\limits^{\rightharpoonup}}_0=\left(x_0,y_0,z_0\right)$. The vector from particle $1$ to particle $2$, ${\mathop{r}\limits^{\rightharpoonup}}_{12}=\left(x_{12},y_{12},z_{12}\right)$, is the vector difference
${\mathop{r}\limits^{\rightharpoonup}}_{12}={\mathop{r}\limits^{\rightharpoonup}}_2-{\mathop{r}\limits^{\rightharpoonup}}_1=\left(x_2-x_1,y_2-y_1,z_2-z_1\right)\nonumber$
When the particles are moving, these vectors and their components are functions of time. Using the notation ${\dot{x}}_1={dx_1}/{dt}$, we can specify the velocity of particle $1$, for example, as ${\mathop{v}\limits^{\rightharpoonup}}_1={d{\mathop{r}\limits^{\rightharpoonup}}_1}/{dt}=\left({\dot{x}}_1,{\dot{y}}_1,{\dot{z}}_1\right)$. Our goal is to find the relative velocity vector, ${\mathop{v}\limits^{\rightharpoonup}}_{12}={d{\mathop{r}\limits^{\rightharpoonup}}_{12}}/{dt}$. We call the components of ${\mathop{v}\limits^{\rightharpoonup}}_{12}$ the relative velocity coordinates.
The essential idea underlying relative velocity coordinates is that the vectors ${\mathop{r}\limits^{\rightharpoonup}}_0$ and ${\mathop{r}\limits^{\rightharpoonup}}_{12}$ contain the same information as the vectors ${\mathop{r}\limits^{\rightharpoonup}}_1$ and ${\mathop{r}\limits^{\rightharpoonup}}_2$. This is equivalent to saying that we can transform the locations as specified by $\left(x_1,y_1,z_1\right)$ and $\left(x_2,y_2,z_2\right)$ to the same locations as specified by $\left(x_0,y_0,z_0\right)$ and $\left(x_{12},y_{12},z_{12}\right)$, and vice versa. To accomplish this, we write the equation defining the $x$-component of the center of mass, $x_0$:
$m_1\left(x_1-x_0\right)+m_2\left(x_2-x_0\right)=0\nonumber$
which we rearrange to
$\frac{x_1}{m_2}+\frac{x_2}{m_1}=\left(\frac{1}{m_1}+\frac{1}{m_2}\right)x_0\nonumber$
Corresponding relationships can be written for the $y$- and $z$-components. It proves to be useful to introduce the reduced mass, $\mu$, defined by
$\frac{1}\mu=\frac{1}{m_1}+\frac{1}{m_2}\nonumber$
Using the reduced mass, we can express the coordinates of the center of mass in terms of the coordinates of the individual particles. That is,
$x_0=\left(\frac\mu{m_2}\right)x_1+\left(\frac\mu{m_1}\right)x_2\nonumber$
$y_0=\left(\frac\mu{m_2}\right)y_1+\left(\frac\mu{m_1}\right)y_2\nonumber$
$z_0=\left(\frac\mu{m_2}\right)z_1+\left(\frac\mu{m_1}\right)z_2\nonumber$
Since, by definition, we also have
$x_{12}=x_2-x_1\nonumber$ $y_{12}=y_2-y_1\nonumber$ $z_{12}=z_2-z_1\nonumber$
we have developed the transformation from $\left(x_0,y_0,z_0\right)$ and $\left(x_{12},y_{12},z_{12}\right)$ to $\left(x_1,y_1,z_1\right)$ and $\left(x_2,y_2,z_2\right)$. The inverse transformation is readily found to be
$x_1=x_0-\left(\mu/{m_1}\right)x_{12}\nonumber$
$y_1=y_0-\left(\mu/{m_1}\right)y_{12}\nonumber$
$z_1=z_0-\left(\mu/{m_1}\right)z_{12}\nonumber$
$x_2=x_0+\left(\mu/{m_2}\right)x_{12}\nonumber$
$y_2=y_0+\left(\mu/{m_2}\right)y_{12}\nonumber$
$z_2=z_0+\left(\mu/{m_2}\right)z_{12}\nonumber$
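A small numerical round trip (our addition) confirms that the transformation and its inverse are consistent; the masses and positions used are arbitrary.

```python
# Sketch: center-of-mass/relative coordinates and back again.
import numpy as np

m1, m2 = 2.0, 5.0                       # arbitrary masses
mu = 1.0 / (1.0 / m1 + 1.0 / m2)        # reduced mass

r1 = np.array([0.3, -1.2, 0.7])         # arbitrary positions of particles 1 and 2
r2 = np.array([1.0, 0.4, -0.5])

r0 = (mu / m2) * r1 + (mu / m1) * r2    # center of mass
r12 = r2 - r1                           # relative coordinate

r1_back = r0 - (mu / m1) * r12          # inverse transformation
r2_back = r0 + (mu / m2) * r12

print(np.allclose(r1, r1_back), np.allclose(r2, r2_back))   # True True
```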
Now we can create two new Cartesian coordinate frames. Which of these is more useful depends on the objective of the particular analysis we have at hand. We call the first one the center of mass frame, $O_Ox^{'}y^{'}z^{'}$. It is sketched in Figure 7. The $x{'}$-, $y{'}$-, and $z{'}$-axes of $O_Ox^{'}y^{'}z^{'}$ are parallel to the corresponding axes of $Oxyz$, but their origin, $O_O$, is always at the point occupied by the center of mass of the two-particle system. In this reference frame, the coordinates of particles $1$ and $2$ are their displacements from the center of mass:
$x^{'}_1=x_1-x_0=-\left(\mu/{m_1}\right)x_{12}\nonumber$
$y^{'}_1=y_1-y_0=-\left(\mu/{m_1}\right)y_{12}\nonumber$
$z^{'}_1=z_1-z_0=-\left(\mu/{m_1}\right)z_{12}\nonumber$
$x^{'}_2=x_2-x_0=\left(\mu/{m_2}\right)x_{12}\nonumber$
$y^{'}_2=y_2-y_0=\left(\mu/{m_2}\right)y_{12}\nonumber$
$z^{'}_2=z_2-z_0=\left(\mu/{m_2}\right)z_{12}\nonumber$
The center of mass frame is particularly useful for analyzing interactions between colliding particles.
For our purposes, a third Cartesian coordinate frame, which we will denote the $O_1x^{''}y^{''}z^{''}$ frame, is more useful. It is sketched in Figure 8. The $x^{''}$-, $y^{''}$-, and $z^{''}$-axes of $O_1x^{''}y^{''}z^{''}$ are parallel to the corresponding axes of $Oxyz$, but their origin, $O_1$, is always at the point occupied by particle 1. In this reference frame, the coordinates of particles 1 and 2 are
$x^{''}_1=0\nonumber$
$y^{''}_1=0\nonumber$
$z^{''}_1=0\nonumber$
$x^{''}_2=x_2-x_1=x_{12}\nonumber$
$y^{''}_2=y_2-y_1=y_{12}\nonumber$
$z^{''}_2=z_2-z_1=z_{12}\nonumber$
and the coordinates of the center of mass are
$x^{''}_0=x_0-x_1={\mu x_{12}}/{m_1}\nonumber$
$y^{''}_0=y_0-y_1={\mu y_{12}}/{m_1}\nonumber$
$z^{''}_0=z_0-z_1={\mu z_{12}}/{m_1}\nonumber$
The $O_1x^{''}y^{''}z^{''}$ frame is sometimes called the center of mass frame also. To avoid confusion, we call $O_1x^{''}y^{''}z^{''}$ the particle-one centered frame. In the particle-one centered frame, particle $1$ is stationary at the origin. With its tail at the origin, the vector ${\mathop{r}\limits^{\rightharpoonup}}_{12}=\left(x_{12},y_{12},z_{12}\right)$ specifies the position of particle $2$.
We are interested in the relative velocity of particles $1$ and $2$. The velocity components for particles $1$ and $2$, and for their relative velocity, are obtained by finding the time-derivatives of the corresponding displacement components. Since the transformations of the displacement coordinates are linear, the velocity components transform from one reference frame to another in exactly the same way that the displacement components do. We have
${\mathop{v}\limits^{\rightharpoonup}}_0={d{\mathop{r}\limits^{\rightharpoonup}}_0}/{dt}=\left({\dot{x}}_0,{\dot{y}}_0,{\dot{z}}_0\right)\nonumber$
and
${\mathop{v}\limits^{\rightharpoonup}}_{12}={d{\mathop{r}\limits^{\rightharpoonup}}_{12}}/{dt}=\left({\dot{x}}_{12},{\dot{y}}_{12},{\dot{z}}_{12}\right)\nonumber$
The vector ${\mathop{v}\limits^{\rightharpoonup}}_{12}$ specifies the velocity of particle $2$, relative to a stationary particle $1$. Just as ${\mathop{r}\limits^{\rightharpoonup}}_0$ and ${\mathop{r}\limits^{\rightharpoonup}}_{12}$ contain the same information as the vectors ${\mathop{r}\limits^{\rightharpoonup}}_1$ and ${\mathop{r}\limits^{\rightharpoonup}}_2$, the vectors ${\mathop{v}\limits^{\rightharpoonup}}_0$ and ${\mathop{v}\limits^{\rightharpoonup}}_{12}$ contain the same information as ${\mathop{v}\limits^{\rightharpoonup}}_1$ and ${\mathop{v}\limits^{\rightharpoonup}}_2$. Since a parallel displacement leaves a vector unchanged, each of these vectors is the same in any of the three reference frames. In §11, we find the probability density function for the magnitude of the scalar relative velocity, $v_{12}=\left|{\mathop{v}\limits^{\rightharpoonup}}_{12}\right|$. Since the probability is independent of direction, the probability that two molecules have relative velocity ${\mathop{v}\limits^{\rightharpoonup}}_{12}$ is the same as the probability that they have relative velocity ${-\mathop{v}\limits^{\rightharpoonup}}_{12}$. (In spherical coordinates, if ${\mathop{v}\limits^{\rightharpoonup}}_{12}=\left(v_{12},\theta ,\varphi \right)$, then $-{\mathop{v}\limits^{\rightharpoonup}}_{12}=\left(v_{12},\pi -\theta ,\varphi +\pi \right)$.) The probability and magnitude of the relative velocity are independent of which particle—if either—we choose to view as being stationary; they are independent of whether the particles are approaching or receding from one another.
4.11: The Probability Density Function for the Relative Velocity
From our development of the Maxwell-Boltzmann probability density functions, we can express the probability that the velocity components of particle 1 lie in the intervals $v_{1x}$ to $v_{1x}+dv_{1x}$; $v_{1y}$ to $v_{1y}+dv_{1y}$; $v_{1z}$ to $v_{1z}+dv_{1z}$; while those of particle 2 simultaneously lie in the intervals $v_{2x}$ to $v_{2x}+dv_{2x}$; $v_{2y}$ to $v_{2y}+dv_{2y}$; $v_{2z}$ to $v_{2z}+dv_{2z}$ as
$\left(\frac{df\left(v_{1x}\right)}{dv_{1x}}\right)\left(\frac{df\left(v_{1y}\right)}{dv_{1y}}\right)\left(\frac{df\left(v_{1z}\right)}{dv_{1z}}\right)\left(\frac{df\left(v_{2x}\right)}{dv_{2x}}\right)\left(\frac{df\left(v_{2y}\right)}{dv_{2y}}\right)\left(\frac{df\left(v_{2z}\right)}{dv_{2z}}\right) \nonumber$
$\times dv_{1x}dv_{1y}dv_{1z}dv_{2x}dv_{2y}dv_{2z} \nonumber$
$=\left(\frac{df\left(v_1\right)}{dv_1}\right)\left(\frac{df\left(v_2\right)}{dv_2}\right)dv_1dv_2 \nonumber$
We want to express this probability using the relative velocity coordinates. Since the velocity of the center of mass and the relative velocity are independent, we might expect that the Jacobian of this transformation is just the product of the two individual Jacobians. This turns out to be the case. The Jacobian of the transformation
$\left({\dot{x}}_1,{\dot{y}}_1,{\dot{z}}_1,{\dot{x}}_2,{\dot{y}}_2,{\dot{z}}_2\right)\to \left({\dot{x}}_0,{\dot{y}}_0,{\dot{z}}_0,{\dot{x}}_{12},{\dot{y}}_{12},{\dot{z}}_{12}\right) \nonumber$
is a six-by-six determinant. It is messy, but straightforward, to show that it is equal to the product of two three-by-three determinants and that the absolute value of this product is one. Therefore, we have
\begin{align*} dv_{1x}dv_{1y}dv_{1z}dv_{2x}dv_{2y}dv_{2z} =& d{\dot{x}}_1d{\dot{y}}_1d{\dot{z}}_1d{\dot{x}}_2d{\dot{y}}_2d{\dot{z}}_2 \[4pt] &=d{\dot{x}}_0d{\dot{y}}_0d{\dot{z}}_0d{\dot{x}}_{12}d{\dot{y}}_{12}d{\dot{z}}_{12} \end{align*}
We transform the probability density by substituting into the one-dimensional probability density functions. That is,
\begin{align*} \left(\frac{df\left(v_1\right)}{dv_1}\right)\left(\frac{df\left(v_2\right)}{dv_2}\right) &={\left(\frac{m_1}{2\pi kT}\right)}^{3/2} \mathrm{exp}\left(\frac{-m_1\left(v^2_{1x}+v^2_{1y}+v^2_{1z}\right)}{2kT}\right)\times {\left(\frac{m_2}{2\pi kT}\right)}^{3/2} \mathrm{exp}\left(\frac{-m_2\left(v^2_{2x}+v^2_{2y}+v^2_{2z}\right)}{2kT}\right) \[4pt] &={\left(\frac{m_1m_2}{4\pi^2k^2T^2}\right)}^{3/2}\times \mathrm{exp}\left(\frac{-m_1\left(v^2_{1x}+v^2_{1y}+v^2_{1z}\right)-m_2\left(v^2_{2x}+v^2_{2y}+v^2_{2z}\right)}{2kT}\right) \[4pt] &= \left(\frac{m_1m_2}{4\pi^2k^2T^2}\right)^{3/2} \times \mathrm{exp}\left(\frac{-\frac{m_1m_2}{\mu}\left(\dot{x}^2_0+\dot{y}^2_0+\dot{z}^2_0\right)-\mu \left(\dot{x}^2_{12} + \dot{y}^2_{12}+\dot{z}^2_{12}\right)} {2kT} \right) \end{align*}
where the last expression specifies the probability density as a function of the relative velocity coordinates.
Next, we make a further transformation of variables. We convert the velocity of the center of mass, $\left({\dot{x}}_0,{\dot{y}}_0,{\dot{z}}_0\right)$, and the relative velocity,$\ \left({\dot{x}}_{12},{\dot{y}}_{12},{\dot{z}}_{12}\right)$, from Cartesian coordinates to spherical coordinates, referred to the $Oxyz$ axis system. (The motion of the center of mass is most readily visualized in the original frame $Oxyz$. The relative motion, ${\mathop{v}\limits^{\rightharpoonup}}_{12}$, is most readily visualized in the Particle-One Centered Frame,$\ O_1x^{''}y^{''}z^{''}$. In $O_1x^{''}y^{''}z^{''}$, the motion of particle 2 is specified by ${\dot{x}}^{''}_2={\dot{x}}_{12}$, ${\dot{y}}^{''}_2={\dot{y}}_{12}$, and ${\dot{z}}^{''}_2={\dot{z}}_{12}.$ The motion of the center of mass is specified by ${\dot{x}}^{''}_0={\mu {\dot{x}}_{12}}/{m_1}$, ${\dot{y}}^{''}_0={\mu {\dot{y}}_{12}}/{m_1}$, and ${\dot{z}}^{''}_0={\mu {\dot{z}}_{12}}/{m_1}$. Since it is the relative motion that is actually of interest, it might seem that we should refer the spherical coordinates to the $O_1x^{''}y^{''}z^{''}$ frame. This is an unnecessary distinction because all three coordinate frames are parallel to one another, and ${\mathop{r}\limits^{\rightharpoonup}}_0$ and ${\mathop{r}\limits^{\rightharpoonup}}_{12}$ are the same vectors in all three frames.) Letting
$v^2_0={\dot{x}}^2_0+{\dot{y}}^2_0+{\dot{z}}^2_0 \nonumber$
$v^2_{12}={\dot{x}}^2_{12}+{\dot{y}}^2_{12}+{\dot{z}}^2_{12} \nonumber$
the Cartesian velocity components are expressed in spherical coordinates by
${\dot{x}}_0=v_0{ \sin \theta_0\ }{\mathrm{cos} {\varphi }_0\ } \nonumber$
${\dot{y}}_0=v_0{ \sin \theta_0\ }{ \sin {\varphi }_0\ } \nonumber$
${\dot{z}}_0=v_0{\mathrm{cos} \theta_0\ } \nonumber$
${\dot{x}}_{12}=v_{12}{ \sin \theta_{12}\ }{\mathrm{cos} {\varphi }_{12}\ } \nonumber$
${\dot{y}}_{12}=v_{12}{ \sin \theta_{12}\ }{ \sin {\varphi }_{12}\ } \nonumber$
${\dot{z}}_{12}=v_{12}{\mathrm{cos} \theta_{12}\ } \nonumber$
The angles $\theta_0$, $\theta_{12}$, ${\varphi }_0$, and ${\varphi }_{12}$ are defined in the usual manner relative to the $Oxyz$ axis system. The Jacobian of this transformation is a six-by-six determinant, which can again be converted to the product of two three-by-three determinants. We find
${d\dot{x}}_0{d\dot{y}}_0{d\dot{z}}_0{d\dot{x}}_{12}{d\dot{y}}_{12}{d\dot{z}}_{12}=v^2_0{ \sin \theta_0\ }dv_0d\theta_0d{\varphi }_0v^2_{12}{ \sin \theta_{12}\ }dv_{12}d\theta_{12}d{\varphi }_{12} \nonumber$
The probability that the components of the velocity of the center of mass lie in the intervals $v_0$ to $v_0+dv_0$; $\theta_0$ to $\theta_0+d\theta_0$; ${\varphi }_0$ to ${\varphi }_0+d{\varphi }_0$; while the components of the relative velocity lie in the intervals $v_{12}$ to $v_{12}+dv_{12}$; $\theta_{12}$ to $\theta_{12}+d\theta_{12}$; ${\varphi }_{12}$ to ${\varphi }_{12}+d{\varphi }_{12}$; becomes
${\left(\frac{m_1m_2}{4\pi^2k^2T^2}\right)}^{3/2}exp\left(\frac{-m_1m_2v^2_0}{2\mu kT}\right)exp\left(\frac{-\mu v^2_{12}}{2kT}\right)\times \nonumber$
$v^2_0{ \sin \theta_0\ }dv_0d\theta_0d{\varphi }_0v^2_{12}{ \sin \theta_{12}\ }dv_{12}d\theta_{12}d{\varphi }_{12} \nonumber$
We are interested in the probability increment for the relative velocity irrespective of the velocity of the center of mass. To sum the contributions for all possible motions of the center of mass, we integrate this expression over the possible ranges of $v_0$, $\theta_0$, and ${\varphi }_0$. We have
$\left(\frac{df\left(v_{12}\right)}{dv_{12}}\right)\left(\frac{df\left(\theta_{12}\right)}{d\theta_{12}}\right)\left(\frac{df\left({\varphi }_{12}\right)}{d{\varphi }_{12}}\right)dv_{12}d\theta_{12}d{\varphi }_{12}= \nonumber$
$={\left(\frac{m_1m_2}{4\pi^2k^2T^2}\right)}^{3/2}\int^{\infty }_0{v^2_0exp\left(\frac{-m_1m_2v^2_0}{2\mu kT}\right)}dv_0\times \int^\pi_0{ \sin \theta_0\ }d\theta_0\int^{2\pi }_0{d{\varphi }_0}\ \times \left[v^2_{12}exp\left(\frac{-\mu v^2_{12}}{2kT}\right){ \sin \theta_{12}\ }dv_{12}d\theta_{12}d{\varphi }_{12}\right] \nonumber$
$={\left(\frac{\mu }{2\pi kT}\right)}^{3/2}v^2_{12}exp\left(\frac{-\mu v^2_{12}}{2kT}\right){ \sin \theta_{12}\ }dv_{12}d\theta_{12}d{\varphi }_{12} \nonumber$
This is the same as the probability increment for a single-particle velocity—albeit with $\mu$ replacing $m$; $v_{12}$ replacing $v$; $\theta_{12}$ replacing $\theta$; and ${\varphi }_{12}$ replacing $\varphi$. As in the single-particle case, we can obtain the probability increment for the scalar component of the relative velocity by integrating over all possible values of $\theta_{12}$ and ${\varphi }_{12}$. We find
$\frac{df\left(v_{12}\right)}{dv_{12}}=4\pi {\left(\frac{\mu }{2\pi kT}\right)}^{3/2}v^2_{12}exp\left(\frac{-\mu v^2_{12}}{2kT}\right) \nonumber$
In §8, we find the most probable velocity, the mean velocity, and the root-mean-square velocity for a gas whose particles have mass $m$. By identical arguments, we obtain the most probable relative velocity, the mean relative velocity, and the root-mean-square relative velocity. To do so, we can simply substitute $\mu$ for $m$ in the earlier results. In particular, the mean relative velocity is
${\overline{v}}_{12}=\left\langle v_{12}\right\rangle ={\left(\frac{8kT}{\pi \mu }\right)}^{1/2}\approx 1.596 {\left(\frac{kT}{\mu }\right)}^{1/2} \nonumber$
If particles 1 and 2 have the same mass, $m$, the reduced mass becomes $\mu ={m}/{2}$. In this case, we have
$\left\langle v_{12}\right\rangle = {\left(\frac{2\left(8kT\right)}{\pi m}\right)}^{1/2}=\sqrt{2}\left\langle v\right\rangle \nonumber$
We can arrive at this same conclusion by considering the relative motion of two particles that represents the average case. As illustrated in Figure 9, this occurs when the two particles have the same speed, $\left\langle v\right\rangle$, but are moving at 90-degree angles to one another. In this situation, the length of the resultant vector—the relative speed— is just
$\left|{\overline{v}}_{12}\right|=\left\langle v_{12}\right\rangle =\sqrt{2}\left\langle v\right\rangle. \nonumber$
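The sketch below (our addition) checks the $\sqrt{2}$ relation numerically for equal masses; nitrogen at 300 K is used only as a concrete example.

```python
# Sketch: for two particles of equal mass m, the mean relative speed computed
# with the reduced mass mu = m/2 is sqrt(2) times the mean speed of one particle.
import math

k = 1.380649e-23
T = 300.0
m = 28.014 * 1.66054e-27        # N2, kg (illustrative choice)
mu = m / 2

v_mean = math.sqrt(8 * k * T / (math.pi * m))
v12_mean = math.sqrt(8 * k * T / (math.pi * mu))

print(v12_mean / v_mean, math.sqrt(2))   # both ~1.414
```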
4.12: The Frequency of Collisions between Unlike Gas Molecules
Thus far in our theoretical development of the properties of gases, we have assumed that ideal gas molecules are point masses. While they can collide with the walls of their container, point masses cannot collide with one another. As we saw in our discussion of van der Waals equation, the deviation of real gases from ideal gas behavior is one indication that an individual gas molecule occupies a finite volume.
To develop a model for molecular collisions, we need to know the size and shape of the colliding molecules. For a general model, we want to use the simplest possible size and shape. Accordingly, we consider a model in which gas molecules are spheres with well-defined radii. We let the radii of molecules $1$ and $2$ be $\sigma_1$ and $\sigma_2$, respectively. See Figure 10. When such molecules collide, their surfaces must come into contact, and the distance between their centers must be $\sigma_{12}=\sigma_1+\sigma_2$. We call $\sigma_{12}$ the collision radius.
Let us consider a molecule of type $1$ in a container with a large number of molecules of type $2$. We suppose that there are $N_2$ molecules of type $2$ per unit volume. Every molecule of type $2$ has some velocity, $v_{12}$, relative to the molecule of type $1$. From our development above, we know both the probability density function for $v_{12}$ and the expected value $\left\langle v_{12}\right\rangle$. Both molecule $1$ and all of the molecules of type $2$ are moving with continuously varying speeds. However, it is reasonable to suppose that—on average—the encounters between molecule $1$ and molecules of type $2$ are the same as they would be if all of the type $2$ molecules were fixed at random locations in the volume, and molecule $1$ moved among them with a speed equal to the average relative velocity, $\left\langle v_{12}\right\rangle$.
Under this assumption, molecule $1$ travels a distance equal to $\left\langle v_{12}\right\rangle$ in unit time. As it does so, it collides with any type $2$ molecule whose center is within a distance $\sigma_{12}$ of its own center. For the moment, let us suppose that the trajectory of molecule $1$ is unaffected by the collisions it experiences. Then, in unit time, molecule $1$ sweeps out a cylinder whose length is $\left\langle v_{12}\right\rangle$ and whose cross-sectional area is $\pi \sigma^2_{12}$. The volume of this cylinder is $\pi \sigma^2_{12}\left\langle v_{12}\right\rangle$. (See Figure 11.)
Since there are $N_2$ molecules of type $2$ per unit volume, the number of type $2$ molecules in the cylinder is $N_2\pi \sigma^2_{12}\left\langle v_{12}\right\rangle$. Each of these molecules is a molecule of type $2$ that experiences a collision with molecule $1$ in unit time. Letting ${\widetilde{\nu }}_{12}$ be the frequency (number of collision per unit time) with which molecule $1$ collides with molecules of type $2$, we have
${\widetilde{\nu }}_{12}=N_2\pi \sigma^2_{12}\left\langle v_{12}\right\rangle =N_2\pi \sigma^2_{12}{\left(\frac{8kT}{\pi \mu }\right)}^{1/2} \nonumber$
$=N_2\sigma^2_{12}{\left(\frac{8\pi kT}{\mu }\right)}^{1/2} \nonumber$
Two additional parameters that are useful for characterizing molecular collisions are ${\tau }_{12}$, the mean time between collisions, and ${\lambda }_{12}$, the mean distance that molecule $1$ travels between collisions with successive molecules of type $2$. ${\lambda }_{12}$ is called the mean free path. The mean time between collisions is simply the reciprocal of the collision frequency,
${\tau }_{12}={1}/{\widetilde{\nu }_{12}} \nonumber$
and the mean free path for molecule $1$ is the distance that molecule $1$ actually travels in this time, which is $\left\langle v_1\right\rangle$, not $\left\langle v_{12}\right\rangle$, so that
${\lambda }_{12}=\left\langle v_1\right\rangle {\tau }_{12}=\frac{\left({\mu }/{m_1}\right)^{1/2}}{N_2\pi \sigma^2_{12}} \nonumber$
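For a sense of the magnitudes involved, the sketch below (our addition) evaluates $\widetilde{\nu }_{12}$, ${\tau }_{12}$, and ${\lambda }_{12}$ at 300 K and 1 bar. The molecular identities (N2 moving through O2) and the collision radius $\sigma_{12}$ are illustrative assumptions, not values from the text.

```python
# Sketch with assumed gases and collision radius: collision frequency, mean time
# between collisions, and mean free path for one type-1 molecule among type-2 molecules.
import math

k = 1.380649e-23
T, P = 300.0, 1.0e5
N2_density = P / (k * T)                 # type-2 molecules per m^3 (ideal gas)

m1 = 28.014 * 1.66054e-27                # type 1: N2, kg (assumed)
m2 = 32.000 * 1.66054e-27                # type 2: O2, kg (assumed)
mu = m1 * m2 / (m1 + m2)
sigma12 = 3.7e-10                        # collision radius sigma_1 + sigma_2, m (assumed)

v12_mean = math.sqrt(8 * k * T / (math.pi * mu))
nu12 = N2_density * math.pi * sigma12**2 * v12_mean      # collisions per second
tau12 = 1.0 / nu12
v1_mean = math.sqrt(8 * k * T / (math.pi * m1))
lambda12 = v1_mean * tau12                               # mean free path, m

print(f"{nu12:.2e} s^-1, {tau12:.2e} s, {lambda12:.2e} m")
```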
Now, we need to reevaluate the assumption that the trajectory of molecule $1$ is unaffected by its collisions with molecules of type $2$. Clearly, this is not the case. The path of molecule $1$ changes abruptly at each collision. The actual cylinder that molecule $1$ sweeps out will have numerous kinks, as indicated in Figure 12. The kinked cylinder can be produced from a straight one by making a series of oblique cuts (one for each kink) across the straight cylinder and then rotating the ends of each cut into convergence. If we think of the cylinder as a solid rod, its volume is unchanged by these cuttings and rotations. The volume of the kinked cylinder is the same as that of the straight cylinder. Thus, our conclusions about the collision frequency, the mean time between collisions, and the mean free path are not affected by the fact that the trajectory of molecule $1$ changes at each collision.
4.13: The Rate of Collisions between Unlike Gas Molecules
We define the collision frequency, ${\widetilde{\nu }}_{12}$, as the number of collisions per unit time between a single molecule of type 1 and any of the molecules of type 2 present in the same container. We find ${\widetilde{\nu }}_{12}=N_2\pi {\sigma }^2_{12}\left\langle v_{12}\right\rangle$. If there are $N_1$ molecules of type 1 present in a unit volume of the gas, the total number of collisions between type 1 molecules and type 2 molecules is $N_1$ times greater. For clarity, let us refer to the total number of such collisions, per unit volume and per unit time, as the collision rate, ${\rho }_{12}$. We have
${\rho }_{12}=N_1{\widetilde{\nu }}_{12}=N_1N_2\pi {\sigma }^2_{12}\left\langle v_{12}\right\rangle =N_1N_2{\sigma }^2_{12}{\left(\frac{8\pi kT}{\mu }\right)}^{1/2} \nonumber$
4.14: Collisions between like Gas Molecules
When we consider collisions between different gas molecules of the same substance, we can denote the relative velocity and the expected value of the relative velocity as $v_{11}$ and $\left\langle v_{11}\right\rangle$, respectively. By the argument we make above, we can find the number of collisions between any one of these molecules and all of the others. Letting this collision frequency be ${\widetilde{\nu }}_{11}$, we find
$\widetilde{\nu }_{11}=N_1\pi {\sigma }^2_{11}\left\langle v_{11}\right\rangle, \nonumber$
where ${\sigma }_{11}=2{\sigma }_1$. Since we have
$\left\langle v_{11}\right\rangle =\sqrt{2}\left\langle v_1\right\rangle, \nonumber$
while
$\left\langle v_1\right\rangle =\left(\frac{8kT}{\pi m_1}\right)^{1/2}, \nonumber$
we have $\left\langle v_{11}\right\rangle =4\left(\frac{kT}{\pi m_1}\right)^{1/2}$. The frequency of collisions between molecules of the same substance becomes
${\widetilde{\nu }}_{11}=N_1\pi {\sigma }^2_{11}\left\langle v_{11}\right\rangle =4N_1{\sigma }^2_{11}{\left(\frac{\pi kT}{m_1}\right)}^{1/2} \nonumber$
The mean time between collisions, ${\tau }_{11}$, is
${\tau }_{11}={1}/{\widetilde{\nu}_{11}} \nonumber$
and the mean free path, ${\lambda }_{11}$,
${\lambda }_{11}=\left\langle v_1\right\rangle {\tau }_{11}=\frac{1}{\sqrt{2}\,N_1\pi {\sigma }^2_{11}} \nonumber$
When we consider the rate of collisions between all of the molecules of type $1$ in a container, ${\rho }_{11}$, there is a minor complication. If we multiply the collision frequency per molecule, ${\widetilde{\nu }}_{11}$, by the number of molecules available to undergo such collisions, $N_1$, we count each collision twice, because each such collision involves two type $1$ molecules. To find the collision rate among like molecules, we must divide this product by 2. That is,
${\rho }_{11}=\frac{N_1{\widetilde{\nu }}_{11}}{2}=2N^2_1{\sigma }^2_{11}{\left(\frac{\pi kT}{m_1}\right)}^{1/2} \nonumber$
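A minimal sketch of the like-molecule case, again with assumed inputs (the molar mass, collision diameter, temperature, and pressure below are illustrative only):

```python
import math

k_B = 1.380649e-23
N_A = 6.02214076e23

# Assumed illustrative inputs for a single gas of 28 g/mol molecules at 300 K and 1 bar.
T = 300.0                     # K
P = 1.0e5                     # Pa
m1 = 28.0e-3 / N_A            # kg
sigma_11 = 3.7e-10            # m, assumed collision diameter (twice the molecular radius)

N1 = P / (k_B * T)            # molecules per m^3

nu_11 = 4.0 * N1 * sigma_11**2 * math.sqrt(math.pi * k_B * T / m1)
tau_11 = 1.0 / nu_11
lambda_11 = 1.0 / (math.sqrt(2.0) * N1 * math.pi * sigma_11**2)
rho_11 = 0.5 * N1 * nu_11     # divide by 2 so each pair of molecules is counted once

print(f"nu_11     = {nu_11:.3e} s^-1")
print(f"tau_11    = {tau_11:.3e} s")
print(f"lambda_11 = {lambda_11:.3e} m")
print(f"rho_11    = {rho_11:.3e} collisions m^-3 s^-1")
```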
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/04%3A_The_Distribution_of_Gas_Velocities/4.12%3A_The_Frequency_of_Collisions_between_Unlike_Gas_Molecules.txt
|
Thus far we have not concerned ourselves with the relative orientation of a pair of colliding molecules. We want to develop a more detailed model for the collision process${}^{1}$ itself, and the first step is to specify what we mean by relative orientation.
As before, we consider a molecule of type $1$ moving with the relative velocity $v_{12}$ through a gas of stationary type$\ 2$ molecules. In unit time, molecule $1$ travels a distance $v_{12}$ and collides with many molecules of type $2$. We can characterize each such collision by the angle, $\theta$, between the velocity vector and the line of centers of the colliding pair. For glancing collisions, we have $\theta ={\pi }/{2}$. For head-on collisions, we have $\theta =0$. All else being equal, the collision will be more violent the smaller the angle $\theta$. Evidently, we can describe the average effect of collisions more completely if we can specify the frequency of collisions as a function of $\theta$. More precisely, we want to find the frequency of collisions in which this angle lies between $\theta$ and $\theta +d\theta$.
When a collision occurs, the distance between the molecular centers is ${\sigma }_{12}$. We can say that the center of molecule $2$ is at a particular point on the surface of a sphere, of radius ${\sigma }_{12}$, circumscribed about molecule $1$. As sketched in Figure 13, we can rotate the line of centers around the velocity vector, while keeping the angle between them constant at $\theta$. As we do so, the line of centers traces out a circle on the surface of the sphere; collisions that put the center of molecule$\ 2$ at any two points on this circle are completely equivalent. Letting the radius of this circle be $r$, we see that $r=\sigma_{12}\mathrm{sin} \theta$. Evidently, for spherical molecules, specifying $\theta$ specifies the relative orientation at the time of collision.
If we now allow $\theta$ to vary by $d\theta$, the locus of equivalent points on the circumscribed sphere expands to a band. Measured along the surface of the sphere, the width of this band is ${\sigma }_{12}d\theta$. As molecule $1$ moves through the gas of stationary type $2$ molecules, this band sweeps out a cylindrical shell. Molecule $1$ collides, at an angle between $\theta$ and $\theta +d\theta$, with every type $2$ molecule in this cylindrical shell. Conversely, every type $2$ molecule in this cylindrical shell collides with molecule $1$ at an angle between $\theta$ and $\theta +d\theta$. (Molecule $1$ also collides with many other type $2$ molecules, but those collisions are at other angles; they have different orientations.) In unit time, the length of the cylindrical shell is $v_{12}$. The volume of the cylindrical shell is its length times its cross-sectional area.
The cross-section of the cylindrical shell is a circular annulus. Viewing the annulus as a rectangular strip whose length is the circumference of the shell and whose width is the radial thickness of the annulus, the area of the annulus is the circumference times the radial thickness. Since the radius of the shell is $r={\sigma }_{12}\mathrm{sin} \theta$, its circumference is $2\pi {\sigma }_{12} \mathrm{sin} \theta$. The radial thickness of the annulus is just the change in the distance, $r={\sigma }_{12}\mathrm{sin} \theta$, between the velocity vector and the wall of the cylinder when $\theta$ changes by a small amount $d\theta$. This is
$dr=\left(\frac{dr}{d\theta }\right)d\theta ={\sigma }_{12} \mathrm{cos} \theta d\theta \nonumber$
Therefore, the area of the annulus is
$2\pi {\sigma }^2_{12} \mathrm{sin} \theta \mathrm{cos} \theta d\theta \nonumber$
and the volume of the cylindrical shell swept out by a type 1 molecule (traveling at exactly the speed $v_{12}$) in unit time is
$2\pi {\sigma }^2_{12}v_{12} \mathrm{sin} \theta \mathrm{cos} \theta d\theta \nonumber$
We again let $N_2$ be the number of molecules of type $2$ per unit volume. The number of collisions, per unit time, between a molecule of type $1$, traveling at exactly $v_{12}$, and molecules of type $2$, in which the collision angle lies between $\theta$ and $\theta +d\theta$ is
$2\pi N_2{\sigma }^2_{12}v_{12} \mathrm{sin} \theta \mathrm{cos} \theta d\theta \nonumber$
We need to find the number of such collisions in which the relative velocity lies between $v_{12}$ and $v_{12}+dv_{12}$. The probability of finding $v_{12}$ in this interval is $\left(df\left(v_{12}\right)/dv_{12}\right)dv_{12}$. Let $d\widetilde{\nu }_{12}\left(v_{12},\theta \right)$ be the number of collisions made in unit time, by a type $1$ molecule, with molecules of type $2$, in which the collision angle is between $\theta$ and $\theta +d\theta$, and the scalar relative velocity is between $v_{12}$ and $v_{12}+dv_{12}$. This is just the number of collisions when the relative velocity is $v_{12}$ multiplied by the probability that the relative velocity is between $v_{12}$ and $v_{12}+dv_{12}$. We have the result we need:
\begin{aligned} d\widetilde{\nu }_{12}\left(v_{12},\theta \right) & = 2\pi N_2{\sigma }^2_{12}v_{12}\left(\frac{df\left(v_{12}\right)}{dv_{12}}\right) \mathrm{sin} \theta ~ \mathrm{cos} \theta ~ d\theta dv_{12} \ ~ & =8 { \pi }^2N_2{\sigma }^2_{12} \left(\frac{\mu }{2\pi kT}\right)^{3/2}v^3_{12}exp\left(\frac{-\mu v^2_{12}}{2kT}\right) \times \mathrm{sin} \theta ~ \mathrm{cos} \theta ~ d\theta dv_{12} \end{aligned} \nonumber
Recognizing that possible values of $\theta$ lie in the range $0\le \theta <{\pi }/{2}$ and that possible values of $v_{12}$ lie in the range $0\le v_{12}<\infty$, we can find the frequency of all possible collisions, $\widetilde{\nu }_{12}$, by summing over all possible values of $\theta$ and $v_{12}$. That is,
\begin{aligned} \widetilde{\nu }_{12} & =8{\pi }^2N_2{\sigma }^2_{12} \left(\frac{\mu }{2\pi kT}\right)^{3/2}\int^{\infty }_0 v^3_{12}exp\left(\frac{-\mu v^2_{12}}{2kT}\right) dv_{12} \times \int^{\pi /2}_0 \mathrm{sin} \theta ~ \mathrm{cos} \theta ~ d\theta \ ~ & =8{\pi }^2N_2{\sigma }^2_{12} \left(\frac{\mu }{2\pi kT}\right)^{3/2}\left[2 \left(\frac{kT}{\mu }\right)^2\right]\left[\frac{1}{2}\right] \ ~ & =N_2{\sigma }^2_{12} \left(\frac{8\pi kT}{\mu }\right)^{1/2} \end{aligned} \nonumber
In Section 4.12, we obtained this result by a slightly different argument, in which we did not explicitly consider the collision angle, $\theta$.
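The factorization of this double integral is easy to check numerically. The sketch below integrates the differential collision frequency over $\theta$ and $v_{12}$ with SciPy and compares the result with the closed form; the number density, reduced mass, and collision diameter are arbitrary illustrative values.

```python
import math
from scipy import integrate

k_B = 1.380649e-23
T = 300.0
mu = 1.2e-26        # kg, assumed reduced mass
sigma = 3.5e-10     # m, assumed collision diameter
N2 = 2.4e25         # m^-3, assumed number density of type-2 molecules

pref = 8.0 * math.pi**2 * N2 * sigma**2 * (mu / (2.0 * math.pi * k_B * T))**1.5

def integrand(theta, v):
    """Differential collision frequency per unit theta per unit v12."""
    return pref * v**3 * math.exp(-mu * v**2 / (2.0 * k_B * T)) \
           * math.sin(theta) * math.cos(theta)

# Integrate v12 from 0 to a value large enough to capture the whole distribution,
# and theta from 0 to pi/2.
numeric, _ = integrate.dblquad(integrand, 0.0, 1.0e4, 0.0, math.pi / 2.0)
closed = N2 * sigma**2 * math.sqrt(8.0 * math.pi * k_B * T / mu)

print(f"numerical integral: {numeric:.4e} s^-1")
print(f"closed form:        {closed:.4e} s^-1")
```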
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/04%3A_The_Distribution_of_Gas_Velocities/4.15%3A_The_Geometry_of_A_Collision_between_Spherical_Molecules.txt
|
It is useful to extend our model of molecular collisions to suppose that one or both of the molecules can undergo chemical change as a result of the collision. In doing so, we are introducing some ideas that we develop further in Chapter 5.
When we ask about the factors that determine whether such a reaction can occur, there can be several possibilities. We want to focus on one such factor—the violence of the collision. We expect that a collision is more likely to result in a reaction the harder the two molecules hit one another. When we try to formulate our basis for this expectation, we see that the underlying idea is that a collision deforms the colliding molecules. The more violent the collision, the greater the deformation, and the greater the likelihood of reaction becomes.
To proceed, we need to be more precise about what we mean by the violence of the collision. Evidently, what we have in mind has two components: the relative velocity and the collision angle. If the collision is a glancing one, $\theta ={\pi }/{2}$, we expect the effect on the molecules to be minimal, even if the relative velocity is high. On the other hand, a direct collision, $\theta \approx 0$, might lead to reaction even if the relative velocity is comparatively low. With these ideas in mind, we see that a reasonable model is to suppose that forces acting along the line of centers can lead to reaction, whereas forces acting perpendicular to the line of centers cannot. If the colliding molecules have complex shapes, this may be a poor assumption.
We also need a way to specify how much deformation occurs in a collision. If we want to specify the deformation by describing specific changes in the molecular structures, this is a complex problem. For a general model, however, we can avoid this level of detail. To do so, we recognize that any deformation can proceed only until the work done in deforming the molecules equals the energy that can be expended to do this work. As the molecules are deformed, their potential energies change. The maximum change in this potential energy is just the amount of kinetic energy that the colliding molecules can use to effect this deformation. We can identify this amount of kinetic energy with the component of the molecules’ kinetic energy that is associated with their relative motion along the line of centers.
If we now associate a threshold level of deformation with the occurrence of a chemical change, the kinetic energy required to effect this deformation determines whether the change can occur. If the available kinetic energy is less than that required to achieve the threshold level of deformation, reaction cannot occur. If the available kinetic energy exceeds this minimum, reaction takes place. We call the minimum kinetic energy the activation energy and usually represent it by the symbol ${\epsilon }_a$. (In discussing reaction rates, we usually express the activation energy per mole and represent it as $E_a$, where $E_a=\overline{N}{\epsilon }_a$.)
We can apply these ideas to our model for collisions between spherical molecules. In Section 4.10, we develop relative velocity coordinates. It follows that we can partition the kinetic energy of the two-particle system into a component that depends on the velocity of the center of mass and a component that depends on the relative velocity. That is, we have
\begin{aligned} KE & =\frac{m_1v^2_1}{2}+\frac{m_2v^2_2}{2} \ & =\frac{m_1m_2}{2\mu }\left(\dot{x}^2_0+\dot{y}^2_0+\dot{z}^2_0\right)+\frac{\mu }{2}\left(\dot{x}^2_{12}+\dot{y}^2_{12}+\dot{z}^2_{12}\right) \ & =\frac{m_1m_2v^2_0}{2\mu }+\frac{\mu v^2_{12}}{2} \end{aligned} \nonumber
Only the component that depends on the relative velocity can contribute to the deformation of the colliding molecules. The relative velocity can be resolved into components parallel and perpendicular to the line of centers. The parallel component is the projection of the velocity vector onto the line of centers. This is $v_{12} \mathrm{cos} \theta$, and the perpendicular component is $v_{12}\mathrm{sin} \theta$. We see that the kinetic energy associated with the relative motion of particles 1 and 2 has a component
$\frac{\mu v^2_{12} \mathrm{cos}^2 \theta}{2} \nonumber$
parallel to the line of centers and a component
$\frac{\mu v^2_{12} \mathrm{sin}^2 \theta}{2} \nonumber$
perpendicular to it.
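Both decompositions can be verified numerically for arbitrary velocities. In the sketch below the masses, velocity vectors, and collision angle are invented, illustrative values; the first check confirms the center-of-mass/relative partition of the kinetic energy, and the second confirms that the parallel and perpendicular components sum to the relative kinetic energy.

```python
import numpy as np

# Arbitrary illustrative masses (kg) and lab-frame velocity vectors (m s^-1).
m1, m2 = 4.7e-26, 5.3e-26
v1 = np.array([310.0, -120.0, 45.0])
v2 = np.array([-260.0, 75.0, 190.0])

mu  = m1 * m2 / (m1 + m2)                 # reduced mass
v0  = (m1 * v1 + m2 * v2) / (m1 + m2)     # center-of-mass velocity
v12 = v1 - v2                             # relative velocity

ke_lab   = 0.5 * m1 * v1 @ v1 + 0.5 * m2 * v2 @ v2
ke_split = 0.5 * (m1 * m2 / mu) * v0 @ v0 + 0.5 * mu * v12 @ v12
print(ke_lab, ke_split)                   # the two totals agree

# Resolve the relative kinetic energy along and perpendicular to an assumed
# line of centers that makes an angle theta with the relative velocity.
theta = np.deg2rad(35.0)
v12_sq = v12 @ v12
ke_rel      = 0.5 * mu * v12_sq
ke_parallel = 0.5 * mu * v12_sq * np.cos(theta) ** 2
ke_perp     = 0.5 * mu * v12_sq * np.sin(theta) ** 2
print(ke_rel, ke_parallel + ke_perp)      # the components sum to the relative KE
```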
The idea that the kinetic energy parallel to the line of centers must exceed ${\epsilon }_a$ for reaction to occur can now be expressed as the requirement that
${\epsilon }_a<\frac{\mu v^2_{12} \mathrm{cos}^2 \theta}{2} \nonumber$
When we consider all possible collisions between molecules 1 and 2, the collision angle varies from 0 to ${\pi }/{2}$. However, only those collisions for which $v_{12}$ satisfies the inequality above will have sufficient kinetic energy along the line of centers for reaction to occur. The smallest value of $v_{12}$ that can satisfy this inequality occurs when $\theta =0$. This minimum relative velocity is
$v^{\mathrm{minimum}}_{12}= \left(2{\epsilon }_a/{\mu }\right)^{1/2} \nonumber$
For relative velocities in excess of this minimum, collisions are effective only when
$\mathrm{cos} \theta > \left(2{\epsilon }_a/ \mu v^2_{12}\right)^{1/2} \nonumber$
so that
$\theta < \mathrm{cos}^{-1} \left(2{\epsilon }_a/\mu v^2_{12}\right)^{1/2} \nonumber$
Let us designate the frequency of collisions satisfying these constraints as $\widetilde{\nu }_{12}\left({\epsilon }_a\right)$. Recalling that
$d\widetilde{\nu }_{12}\left(v_{12},\theta \right)= 8{\pi }^2N_2{\sigma }^2_{12} \left(\frac{\mu }{2\pi kT}\right)^{3/2}v^3_{12} \mathrm{exp}\left(\frac{-\mu v^2_{12}}{2kT}\right) \times \mathrm{sin} \theta ~ \mathrm{cos} \theta ~ d\theta dv_{12} \nonumber$
we see that
$\widetilde{\nu }_{12}\left({\varepsilon }_a\right)=8{\pi }^2N_2{\sigma }^2_{12} \left(\frac{\mu }{2\pi kT}\right)^{3/2} \times \int^{\infty }_{v_{12}=\left(2{\epsilon }_a/{\mu }\right)^{1/2}} \int^{\mathrm{cos}^{-1} \left(2{\epsilon }_a/ \mu v^2_{12}\right)^{1/2}}_{\theta =0} v^3_{12}\mathrm{\ exp}\left(\frac{-\mu v^2_{12}}{2kT}\right) \times \mathrm{sin} \theta ~ \mathrm{cos} \theta ~ d\theta dv_{12} \nonumber$
The integral involving $\theta$ is
$\int^{\mathrm{cos}^{-1} \left( 2{\epsilon }_a/\mu v^2_{12}\right)^{1/2}}_{\theta =0} \mathrm{sin} \theta ~ \mathrm{cos} \theta ~ d\theta = \left[\frac{\mathrm{sin}^2 \theta}{2}\right]^{\mathrm{cos}^{-1} \left(2{\epsilon }_a/\mu v^2_{12}\right)^{1/2}}_0=\frac{1}{2}\left[1-\frac{2{\epsilon }_a}{\mu v^2_{12}}\right] \nonumber$
where, to evaluate the integral at its upper limit, we note that the angle $\theta ={{\mathrm{cos}}^{-1} {\left({2{\epsilon }_a}/{\mu v^2_{12}}\right)}^{{1}/{2}}\ }$ lies in a triangle whose sides have lengths as indicated in Figure 14.
The collision frequency becomes
$\widetilde{\nu }_{12}\left({\epsilon }_a\right)=4{\pi }^2N_2{\sigma }^2_{12} \left(\frac{\mu }{2\pi kT}\right)^{3/2}\times \int^{\infty }_{v_{12}=\left(2{\epsilon }_a/{\mu }\right)^{1/2}} \left[1-\frac{2{\epsilon }_a}{\mu v^2_{12}}\right]v^3_{12}\ \mathrm{exp}\left(\frac{-\mu v^2_{12}}{2kT}\right)dv_{12} \nonumber$
This integral can be evaluated by making the substitution $v_{12}= \left({2\epsilon }/{\mu }\right)^{1/2}$. The lower limit of integration becomes ${\epsilon }_a$; we have
\begin{aligned} \int^{\infty }_{v_{12}=\left(2{\epsilon }_a/{\mu }\right)^{1/2}} & \left[1-\frac{2{\epsilon }_a}{\mu v^2_{12}}\right]v^3_{12}\ \mathrm{exp}\left(\frac{-\mu v^2_{12}}{2kT}\right)dv_{12} \ & =\frac{2}{{\mu }^2}\int^{\infty }_{{\epsilon }_a} \left(\epsilon -{\epsilon }_a\right)\mathrm{exp}\left(\frac{-\epsilon }{kT}\right)d\epsilon \ & =2 \left(\frac{kT}{\mu }\right)^2\mathrm{exp}\left(\frac{-{\epsilon }_a}{kT}\right) \end{aligned} \nonumber
Then
\begin{aligned} \widetilde{\nu }_{12}\left({\varepsilon }_a\right) & =4{\pi }^2N_2{\sigma }^2_{12} \left(\frac{\mu }{2\pi kT}\right)^{3/2}\times 2 \left(\frac{kT}{\mu }\right)^2\mathrm{exp}\left(\frac{-{\epsilon }_a}{kT}\right) \ & =N_2{\sigma }^2_{12} \left(\frac{8\pi kT}{\mu }\right)^{1/2}\mathrm{exp}\left(\frac{-{\epsilon }_a}{kT}\right) \end{aligned} \nonumber
Note that when ${\epsilon }_a=0$, this reduces to the same expression for $\widetilde{\nu }_{12}$ that we have obtained twice previously. The frequency of collisions having kinetic energy along the line of centers in excess of ${\epsilon }_a$ depends exponentially on $-{\epsilon }_a/{kT}$. All else being equal, this frequency increases as the temperature increases; it decreases as the activation energy increases.
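This result, too, can be checked numerically by integrating the differential collision frequency only over the region where the line-of-centers energy exceeds ${\epsilon }_a$. In the sketch below the reduced mass, collision diameter, number density, and activation energy are assumed values chosen only for illustration.

```python
import math
from scipy import integrate

k_B = 1.380649e-23
T = 300.0
mu = 1.2e-26              # kg, assumed reduced mass
sigma = 3.5e-10           # m, assumed collision diameter
N2 = 2.4e25               # m^-3, assumed number density
eps_a = 10.0 * k_B * T    # assumed activation energy per collision (about 10 kT)

pref = 8.0 * math.pi**2 * N2 * sigma**2 * (mu / (2.0 * math.pi * k_B * T))**1.5

def integrand(theta, v):
    return pref * v**3 * math.exp(-mu * v**2 / (2.0 * k_B * T)) \
           * math.sin(theta) * math.cos(theta)

v_min = math.sqrt(2.0 * eps_a / mu)

def theta_max(v):
    """Largest collision angle whose line-of-centers energy still exceeds eps_a."""
    return math.acos(min(1.0, math.sqrt(2.0 * eps_a / (mu * v**2))))

numeric, _ = integrate.dblquad(integrand, v_min, 1.0e4, 0.0, theta_max)
closed = N2 * sigma**2 * math.sqrt(8.0 * math.pi * k_B * T / mu) \
         * math.exp(-eps_a / (k_B * T))

print(f"numerical: {numeric:.4e} s^-1")
print(f"closed:    {closed:.4e} s^-1")
```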
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/04%3A_The_Distribution_of_Gas_Velocities/4.16%3A_The_Energy_of_A_Collision_between_Gas_Molecules.txt
|
1. For an oxygen molecule at 25 C, calculate (a) the most probable velocity, (b) the average velocity, (c) the root-mean-square velocity.
2. For a gas of oxygen molecules at 25 C and 1.00 bar, calculate (a) the collision frequency, (b) the mean time between collisions, (c) the mean free path. The diameter of an oxygen molecule, as estimated from gas-viscosity measurements, is 3.55 x 10${}^{-10}$ m.
3. For oxygen molecules at 25 C, calculate (a) the fraction with speeds between 150 and 151 m s${}^{-1}$, (b) the fraction with speeds between 400 and 401 m s${}^{-1}$, (c) the fraction with speeds between 550 and 551 m s${}^{-1}$.
4. For a hydrogen molecule at 100 C, calculate (a) the most probable velocity, (b) the average velocity, (c) the root-mean-square velocity.
5. For a gas of hydrogen molecules at 100 C and 1.00 bar, calculate (a) the collision frequency, (b) the mean time between collisions, (c) the mean free path. The diameter of a hydrogen molecule, as estimated from gas-viscosity measurements, is 2.71 x 10${}^{-10}$ m.
6. For a uranium hexafluoride (UF${}_{6}$) molecule at 100 C, calculate (a) the most probable velocity, (b) the average velocity, (c) the root-mean-square velocity.
7. For a gas of uranium hexafluoride molecules at 100 C and 1.00 bar, calculate (a) the collision frequency, (b) the mean time between collisions, (c) the mean free path. Assume that the diameter of a uranium hexafluoride molecule is $7.0\times {10}^{-10}\ \mathrm{m}$.
8. What is the average kinetic energy of hydrogen molecules at $100$ C? What is the average kinetic energy of uranium hexafluoride $\left(UF_6\right)$ molecules at $100$ C?
9. Assuming the temperature in interstellar space is $2.73$ K, calculate, for a hydrogen atom, (a) the most probable velocity, (b) the average velocity, (c) the root-mean-square velocity.
10. Assuming that interstellar space is occupied entirely by hydrogen atoms at a particle density of ${10}^2$ molecules ${\mathrm{m}}^{-3}$, calculate (a) the collision frequency, (b) the mean number of years between collisions, (c) the mean free path. Assume that the diameter of a hydrogen atom is $2.40\times {10}^{-10}\ \mathrm{m}$.
11. Ignoring any effects attributable to its charge and assuming that the temperature is $2.73$ K, calculate, for an electron in interstellar space, (a) the most probable velocity, (b) the average velocity, (c) the root-mean-square velocity.
12. If a wall of a gas-filled container contains a hole, gas molecules escape through the hole. If all of the molecules that hit the hole escape, but the hole is so small that the number escaping has no effect on the velocity distribution of the remaining gas molecules, we call the escaping process effusion. That is, we call the process effusion only if it satisfies three rather stringent criteria. First, the hole must be large enough (and the wall must be thin enough) so that most molecules passing through the hole do not hit the sides of the hole. Second, a molecule that passes through the hole must not collide with anything on the other side that can send it back through the hole into the original container. Third, the hole must be small enough so that the escaping molecules do not create a pressure gradient; the rate at which gas molecules hit the hole and escape must be determined entirely by the equilibrium distribution of gas velocities and, of course, the area of the hole. Show that the number of molecules effusing through a hole of area $A$ in time $t$ is
$At\left(\frac{N}{V}\right) \left(\frac{kT}{2\pi m}\right)^{1/2} \nonumber$
where $\left({N}/{V}\right)$ is the number density of molecules in the container, and $m$ is their molecular mass.
13. A vessel contains hydrogen and oxygen at $350$ K and partial pressures of $0.50$ bar and $1.50$ bar, respectively. These gases effuse into a vacuum. What is the ratio of hydrogen to oxygen in the escaping gas?
14. How could we use effusion to estimate the molecular weight of an unknown substance?
15. An equimolar mixture of ${}^{235}UF_6$ and ${}^{238}UF_6$ is subjected to effusion. What is the ratio of ${}^{235}U$ to ${}^{238}U$ in the escaping gas?
16. Calculate the number of nitrogen molecules that collide with ${10}^{-6}\ \mathrm{m}^2$ of wall in ${10}^{-6}\ \mathrm{s}$, if the pressure is $1.00$ bar and the temperature is $300$ K.
17. Air is approximately $20\%$ oxygen and $80\%$ nitrogen by volume. Assume that oxygen and nitrogen molecules both have a radius of $1.8\times {10}^{-8}$ m. For air at $1.0$ bar and $298$ K, calculate:
(a) The number of collisions that one oxygen molecule makes with nitrogen molecules every second.
(b) The number of collisions that occur between oxygen and nitrogen molecules in one cubic meter of air every second.
(c) The number of collisions that one oxygen molecule makes with other oxygen molecules every second.
(d) The number of collisions that occur between oxygen molecules in one cubic meter of air every second.
(e) The number of collisions that occur between oxygen and nitrogen molecules in one cubic meter each second in which the kinetic energy along the line of centers exceeds $100\ \mathrm{kJ}\ \mathrm{mol}^{-1}$ or $1.66\times {10}^{-19}$ J per collision.
(f) The number of oxygen-nitrogen collisions that occur in which the kinetic energy along the line of centers exceeds $50$ kJ $\mathrm{mol}^{-1}$.
18. Show that $\int^{\infty }_0 {\rho }_v\left(v\right) dv\neq 1.$
19. For what volume element, ʋ, is
$P\left(\textrm{ʋ}\right)=f_{xyz}\left(v_x,v_y,v_z\right)? \nonumber$
20. Using the model we develop in Section 2.10:
(a) Show that the pressure, $P_1\left(v\right)$, attributable to a single molecule of mass $m$ and velocity $v$ in a container of volume $V$ is
$P_1\left(v\right)=\frac{mv^2}{3V} \nonumber$
(b) In Section 4.6, we find that this pressure is
$\delta P_1\left(v\right)=\frac{2mv^2 \mathrm{cos}^2 \theta}{V} \nonumber$
for a molecule whose velocity vector lies between $\theta$ and $\theta +d\theta$ and between $\varphi$ and $\varphi +d\varphi$. This angular region comprises a solid angle whose magnitude is $d\textrm{Ω}= \mathrm{sin} \theta d\theta d\varphi$. Since the solid angle surrounding a given point is $4\pi$, the probability that a randomly oriented velocity vector lies between $\theta$ and $\theta +d\theta$ and between $\varphi$ and $\varphi +d\varphi$ is
$\frac{d\textrm{Ω}}{4\pi }=\frac{\mathrm{sin} \theta d\theta d\varphi}{4\pi } \nonumber$
Therefore, given that the scalar component of a molecule’s velocity is $v$, its contribution to the pressure at $A$ is
$dP_1\left(v\right)=\left(\frac{mv^2}{2\pi V}\right) \mathrm{cos}^2 \theta ~ \mathrm{sin} \theta ~ d\theta d\varphi \nonumber$
To find the pressure contribution made by this molecule irrespective of the values of $\theta$ and $\varphi$, we must integrate $dP_1\left(v\right)$ over all values of $\theta$ and $\varphi$ that allow the molecule to impact the wall at $A$. Recalling that these ranges are $0\le \theta <{\pi }/{2}$ and $0\le \varphi <2\pi$, show that
$P_1\left(v\right)=\frac{mv^2}{3V} \nonumber$
21. Taking $P_1\left(v\right)={mv^2}/{3V}$ as the contribution made to the pressure by one molecule whose velocity is $v$:
(a) Show that the expected value for the contribution made to the pressure by one molecule when the Maxwell–Boltzmann distribution function describes the distribution of molecular velocities is
$\left\langle P_1\left(v\right)\right\rangle =\frac{kT}{V} \nonumber$
(b) Show that the variance of the contribution made to the pressure by one molecule is
${\sigma }^2_{P_1\left(v\right)}=\frac{2k^2T^2}{3V^2} \nonumber$
What is the standard deviation, ${\sigma }_{P_1\left(v\right)}$?
(c) What is the value of the ratio
$\frac{\sigma_{P_1\left(v\right)}}{ \left\langle P_1\left(v\right)\right\rangle} \nonumber$
(d) Taking $3\times {10}^{15}$ as the number of collisions of $N_2$ molecules at $1$ bar and $300$ K with one square millimeter per microsecond, what pressure, $P_{avg}$, would we find if we could measure the individual contribution made by each collision and compute their average? What would be the variance, ${\sigma }^2_{avg}$, of this average? The standard deviation, ${\sigma }_{avg}$? The ratio ${\sigma }_{avg}/P_{avg}$?
22. Let $\epsilon =mv^2/2$ be the translational kinetic energy of a gas molecule whose mass is $m$. Show that the probability density function for $\epsilon$ is
$\frac{df}{d\epsilon }=2\pi \left(\frac{1}{\pi kT}\right)^{3/2} \epsilon ^{1/2}exp\left(\frac{-\epsilon }{kT}\right) \nonumber$
Letting the translational kinetic energy per mole be $E=\overline{N}\epsilon$, show that
$\frac{df}{dE}=2\pi \left(\frac{1}{\pi RT}\right)^{3/2}E^{1/2}exp\left(\frac{-E}{RT}\right) \nonumber$
Notes
${}^{1}$ Our collision model and quantitative treatment of the role of activation energy in chemical reaction rates follow those given by Arthur A. Frost and Ralph G. Pearson, Kinetics and Mechanism, 2${}^{nd}$ Ed., John Wiley and Sons, New York, 1961, pp 65-68. See also R. H. Fowler, Statistical Mechanics, Cambridge University Press, New York, 1936.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/04%3A_The_Distribution_of_Gas_Velocities/4.17%3A_Problems.txt
|
Chemical kinetics is the study of how fast chemical reactions occur and of the factors that affect these rates. The study of reaction rates is closely related to the study of reaction mechanisms, where a reaction mechanism is a theory that explains how a reaction occurs.
• 5.1: Chemical Kinetics
We can distinguish two levels of detail in a chemical reaction mechanism: The first is the series of elementary processes that occurs for a given net reaction. This is called the stoichiometric mechanism. Frequently it is also possible to infer the relative positions of all of the atoms during the course of a reaction. This sort of model is called an intimate mechanism or detailed mechanism.
• 5.2: Reaction Rates and Rate Laws
When we talk about the rate of a particular reaction, we intend to specify the amount of chemical change that occurs in unit time because of that reaction. It is usually advantageous to specify the amount of chemical change in units of moles. We can specify the amount of chemical change by specifying the number of moles of a reactant that are consumed, or the number of moles of a product that are produced, per second, by that reaction.
• 5.3: Simultaneous Processes
The number of moles of a substance in a system can change with time because several processes occur simultaneously. Not only can a given substance participate in more than one reaction, but also the amount of it that is present can be affected by processes that are not chemical reactions. A variety of transport process can operate to increase or decrease the amount of the substance that is present in the reaction mixture.
• 5.4: The Effect of Temperature on Reaction Rates
In practice, rate constants vary in response to changes in several factors. Indeed, they are usually the same in two experiments only if we keep everything but the reagent concentrations the same. Another way of saying this is that the rate law captures the dependence of reaction rate on concentrations, while the dependence of reaction rate on any other variable appears as a dependence of rate constants on that variable.
• 5.5: Other Factors that Affect Reaction Rates
A reaction that occurs in one solvent usually occurs also in a number of similar solvents. For example, a reaction that occurs in water will often occur with a low molecular weight alcohol—or an alcohol-water mixture —as the solvent. Typically, the same rate law is observed in a series of solvents, but the rate constants are solvent-dependent.
• 5.6: Mechanisms and Elementary Processes
To see what we mean by an elementary process, let us consider some possible mechanisms for the base hydrolysis of methyl iodide. In this reaction, a carbon–iodide bond is broken and a carbon–oxygen bond is formed. While any number of reaction sequences sum to this overall equation, we can write down three that are reasonably simple and plausible.
• 5.7: Rate Laws for Elementary Processes
If we think about an elementary bimolecular reaction rate law between molecules A and B , we recognize that the reaction can occur only when the molecules come into contact. They must collide before they can react. So the probability that they react must be proportional to the probability that they collide, and the number of molecules of product formed per unit time must be proportional to the number of A−B collisions that occur in unit time.
• 5.8: Experimental Determination of Rate Laws
The determination of a rate law is a matter of finding an empirical equation that adequately describes reaction-rate data. We can distinguish two general approaches: One approach is to measure reaction rate directly. That is, measure the reaction rate in experiments where the concentrations of reactants and products are known. The other is to measure a concentration at frequent time intervals as reaction goes nearly to completion and seek a differential equation consistent with these data.
• 5.9: First-order Rate Processes
First-order rate processes are ubiquitous in nature - and commerce. In chemistry we are usually interested in first-order decay processes; in other subjects, first-order growth is common. We can develop our appreciation for the dynamics - and mathematics - of first-order processes by considering the closely related subject of compound interest.
• 5.10: Rate Laws by the Study of Initial Rates
In concept, the most straightforward way to measure reaction rate directly is to measure the change in the concentration of one reagent in a short time interval immediately following initiation of the reaction. The initial concentrations are known from the way the reaction mixture is prepared. If necessary, the initial mixture can be prepared so that known concentrations of products or reaction intermediates are present.
• 5.11: Rate Laws from Experiments in a Continuous Stirred Tank Reactor
A continuous stirred tank reactor (CSTR)—or capacity-flow reactor—is a superior method of collecting kinetic data when the rate law is complex. Unfortunately, a CSTR tends to be expensive to construct and complex to operate.
• 5.12: Predicting Rate Laws from Proposed Mechanisms
Because a proposed mechanism can only be valid if it is consistent with the rate law found experimentally, the rate law plays a central role in the investigation of chemical reaction mechanisms. The discussion above introduces the problems and methods associated with collecting rate data and with finding an empirical rate law that fits experimental concentration-versus-time data. We turn now to finding the rate laws that are consistent with a particular proposed mechanism.
• 5.13: The Michaelis-Menten Mechanism for Enzyme-catalyzed Reactions
The rates of enzyme-catalyzed reactions can exhibit complex dependence on the relative concentrations of enzyme and substrate. Most of these features are explained by the Michaelis-Menten mechanism, which postulates a rapid equilibration of enzyme and substrate with their enzyme-substrate complex. Transformation of the substrate occurs within this complex. The reaction products do not complex strongly with the enzyme. After the substrate has been transformed, the products diffuse away.
• 5.14: The Lindemann-Hinshelwood Mechanism for First-order Decay
First-order kinetics for a unimolecular reaction corresponds to a constant probability that a given molecule reacts in unit time. We outline a simple mechanism that rationalizes this fact; it assumes that the probability of reaction is zero unless the molecule has some minimum energy. For molecules whose energy exceeds the threshold value, we assume that the probability of reaction is constant. However, when collisions are frequent, a molecule can have excess energy only for brief intervals.
• 5.15: Why Unimolecular Reactions are First Order
The total energy of a molecule is distributed among numerous degrees of freedom. The molecule has translational kinetic energy, rotational kinetic energy, and vibrational energy. When it acquires excess energy through a collision with another molecule, the additional energy could go directly into any of these modes. However, before the molecule can react, enough energy must find its way into some rather particular mode.
• 5.16: The Mechanism of the Base Hydrolysis of Co(NH₃)₅Xⁿ⁺
The rate law is rarely sufficient to establish the mechanism of a particular reaction. The base hydrolysis of cobalt pentammine complexes is a reaction for which numerous lines of evidence converge to establish the mechanism. To illustrate the range of data that can be useful in the determination of reaction mechanisms, we summarize this evidence here.
• 5.17: Chemical Equilibrium as the Equality of Rates for Opposing Reactions
The equilibrium constant that describes the relative concentrations of the species in equilibrium can be extracted from kinetic rate laws.
• 5.18: The Principle of Microscopic Reversibility
The equilibrium constant expression is an important and fundamental relationship that relates the concentrations of reactants and products at equilibrium. We deduce it above from a simple model for the concentration dependence of elementary-reaction rates. In doing so, we use the criterion that the time rate of change of any concentration must be zero at equilibrium. Clearly, this is a necessary condition; if any concentration is changing with time, the reaction is not at equilibrium.
• 5.19: Microscopic Reversibility and the Second Law
The principle of microscopic reversibility requires that parallel mechanisms give rise to the same expression for the concentration-dependence of the equilibrium constant. That is, the function that characterizes the equilibrium composition must be the same for each mechanism.
• 5.20: Problems
05: Chemical Kinetics Reaction Mechanisms and Chemical Equilibrium
A reaction mechanism describes the sequence of bond-making, bond-breaking, and intramolecular-rearrangement steps that results in the overall chemical change. These individual steps are called elementary reactions or elementary processes. In an elementary reaction, no intermediate species is formed; that is, none of the arrangements of atoms that occur during the elementary reaction has a lifetime greater than the duration of a molecular vibration, which is typically from ${\mathrm{10}}^{\mathrm{-12}}$ to ${\mathrm{10}}^{\mathrm{-14}}$ seconds. We expand on this point in Section 5.6.
We can distinguish two levels of detail in a chemical reaction mechanism: The first is the series of elementary processes that occurs for a given net reaction. This is called the stoichiometric mechanism. Frequently it is also possible to infer the relative positions of all of the atoms during the course of a reaction. This sort of model is called an intimate mechanism or detailed mechanism.
A rate law is an equation that describes how the observed reaction rate depends on the concentrations of the species involved in the reaction. This concentration dependence can be determined experimentally. We will see that any series of elementary reactions predicts the dependence of reaction rates on concentrations, so one of the first tests of a proposed mechanism is that it be consistent with the rate law that is observed experimentally. (If the overall reaction proceeds in more than one step and the concentration of an intermediate species becomes significant, we may need more than one equation to adequately describe the rates of all of the reactions that occur.)
The rate law plays a central role in our study of reaction rates and mechanisms. We infer the rate law from experimental measurements. We must be able to prove that the experimental rate law is consistent with any mechanism that we propose. The rate law that we deduce from experimental rate data constitutes an experimental fact. Our hypothesized mechanism is a theory. We can entertain the idea that the theory may be valid only so long as its predictions about the rate law are consistent with the experimental result. We can predict rate laws for elementary processes by rather simple arguments. For a mechanism involving a series of elementary processes, we can often predict rate laws by making simplifying assumptions. When simplifying assumptions are inadequate, we can use numerical integration to test agreement between the proposed mechanism and experimental observations of the dependence of the reaction rate on the concentrations of the species involved in the reaction. We will see that a given experimental rate law may be consistent with any of several mechanisms. In such cases, we must develop additional information in order to discriminate among the several mechanisms.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/05%3A_Chemical_Kinetics_Reaction_Mechanisms_and_Chemical_Equilibrium/5.01%3A_Chemical_Kinetics.txt
|
Chemical reactions occur under a wide variety of circumstances: Many chemicals are manufactured by passing a homogeneous mixture of gaseous reactants through a bed of solid catalyst pellets. Corrosion of metals is a reaction between the solid metal and oxygen from the air, often catalyzed by other common chemical species like water or chloride ion. An enormous number of biological reactions occur within the cells of living organisms. In the laboratory, we typically initiate a reaction by mixing the reactants, often with a solvent, in a temperature-controlled vessel.
Chemical reactions can be carried out in batch reactors and in a wide variety of flow reactors. A batch reactor is simply a container, in which we initiate the reaction by mixing the reactants with one another (and any additional ingredients), and in which the reaction occurs and the products remain—until we get around to removing them. A reaction carried out under these conditions is called a batch reaction. If any of the reactants are gases, a batch reactor must be sealed to prevent their escape. Otherwise, we may leave the reactor open to the atmosphere as the reaction occurs.
Flow reactors have been designed to achieve a variety of objectives. Nevertheless, they have a number of characteristics in common. A flow reactor is a container, into which reactants and other ingredients are injected. The products are recovered by withdrawing portions of the reaction mixture from one or more locations within the reactor. The rates at which materials are injected or withdrawn are usually constant. In the simplest case, the reactants are mixed at one end of a long tube. The reacting mixture flows through the tube. If the tube is long enough, the mixture emerging from the other end contains the equilibrium concentrations of reactants and products. In such a tubular reactor, it is usually a good approximation to assume that the material injected during one short time interval does not mix with the material injected during the next time interval as they pass through the tube. We view the contents of the reactor as a series of fluid “plugs” that traverse the reactor independently of one another and call this behavior plug flow.
In Section 5.11, we discuss another simple flow reactor, called a continuous stirred-tank reactor (CSTR) or a capacity-flow reactor. A CSTR consists of a single constant-volume vessel into which reactants are continuously injected and from which reaction mixture is continuously withdrawn. The contents of this container are constantly stirred. In our discussion, we assume that the reactor is completely filled with a homogeneous liquid solution. We express the rate of reaction within the CSTR in moles per liter of reactor volume per second, $\mathrm{mol\ }{\mathrm{L}}^{\mathrm{-1}}{\mathrm{s}}^{\mathrm{-1}}$.
When we talk about the rate of a particular reaction, we intend to specify the amount of chemical change that occurs in unit time because of that reaction. It is usually advantageous to specify the amount of chemical change in units of moles. We can specify the amount of chemical change by specifying the number of moles of a reactant that are consumed, or the number of moles of a product that are produced, per second, by that reaction. If we do so, the amount of chemical change depends on the stoichiometric coefficient of the reactant or product that we choose. Moreover, the rate is proportional to the size of the system. Since the properties of reaction rates that are of interest to us are usually independent of the size of the system, we find it convenient to express reaction rates as moles per second per unit system size, so that the most convenient units are usually concentration per second.
For reactors containing heterogeneous catalysts, we typically express the reaction rate in moles per unit volume of catalyst bed per second. For corrosion of a metal surface, we often express the rate in moles per unit area per second. For biological reactions, we might express the reaction rate in moles per gram of biological tissue per second. For reactions in liquid solutions, we typically express the rate in moles per liter of reaction mixture per second, $\mathrm{mol\ }{\mathrm{L}}^{\mathrm{-1}}{\mathrm{s}}^{\mathrm{-1}}$ or $\underline{M}\ {\mathrm{s}}^{\mathrm{-1}}$.
Evidently, we need to express the rate of a reaction in a way that accounts for the stoichiometry of the reaction and is independent of the size of the system. Moreover, we must distinguish the effect of the reaction on the number of moles of a reagent present from the effects of other processes, because competing reactions and mechanical processes can affect the amount of a substance that is present.
To develop the basic idea underlying our definition of reaction rate, let us consider a chemical substance, $A$, that undergoes a single reaction in a closed system whose volume is $V$. For a gas-phase reaction, this volume can vary with time, so that $V=V\left(t\right)$. (The volume of any open system can vary with time.) Since the system is closed, the reaction is the only process that can change the amount of $A$ that is present. Let $\Delta n_A$ be the increase in the number of moles of $A$ in a short interval, $\Delta t$, that includes time $t$. Let the average rate at which the number of moles of $A$ increases in this interval be $\overline{r}\left(A\right)=\Delta {n_A}/{\Delta t}$. The corresponding instantaneous rate is
\begin{align*} r\left(A\right) &={\mathop{\mathrm{lim}}_{\Delta t\to 0} \left(\frac{\Delta n_A}{\Delta t}\right)\ } \[4pt] &=\frac{dn_A}{dt} \end{align*}
To express this information per unit volume, we can define the instantaneous rate of reaction of $A$, at time $t$, as
\begin{align*} R\left(A\right) &=\frac{1}{V\left(t\right)}\ {\mathop{\mathrm{lim}}_{\Delta t\to 0} \left(\frac{\Delta n_A}{\Delta t}\right)\ } \[4pt] &=\frac{1}{V\left(t\right)}\frac{dn_A}{dt} \end{align*}
Experimental studies of reaction rate are typically done in constant-volume closed systems under conditions in which only one reaction occurs. Most of the discussion in this chapter is directed toward reactions in which these conditions are satisfied, and the system comprises a single homogeneous phase. In the typical case, we mix reactants with a liquid solvent in a reactor and immerse the reactor in a constant-temperature bath. Under these conditions, the rate at which a particular substance reacts is equal to the rate at which its concentration changes. Writing $\left[A\right]$ to designate the molarity, $\mathrm{mol\ }{\mathrm{L}}^{\mathrm{-1}}$, of $A$, we have
\begin{align*} R\left(A\right)&=\frac{1}{V}\ \frac{dn_A}{dt} \[4pt] &=\frac{d\left({n_A}/{V}\right)}{dt} \[4pt] &=\frac{d\left[A\right]}{dt} \end{align*}
If we express a reaction rate as a rate of concentration change, it is essential that both conditions be satisfied. If both $n_A$ and $V$ vary with time, we have
\begin{align*} \frac{d\left[A\right]}{dt} &=\frac{d\left({n_A}/{V}\right)}{dt} \[4pt] &=\frac{1}{V}\frac{dn_A}{dt}-\frac{n_A}{V^2}\frac{dV}{dt} \end{align*}
The instantaneous rate at which substance $A$ undergoes a particular reaction is equal to $V^{-1}\left({dn_A}/{dt}\right)$ only if the reaction is the sole process that changes $n_A$; the contribution to ${d\left[A\right]}/{dt}$ made by $\left(n_AV^{-2}\right){dV}/{dt}$ vanishes only if the volume is constant.
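The difference between the two expressions is easy to see numerically. The sketch below uses an invented $n_A\left(t\right)$ and an invented, slowly expanding $V\left(t\right)$ (neither is taken from an experiment) and compares $V^{-1}\left({dn_A}/{dt}\right)$ with ${d\left[A\right]}/{dt}$.

```python
import numpy as np

# Invented, smooth illustrative functions of time (not from any experiment):
# n_A(t) decreases as A reacts; V(t) expands slowly.
nA = lambda t: 0.10 * np.exp(-0.25 * t)   # mol
V  = lambda t: 1.0 + 0.05 * t             # L

t = 2.0        # s, time at which we evaluate the rates
h = 1.0e-6     # step for central-difference derivatives

dnA_dt = (nA(t + h) - nA(t - h)) / (2 * h)
dV_dt  = (V(t + h) - V(t - h)) / (2 * h)

rate_term = dnA_dt / V(t)                               # (1/V) dnA/dt
d_conc_dt = dnA_dt / V(t) - nA(t) / V(t)**2 * dV_dt     # d[A]/dt by the quotient rule

print(f"(1/V) dnA/dt = {rate_term:.5e} mol L^-1 s^-1")
print(f"d[A]/dt      = {d_conc_dt:.5e} mol L^-1 s^-1  (differs because V changes)")
```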
If a single reaction is the only process that occurs in a particular system, the rate at which the number of moles of any reactant or product changes is a measure of the rate of the reaction. However, these rates depend on the stoichiometric coefficients and the size of the system. For a reaction of specified stoichiometry, we can use the extent of reaction, $\xi$ to define a unique reaction rate, $R$. The amounts of reactants and products present at any time are fixed by the initial conditions and the stoichiometry of the reaction. Let us write $n_A$ to denote the number of moles of reagent $A$ present at an arbitrary time and $n^o_A$ to denote the number of moles of $A$ present at the time ($t=0$) that the reaction is initiated. We define the extent of reaction as the change in the number of moles of a product divided by the product’s stoichiometric coefficient or as the change in the number of moles of a reactant divided by the negative of the reactant’s stoichiometric coefficient. For the stoichiometry
$aA+bB+\dots \to cC+dD+\dots \nonumber$
we have
\begin{align*} \xi &=\frac{n_A-n^o_A}{-a}=\frac{n_B-n^o_B}{-b}=\dots \[4pt] &=\frac{n_C-n^o_C}{c}=\frac{n_D-n^o_D}{d}=\dots \end{align*}
If $A$ is the limiting reagent, $\xi$ varies from zero, when the reaction is initiated with $n_A=n^o_A$, to ${n^o_A}/{a}$, when $n_A=0$. At any time, $t$, we have
\begin{align*} \frac{d\xi }{dt} &=-\frac{1}{a}\frac{dn_A}{dt}=-\frac{1}{b}\frac{dn_B}{dt}=\dots \[4pt] &= \frac{1}{c}\frac{dn_C}{dt}=\ \ \ \frac{1}{d}\frac{dn_D}{dt}=\dots \end{align*}
and we can define a unique reaction rate as
\begin{align*} R &=\frac{1}{V\left(t\right)}\frac{d\xi }{dt} \[4pt] &=-\frac{1}{aV\left(t\right)}\frac{dn_A}{dt} \[4pt] &= -\frac{1}{bV\left(t\right)}\frac{dn_B}{dt}=\dots \[4pt] &= \frac{1}{cV\left(t\right)}\frac{dn_C}{dt} \[4pt] &= \frac{1}{dV\left(t\right)}\frac{dn_D}{dt}=\dots \end{align*}
The relationship between the instantaneous rate at which reactant $A$ undergoes this reaction, $R\left(A\right)$, and the reaction rate, $R$, is $R\left(A\right)=\frac{1}{V\left(t\right)}\frac{dn_A}{dt}=-\frac{a}{V\left(t\right)}\frac{d\xi }{dt}=-aR \nonumber$
If the volume is constant, we have
\begin{align*} \frac{\xi }{V} &= \frac{\left[A\right]-{\left[A\right]}_0}{-a}=\frac{\left[B\right]-{\left[B\right]}_0}{-b}=\dots \[4pt] &=\frac{\left[C\right]-{\left[C\right]}_0}{c}=\frac{\left[D\right]-{\left[D\right]}_0}{d}=\dots \end{align*} \nonumber
and the reaction rate is
\begin{align*} R &=\frac{1}{V}\frac{d\xi}{dt} = -\frac{1}{a}\frac{d\left[A\right]}{dt} = -\frac{1}{b}\frac{d\left[B\right]}{dt}=\dots \[4pt] &= \frac{1}{c}\frac{d\left[C\right]}{dt} = \frac{1}{d}\frac{d\left[D\right]}{dt}=\dots \end{align*} \nonumber
The name “extent of reaction” is sometimes given to the fraction of the stoichiometrically possible reaction that has occurred. To distinguish this meaning, we call it the fractional conversion, $\chi$. When $A$ is the stoichiometrically limiting reactant, the fractional conversion is
$\chi =-\frac{n_A-n^o_A}{n^o_A} \nonumber$
The extent of reaction, $\xi$, and the fractional conversion, $\chi$, are related as $\chi ={a\xi }/{n^o_A}$.
We have $0\le \xi \le {n^o_A}/{a}$ and $0\le \chi \le 1$.
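A short numerical sketch ties these definitions together. The stoichiometric coefficients, volume, and mole numbers below are invented for illustration; the point is that every species gives the same extent of reaction, and that the average rate and the fractional conversion follow directly from it.

```python
# Stoichiometry a A + b B -> c C + d D, with invented coefficients and mole numbers.
a, b, c, d = 2, 1, 1, 3
V = 1.5                         # L, constant volume
dt = 10.0                       # s, elapsed time

n0 = {"A": 0.40, "B": 0.30, "C": 0.00, "D": 0.00}   # moles at t = 0
n  = {"A": 0.30, "B": 0.25, "C": 0.05, "D": 0.15}   # moles at t = dt

xi = (n["A"] - n0["A"]) / (-a)            # extent of reaction, mol
# Any species gives the same extent of reaction:
assert abs(xi - (n["B"] - n0["B"]) / (-b)) < 1e-12
assert abs(xi - (n["D"] - n0["D"]) / d) < 1e-12

R_avg = (xi / dt) / V                     # average reaction rate over the interval
chi = a * xi / n0["A"]                    # fractional conversion (A is limiting here)

print(f"extent of reaction    xi = {xi:.3f} mol")
print(f"average reaction rate R  = {R_avg:.3e} mol L^-1 s^-1")
print(f"fractional conversion    = {chi:.2f}")
```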
The rate of a reaction usually depends on the concentrations of some or all of the substances involved. The dependence of reaction rate on concentrations is the rate law. It must be determined by experiment. For reaction
$aA+bB+\dots \to cC+dD+\dots \nonumber$
the observed rate law is often of the form
\begin{align*} R&=\frac{1}{V}\frac{d\xi }{dt} \[4pt] &=k{\left[A\right]}^m{\left[B\right]}^n\dots {\left[C\right]}^p{\left[D\right]}^q \end{align*}
where $m$, $n$, $\dots$, $p$, $q$, $\dots$ are small positive or negative integers or (less often) simple fractions.
We use a conventional terminology to characterize rate laws like this one. We talk about the order in a chemical species and the order of the reaction. To characterize the rate law above, we say that the reaction is “$m$${}^{th}$ order in compound $A$,” “$n$${}^{th}$ order in compound $B$,” “$p$${}^{th}$ order in compound $C$,” and “$q$${}^{th}$ order in compound $D$.” We also say that the reaction is “${\left(m+n+\dots +p+q+\dots \right)}^{\mathrm{th}}$ order overall.” Here $k$ is an experimentally determined parameter that we call the rate constant or rate coefficient.
It frequently happens that we are interested in an overall chemical change whose stoichiometric mechanism involves two or more elementary reactions. In this case, an exact rate model includes a differential equation for each elementary reaction. Nevertheless, it is often possible to approximate the rate of the overall chemical change by a single differential equation, which may be a relatively complex function of the concentrations of the species present in the reaction mixture. For the reaction above, the experimental observation might be
\begin{align*} R &=\frac{1}{V}\frac{d\xi }{dt} \[4pt] &=\frac{k_1\left[A\right]\left[B\right]}{k_2\left[C\right]+k_3\left[D\right]} \end{align*} \nonumber
In such cases, we continue to call the differential equation the rate law. The concept of an overall order is no longer defined. The constants ($k_1$, $k_2$, and $k_3$) may or may not be rate constants for elementary reactions that are part of the overall process. Nevertheless, it is common to call any empirical constant that appears in a rate law a rate constant. In a complex rate law, the constants can often be presented in more than one way. In the example above, we can divide numerator and denominator by, say, $k_3$, to obtain a representation in which the constant coefficients have different values, one of which is unity.
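Whatever its algebraic form, a rate law is simply a function of the concentrations. The snippet below evaluates a power-law rate and a complex rate law of the kind shown above; the rate constants, orders, and concentrations are invented for illustration.

```python
def power_law_rate(conc, k, orders):
    """R = k * product of [X]^order for a simple power-law rate law."""
    rate = k
    for species, order in orders.items():
        rate *= conc[species] ** order
    return rate

def complex_rate(conc, k1, k2, k3):
    """R = k1[A][B] / (k2[C] + k3[D]), the complex form discussed above."""
    return k1 * conc["A"] * conc["B"] / (k2 * conc["C"] + k3 * conc["D"])

# Invented concentrations (mol L^-1) and rate constants.
conc = {"A": 0.10, "B": 0.05, "C": 0.02, "D": 0.01}

print(power_law_rate(conc, k=3.0e-2, orders={"A": 1, "B": 2}))  # 1st order in A, 2nd in B
print(complex_rate(conc, k1=4.0, k2=1.0, k3=2.5))
```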
Most of the rest of this chapter is devoted to understanding the relationship between an observed overall reaction rate and the rates of the elementary processes that contribute to it. Our principal objective is to understand chemical equilibrium in terms of the rates of competing forward and reverse reactions. At equilibrium, chemical reactions may be occurring rapidly; however, no concentration changes can be observed because each reagent is produced by one set of reactions at the same rate as it is consumed by another set. For the most part, we focus on reactions that occur in closed constant-volume systems.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/05%3A_Chemical_Kinetics_Reaction_Mechanisms_and_Chemical_Equilibrium/5.02%3A_Reaction_Rates_and_Rate_Laws.txt
|
The number of moles of a substance in a system can change with time because several processes occur simultaneously. Not only can a given substance participate in more than one reaction, but also the amount of it that is present can be affected by processes that are not chemical reactions. A variety of transport processes can operate to increase or decrease the amount of the substance that is present in the reaction mixture: A pure solid reactant could dissolve in a reacting solution, or a product could precipitate from it, as the reaction proceeds. A reacting species could diffuse into a reactor across a semi-permeable membrane. Controlled amounts of a reacting species could be added, either continuously or at specified intervals.
Each of the simultaneous processes contributes to the change in the number of moles of $A$ present. At every instant, each of these contributions can be characterized by a rate. Over a short time interval, $\Delta t$, let ${\Delta }_in_A$ be the contribution that the $i^{th}$ process makes to the change in the amount of $A$ in volume $V$. If, even though the $i^{th}$ process may not be a reaction, we use $R_i\left(A\right)$ to represent its rate, its contribution to the rate at which the amount of $A$ changes is
$R_i\left(A\right)={\mathop{\mathrm{lim}}_{\Delta t\to 0} \left(\frac{{\Delta }_in_A}{V\left(t\right)\Delta t}\right)=\frac{1}{V\left(t\right)}\ }\frac{d_in_A}{dt} \nonumber$
If there are numerous such processes, whose rates are $R_1\left(A\right)$, $R_2\left(A\right)$, ..., $R_i\left(A\right)$, ..., $R_{\omega }\left(A\right)$, the observed overall rate is
$R\left(A\right)=\sum^{\omega }_{i=1}{R_i\left(A\right)}=\frac{1}{V\left(t\right)}\sum^{\omega }_{i=1}{\frac{d_in_A}{dt}} \nonumber$ If the volume is constant,
$R_i\left(A\right)=\frac{1}{V}\frac{d_in_A}{dt}=\frac{d_i\left[A\right]}{dt} \nonumber$
and
$R\left(A\right)=\sum^{\omega }_{i=1}{R_i\left(A\right)}=\sum^{\omega }_{i=1}{\frac{d_i\left[A\right]}{dt}}=\frac{d\left[A\right]}{dt} \nonumber$
To illustrate these ideas, let us consider the base hydrolyses of methyl and ethyl iodide. No intermediates are observed in these reactions. If we carry out the base hydrolysis of methyl iodide,
$\ce{CH_3I + OH^{-} -> CH3OH + I^{-}} \nonumber$
in a closed constant-volume system, we can express the reaction rate in several equivalent ways:
$R\left(CH_3I\right)=\frac{1}{V}\frac{d\xi \left(CH_3I\right)}{dt}=\frac{d\left[CH_3OH\right]}{dt}=\frac{d\left[I^-\right]}{dt}=-\frac{d\left[CH_3I\right]}{dt}=-\frac{d\left[OH^-\right]}{dt} \nonumber$
If a mixture of methyl and ethyl iodide is reacted with aqueous base, both hydrolysis reactions consume hydroxide ion and produce iodide ion. The rates of these individual processes can be expressed as
$R\left(CH_3I\right)=\frac{1}{V}\frac{d\xi \left(CH_3I\right)}{dt}=\frac{d\left[CH_3OH\right]}{dt}=-\frac{d\left[CH_3I\right]}{dt} \nonumber$
and
\begin{align*} R\left(CH_3CH_2I\right) &=\frac{1}{V}\frac{d\xi \left(CH_3CH_2I\right)}{dt}=\frac{d\left[CH_3CH_2OH\right]}{dt} \[4pt] &=-\frac{d\left[CH_3CH_2I\right]}{dt} \end{align*}
but the rates at which the concentrations of hydroxide ion and iodide ion change depend on the rates of both reactions. We have
\begin{align*} \frac{d\left[I^-\right]}{dt} &=-\frac{d\left[OH^-\right]}{dt} \[4pt] &=R\left(CH_3I\right)+R\left(CH_3CH_2I\right) \[4pt] &=\frac{1}{V}\frac{d\xi \left(CH_3I\right)}{dt}+\frac{1}{V}\frac{d\xi \left(CH_3CH_2I\right)}{dt} \end{align*}
In principle, either of the reaction rates can be measured by finding the change, over a short time interval, in the number of moles of a particular substance present.
Simultaneous processes occur when a reaction does not go to completion. The hydrolysis of ethyl acetate,
$\ce{CH_3CO_2CH_2CH_3 + H_2O \rightleftharpoons CH_3CO_2H + CH_3CH_2OH} \nonumber$
can reach equilibrium before the limiting reactant is completely consumed. The reaction rate, defined as $R=V^{-1}\left({d\xi }/{dt}\right)$, falls to zero. However, ethyl acetate molecules continue to undergo hydrolysis; the extent of reaction becomes constant because ethyl acetate molecules are produced from acetic acid and ethanol at the same rate as they are consumed by hydrolysis. Evidently, the rate of the forward reaction does not fall to zero even though the net reaction rate does.
Let $R_f$ represent the number of moles of ethyl acetate undergoing hydrolysis per unit time per unit volume. Let $R_r$ represent the number of moles of ethyl acetate being produced per unit time per unit volume. The net rate of consumption of ethyl acetate is $R=R_f-R_r$. At equilibrium, $R=0$, and $R_f=R_r>0$. In such cases, it can be ambiguous to refer to “the reaction” or “the rate of reaction.” The rates of the forward and of the net reaction are distinctly different things.
So long as no intermediate species accumulate to significant concentrations in the reaction mixture, we can find the forward and reverse rates for a reaction like this, at any particular equilibrium composition, in a straightforward way. When we initiate reaction with no acetic acid or ethanol present, the rate of the reverse reaction must be zero. We can find the rate law for the forward reaction by studying the rate of the hydrolysis reaction when the product concentrations are low. Under these conditions, $R_r=0$ and $R\approx R_f$. From the rate law that we find and the equilibrium concentrations, we can calculate the rate of the forward reaction at equilibrium. Likewise, when the ethyl acetate concentration is low, the rate of the hydrolysis reaction is negligible in comparison to that of the esterification reaction. We have $R_f\approx 0$ and $R={-R}_r$, and we can find the rate law for the esterification reaction by studying the rate of the esterification reaction when the concentration of ethyl acetate is negligible. From this rate law, we can calculate the rate of the reverse reaction at the equilibrium concentrations.
5.04: The Effect of Temperature on Reaction Rates
In practice, rate constants vary in response to changes in several factors. Indeed, they are usually the same in two experiments only if we keep everything but the reagent concentrations the same. Another way of saying this is that the rate law captures the dependence of reaction rate on concentrations, while the dependence of reaction rate on any other variable appears as a dependence of rate constants on that variable.
Temperature usually has a big effect. The experimentally observed dependence of rate constants on temperature can be expressed in a compact fashion. Over small temperature ranges it can usually be expressed adequately by the Arrhenius equation:
$k=A\ \mathrm{exp}\left(\frac{-E_a}{RT}\right) \nonumber$
where $E_a$ and $A$ are called the Arrhenius activation energy and the frequency factor (or pre-exponential factor), respectively.
The Arrhenius equation is an empirical relationship. As we see below for our collision-theory model, theoretical treatments predict that the pre-exponential term, A, is weakly temperature dependent. When we investigate reaction rates experimentally, the temperature dependence of A is usually obscured by the uncertainties in the measured rate constants. It is often said, as a rough rule of thumb, that the rate of a chemical reaction doubles if the temperature increases by 10 K. However, this rule can fail spectacularly. A reaction can even proceed more slowly at a higher temperature, and there are multi-step reactions for which this is observed.
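As a quick check of this rule of thumb, the sketch below evaluates the Arrhenius ratio $k(308\ \mathrm{K})/k(298\ \mathrm{K})$ for several activation energies, assuming a temperature-independent pre-exponential factor. The activation energies are hypothetical values chosen for illustration.

```python
import numpy as np

R = 8.314               # gas constant, J mol^-1 K^-1
T1, T2 = 298.0, 308.0   # a 10 K temperature increase near room temperature

# Hypothetical activation energies, J mol^-1
for Ea in (20e3, 50e3, 100e3, 150e3):
    # With A independent of T, k(T2)/k(T1) = exp[-(Ea/R)(1/T2 - 1/T1)]
    ratio = np.exp(-(Ea / R) * (1.0 / T2 - 1.0 / T1))
    print(f"Ea = {Ea / 1e3:5.0f} kJ/mol:  k(308 K)/k(298 K) = {ratio:4.1f}")
```

With these numbers the doubling rule holds only for activation energies near 50 kJ mol$^{-1}$; smaller barriers give ratios well below two and larger barriers give ratios well above it.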
5.05: Other Factors that Affect Reaction Rates
A reaction that occurs in one solvent usually occurs also in a number of similar solvents. For example, a reaction that occurs in water will often occur with a low molecular weight alcohol—or an alcohol-water mixture —as the solvent. Typically, the same rate law is observed in a series of solvents, but the rate constants are solvent-dependent.
Other chemical species that are present in the reaction medium (but which are neither products nor reactants) can also affect observed reaction rates. Any such species meets the usual definition of a catalyst. However, common practice restricts use of the word “catalyst” to a chemical species that substantially increases the rate of the reaction. A chemical species that decreases the rate of the reaction is usually called an inhibitor. If we think that the rate effect of the non-reacting species results from a non-specific or a greater-than-bonding-distance interaction with one or more reacting species, we call the phenomenon a medium effect. A solvent effect is a common kind of medium effect; altering the solvent affects the reaction rate even though the solvent does not form a chemical bond to any of the reactants or products. Dissolved salts can affect reaction rates in a similar way. Such effects often occur when the degree of charge separation along the path of an elementary reaction is significantly different from that in the reactants.
Isotopic substitution in a reactant can affect the reaction rate. (Replacement of a hydrogen atom with a deuterium atom is the most common case.) The effect of an isotopic substitution on a reaction rate is called a kinetic isotope effect. Kinetic isotope effects can provide valuable information about the reaction mechanism. A kinetic isotope effect is expected if the energy needed to make or break a chemical bond to the isotopically substituted atom is a significant component of the activation energy for the reaction. Kinetic isotope effects are usually small in comparison to other factors that affect reaction rates. A ten-fold change in the reaction rate is a big kinetic isotope effect. Effects much smaller than this are often useful; indeed, the absence of a kinetic isotope effect can help distinguish among alternative mechanisms.
In studies of reaction rates that are focused on finding the reaction mechanism, many characteristics of the reaction that are not strictly rate-related can be important. These include the stereochemistry of the product; the Walden inversion that accompanies S${}_{N}$2 reactions at tetrahedral carbon centers is a notable example. Isotopic substitution that occurs incidental to a reaction can help establish that an intermediate is formed. The effects of competing reactions are often significant. The study of competing reactions is frequently helpful when the reaction involves a short-lived and otherwise undetectable intermediate. The use of isotopic substitution and competing reactions is illustrated in Section 5.16, in which we review the base hydrolysis of cobalt(III) pentaammine complexes, ${Co{\left(NH_3\right)}_5X}^{n+}$.
5.06: Mechanisms and Elementary Processes
To see what we mean by an elementary process, let us consider some possible mechanisms for the base hydrolysis of methyl iodide:
$\ce{CH_3I + OH^{-} \to CH_3OH + I^{-}}. \nonumber$
In this reaction, a carbon–iodine bond is broken and a carbon–oxygen bond is formed. While any number of reaction sequences sum to this overall equation, we can write down three that are reasonably simple and plausible. The $\ce{C \bond{-} I}$ bond could be broken first and the $\ce{C \bond{-} OH}$ bond formed thereafter. Alternatively, the $\ce{C\bond{-}OH}$ bond could be formed first and the $\ce{C \bond{-} I}$ bond broken thereafter. In the first case, we have an intermediate species, $\ce{CH^{+}_3}$, of reduced coordination number, and in the second we have an intermediate, $\ce{HO \bond{-} CH3 \bond{-} I^{-}}$, of increased coordination number. Finally, we can suppose that the bond-forming and bond-breaking steps occur simultaneously, so that no intermediate species is formed at all.
• Heterolytic bond-breaking precedes bond-making \begin{align*} \ce{CH_3I} &\to \ce{CH^+_3 + I^{-}} \\[4pt] \ce{CH^{+}_3 + OH^{-}} &\to \ce{CH_3OH} \end{align*} \tag{mechanism a}
• Bond-making precedes bond-breaking \begin{align*} \ce{CH_3I + OH^{-}} &\to \ce{HO\bond{...}CH_3\bond{...}I^{-}} \\[4pt] \ce{HO \bond{...} CH_3 \bond{...} I^{-}} &\to \ce{CH_3OH + I^{-}} \end{align*} \tag{mechanism b}
• Bond-breaking and bond-making are simultaneous $\ce{CH_3I + OH^{-} -> [ HO \bond{...} CH_3 \bond{...} I^{-} ]^{\ddagger} -> CH3OH + I^{-}} \tag{mechanism c}$
The distinction between mechanism (b) and mechanism (c) is that an intermediate is formed in the former but not in the latter. Nevertheless, mechanism (c) clearly involves an intermediate structure in which both the incoming and the leaving group are bonded to the central carbon atom. The distinction between mechanisms (b) and (c) depends on the nature of the intermediate structure. In mechanism (b), we suppose that the intermediate is a bona fide chemical entity; once a molecule of it is formed, that molecule has a finite lifetime. In (c), we suppose that the intermediate structure is transitory; it does not correspond to a molecule with an independent existence.
For this distinction to be meaningful, we must have a criterion that establishes the shortest lifetime we are willing to associate with “real molecules.” It might seem that any minimum lifetime we pick must be wholly arbitrary. Fortunately this is not the case; there is a natural definition for a minimum molecular lifetime. The definition arises from the fact that molecules undergo vibrational motions. If a collection of atoms retains a particular relative orientation for such a short time that it never undergoes a motion that we would recognize as a vibration, it lacks an essential characteristic of a normal molecule. This means that the period of a high-frequency molecular vibration (roughly ${10}^{-14}$ s) is the shortest time that a collection of atoms can remain together and still have all of the characteristics of a molecule. If a structure persists for more than a few vibrations, it is reasonable to call it a molecule, albeit a possibly very unstable one.
In mechanism (c) the structure designated $\ce{[HO \bond{...}CH_3 \bond{...}I^{-}]^{\ddagger}}$ depicts a transitory arrangement of the constituent atoms. The atomic arrangement does not persist long enough for the $\ce{HO\bond{-}CH_3}$ bond or the $\ce{CH_3\bond{-}I}$ bond to undergo vibrational motion. A structure with these characteristics is called an activated complex or a transition state for the reaction, and a superscript double dagger, $\mathrm{\ddagger}$, is conventionally used to signal that a structure has this character. The distinction between a bona fide intermediate and a transition state is clear enough in principle, but it can be very difficult to establish experimentally.
These considerations justify our earlier definition: An elementary reaction is one in which there are no intermediates. Any atomic arrangement that occurs during an elementary reaction does not persist long enough to vibrate before the arrangement goes on to become products or reverts to reactants.
An elementary reaction is one in which there are no intermediates.
We can distinguish a small number of possible kinds of elementary reaction. A single molecule can spontaneously rearrange to a new structure or break into smaller pieces. Two molecules can react to form one or more products. Three molecules can react to produce products. Or we can imagine that some larger number of molecules reacts. We refer to these possibilities as unimolecular, bimolecular, termolecular, and higher-molecularity processes.
The stoichiometry of many reactions is so complicated as to preclude the possibility that they could occur as a single elementary process. For example, the reaction
$\ce{3Fe^{2+} + HCrO_4^{-} + 7H^{+} -> 3Fe^{3+} + Cr^{3+} + 4H2O} \nonumber$
can not plausibly occur in a single collision of three ferrous ions, one chromate ion, and seven hydronium ions. It is just too unlikely that all of these species could find themselves in the same place, at the same time, in the proper orientation, and with sufficient energy to react. In such cases, the stoichiometric mechanism must be a series of elementary steps. For this reaction, a skeletal representation of one plausible series is
\begin{align*} \ce{Fe(II) + Cr(VI)} &\to \ce{Fe(III) + Cr(V)} \\[4pt] \ce{Fe(II) + Cr(V)} &\to \ce{Fe(III) + Cr(IV)} \\[4pt] \ce{Fe(II) + Cr(IV)} &\to \ce{Fe(III) + Cr(III)} \end{align*}
5.07: Rate Laws for Elementary Processes
Bimolecular Elementary Processes
If we think about an elementary bimolecular reaction between molecules $A$ and $B$, we recognize that the reaction can occur only when the molecules come into contact. They must collide before they can react. So the probability that they react must be proportional to the probability that they collide, and the number of molecules of product formed per unit time must be proportional to the number of $A-B$ collisions that occur in unit time. In our development of the collision theory for bimolecular reactions in the gas phase (§4-12 to §4-16), we find that the number of such collisions is proportional to the concentration of each reactant. It is clear that this conclusion must apply to any bimolecular reaction.
If we have a vessel containing some concentration of $A$ molecules and some concentration of $B$ molecules, the collection experiences some number of $A-B$ collisions per unit time. If we double the concentration of $B$ molecules, each $A$ molecule is twice as likely as before to encounter a $B$ molecule. Indeed, for any increase in the concentration of $B$ molecules, the number of collisions of an $A$ molecule with $B$ molecules increases in the same proportion. The number of $A-B$ collisions must be proportional to the concentration of $B$ molecules. Likewise, increasing the concentration of A molecules must increase the number of $A-B$ collisions proportionately; the number of $A-B$ collisions must also be proportional to the concentration of $A$ molecules. We conclude that the rate for any bimolecular reaction between molecular substances $A$ and $B$ is described by the equations
$R=-\frac{1}{V}\frac{dn_A}{dt}=-\frac{1}{V}\frac{dn_B}{dt}=k_2\left[A\right]\left[B\right] \nonumber$
This is a second-order rate law, and the proportionality constant, $k_2$, is called a second-order rate constant.
In §4-12 to §4-16, we derive an equation for the frequency with which a type $1$ molecule collides with type $2$ molecules in the gas phase when the concentration of type $2$ molecules is $N_2$ and the kinetic energy along the line of centers exceeds a threshold value, $\epsilon_a$ per molecule, or $E_a$ per mole. The rate at which such collisions occur is
$\rho_{12} \left(\epsilon_a\right) =N_1N_2\sigma^2_{12}\left(\frac{8\pi kT}{\mu }\right)^{1/2}\mathrm{exp}\left(\frac{-\epsilon_a}{kT}\right) \nonumber$
which is just $\mathrm{exp}\left(-{\epsilon_a}/{kT}\right)$ times the rate at which collisions of any energy occur between molecules of type $1$ and molecules of type $2$. If reaction occurs at every collision between a molecule $1$ and a molecule $2$ in which the kinetic energy along the line of centers exceeds $\epsilon_a$, the collision rate, $\rho_{12}\left(\epsilon_a\right)$, equals the reaction rate. We have $R=\rho_{12}\left(\epsilon_a\right)$.
If the temperature-dependence of the rate constant is given by the Arrhenius equation, the rate of the bimolecular reaction between species 1 and species 2 is
$R=k_2N_1N_2=\left[A\ \mathrm{exp}\left(\frac{-E_a}{RT}\right)\right]N_1N_2 \nonumber$
where $A$ is independent of temperature and $E_a=\overline{N}\epsilon_a$. The collision-theory model for the bimolecular reaction predicts almost the same form; the difference is a factor of $T^{1/2}$ in the pre-exponential factor, ${\sigma }^2_{12}{\left({8\pi kT}/{\mu }\right)}^{1/2}$. The effect of the $T^{1/2}$ term is usually small in comparison to the effect of temperature in the exponential term. Thus, the temperature dependence predicted by collision theory, which is a highly simplified theoretical model, and that predicted by the Arrhenius equation, which is an empirical generalization usually used to describe data taken over a limited temperature range, are in substantial agreement.
Experimentally determined values of the pre-exponential factor for gas-phase bimolecular reactions can approach the value calculated from collision theory. However, particularly for reactions between polyatomic molecules, the experimental value is often much smaller than the calculated collision frequency. We rationalize this observation by recognizing that our colliding-spheres model provides no role for the effect of molecular structures. When the colliding molecules are not spherical, the collision angle is an incomplete description of their relative orientation. If the relative orientation of two colliding molecules is unfavorable to reaction, it is entirely plausible that they can fail to react no matter how energetic their collision. To recognize this effect, we suppose that the reaction rate is proportional to a steric factor, $\gamma$, where $\gamma$ represents the probability that a colliding pair of molecules have the relative orientation that is necessary for reaction to occur. Of course, $\gamma$ must be less than one. Taking this amplification of the collision model into account, the relationship between the reaction rate and the collision frequency becomes
$\frac{d\xi }{dt}=\gamma \rho_{12}\left(\varepsilon_a\right) \nonumber$
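A minimal numerical sketch of this collision-theory rate expression is given below. All of the parameter values (collision diameter, reduced mass, threshold energy, number densities, and steric factor) are hypothetical, chosen only to show how the pieces combine.

```python
import numpy as np

kB = 1.380649e-23    # Boltzmann constant, J K^-1
NA = 6.02214076e23   # Avogadro constant, mol^-1

# Hypothetical parameters for a gas-phase bimolecular reaction
T = 500.0            # temperature, K
sigma12 = 4.0e-10    # collision diameter, m
mu = 30.0e-3 / NA    # reduced mass, kg (equivalent to 30 g/mol)
Ea = 60.0e3          # threshold (activation) energy, J mol^-1
eps_a = Ea / NA      # threshold energy per molecule, J
N1 = N2 = 1.0e24     # number densities of species 1 and 2, m^-3
gamma = 0.1          # hypothetical steric factor

# Rate of collisions (per unit volume) whose line-of-centers kinetic
# energy exceeds eps_a, following the expression in the text
rho12 = (N1 * N2 * sigma12**2
         * np.sqrt(8.0 * np.pi * kB * T / mu)
         * np.exp(-eps_a / (kB * T)))

print(f"energetic collision rate: {rho12:.2e} m^-3 s^-1")
print(f"with steric factor:       {gamma * rho12:.2e} m^-3 s^-1")
```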
When we consider reactions in solution, we recognize that there are usually many more solvent molecules than reactant molecules. As a result, collisions of a reactant molecule with solvent molecules are much more frequent than collisions of a molecule of one reactant with a molecule of another reactant. The high frequency of collisions with solvent molecules means that the net distance moved by a reactant molecule in unit time is much less in solution than in a gas. This decreases the probability that two reactant molecules will meet. On the other hand, once two reactant molecules near one another, the solvent molecules tend to keep them together, and they are likely to collide with one another many times before they finally drift apart. (This is known as the solvent-cage effect.) We can expect these effects to roughly offset one another.
Termolecular Elementary Processes
A termolecular elementary process is a reaction in which three reactant molecules collide. For this to happen, an $A$ molecule and a $B$ molecule must be very close to one another at exactly the time that a $C$ molecule encounters the pair of them. If the reactants are not very concentrated, the probability that a given $A$ molecule is very close to a $B$ molecule during any short time interval is small. The probability that this $A$ molecule will be hit by a $C$ molecule during the same time interval is also very small. The probability that all three species will collide at the same time is the product of two small probabilities; under any given set of conditions, the number of collisions involving three molecules is smaller than the number of collisions between two molecules. The probability of a termolecular collision and hence the rate of a termolecular elementary process is proportional to the concentrations of all three reacting species
$R=-\frac{1}{V}\frac{dn_A}{dt}=-\frac{1}{V}\frac{dn_B}{dt}=-\frac{1}{V}\frac{dn_C}{dt}=k_3\left[A\right]\left[B\right]\left[C\right] \nonumber$
However, the low probability of a termolecular collision means that we can expect the termolecular rate constant, $k_3$, to be very small. If termolecular mechanisms are rare, higher-molecularity mechanisms must be exceedingly rare, if, indeed, any occur at all. For most chemical reactions, the mechanism is a series of unimolecular and bimolecular elementary reactions.
Unimolecular Elementary Processes
A unimolecular elementary process is one in which a molecule spontaneously undergoes a chemical change. If we suppose that there is a constant probability that any given $A$ molecule undergoes reaction in unit time, then the total number reacting in unit time is proportional to the number of $A$ molecules present. Let the average change in the number of moles of $A$ in unit time be ${\overline{\Delta n}}_A$, the number of moles of $A$ in the system be $n_A$, and the proportionality constant be $k$. (We choose a unit of time that is small enough to ensure that $\left|{\overline{\Delta n}}_A\right|\ll n_A$.) If the probability of reaction is constant, we have ${\overline{\Delta n}}_A=-kn_A$. Since ${\overline{\Delta n}}_A$ is the average change in the number of moles of $A$ in unit time, the change in time $\Delta t$ is $\Delta n_A={\overline{\Delta n}}_A\Delta t$, so that
${\overline{\Delta n}}_A=\frac{\Delta n_A}{\Delta t}=-kn_A \nonumber$
Dividing by the volume of the system, we have
$\frac{1}{V}\frac{\Delta n_A}{\Delta t}=-k\frac{n_A}{V} \nonumber$
In the limit that $\Delta t\to 0$, the term on the left becomes $V^{-1}\left({dn_A}/{dt}\right)$, which is the negative of the reaction rate, $R=-V^{-1}\left({dn_A}/{dt}\right)$; since ${n_A}/{V}=\left[A\right]$, we have
$R=-\frac{1}{V}\frac{dn_A}{dt}=k\left[A\right] \nonumber$
Thus, a constant reaction probability implies that a unimolecular reaction has a first-order rate law. If the volume is constant, we have
$\frac{d\left[A\right]}{dt}=-k\left[A\right] \nonumber$
The idea that a unimolecular reaction corresponds to a constant reaction probability can be rationalized by introducing a simple model of the reaction process. This model assumes that reactant molecules have a distribution of energies, that only molecules whose energies exceed some minimum can react, and that this excess energy must be in some specific internal motion before the reaction can occur. Molecules exchange energy by colliding with one another. When a molecule acquires excess energy as the result of a collision, redistribution of this energy among the motions available to the molecule is not instantaneous. A characteristic length of time is required for excess energy to reach the specific internal motion that leads to reaction. Any given molecule can retain excess energy only for the short time between two collisions. The molecule gains excess energy in one collision and loses it in a subsequent one. Reaction can occur only if the excess energy reaches the specific internal motion before the molecule undergoes a deactivating collision. (We return to these ideas in §14 and §15.)
In summary, only two kinds of elementary processes are needed to develop a mechanism for nearly any chemical change. These elementary processes and their rate laws are:
Unimolecular: $A \to$ products, with rate law $\frac{d\left[A\right]}{dt}=-k\left[A\right] \nonumber$
Bimolecular: $A + B \to$ products, with rate law $\frac{d\left[A\right]}{dt}=-k\left[A\right]\left[B\right] \nonumber$
Finally, we should note that we develop these rate laws for elementary processes under the assumption that the rate at which molecules collide is proportional to the concentrations of the colliding species. In doing so, we implicitly assume that intermolecular forces of attraction or repulsion have no effect on this rate. When our goal is to predict rate laws from reaction mechanisms, this assumption is almost always an adequate approximation. However, when we study chemical equilibria, we often find that we must allow for the effects of intermolecular forces in order to obtain an adequate description. In chemical thermodynamics, we provide for the effects of such forces by introducing the idea of a chemical activity. The underlying idea is that the chemical activity of a compound is the effective concentration of the compound—we can view it as the concentration “corrected” for the effects of intermolecular forces.
5.08: Experimental Determination of Rate Laws
The determination of a rate law is a matter of finding an empirical equation that adequately describes reaction-rate data. We can distinguish two general approaches to this task. One approach is to measure reaction rate directly. That is, for $A+B\to C$, we measure the reaction rate in experiments where the concentrations, $[A]$, $[B]$, and $[C]$, of reactants and products are known. The other is to measure a concentration at frequent time intervals as a batch reaction goes nearly to completion. We then seek a differential equation that is consistent with this concentration-versus-time data.
If the reaction is the only process that affects $\Delta n_C$, direct measurement of the reaction rate can be effected by measuring $\Delta n_C$ over a short time interval, $\Delta t$, in which the concentrations, $[A]$, $[B]$, and $[C]$, do not change appreciably. This is often difficult to implement experimentally, primarily because it is difficult to measure small values of $\Delta n_C$ with the necessary accuracy at known values of $[A]$, $[B]$, and $[C]$.
Method of Initial Rates
The method of initial rates is an experimentally simple method in which the reaction rate is measured directly. Initial-rate measurements are extensively used in the study of enzyme-catalyzed reactions. Direct measurement of reaction rate can also be accomplished using a flow reactor. We discuss the method of initial rates, a particular kind of flow reactor known as a CSTR, and enzyme catalysis in Sections 5.10, 5.11, and 5.13, respectively.
The most common reaction-rate experiment is a batch reaction in which we mix the reactants as rapidly as possible and then monitor the concentration vs. time of one (or more) of the reactants or products as the reaction proceeds. We do the mixing so that the initially mixed reactants are at a known temperature, which can be maintained constant for the remainder of the experiment. The data from such an experiment are a set of concentrations and the times at which they are measured. To find the rate law corresponding to these concentration-versus-time data, we employ a trial-and-error procedure. We guess what the rate law is likely to be. We then obtain a general solution for this differential equation. This solution predicts the dependence of concentrations versus time as a function of one or more rate constants. If we can obtain a satisfactory fit of experimental concentration-versus-time data to the concentration-versus-time equation predicted by the rate law, we conclude that the rate law is a satisfactory representation of the experimental data.
For a reaction $A\to C$ in a closed constant-volume system, we would want to test a first-order rate law, which we can express in several alternative ways:
$\frac{1}{V}\frac{d\xi }{dt}= -\frac{d\left[A\right]}{dt}= \frac{d\left[C\right]}{dt}= k\left[A\right] \nonumber$
Using the changing concentration of A to express the rate, separating variables, and integrating between the initial concentration $\left[A\right]=\left[A\right]_0$ at $t=0$ and concentration $\left[A\right]$ at time $t$ gives
$\int^{\left[A\right]}_{\left[A\right]_0}{\frac{d\left[A\right]}{\left[A\right]}}= -k\int^t_0 dt \nonumber$
so that
$\ln \frac{\left[A\right]}{\left[A\right]_0} = -kt \nonumber$
or
$\left[A\right]=\left[A\right]_0 \mathrm{exp}\left(-kt\right) \nonumber$
Frequently it is convenient to introduce the extent of reaction or the concentration of a product as a parameter. In the present instance, if the initial concentration of $C$ is zero, $\left[C\right]={\xi }/{V}=x$. Then at any time, t, we have $\left[A\right]={\left[A\right]}_0-x$, and the first-order rate equation can be written as
$\frac{dx}{dt}=k \left({\left[A\right]}_0-x\right) \nonumber$
which we rearrange and integrate between the limits $x\left(0\right)=0$ and $x\left(t\right)=x$ as
$\int^x_0 \frac{-dx}{\left[A\right]_0-x}= -k\int^t_0{dt} \nonumber$
to give
$\ln \left(\frac{\left[A\right]_0-x}{\left[A\right]_0}\right)= -kt \nonumber$
It is easy to test whether concentration versus time data conform to the first-order decay model. If they do, a plot of $\ln \left(\left[A\right]_0-x\right)$ or $\ln \left[A\right]$, versus time, $t$, is a straight line.
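To illustrate this test, the sketch below generates synthetic first-order data (hypothetical $k$ and $\left[A\right]_0$, with a little noise) and recovers the rate constant from the slope of $\ln \left[A\right]$ versus $t$.

```python
import numpy as np

# Synthetic first-order decay data (hypothetical k and [A]0, 1% noise)
k_true = 0.015   # s^-1
A0 = 0.100       # M
t = np.linspace(0.0, 200.0, 21)   # s
rng = np.random.default_rng(0)
A = A0 * np.exp(-k_true * t) * (1.0 + 0.01 * rng.standard_normal(t.size))

# For first-order data, ln[A] versus t is linear with slope -k
slope, intercept = np.polyfit(t, np.log(A), 1)
print(f"recovered k = {-slope:.4f} s^-1   (true value {k_true} s^-1)")
```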
For a reaction $2A\to C$, we would want to test a rate law of the form
$\frac{1}{V}\frac{d\xi }{dt}= -\frac{1}{2}\frac{d\left[A\right]}{dt} =\frac{d\left[C\right]}{dt} =k\left[A\right]^2 \nonumber$
If the initial concentration of $C$ is zero, $\left[C\right]={\xi }/{V}=x$, and $\left[A\right]=\left[A\right]_0-2x$ at any time $t$. The rate law can be written as
$\frac{dx}{dt}=k \left(\left[A\right]_0-2x\right)^2 \nonumber$
and rearranged and integrated as
$\int^x_0{\frac{-dx}{\left(\left[A\right]_0-2x\right)^2}}= -k\int^t_0{dt} \nonumber$
to give
$\frac{1}{{\left[A\right]}_0-2x}-\frac{1}{\left[A\right]_0}=2kt \nonumber$
or
$\frac{1}{\left[A\right]}-\frac{1}{\left[A\right]_0}=2kt \nonumber$
If concentration-versus-time data conform to this second-order rate law, a plot of ${\left[A\right]}^{-1}$ versus time is a straight line.
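The analogous test for this second-order case can be sketched the same way; the values below are synthetic, generated directly from the integrated rate law, so the fit simply recovers the assumed rate constant.

```python
import numpy as np

# Synthetic data for 2A -> C obeying -d[A]/dt = 2k[A]^2 (hypothetical values)
k_true = 0.50   # M^-1 s^-1
A0 = 0.020      # M
t = np.linspace(0.0, 500.0, 21)            # s
A = 1.0 / (1.0 / A0 + 2.0 * k_true * t)    # integrated second-order law

# For this rate law, 1/[A] versus t is linear with slope 2k
slope, intercept = np.polyfit(t, 1.0 / A, 1)
print(f"slope = {slope:.3f} M^-1 s^-1   ->   k = {slope / 2:.3f} M^-1 s^-1")
```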
For a reaction $A+B\to C$, we would want to test a rate law of the form
$\frac{1}{V}\frac{d\xi }{dt}= -\frac{d\left[A\right]}{dt}= -\frac{d\left[B\right]}{dt}=\frac{d\left[C\right]}{dt}=k\left[A\right]\left[B\right] \nonumber$
If the initial concentration of $C$ is again zero, $\left[C\right]={\xi }/{V}=x$, $\left[A\right]={\left[A\right]}_0-x$ and $\left[B\right]={\left[B\right]}_0-x$ at any time $t$. The rate law can be written as
$\frac{dx}{dt}=k\ \left({\left[A\right]}_0-x\right) \left(\left[B\right]_0-x\right) \nonumber$
If ${\left[A\right]}_0\neq {\left[B\right]}_0$, this can be integrated (by partial fractions) to give
$\frac{1}{\left[B\right]_0-\left[A\right]_0}\ln \frac{\left[A\right]_0\left(\left[B\right]_0-x\right)}{\left[B\right]_0\left(\left[A\right]_0-x\right)} =kt \nonumber$
If experimental data conform to this equation, a plot of
$\ln \frac{\left({\left[B\right]}_0-x\right)}{\left({\left[A\right]}_0-x\right)} \nonumber$
versus time is linear. In practice, this often has disadvantages, and experiments to study reactions like this typically exploit the technique of flooding.
Flooding is a widely used experimental technique that enables us to simplify a complex rate law in a way that makes it more convenient to test experimentally. In the case we are considering, we can often arrange to carry out the reaction with the initial concentration of $B$ much greater than the initial concentration of $A$. Then the change that occurs in the concentration of $B$ during the reaction has much less effect on the reaction rate than the change that occurs in the concentration of $A$; in the rate equation, it becomes a good approximation to let $\left[B\right]=\left[B\right]_0$ at all times. (For a fuller consideration of this point, see problem 5.23.) The second-order rate equation simplifies to
$\frac{d\left[A\right]}{dt}= -\frac{d\left[C\right]}{dt}= -k_{obs}\left[A\right] \nonumber$
where
$k_{obs}=k{\left[B\right]}_0 \nonumber$
Since the simplified rate equation is approximately first order, the observed rate constant, $k_{obs}$, is the negative of the slope of a plot of $\ln \left[A\right]$ versus $t$. $k_{obs}$ is called a pseudo-first-order rate constant.
Of course, one such experiment tests only whether the true rate law is first order in $[A]$. It tells nothing about the dependence on $[B]$. If we do several such experiments at different initial concentrations of $B$, the resulting set of $k_{obs}$ values must be directly proportional to the corresponding ${[B]}_0$ values. This can be tested graphically by plotting $k_{obs}$ versus $[B]_0$. If the rate law is first order in $[B]$, the resulting plot is linear with an intercept of zero. The slope of this plot is the second-order rate constant, $k$.
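The sketch below illustrates this second stage of a flooding analysis. The pseudo-first-order rate constants and the flooded $[B]_0$ values are invented for illustration; in a real study each $k_{obs}$ would come from a $\ln \left[A\right]$-versus-$t$ plot.

```python
import numpy as np

# Hypothetical pseudo-first-order rate constants from flooded experiments
B0 = np.array([0.10, 0.20, 0.40, 0.80])            # M, with [B]0 >> [A]0
kobs = np.array([0.0021, 0.0039, 0.0082, 0.0158])  # s^-1, from ln[A] vs t plots

# If the rate law is first order in [B], k_obs = k*[B]0: a line through
# the origin whose slope is the second-order rate constant k.
slope, intercept = np.polyfit(B0, kobs, 1)
print(f"k = {slope:.4f} M^-1 s^-1   intercept = {intercept:.2e} s^-1")
```

A fitted intercept close to zero supports the assumed first-order dependence on $[B]$.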
Flooding works by simplifying the rate law that is observed in a given experiment. Similar simplification can be achieved by designing the experiment so that the initial concentrations of two or more reactants are proportional to their stoichiometric coefficients. For the reaction $A+B\to C$ and the expected rate law $\frac{d\left[C\right]}{dt}=k\left[A\right]\left[B\right] \nonumber$
we would initiate the experiment with equal concentrations of reactants $A$ and $B$. Letting ${\left[A\right]}_0={\left[B\right]}_0=\alpha$ and $\left[C\right]={\xi }/{V}=x$, the concentrations of $A$ and $B$ at longer times become $\left[A\right]=\left[B\right]=\alpha -x$. The rate law becomes effectively second order.
$\frac{dx}{dt}=k{\left(\alpha -x\right)}^2 \nonumber$
5.09: First-order Rate Processes
First-order rate processes are ubiquitous in nature—and commerce. In chemistry we are usually interested in first-order decay processes; in other subjects, first-order growth is common. We can develop our appreciation for the dynamics—and mathematics—of first-order processes by considering the closely related subject of compound interest.
Compound Interest
When a bank says that it pays 5% annual interest, compounded annually, on a deposit, it means that for every \$1.00 we deposit at the beginning of a year, the bank will add 5%, or \$0.05, to our account at the end of the year, making our deposit worth \$1.05. If we let the value of our deposit at the end of year $n$ be $P\left(n\right)$, and the interest rate (expressed as a fraction) be $r$, with $r>0$, we can write
$P\left(1\right)=P\left(0\right)+\Delta P=P\left(0\right)+rP\left(0\right)=\left(1+r\right)P\left(0\right)\nonumber$
where we represent the first year’s interest by $\Delta P=rP\left(0\right)$. If we leave all of the money in the account for an additional year, we will have
$P\left(2\right)=\left(1+r\right)P\left(1\right)={\left(1+r\right)}^2P\left(0\right)\nonumber$
and after t years we will have
$P\left(t\right)={\left(1+r\right)}^tP\left(0\right)\nonumber$
Sometimes a bank will say that it pays 5% annual interest, compounded monthly. Then the bank means that it will compute a new balance every month, based on $r=0.05\ {\mathrm{year}}^{\mathrm{-1}}=\left({0.05}/{12}\right)\ {\mathrm{month}}^{\mathrm{-1}}$. After one month
$P\left(1\ \mathrm{month}\right)=\left(1+\frac{0.05}{12}\right)P\left(0\right)\nonumber$
and after $n$ months
$P\left(n\ \mathrm{months}\right)={\left(1+\frac{0.05}{12}\right)}^nP\left(0\right)\nonumber$
If we want the value of the account after $t$ years, we have, since $n=12t$,
$P\left(t\right)={\left(1+\frac{0.05}{12}\right)}^{12t}P\left(0\right)\nonumber$
If the bank were to say that it pays interest at the rate $r$, compounded daily, the balance at the end of $t$ years would be
$P\left(t\right)={\left(1+\frac{r}{365}\right)}^{365t}P\left(0\right)\nonumber$
For any number of compoundings, $m$, at rate $r$, during a year, the balance at the end of $t$ years would be
$P\left(t\right)={\left(1+\frac{r}{m}\right)}^{mt}P\left(0\right)\nonumber$
Sometimes banks speak of continuous compounding, which means that they compute the value of the account at time $t$ as the limit of this equation as $m$ becomes arbitrarily large. That is, for continuous compounding, we have
$P\left(t\right)={\mathop{\mathrm{lim}}_{m\to \infty } \left[{\left(1+\frac{r}{m}\right)}^{mt}\right]\ }P\left(0\right)\nonumber$
Fortunately, we can think about the continuous compounding of interest in another way. What we mean is that the change in the value of the account, $\Delta P$, over a short time interval, $\Delta t$, is given by
$\Delta P=rP\Delta t\nonumber$
where $P$ is the (initial) value of the account for the interval $\Delta t$, and $r$ is the fractional change in $P$ during one unit of time. So we can write
${\mathop{\mathrm{lim}}_{\Delta t\to 0} \left(\frac{\Delta P}{\Delta t}\right)\ }=\frac{dP}{dt}=rP\nonumber$
Separating variables to obtain ${dP}/{P}=rdt$ and integrating between the limits $P=P\left(0\right)$ at $t=0$ and $P=P\left(t\right)$ at $t=t$, we obtain
$\ln \frac{P\left(t\right)}{P\left(0\right)}=rt\nonumber$ or
$P\left(t\right)=P\left(0\right)\mathrm{exp}\left(rt\right)\nonumber$
Comparing the two equations we have derived for continuous compounding, we see that
$\mathrm{exp}\left(rt\right)={\mathop{\mathrm{lim}}_{m\to \infty } {\left(1+\frac{r}{m}\right)}^{mt}\ }\nonumber$
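A short numerical check makes the limit concrete: as the number of compoundings per year grows, the compounded balance approaches the continuously compounded value $P\left(0\right)\mathrm{exp}\left(rt\right)$. The deposit, rate, and time below are arbitrary illustrative values.

```python
import math

r, t, P0 = 0.05, 10.0, 100.0   # annual rate, years, initial deposit

# Balance after t years for m compoundings per year, versus the limit
for m in (1, 12, 365, 10_000, 1_000_000):
    print(f"m = {m:>9}:  P(t) = {P0 * (1 + r / m) ** (m * t):.6f}")
print(f"continuous :  P(t) = {P0 * math.exp(r * t):.6f}")
```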
Continuous compounding of interest is an example of first-order or exponential growth. Other examples are found in nature; the growth of bacteria normally follows such an equation. Reflection suggests that such behavior should not be considered remarkable. It requires only that the increase per unit time in some quantity, $P$, be proportional to the amount of $P$ that is already present: $\Delta P=rP\Delta t$. Since $P$ measures the number of items (dollars, molecules, bacteria) present, this is equivalent to our observation in Section 5.7 that a first-order process corresponds to a constant probability that a given individual item will disappear (first-order decay) or reproduce (first-order growth) in unit time. For a first-order decay we have, keeping $r>0$,
$\Delta P=-rP\Delta t\nonumber$
In the limit as $\Delta t\to 0$,
$\frac{dP}{dt}=-rP\nonumber$
which has solution
$P\left(t\right)=P\left(0\right)\mathrm{exp}\left(-rt\right)\nonumber$
First-order growth and first-order decay both depend exponentially on $rt$. The difference is in the sign of the exponential term. For exponential growth, $P\left(t\right)$ becomes arbitrarily large as $t\to \infty$; for exponential decay, $P\left(t\right)$ goes to zero. If the concentration of a chemical species $A$ decreases according to a first-order rate law, we have
${\ln \frac{\left[A\right]}{\left[A\right]_0}\ }=-kt\nonumber$
The units of the rate constant, $k$, are ${\mathrm{s}}^{\mathrm{-1}}$. The half-life of a chemical reaction is the time required for one-half of the stoichiometrically possible change to occur. For a first-order decay, the half-life, $t_{1/2}$, is the time required for the concentration of the reacting species to decrease to one-half of its value at time zero; that is, when the time is $t_{1/2}$, the concentration is $\left[A\right]=\left[A\right]_0/2$. Substituting into the integrated rate law, we find that the half-life of a first-order decay is independent of concentration; the half-life is
$t_{1/2}=\frac{\ln 2}{k}\nonumber$
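For example, with a hypothetical first-order rate constant, the half-life and the time required to reach any chosen fraction of completion follow directly from the integrated rate law:

```python
import math

k = 2.31e-3   # hypothetical first-order rate constant, s^-1

t_half = math.log(2) / k
print(f"t_1/2 = {t_half:.0f} s")   # about 300 s, independent of [A]0

# Time for [A] to fall to 1% of its initial value, from ln([A]/[A]0) = -kt
t_99 = -math.log(0.01) / k
print(f"time to 99% completion = {t_99:.0f} s  ({t_99 / t_half:.2f} half-lives)")
```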
5.10: Rate Laws by the Study of Initial Rates
In concept, the most straightforward way to measure reaction rate directly is to measure the change in the concentration of one reagent in a short time interval immediately following initiation of the reaction. The initial concentrations are known from the way the reaction mixture is prepared. If necessary, the initial mixture can be prepared so that known concentrations of products or reaction intermediates are present. The initial reaction rate is approximated as the measured concentration change divided by the elapsed time. The accuracy of initial-rate measurements is often poor. This can result from concentration variations associated with initiation of the reaction; the actual mixing process is not instantaneous and significant reaction can occur before the mixture becomes truly homogeneous. Measuring small changes in concentration with sufficient accuracy can also be difficult.
Enzymes are naturally occurring catalysts for biochemical reactions. In the study of enzyme-catalyzed reactions, it is usually possible to select the enzyme concentration and other reaction conditions so that the initial rate can be measured with adequate accuracy. For such studies, initial-rate measurements are used extensively. For other types of reactions, the method of initial rates is usually less effective than alternative methods.
To illustrate the application of the method, suppose we have a reaction
$A+B+C\to \ Products \nonumber$
and that we are able to measure small changes in $[A]$ with good accuracy. We seek a rate law of the form
$-\frac{d[A]}{dt}=f\left([A],[B],[C]\right) \nonumber$
For any given experiment we approximate ${d[A]}/{dt}$ by ${\Delta [A]}/{\Delta t}$, and approximate the average concentrations of the reagents over the interval $\Delta t$ by their initial values: $[A]={[A]}_0$, $[B]={[B]}_0$, and $[C]={[C]}_0$. By carrying out a number of such experiments with suitably chosen initial concentrations, we can determine the functional form of the rate law and evaluate the rate constants that appear in it.
Table $1$: Hypothetical reaction rate data

| ${\Delta [A]}/{\Delta t}$, $\underline{\mathrm{M}}\ \mathrm{s}^{-1}$ | $\Delta t$, $\mathrm{s}$ | ${[A]}_0$, $\underline{\mathrm{M}}$ | ${[B]}_0$, $\underline{\mathrm{M}}$ | ${[C]}_0$, $\underline{\mathrm{M}}$ |
|---|---|---|---|---|
| $-2\times {10}^{-7}$ | 1000 | 0.010 | 0.010 | 0.010 |
| $-4\times {10}^{-7}$ | 500 | 0.010 | 0.010 | 0.020 |
| $-2\times {10}^{-7}$ | 1000 | 0.010 | 0.020 | 0.010 |
| $-8\times {10}^{-7}$ | 2500 | 0.020 | 0.010 | 0.010 |
Table $1$ presents data for a hypothetical reaction that serve to illustrate the basic concept. We suppose that initial rates have been determined for four different combinations of initial concentrations. Comparison of the first and second experiments indicates that doubling $[C]$ doubles the reaction rate, indicating that the rate depends on $[C]$ to the first power. Comparison of the first and third experiments indicates that doubling $[B]$ leaves the reaction rate unchanged, implying that the rate is independent of $[B]$. Comparison of the first and fourth experiments indicates that doubling $[A]$ increases the reaction rate by a factor of four, implying that the rate is proportional to the second power of $[A]$. We infer that the rate law is
$-\frac{d[A]}{dt}=k{[A]}^2{[B]}^0{[C]}^1 \nonumber$
Given the form of the rate law, an estimate of the value of the rate constant, $k$, can be obtained from the data for each experiment. For this illustration, we calculate $k=0.20\ \underline{\mathrm{M}}^{-2}\ {\mathrm{s}}^{-1}$ from each of the experiments.
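The sketch below carries out the same analysis numerically on the data of Table 1, extracting the apparent order in each reagent from the pair of experiments in which only that reagent's initial concentration changes, and then evaluating $k$ for each experiment.

```python
import numpy as np

# Data from Table 1: initial rates (-d[A]/dt) and initial concentrations
rate = np.array([2e-7, 4e-7, 2e-7, 8e-7])      # M s^-1
A0 = np.array([0.010, 0.010, 0.010, 0.020])    # M
B0 = np.array([0.010, 0.010, 0.020, 0.010])    # M
C0 = np.array([0.010, 0.020, 0.010, 0.010])    # M

# Apparent order in each reagent, from the pair of experiments in which
# only that reagent's initial concentration changes: n = ln(r2/r1)/ln(c2/c1)
order_C = np.log(rate[1] / rate[0]) / np.log(C0[1] / C0[0])
order_B = np.log(rate[2] / rate[0]) / np.log(B0[2] / B0[0])
order_A = np.log(rate[3] / rate[0]) / np.log(A0[3] / A0[0])
print(f"orders:  A {order_A:.1f}   B {order_B:.1f}   C {order_C:.1f}")

# Rate constant from each experiment, using rate = k [A]^2 [C]
k = rate / (A0**2 * C0)
print("k =", k, "M^-2 s^-1")
```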
5.11: Rate Laws from Experiments in a Continuous Stirred Tank Reactor
A continuous stirred tank reactor (CSTR)—or capacity-flow reactor—is a superior method of collecting kinetic data when the rate law is complex. Unfortunately, a CSTR tends to be expensive to construct and complex to operate. Figure $1$ gives a schematic representation of the essential features of a CSTR. Fresh reagents are fed to a reactor vessel of volume $V$ at a constant rate. A portion of the reactor contents is continuously removed at the same volumetric flow rate. Because the addition and removal of material occur at the same rate, the reactor is always filled with a fixed volume of reaction mixture. The reaction vessel and its contents are maintained at a constant temperature. The vessel contains a stirrer, which operates continuously and at a high enough speed to keep the contents of the vessel homogeneous (free of concentration and temperature gradients) at all times.
The essential idea involved in the operation of a CSTR is that, after the passage of sufficient time, the concentrations of the various species present in the reactor become constant. We say that the reactor contains steady-state concentrations of the reactants and products. When the reactor reaches this steady state, processes that increase reagent concentrations are occurring at the same rate as processes that decrease them.
Let the reaction be of the form:
$A+B \to \mathrm{Products}. \nonumber$
The concentrations of the reagents in the feed solution are known. Let the concentration of $A$ in the fresh feed solution be ${\left[A\right]}_0$. Let the rate at which fresh reagent-containing solution is fed to the reactor be $u\ \mathrm{L}\ {\mathrm{s}}^{\mathrm{-1}}$. Homogeneous reaction mixture is withdrawn from the vessel at the same flow rate. The amount of $A$ in the reactor is increased by the flow of fresh reactant solution into the reactor. It is decreased both by reaction and by the flow of solution out of the reactor. The steady-state reaction rate, $R=R\left(A\right)$, is the number of moles of reactant $A$ consumed by the reaction per unit time per unit volume of reaction vessel after all of the reagent concentrations have become constant. Since $A$ is a reactant, this rate is
$R=-\frac{1}{V}\frac{d_rn_A}{dt} \nonumber$
where ${d_rn_A}/{dt}$ is the contribution that the reaction makes to the rate at which the number of moles of $A$ in the reactor changes. Since all of the reaction occurs within the vessel, and the vessel is entirely filled with the solution, $R$ is also the number of moles of $A$ consumed by reaction per unit time per unit volume of solution.
At steady state, the number of moles of $A$ in the reactor is determined by:
1. the number of moles of $A$ entering the reactor per unit time,
2. the number of moles of $A$ being consumed by reaction per unit time, and
3. the number of moles of $A$ leaving the reactor in the effluent stream per unit time.
In unit time, the number of moles entering with the feed is given by $u{\left[A\right]}_0$; the number leaving with the effluent is given by $u\left[A\right]$. In unit time, the contribution that the reaction makes to the change in the number of moles of $A$ present is $-RV$. When the steady state is reached, the number of moles entering, plus the change due to reaction, must equal the number of moles leaving:
$\textbf{moles flowing in } \mathbf{+} \textbf{ change in moles due to reaction } - \textbf{ moles flowing out } \mathbf{= 0} \nonumber$
or, in unit time,
$u{\left[A\right]}_0-RV-u\left[A\right]=0 \nonumber$
Solving for $R$, we have
$R=-\frac{u}{V}\left(\left[A\right]-{\left[A\right]}_0\right) \nonumber$
(We define reaction rate so that $R>0$. If $A$ is produced by the reaction, the mass-balance equation is $u{\left[A\right]}_0+RV-u\left[A\right]=0$.)
As with the method of initial rates, the rate law is determined by measuring reaction rates in a series of experiments in which the steady-state concentrations of the various reactants and products vary. For each experiment it is necessary to determine both the reaction rate and the steady-state concentration of each reagent that might be involved in the rate law. Using the equation above, the rate is calculated from the difference between a reagent concentration in the feed solution and its steady-state concentration in the reactor. The concentration of each reagent in the effluent is the same as its concentration in the reactor, so the necessary concentration information can be obtained by chemical analysis of the effluent solution. The chemical analysis must be done in such a way that no significant reaction occurs between the time the material leaves the reaction vessel and the time the analysis is completed.
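A single steady-state measurement then gives the rate directly. The flow rate, reactor volume, and concentrations below are hypothetical values used only to show the arithmetic.

```python
# Steady-state rate from a single CSTR experiment (hypothetical numbers)
u = 0.0020        # volumetric flow rate, L s^-1
V = 0.500         # reactor volume, L
A_feed = 0.100    # [A] in the fresh feed, M
A_out = 0.062     # steady-state [A] in the reactor (and effluent), M

# R = -(u/V)([A] - [A]0); positive because A is consumed by the reaction
R = -(u / V) * (A_out - A_feed)
print(f"steady-state reaction rate R = {R:.2e} M s^-1")
```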
5.12: Predicting Rate Laws from Proposed Mechanisms
Because a proposed mechanism can only be valid if it is consistent with the rate law found experimentally, the rate law plays a central role in the investigation of chemical reaction mechanisms. The discussion above introduces the problems and methods associated with collecting rate data and with finding an empirical rate law that fits experimental concentration-versus-time data. We turn now to finding the rate laws that are consistent with a particular proposed mechanism. For simplicity, we consider reactions in closed constant-volume systems.
In principle, numerical integration can be used to predict the concentration at any time of each of the species in any proposed reaction mechanism. This prediction can be compared to experimental observations to see whether they are consistent with the proposed mechanism. To do the numerical integration, it is necessary to know the initial concentrations of all of the chemical species and to know, or assume, values of all of the rate constants. The initial concentrations are known from the procedure used to initiate the reaction. However, the rate constants must be determined by some iterative procedure in which initial estimates of the rate constants are used to predict concentration-versus-time data that can be compared to the experimental results to produce refined estimates.
In practice, we tailor our choice of reaction conditions so that we can use various approximations to test whether a proposed mechanism can explain the data. We now consider the most generally useful of these approximations.
In this discussion, we assume that the overall reaction goes to completion; that is, at equilibrium the concentration of the reactant whose concentration is limiting has become essentially zero. If the overall reaction involves more than one elementary step, then an intermediate compound is involved. A valid mechanism must include this intermediate, and more than one differential equation may be needed to characterize the time rate of change of all of the species involved in the reaction. We focus on conditions and approximations under which the rate of appearance of the final products in a multi-step reaction mechanism can be described by a single differential equation, the rate law.
We examine the application of these approximations to a particular reaction mechanism. When we understand the application of these approximations to this mechanism, the ways in which they can be used in other situations are clear.
Consider the following sequence of elementary steps
$\ce{ A + B <=>[k_1][k_2] C ->[k_3] D} \nonumber$
whose kinetics are described by the following simultaneous differential equations:
\begin{align*} \frac{d[A]}{dt}=\frac{d[B]}{dt}&={-k}_1[A][B]+k_2\left[C\right] \\[4pt] \frac{d\left[C\right]}{dt} &=k_1[A][B]-k_2\left[C\right]-k_3\left[C\right] \\[4pt] \frac{d\left[D\right]}{dt} &=k_3\left[C\right] \end{align*}
The general analytical solution for this system of coupled differential equations can be obtained, but it is rather complex, because $\left[C\right]$ increases early in the reaction, passes through a maximum, and then decreases at long times. In principle, experimental data could be fit to these equations. The numerical approach requires that we select values for $k_1$, $k_2$, $k_3$, ${[A]}_0$, ${[B]}_0$, ${\left[C\right]}_0$, and ${\left[D\right]}_0$, and then numerically integrate to get $[A]$, $[B]$, $\left[C\right]$, and $\left[D\right]$ as functions of time. In principle, we could refine our estimates of $k_1$, $k_2$, and $k_3$ by comparing the calculated values of one or more concentrations to the experimental ones. In practice, the approximate treatments we consider next are more expedient.
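For concreteness, the sketch below integrates the three coupled rate equations numerically with a general-purpose ODE solver. The rate constants and initial concentrations are hypothetical (with $B$ flooded), and the solver settings are ordinary defaults rather than anything specific to this mechanism.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical rate constants and initial concentrations for
#   A + B <=> C   (k1 forward, k2 reverse)      C -> D   (k3)
k1, k2, k3 = 5.0, 1.0, 2.0              # M^-1 s^-1, s^-1, s^-1
A0, B0, C0, D0 = 0.010, 1.0, 0.0, 0.0   # M; B is flooded ([B]0 >> [A]0)

def rhs(t, y):
    A, B, C, D = y
    r1 = k1 * A * B   # A + B -> C
    r2 = k2 * C       # C -> A + B
    r3 = k3 * C       # C -> D
    return [-r1 + r2, -r1 + r2, r1 - r2 - r3, r3]

sol = solve_ivp(rhs, (0.0, 2.0), [A0, B0, C0, D0],
                t_eval=np.linspace(0.0, 2.0, 9), rtol=1e-8, atol=1e-12)

for t, A, C, D in zip(sol.t, sol.y[0], sol.y[2], sol.y[3]):
    print(f"t = {t:4.2f} s   [A] = {A:.5f}   [C] = {C:.6f}   [D] = {D:.5f}")
```

Because $[B]_0\gg [A]_0$, the computed $[B]$ stays essentially constant, which anticipates the flooding simplification discussed in the next paragraphs.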
When we begin a kinetic study, we normally have a working hypothesis about the reaction mechanism, and we design our experiments to simplify the differential equations that apply to it. For the present example, we will assume that we always arrange the experiment so that ${\left[C\right]}_0=0$ and ${\left[D\right]}_0=0$. In consequence, at all times:
${[A]}_0=[A]+\left[C\right]+\left[D\right]. \nonumber$
Also, we restrict our considerations to experiments in which ${[B]}_0\gg {[A]}_0$. This exemplifies the use of flooding. The practical effect is that the concentration of $B$ remains effectively constant at its initial value throughout the entire reaction, which simplifies the differential equations significantly. In the present instance, setting ${[B]}_0\gg {[A]}_0$ means that the rate-law term $k_1[A][B]$ can be replaced, to a good approximation, by $k_{obs}[A]$, where $k_{obs}=k_1{[B]}_0$.
Once we have decided upon the reaction conditions we are going to use, whether the resulting concentration-versus-time data can be described by a single differential equation depends on the relative magnitudes of the rate constants in the several steps of the overall reaction. Particular combinations of relationships that lead to simplifications are often referred to by particular names; we talk about a combination that has a rate-determining step, or one that involves a prior equilibrium, or one in which a steady-state approximation is applicable. To see what we mean by these terms, let us consider some particular relationships that can exist among the rate constants in the mechanism above.
Case I
Suppose that $k_1[A][B]\gg k_2\left[C\right]$ and $k_3\gg k_2$. We often describe this situation by saying, rather imprecisely, that the reaction to convert $C$ to $D$ is very fast and that the reaction to convert $C$ back to $A$ and $B$ is very slow—compared to the reaction that forms $C$ from $A$ and $B$. When $C$ is produced in these circumstances, it is converted to $D$ so rapidly that we never observe a significant concentration of $C$ in the reaction mixture. The formation of a molecule of $C$ is tantamount to the formation of a molecule of $D$, and the reaction produces $D$ at essentially the same rate that it consumes $A$ or $B$. We say that the first step, $A+B\to C$, is the rate-determining step in the reaction. We have
$-\frac{d[A]}{dt}=\ -\frac{d[B]}{dt}\approx \frac{d\left[D\right]}{dt} \nonumber$
The assumption that $k_1[A][B]\gg k_2\left[C\right]$ means that we can neglect the smaller term in the equation for ${d[A]}/{dt}$, giving the approximation
$\frac{d[A]}{dt}=\ \frac{d[B]}{dt}=\ -\frac{d\left[D\right]}{dt}=\ -k_1[A][B] \nonumber$
Letting $\left[D\right]=x$ and recognizing that our assumptions make $\left[C\right]\approx 0$, the mass-balance condition, ${[A]}_0=[A]+\left[C\right]+\left[D\right]$, becomes $[A]={[A]}_0-x$. Choosing ${[B]}_0\gg {[A]}_0$ means that $k_1[B]\approx k_1{[B]}_0=k_{I,obs}$. The rate equation becomes first-order:
$\frac{dx}{dt}=k_{I,obs}\left(\ {[A]}_0-x\right) \nonumber$
Because $[B]$ is only approximately constant, $k_{I,obs}$ is not strictly constant; it is a pseudo-first-order rate constant. The disappearance of $A$ is said to follow a pseudo-first-order rate equation.
The concept of a rate-determining step is an approximation. In general, the consequence we have in mind when we invoke this approximation is that no intermediate species can accumulate to a significant concentration if it is produced by the rate-determining step or by a step that occurs after the rate-determining step. We do not intend to exclude the accumulation of a species that is at equilibrium with another product. Thus, in the mechanism
$\ce{ A ->[k] B <=> C} \nonumber$
we suppose that the conversion of $A$ to $B$ is rate-determining and that the interconversion of $B$ and $C$ is so rapid that their concentrations always satisfy the equilibrium relationship
$K=\dfrac{[C]}{[B]}. \nonumber$
For the purpose at hand, we do not consider $B$ to be an intermediate; $B$ is a product that happens to be at equilibrium with the co-product, $C$.
Case II
Suppose that $k_1[A][B]\gg k_3\left[C\right]$. In this case $A+B\to C$ is fast compared to the rate at which $C$ is converted to $D$, and we say that $C\to D$ is the rate-determining step. We can now distinguish three sub-cases depending upon the way $\left[C\right]$ behaves during the course of the reaction.
Case IIa: Suppose that $k_1[A][B]\gg k_3\left[C\right]$ and $k_3\gg k_2$. Then $A+B\to C$ is rapid and essentially quantitative. That is, within a short time of initiating the reaction, all of the stoichiometrically limiting reactant is converted to $C$. Letting $\left[D\right]=x$ and recognizing that our assumptions make $[A]\approx 0$, the mass-balance condition,
${[A]}_0=[A]+\left[C\right]+\left[D\right] \nonumber$
becomes
$\left[C\right]={[A]}_0-x. \nonumber$
After a short time, the rate at which $D$ is formed becomes
$\frac{d\left[D\right]}{dt}=k_3\left[C\right] \nonumber$ or $\frac{dx}{dt}=k_3\left(\ {[A]}_0-x\right) \nonumber$
The disappearance of $C$ and the formation of $D$ follow a first-order rate law.
Case IIb: If the forward and reverse reactions in the first elementary process are rapid, then this process may be effectively at equilibrium during the entire time that $D$ is being formed. (This is the case that $k_1[A][B]\gg k_3\left[C\right]$ and $k_2\gg k_3$.) Then, throughout the course of the reaction, we have
$K_{eq}={\left[C\right]}/{[A][B]} \nonumber$
Letting $\left[D\right]=x$ and making the further assumption that $[A]\gg \left[C\right]\approx 0$ throughout the reaction, the mass-balance condition, ${[A]}_0=[A]+\left[C\right]+\left[D\right]$, becomes $[A]={[A]}_0-x$. Substituting into the equilibrium-constant expression, we find
$\left[C\right]=K_{eq}{[B]}_0\ \left({[A]}_0-x\right) \nonumber$
Substituting into ${d\left[D\right]}/{dt}=k_3\left[C\right]$ we have
$\frac{dx}{dt}=k_3K_{eq}{[B]}_0\ \left({[A]}_0-x\right)=k_{IIb,obs}\left({[A]}_0-x\right) \nonumber$
where $k_{IIb,obs}=k_3K_{eq}{[B]}_0$. The disappearance of A and the formation of D follow a pseudo-first-order rate equation. The pseudo-first-order rate constant is a composite quantity that is directly proportional to ${[B]}_0$.
Case IIc: If we suppose that the first step is effectively at equilibrium during the entire time that $D$ is being produced (as in case IIb) but that $\left[C\right]$ is not negligibly small compared to $[A]$, we again have $K_{eq}={\left[C\right]}/{[A][B]}$. With $\left[D\right]=x$, the mass-balance condition becomes $[A]={[A]}_0-\left[C\right]-x$. Eliminating $[A]$ between the mass-balance and equilibrium-constant equations gives
$\left[C\right]=\frac{K_{eq}{[B]}_0\left({[A]}_0-x\right)}{1+K_{eq}{[B]}_0} \nonumber$
so that ${d\left[D\right]}/{dt}=k_3\left[C\right]$ becomes
$\frac{dx}{dt}=\left(\frac{{k_3K}_{eq}{[B]}_0}{1+K_{eq}{[B]}_0}\right)\left({[A]}_0-x\right)=k_{IIc,obs}\left({[A]}_0-x\right) \nonumber$
The formation of $D$ follows a pseudo-first-order rate equation. (The disappearance of $A$ is also pseudo-first-order, but the pseudo-first-order rate constant is different.) As in Case IIb, the pseudo-first-order rate constant, $k_{IIc,obs}$, is a composite quantity, but now its dependence on ${[B]}_0$ is more complex. The result for Case IIc reduces to that for Case IIb if $K_{eq}{[B]}_0\ll 1$.
Case III
In the cases above, we have assumed that one or more reactions are intrinsically much slower than others are. The differential equations for this mechanism can also become much simpler if all three reactions proceed at similar rates, but do so in such a way that the concentration of the intermediate is always very small, $\left[C\right]\approx 0$. If the concentration of $C$ is always very small, then we expect the graph of $\left[C\right]$ versus time to have a slope, ${d\left[C\right]}/{dt}$, that is approximately zero. In this case, we have
$\frac{d\left[C\right]}{dt}=k_1[A][B]-k_2\left[C\right]-k_3\left[C\right]\approx 0 \nonumber$
so that $\left[C\right]=\frac{k_1[A][B]}{k_2+k_3} \nonumber$
With $\left[D\right]=x$, ${d\left[D\right]}/{dt}=k_3\left[C\right]$ becomes
$\frac{dx}{dt}=\left(\frac{k_1k_3{[B]}_0}{k_2+k_3}\right)\left({[A]}_0-x\right)=k_{III,obs}\left({[A]}_0-x\right) \nonumber$
As in the previous cases, the disappearance of $A$ and the formation of $D$ follow a pseudo-first-order rate equation. The pseudo-first-order rate constant is again a composite quantity, which depends on ${[B]}_0$ and the values of all of the rate constants.
Case III illustrates the steady-state approximation, in which we assume that the concentration of an intermediate species is much smaller than the concentrations of other species that affect the reaction rate. Under these circumstances, we can be confident that the time-derivative of the intermediate’s concentration is negligible compared to the reaction rate, so that it is a good approximation to set it equal to zero. The idea is simply that, if the concentration is always small, its time-derivative must also be small. If the graph of the intermediate’s concentration versus time is always much lower than that of other participating species, then its slope will be much less.
Equating the time derivative of the steady-state intermediate’s concentration to zero produces an algebraic expression that involves the intermediate’s concentration. Solving this expression for the concentration of the steady-state intermediate makes it possible to greatly simplify the set of simultaneous differential equations that is predicted by the mechanism. When there are multiple intermediates to which the approximation is applicable, remarkable simplifications can result. This often happens when the mechanism involves free-radical intermediates.
The name “steady-state approximation” is traditional. When we use it, we do so on the understanding that the “state” which is approximately “steady” is the concentration of the intermediate, not the state of the system. Since a net reaction is occurring, the state of the system is distinctly not constant.
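The quality of the steady-state approximation is easy to check numerically. The sketch below integrates the full rate equations for $A+B\rightleftharpoons C$, $C\to D$ with one assumed set of rate constants chosen so that $\left[C\right]$ remains small, and compares the exact $\left[D\right]$ with the Case III pseudo-first-order prediction; all of the numerical values are illustrative only.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed, illustrative parameters for A + B <=> C -> D
k1, k2, k3 = 1.0, 100.0, 100.0        # M^-1 s^-1, s^-1, s^-1
A0, B0 = 1.0e-2, 1.0                  # M; B present in large excess

def rates(t, y):
    A, B, C, D = y
    r1 = k1 * A * B                   # A + B -> C
    r2 = k2 * C                       # C -> A + B
    r3 = k3 * C                       # C -> D
    return [-r1 + r2, -r1 + r2, r1 - r2 - r3, r3]

t = np.linspace(0.0, 20.0, 401)
sol = solve_ivp(rates, (0.0, 20.0), [A0, B0, 0.0, 0.0],
                t_eval=t, method="LSODA", rtol=1e-9, atol=1e-12)

k_obs = k1 * k3 * B0 / (k2 + k3)              # Case III composite constant
D_ss = A0 * (1.0 - np.exp(-k_obs * t))        # steady-state prediction

print("largest [C]/[A]0 during the run   :", sol.y[2].max() / A0)
print("largest |[D]exact - [D]ss| / [A]0 :", np.abs(sol.y[3] - D_ss).max() / A0)
```

With these numbers the intermediate never amounts to more than a fraction of a percent of ${[A]}_0$, and the steady-state expression reproduces the exact $\left[D\right]$ to a comparable accuracy.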
An enzyme is a molecule produced by a living organism; an organism's enzymes are essential to its life processes. Often enzymes are large molecules that are catalytically active only when folded into a particular conformation. Molecules whose reactions are catalyzed by an enzyme are customarily referred to as substrates for that enzyme. Two aspects of enzymatic catalysis are remarkable. First, it is often found that an enzyme can discriminate between two very similar substrates, greatly enhancing the reaction rate of one while having a much smaller effect on the rate of the other. Second, the rate enhancements achieved by enzyme catalysis are often very large; rate increases by a factor of ${\mathrm{10}}^{\mathrm{6}}$ are observed.
Simple mechanistic and kinetic models are sufficient to explain these essential features of enzyme catalysis. We consider the simplest case. Both the mechanisms and the rate laws for enzymatic reactions can become much more complex than the model we develop. (The literature of enzyme catalysis uses a specialized vocabulary. Problem 32 introduces some of this terminology and some of the mechanistic complications that can be observed.)
The catalytic specificity of enzymes is explained by the idea that the enzyme and the substrate have complex three-dimensional structures. These structures complement one another in the sense that enzyme and substrate can fit together tightly, bringing the catalytically active parts of the enzyme’s structure into close proximity with those substrate chemical bonds that are changed in the reaction. This is often called the lock and key model for enzyme specificity, invoking the idea that the detailed features of the enzyme’s structure are shaped to fit into the structure of its substrate, just as a key is machined to match the arrangement of tumblers in the lock that it opens. Figure 2 illustrates this idea.
The rates of enzyme-catalyzed reactions can exhibit complex dependence on the relative concentrations of enzyme and substrate. Most of these features are explained by the Michaelis-Menten mechanism, which postulates a rapid equilibration of enzyme and substrate with their enzyme-substrate complex. Transformation of the substrate occurs within this complex. The reaction products do not complex strongly with the enzyme. After the substrate has been transformed, the products diffuse away. The enzyme can then complex with another substrate molecule and catalyze its reaction. Representing the enzyme, substrate, enzyme–substrate complex, and products as $E$, $S$, $ES$, and $P$, respectively, the simplest-case Michaelis-Menten mechanism is
• Complexation equilibrium: $ES\ \overset{K}{\rightleftharpoons}\ E+S \nonumber$
• Substrate transformation: $ES\ \overset{k}{\to}\ E+P \nonumber$
where $K$ is the equilibrium constant for the dissociation of the enzyme–substrate complex, and $k$ is the rate constant for the rate-determining transformation of the enzyme–substrate complex into products.
Since the first-order decay of the enzyme–substrate complex is the product-forming step, the reaction rate is expected to be
$\frac{d\left[P\right]}{dt}=k\left[ES\right] \nonumber$
Letting the product concentration be $\left[P\right]=x$ and the initial concentrations of enzyme and substrate be $E_0$ and $S_0$, respectively, material balance requires
$E_0=\left[E\right]+\left[ES\right]$ and $S_0=\left[S\right]+\left[ES\right]+x$
Most experiments are done with the substrate present at a much greater concentration than the enzyme, $S_0\gg E_0$. In this case, $\left[ES\right]$ is negligible in the $S_0$ equation, and the concentration of free substrate becomes $\left[S\right]=S_0-x$. The enzyme-substrate equilibrium imposes the relationship
$K=\frac{\left[E\right]\left[S\right]}{\left[ES\right]}=\frac{\left[E\right]\left(S_0-x\right)}{\left[ES\right]} \nonumber$
Using this relationship to eliminate $\left[E\right]$ from the expression for $E_0$ gives
$E_0=\frac{K\left[ES\right]}{S_0-x}+\left[ES\right]=\left(\frac{K}{S_0-x}+1\right)\left[ES\right] \nonumber$
The rate law for product formation becomes
$\frac{dx}{dt}=k\left[ES\right]=\frac{kE_0\left(S_0-x\right)}{K+\left(S_0-x\right)} \nonumber$
If $K\ll S_0-x$, the dependence on substrate cancels, and the rate law becomes pseudo-zero-order
$\frac{dx}{dt}=kE_0=k_{obs} \nonumber$
where the observed zero-order rate constant depends on $E_0$. If $K\gg S_0-x$, the rate law becomes pseudo-first-order
$\frac{dx}{dt}=\frac{kE_0}{K}\left(S_0-x\right)=k_{obs}\left(S_0-x\right) \nonumber$
with pseudo-first-order rate constant $k_{obs}={kE_0}/{K}$. If neither of these simplifications is applicable, fitting experimental data to the integrated rate law becomes inconvenient.
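For reference, separating variables in the rate equation above and integrating from $x=0$ at $t=0$ gives the implicit integrated form

$K \ln \frac{S_0}{S_0-x}+x=kE_0t \nonumber$

which cannot be solved explicitly for $x\left(t\right)$; this is the inconvenience just mentioned.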
In practice, the integrated form of the rate law is seldom used. As noted earlier, enzyme catalysis is usually studied using the method of initial rates. That is, the rate is approximated as ${dx}/{dt}\approx {\Delta x}/{\Delta t}$ by measuring the amount of product formed over a short time interval, $\Delta t$, in which $x\ll S_0$. The rate and equilibrium constants can then be found by fitting the rate measured at various initial concentrations, $E_0$ and $S_0$, to the equation
$\frac{\Delta x}{\Delta t}\approx \frac{kE_0S_0}{K+S_0} \nonumber$
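As a sketch of what such a fit might look like, the Python fragment below generates synthetic initial-rate data from assumed values of $k$, $E_0$, and $K$, adds a little random scatter, and recovers $kE_0$ and $K$ by nonlinear least squares. All of the numbers and the helper function name are invented for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Assumed "true" parameters used only to generate synthetic data
k, E0, K = 50.0, 1.0e-6, 2.0e-4                              # s^-1, M, M
S0 = np.array([2e-5, 5e-5, 1e-4, 2e-4, 5e-4, 1e-3, 2e-3])   # M

def initial_rate(S0, kE0, K):
    # Michaelis-Menten initial-rate expression, kE0*S0/(K + S0)
    return kE0 * S0 / (K + S0)

v_obs = initial_rate(S0, k * E0, K) * (1.0 + 0.03 * rng.standard_normal(S0.size))

(kE0_fit, K_fit), _ = curve_fit(initial_rate, S0, v_obs, p0=(1e-5, 1e-4))
print(f"fitted kE0 = {kE0_fit:.3e} M s^-1   (assumed {k * E0:.3e})")
print(f"fitted K   = {K_fit:.3e} M          (assumed {K:.3e})")
```

The fit recovers the assumed parameters to within the added scatter; with real data the same procedure gives $kE_0$ and $K$ directly from measured initial rates.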
Since the concentration of the enzyme–substrate complex is normally small compared to the concentration of the substrate, the Michaelis-Menten mechanism can also be analyzed by applying the steady-state approximation to $\left[ES\right]$. If the product-forming step occurs so rapidly that the concentration of the enzyme–substrate complex is not maintained at the equilibrium value, the steady-state treatment becomes necessary.
First-order kinetics for a unimolecular reaction corresponds to a constant probability that a given molecule reacts in unit time. In Section 5.7, we outline a simple mechanism that rationalizes this fact. This mechanism assumes that the probability of reaction is zero unless the molecule has some minimum energy. For molecules whose energy exceeds the threshold value, we assume that the probability of reaction is constant. However, when collisions are frequent, a molecule can have excess energy only for brief intervals.
The Lindemann-Hinshelwood mechanism for gas-phase unimolecular reactions provides a mathematical model of these ideas. Since molecules exchange energy via collisions, any given molecule acquires excess energy by collisions with other molecules, and loses it within a short time through other collisions. If it retains its excess energy long enough, it will react. If collisions are very infrequent, every molecule that acquires excess energy reacts before it undergoes a deactivating collision. In this case the reaction rate is proportional to the rate at which molecules acquire excess energy, which is proportional to the number of collisions. In a collection of $\ce{A}$ molecules, the total number of $\ce{A \bond{-} A}$ collisions is proportional to $\ce{[A]^2}$, not $\ce{[A]}$, and so the reaction rate depends on $\ce{[A]^2}$, not $\ce{[A]}$.
We represent molecules with excess energy as $A^*$, and assume that all $\ce{A}^*$ molecules undergo reaction with a constant probability. $\ce{A}^*$ molecules are formed in collisions between $A$ molecules, and they are deactivated by subsequent collisions with $A$ molecules.
\begin{align} \ce{A + A} & \ce{<=>[k_1][k_2] A}^* + \ce{A} \tag{step 1} \[4pt] \ce{A}^* & \ce{->[k_3] P} \tag{step 2} \end{align}
where $P$ is the product(s) of the reaction. The rate at which the number of moles of $A^*$ molecules changes is
$\frac{1}{V}\frac{dn_{\ce{A}^*}}{dt}=k_1{[\ce{A}]}^2-k_2 [\ce{A}^*][\ce{A}]-k_3[\ce{A}^*] \nonumber$
and since we suppose that $[\ce{A}^*]$ is always very small, the steady-state approximation applies, so that ${dn_{\ce{A}^*}}/{dt} \approx 0$, and
$[\ce{A}^*]=\frac{k_1{[\ce{A}]}^2}{k_2[\ce{A}]+k_3} \nonumber$
The reaction rate is given by
\begin{align} \frac{1}{V}\frac{d\xi }{dt} &=k_3[\ce{A}^*] \nonumber \[4pt] &=\frac{k_1k_3{[\ce{A}]}^2}{k_2[\ce{A}]+k_3} \label{law} \end{align}
When $k_2[\ce{A}]\gg k_3$, the rate of deactivating collisions between $\ce{A}^*$ and $\ce{A}$ is greater than the rate at which $\ce{A}^*$ molecules go on to become products, and the rate law (Equation \ref{law}) for consumption of $[\ce{A}]$ becomes first order:
\begin{align*} \lim_{k_2[\ce{A}]\gg k_3} \frac{k_1k_3{[\ce{A}]}^2}{k_2[\ce{A}]+k_3} &= \frac{k_1k_3{[\ce{A}]}^\cancel{2}}{k_2\cancel{[\ce{A}]}} \[4pt] &= \dfrac{k_1k_3}{k_2} [\ce{A}] \[4pt] &= k_{\text{high pressure}} [\ce{A}] \end{align*}
where the first order rate constant $k_{\text{high pressure}}$ is a function of $k_1$, $k_2$ and $k_3$. This is termed the rate law’s high-pressure limit.
The low-pressure limit occurs when $k_2[\ce{A}]\ll k_3$. The rate law (Equation \ref{law}) becomes second order in $[\ce{A}]$:
\begin{align*} \lim_{k_2[\ce{A}]\ll k_3} \frac{k_1k_3{[\ce{A}]}^2}{k_2[\ce{A}]+k_3} &= \frac{k_1 \cancel{k_3}{[\ce{A}]}^2}{\cancel{k_3}} \[4pt] &= k_{\text{low pressure}} [\ce{A}]^2 \end{align*}
where the second order rate constant $k_{\text{low pressure}}$ is a function of only $k_1$. The rate of product formation becomes equal to the rate at which $\ce{A}^*$ molecules are formed.
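Writing the rate as $k_{uni}[\ce{A}]$ with $k_{uni}=k_1k_3[\ce{A}]/\left(k_2[\ce{A}]+k_3\right)$ makes the changeover between the two limits easy to display. The short sketch below evaluates $k_{uni}$ over a wide range of $[\ce{A}]$ for one assumed, purely illustrative set of rate constants.

```python
import numpy as np

# Assumed, illustrative Lindemann-Hinshelwood rate constants
k1 = 1.0e6   # M^-1 s^-1, activation:   A + A -> A* + A
k2 = 1.0e9   # M^-1 s^-1, deactivation: A* + A -> A + A
k3 = 1.0e4   # s^-1,      reaction:     A* -> P

for A in np.logspace(-9, -1, 9):                 # concentrations in M
    k_uni = k1 * k3 * A / (k2 * A + k3)          # effective first-order constant
    print(f"[A] = {A:9.1e} M   k_uni = {k_uni:10.3e} s^-1")

print("high-pressure limit k1*k3/k2 =", k1 * k3 / k2, "s^-1")
```

At low $[\ce{A}]$ the printed values grow in proportion to $[\ce{A}]$, so the rate is second order overall; at high $[\ce{A}]$ they level off at $k_1k_3/k_2$, the first-order limit.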
5.15: Why Unimolecular Reactions are First Order
In the discussion above, we assume that a molecule with energy in excess of the minimum activation energy undergoes reaction with some fixed probability, represented by the rate constant $k_3$. A complete answer to the question of why unimolecular processes are characteristically first-order (in the high pressure limit) requires that we rationalize this assumption. Another way of phrasing this question is to ask why the activated molecule does not react immediately: Why isn’t $k_3\approx \infty$?
The total energy of a molecule is distributed among numerous degrees of freedom. The molecule has translational kinetic energy, rotational kinetic energy, and vibrational energy. When it acquires excess energy through a collision with another molecule, the additional energy could go directly into any of these modes. However, before the molecule can react, enough energy must find its way into some rather particular mode. If, for example, the reaction involves the breaking of a chemical bond, and the collision puts excess kinetic energy into the molecule’s translational modes, the reaction can occur only after some part of the excess translational energy has been converted to excess vibrational energy in the breaking bond. This intramolecular transfer of energy among the molecule’s various internal modes is time-dependent.
From this perspective, the probability that an excited molecule will react in unit time is the probability that the necessary energy will reach the critical locus in unit time. The reshuffling of energy among the molecule’s internal modes is a stochastic process, and the probability that the reshuffling will put the necessary energy where it is needed is a constant characteristic of the molecule.
This chapter focuses on the relationship between rate laws and reaction mechanisms. We have noted that the rate law is rarely sufficient to establish the mechanism of a particular reaction. The base hydrolysis of cobalt pentammine complexes is a reaction for which numerous lines of evidence converge to establish the mechanism. To illustrate the range of data that can be useful in the determination of reaction mechanisms, we summarize this evidence here.
Cobalt(III) complexes usually undergo substitution reactions at readily measurable rates. Cobalt pentammine complexes, $\ce{Co(NH3)5X^{n+}}$, have been studied extensively.
Mechanism 1
In acidic aqueous solutions, the reaction
$\ce{ Co(NH3)_5X^{n+} + Y^{p-} -> Co(NH3)_5Y^{m+} + X^{q-}} \nonumber$
($\ce{X^{q-}}$, $\ce{Y^{p-}}$ = $\ce{Cl^{-}}$, $\ce{Br^{-}}$, $\ce{NO2^{-}}$, $\ce{SCN^{-}}$, $\ce{CH3CO2^{-}}$, etc.) usually proceeds exclusively through the aquo complex, $\ce{Co(NH3)_5{OH_2}^{3+}}$. The first step in the reaction is the breaking of a $\ce{Co \bond{-} X}$ bond and the formation of a $\ce{Co \bond{-} OH2}$ bond (step 1). Subsequently, a $Y^{p-}$ moiety can replace the aquo group (step 2). For example:
\begin{align*} \ce{Co(NH3)_5Cl^{2+} + H2O} &\rightleftharpoons \ce{Co(NH3)_5(OH2)^{3+} + Cl^{-}} \tag{step 1}\[4pt] \ce{Co(NH3)5(OH2)^{3+} + Br^{-}} &\rightleftharpoons \ce{Co(NH3)5Br^{2+} + H2O } \tag{step 2} \end{align*}
In aqueous solution, water is always present at a much higher concentration than the various possible entering groups $Y^{p-}$, so it is reasonable that it should be favored in the competition to form the new bond to $\ce{Co(III)}$. Nevertheless, we expect the strength of the $\ce{Co \bond{-} Y}$ bond to be an indicator of the nucleophilicity of $Y^{p-}$ in these substitution reactions. The fact that the aquo complex is the predominant reaction product strongly suggests that the energetics of the reaction are dominated by the breaking of the $\ce{Co \bond{-} X}$ bond; formation of the new bond to the incoming ligand apparently has little effect. Whether the old $\ce{Co \bond{-}X}$ bond has been completely broken (so that $\ce{Co(NH3)_5^{3+}}$ is a true intermediate) before the new $\ce{Co \bond{-} OH2}$ bond has begun to form remains an issue on which it is possible to disagree.
Mechanism 2
There is a conspicuous exception to the description given above. When the entering group, $Y^{p-}$, is the hydroxide ion, the reaction is
$\ce{ Co(NH3)_5X^{n+} + OH^{-} \to Co(NH3)_5OH^{2+} + X^{q-}} \nonumber$
This is called the base-hydrolysis reaction. It is faster than the formation of the aquo complex in acidic solutions, and the rate law is found to be
$\frac{d\left[\ce{Co(NH3)_5OH^{2+}}\right]}{dt}=k\ \left[\ce{Co(NH3)_5X^{n+}}\right]\left[\ce{OH^{-}}\right] \nonumber$
This rate law is consistent with ${\mathrm{S}}_{\mathrm{N}}\mathrm{2\ }$nucleophilic attack by the hydroxide ion at the cobalt center, so that $\ce{Co \bond{-} OH}$ bond formation occurs simultaneously with breaking of the $\ce{Co \bond{-} X}$ bond. However, this interpretation means that the hydroxide ion is a uniquely effective nucleophile toward cobalt(III). Nucleophilic displacements have been investigated on many other electrophiles. In general, hydroxide is not a particularly effective nucleophile toward other electrophilic centers. So, assignment of an ${\mathrm{S}}_{\mathrm{N}}\mathrm{2}$ mechanism to this reaction is reasonable only if we can explain why hydroxide is uniquely reactive in this case and not in others.
Mechanism 3
An alternative mechanism, usually labeled the ${\mathrm{S}}_{\mathrm{N}}\mathrm{1CB}$ (Substitution, Nucleophilic, first-order in the Conjugate Base mechanism) mechanism, is also consistent with the second-order rate law. In this mechanism, hydroxide removes a proton from one of the ammine ligands, to give a six-coordinate intermediate, containing an amido ($\ce{NH2^{-}}$) ligand (step 1). This intermediate loses the leaving group $X^{q-}$ in the rate-determining step to form a five-coordinate intermediate, $\ce{Co(NH3)_4{(NH2)}^{2+}}$ (step 2). This intermediate picks up a water molecule to give the aquo complex (step 3). In a series of proton transfers to (step 4) and from (step 5) the aqueous solvent, the aquo complex rearranges to the final product. With $\ce{Cl^{-}}$ as the leaving group, the ${\mathrm{S}}_{\mathrm{N}}\mathrm{1CB}$ mechanism is
\begin{align*} \ce{Co(NH3)_5{Cl}^{2+}+OH^{-}} &\rightleftharpoons \ce{Co(NH3)_4(NH2){Cl}^{+} + H_2O} \tag{step 1}\[4pt] \ce{Co(NH3)_4(NH2){Cl}^{+}} &\rightleftharpoons \ce{Co(NH3)_4{(NH2)}^{2+} + {Cl}^{-}} \tag{step 2} \[4pt] \ce{Co(NH3)_4{(NH2)}^{2+} + H_2O} &\rightleftharpoons \ce{Co(NH3)_4{(NH2)(OH2)}^{2+}} \tag{step 3} \[4pt] \ce{Co(NH3)_4{(NH2)(OH2)}^{2+} + OH^{-}} &\rightleftharpoons \ce{Co(NH3)_4{(NH2)OH}^{+} + H_2O} \tag{step 4} \[4pt] \ce{Co(NH3)_4{(NH2)OH}^{+} + H_2O} &\rightleftharpoons \ce{Co(NH3)_5{OH}^{2+} + OH^{-}} \tag{step 5} \end{align*}
The evidence in favor of the ${\mathrm{S}}_{\mathrm{N}}\mathrm{1CB}$ mechanism is persuasive. It requires that the ammine protons be acidic, so that they can undergo the acid–base reaction in the first step. That this reaction occurs is demonstrated by proton-exchange experiments. In basic $\ce{D_2O}$, the ammine protons undergo $\ce{H-D}$ exchange according to
$\ce{Co(NH3)5Cl^{2+} + D2O \rightleftharpoons Co(ND3)5Cl^{2+} + HDO} \nonumber$
The ammine protons are also necessary; base hydrolysis does not occur for similar compounds, like $\ce{Co(2,2'-bipyridine)2(O2CCH3)2^{+}}$, in which there are no protons on the nitrogen atoms that are bound to cobalt (i.e., there are no $\ce{H-N-Co}$ moieties).
The evidence that $\ce{Co(NH3)_4{(NH2)}^{2+}}$ is an intermediate is also persuasive. When the base hydrolysis reaction is carried out in the presence of other possible entering groups, $Y^{p-}$, the rate at which $\ce{Co(NH3)_5X^{n+}}$ is consumed is unchanged, but the product is a mixture of $\ce{Co(NH3)_5{OH}^{2+}}$ and $\ce{Co(NH3)5Y^{n+}}$. If this experiment is done with a variety of leaving groups, $X^{q-}$, the proportions of $\ce{Co(NH3)_5{OH}^{2+}}$ and $\ce{Co(NH3)5Y^{n+}}$ are constant—independent of which leaving group the reactant molecule contains. These observations are consistent with the hypothesis that all reactants, $\ce{Co(NH3)_5X^{n+}}$, give the same intermediate, $\ce{Co(NH3)_4{(NH2)}^{2+}}$. The product distribution is always the same, because it is always the same species undergoing the product-forming reaction.
Suppose that the bimolecular reaction
$\ce{A + B <=>[k_f][k_r] C + D} \nonumber$
occurs as an elementary process. From our conclusions about the concentration dependencies of elementary reactions, the rate of the net reaction is
\begin{align*} R &= \frac{1}{V}\frac{d\xi }{dt} \[4pt] &= -\frac{1}{V}\frac{dn_A}{dt} \[4pt] &= k_f [A][B]-k_r [C][D] \end{align*}
at any time. In particular, this rate equation must remain true at arbitrarily long times—times at which the reaction has reached equilibrium and at which ${dn_A}/{dt}=0$. Therefore, at equilibrium we have
\begin{align} K &=\frac{k_f}{k_r} \label{eq2A}\[4pt] &=\frac{[C]_{eq}[D]_{eq}}{[A]_{eq}[B]_{eq}} \label{eq2B} \end{align}
where the concentration-term subscripts serve to emphasize that the concentration values correspond to the reaction being in a state of equilibrium. We see that the ratio of rate constants, ${k_f}/{k_r}$, characterizes the equilibrium state. This constant is so useful we give it a separate name and symbol, the equilibrium constant, $K$.
Now, let us consider the possibility that the reaction is not an elementary process, but instead proceeds by a two-step mechanism involving an intermediate, $E$:
\begin{align*} \ce{A + B } \, & \ce{<=> [k_1][k_2] E } \[4pt] \ce{E} & \,\ce{ <=> [k_3][k_4] C + D} \end{align*}
The sum of these elementary processes yields the same overall reaction as before. This mechanism implies the following differential equations:
\begin{align*}\frac{1}{V} \frac{dn_A}{dt} &=-k_1[A][B]+k_2[E] \ \frac{1}{V}\frac{dn_D}{dt} &=k_3[E]-k_4[C][D] \end{align*}
At equilibrium, both $n_A$ and $n_D$ must be constant, so both differential equations must be equal to zero. Hence, at equilibrium,
\begin{align*} \frac{k_1}{k_2} &=\frac{[E]_{eq}}{[A]_{eq}[B]_{eq}} \[4pt] \frac{k_3}{k_4} &=\frac{[C]_{eq}[D]_{eq}}{[E]_{eq}} \end{align*}
Multiplying these, we have
\begin{align} K &=\frac{k_1k_3}{k_2k_4} \label{eq4A} \[4pt] &=\frac{[C]_{eq}[D]_{eq}}{[A]_{eq}[B]_{eq}} \label{eq4B} \end{align}
The concentration dependence of the equilibrium constant for the two-step mechanism (Equation \ref{eq4B}) is the same as for case that the reaction is an elementary process (Equation \ref{eq2B}). As far as the description of the equilibrium system is concerned, the only difference is that the equilibrium constant is interpreted as a function of different rate constants (Equation \ref{eq2A} vs. Equation \ref{eq4A}).
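This conclusion is easy to verify numerically: integrate the rate equations for the two-step mechanism until the composition stops changing and compare the concentration quotient with $k_1k_3/k_2k_4$. The sketch below does this for one assumed set of rate constants and initial concentrations, chosen only for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed, illustrative rate constants for A + B <=> E <=> C + D
k1, k2, k3, k4 = 2.0, 1.0, 3.0, 0.5        # M^-1 s^-1, s^-1, s^-1, M^-1 s^-1

def rates(t, y):
    A, B, E, C, D = y
    r1 = k1 * A * B - k2 * E               # net rate of A + B -> E
    r2 = k3 * E - k4 * C * D               # net rate of E -> C + D
    return [-r1, -r1, r1 - r2, r2, r2]

y0 = [1.0, 1.0, 0.0, 0.0, 0.0]             # M
sol = solve_ivp(rates, (0.0, 500.0), y0, rtol=1e-10, atol=1e-12)

A, B, E, C, D = sol.y[:, -1]
print("long-time quotient [C][D]/([A][B]) :", C * D / (A * B))
print("k1*k3/(k2*k4)                      :", k1 * k3 / (k2 * k4))
```

The two printed numbers agree, and they would continue to agree for any other choice of rate constants and starting composition.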
For the general reaction
$\ce{ aA + bB + \dots <=> fF + gG + \dots } \nonumber$
we see that any sequence of elementary reactions will give rise to the same concentration expression for the equilibrium system. Whatever the mechanism, reactant $A$ must appear $a$ times more often on the left side of elementary reactions than it does on the right. Product $F$ must appear $f$ times more often on the right side of elementary reactions than it does on the left. Any intermediates must appear an equal number of times on the left and on the right in the various elementary reactions. As a result, when we form the ratio of forward to reverse rate constants for each of the elementary reactions and multiply them, the concentration of reactant $A$ must appear in the product to the $-a$ power, the concentration of product $F$ must appear to the $+f$ power, and the concentrations of the intermediates must all cancel out. We conclude that the condition for equilibrium in the general case is
$K=\frac{{\left[F\right]}^f{\left[G\right]}^g\dots }{{[A]}^a{[B]}^b\dots } \nonumber$
where we drop the “$eq$” subscripts, trusting ourselves to remember that the equation is valid only when the concentration terms apply to the equilibrated system.
Pure Phases
When the reaction involves a pure phase as a reactant or product, we observe experimentally that the amount of the pure phase present in the reaction mixture does NOT affect the position of equilibrium. The composition of the reaction solution is the same so long as the solution is in contact with a finite amount of the pure phase. This means that we can omit the concentration of the substance that makes up the pure phase when we write the equilibrium-constant expression. In writing the equilibrium constant expression, we can take the concentration of the substance to be an arbitrary constant. Unity is usually the most convenient choice for this constant.
To rationalize this experimental observation within our kinetic model for equilibrium, we postulate that the rate at which molecules leave the pure phase is proportional to the area, $S$, of the phase that is in contact with the reaction solution; that is,
$R_{leaving}=k_1S. \nonumber$
We postulate that the rate at which molecules return to the pure phase from the reaction solution is proportional to both the area and the concentration of the substance in the reaction solution. If the pure phase consists of substance $A$, we have
$R_{returning}=k_2S[A]. \nonumber$
At equilibrium, we have $R_{leaving}=R_{returning}$, so that $k_1S=k_2S[A]$, and $[A]={k_1}/{k_2}=\mathrm{constant}$.
The equilibrium constant expression is an important and fundamental relationship that relates the concentrations of reactants and products at equilibrium. We deduce it above from a simple model for the concentration dependence of elementary-reaction rates. In doing so, we use the criterion that the time rate of change of any concentration must be zero at equilibrium. Clearly, this is a necessary condition; if any concentration is changing with time, the reaction is not at equilibrium. However, our deduction uses another assumption that we have not yet emphasized. We assume that the forward and reverse rates of each elementary step are equal when the overall reaction is at equilibrium. This is a special case of the principle of microscopic reversibility.
Definition: Principle of Microscopic Reversibility
Any molecular process and its reverse occur with equal rates at equilibrium
The principle of microscopic reversibility applies to any molecular process; it is inferred from the fact that such processes can be described by their equations of motion if the initial state of the constituent particles can be specified. The equations of motion can be either classical mechanical or quantum mechanical. We consider the implications of the principle for molecular processes that constitute elementary reactions. However, the principle also applies to equilibria in other molecular processes, notably the absorption and emission of radiation.
When we apply it to elementary reactions, we see that the principle of microscopic reversibility provides a necessary and sufficient condition for equilibrium from a reaction-mechanism perspective. The principle also imposes several significant conditions on the sequences of elementary processes that constitute a mechanism and on their relative rates.
In the previous section, we see that microscopic reversibility provides a sufficient basis for deducing the relationship relating reactant and product concentrations at equilibrium—the equilibrium constant expression—from our rate equations for elementary reactions. We now want to see that the principle of microscopic reversibility is indeed necessary. That is, setting ${dn_X}/{dt}=0$ for all species, $X$, involved in the reaction is not in itself sufficient to assure that the system is at equilibrium.
We consider the triangular network of elementary reactions${}^{3,4}$ shown in Figure 3. This network gives rise to the following reaction-rate equations:
$V^{-1}\dfrac{{dn}_A}{dt}=-k_{1f}\left[A\right]+k_{1r}\left[B\right]+k_{3f}\left[C\right]-k_{3r}\left[A\right] \nonumber$
$V^{-1}\dfrac{{dn}_B}{dt}=+k_{1f}\left[A\right]-k_{1r}\left[B\right]+k_{2r}\left[C\right]-k_{2f}\left[B\right] \nonumber$
$V^{-1}\dfrac{{dn}_C}{dt}=+k_{2f}\left[B\right]-k_{2r}\left[C\right]+k_{3r}\left[A\right]-k_{3f}\left[C\right] \nonumber$
At equilibrium each of these equations must equal zero. Since we have three equations in three unknowns, it might at first appear that we can solve for the three unknowns $\left[A\right]$, $\left[B\right]$, and $\left[C\right]$. We can see, however, either from the equations themselves or by considering the physical situation that they represent, that only two of these equations are independent. That is, we have
$\frac{dn_A}{dt}+\frac{dn_B}{dt}+\frac{dn_C}{dt}=0 \nonumber$
While we cannot solve for $\left[A\right]$, $\left[B\right]$, and $\left[C\right]$ independently, we can solve for their ratios, which are
$\frac{\left[B\right]}{\left[A\right]}=\frac{k_{1f}k_{2r}+k_{2r}k_{3r}+k_{1f}k_{3f}}{k_{2f}k_{3f}+k_{1r}k_{3f}+k_{1r}k_{2r}} \nonumber$
$\frac{\left[C\right]}{\left[B\right]}=\frac{k_{1f}k_{2f}+k_{1r}k_{3r}+k_{2f}k_{3r}}{k_{1f}k_{2r}+k_{2r}k_{3r}+k_{1f}k_{3f}} \nonumber$
$\frac{\left[A\right]}{\left[C\right]}=\frac{k_{2f}k_{3f}+k_{1r}k_{3f}+k_{1r}k_{2r}}{k_{1f}k_{2f}+k_{1r}k_{3r}+k_{2f}k_{3r}} \nonumber$
Since we deduce these equations from the condition that all the time derivatives are zero, it might seem that they should represent the criteria for the system of reactions to be at equilibrium. Purely as a name for easy reference, let us call these equations the cyclic equilibrium set.
When we consider the reactions one at a time, we deduce the following equilibrium relationships:
$\frac{\left[B\right]}{\left[A\right]}=\frac{k_{1f}}{k_{1r}}\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \frac{\left[C\right]}{\left[B\right]}=\frac{k_{2f}}{k_{2r}}\ \ \ \ \ \ \ \ \ \ \ \ \ \frac{\left[A\right]}{\left[C\right]}=\frac{k_{3f}}{k_{3r}} \nonumber$
For easy reference, let us call these equations the one-at-a-time set.
Now, it cannot be true that both sets of relationships specify a sufficient condition for the system to be at equilibrium. To see this, let us first suppose that the principle of microscopic reversibility is a sufficient condition for equilibrium. Then the one-at-a-time set of equations must be sufficient to uniquely specify the position of equilibrium. It is easy to show that a set of rate constants that satisfies the one-at-a-time set also satisfies the cyclic set. Therefore, if microscopic reversibility is a sufficient condition for equilibrium, the cyclic network rate equations are necessarily equal to zero at equilibrium. In short, if we assume that microscopic reversibility is a sufficient condition for equilibrium, we encounter no inconsistencies, because the cyclic set of equations is satisfied by the same equilibrium-concentration ratios.
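A computer-algebra check makes this explicit. Multiplying the three one-at-a-time relations together shows that they can hold simultaneously only if $k_{1f}k_{2f}k_{3f}=k_{1r}k_{2r}k_{3r}$; the sketch below imposes that condition on the cyclic-set expression for ${\left[B\right]}/{\left[A\right]}$ and confirms that it collapses to $k_{1f}/k_{1r}$. (The other two ratios behave the same way.)

```python
import sympy as sp

k1f, k1r, k2f, k2r, k3f, k3r = sp.symbols("k1f k1r k2f k2r k3f k3r", positive=True)

# [B]/[A] from the cyclic set (all time derivatives set equal to zero)
B_over_A = (k1f*k2r + k2r*k3r + k1f*k3f) / (k2f*k3f + k1r*k3f + k1r*k2r)

# The one-at-a-time set requires k1f*k2f*k3f = k1r*k2r*k3r; impose it on k3f
constrained = B_over_A.subs(k3f, k1r*k2r*k3r / (k1f*k2f))

print(sp.simplify(constrained - k1f/k1r))   # prints 0
```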
On the other hand, if we suppose that setting ${dn_X}/{dt}=0$ for all species, $X$, is a sufficient condition for equilibrium, then the cyclic set of equations must be sufficient to uniquely specify the position of equilibrium. Let us consider a particular set of rate constants:
• $k_{1f}=k_{2f}=k_{3f}=1$ and
• $k_{1r}=k_{2r}=k_{3r}=2$.
This set of rate constants satisfies the cyclic set of equations and requires that each of the equilibrium-concentration ratios be equal to 1. In this case, the one-at-a-time set of equations implied by microscopic reversibility cannot be satisfied. (We have ${\left[B\right]}/{\left[A\right]}=1$ and ${k_{1f}}/{k_{1r}}={1}/{2}$. Therefore, ${\left[B\right]}/{\left[A\right]}\neq {k_{1f}}/{k_{1r}}$.) That is, if we assume that setting ${dn_X}/{dt}=0$ for all species, $X$, is a sufficient condition for equilibrium, we must conclude that the principle of microscopic reversibility is false. Using the contrapositive: If the principle of microscopic reversibility is true, it is false that setting ${dn_X}/{dt}=0$ for all species, $X$, is a sufficient condition for equilibrium.
Setting the derivatives for the reaction network equal to zero is not sufficient to assure that the system is at equilibrium. It is merely necessary. To assure that the network is at equilibrium, we must apply the principle of microscopic reversibility and require that each elementary process in the network be at equilibrium.
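The same point can be made numerically. The sketch below integrates the three rate equations for the triangular network with the rate constants given above, starting from an arbitrary illustrative composition, and then evaluates the net rate of each individual step at the final, stationary composition. (As the argument in the text shows, such a set of rate constants cannot describe a real chemical system; the calculation simply exhibits what the rate equations alone would allow.)

```python
import numpy as np
from scipy.integrate import solve_ivp

# Rate constants from the example in the text
k1f = k2f = k3f = 1.0      # A -> B, B -> C, C -> A
k1r = k2r = k3r = 2.0      # B -> A, C -> B, A -> C

def rates(t, y):
    A, B, C = y
    dA = -k1f*A + k1r*B + k3f*C - k3r*A
    dB =  k1f*A - k1r*B + k2r*C - k2f*B
    dC =  k2f*B - k2r*C + k3r*A - k3f*C
    return [dA, dB, dC]

sol = solve_ivp(rates, (0.0, 50.0), [1.0, 0.0, 0.0], rtol=1e-10, atol=1e-12)
A, B, C = sol.y[:, -1]

print("stationary concentrations A, B, C :", A, B, C)        # all equal
print("net rate of step 1, k1f*A - k1r*B :", k1f*A - k1r*B)  # nonzero
print("net rate of step 2, k2f*B - k2r*C :", k2f*B - k2r*C)  # nonzero
print("net rate of step 3, k3f*C - k3r*A :", k3f*C - k3r*A)  # nonzero
```

All three concentrations settle at the same value, so every ${dn_X}/{dt}$ is zero, yet each individual step carries a nonzero net rate: material circulates steadily around the cycle. True equilibrium, by the principle of microscopic reversibility, would require every one of these net step rates to vanish.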
The principle of microscopic reversibility requires that any elementary process occur via the same sequence of transitory molecular structures in both the forward and reverse directions. Consequently, if a sequence of elementary steps is a mechanism for a forward reaction, the same sequence of steps—traversed backwards—must be a mechanism for the reverse reaction. The principle does not exclude the possibility that a given reaction can occur simultaneously by two different mechanisms. However, it does mean that a given reaction cannot have one mechanism in the forward direction and a second, different mechanism in the reverse direction.
In describing reaction mechanisms, we assume that the energy of the reacting molecules depends on their progress along the path that they follow during the course of the reaction. We call this path the reaction coordinate. We suppose that we can plot the energy of the system as a function of the system’s position on the path, or displacement along the reaction coordinate. In the context of such a graph, the principle of microscopic reversibility is essentially the observation that the path is the same irrespective of the direction in which it is traversed. Two such paths are sketched in Figure 4. In this sketch, $E_{1f}$ and $E_{2f}$ are the activation energies for the two forward reactions; $E_{1r}$ and $E_{2r}$ are the activation energies for the reverse reactions.
The principle of microscopic reversibility requires that parallel mechanisms give rise to the same expression for the concentration-dependence of the equilibrium constant. That is, the function that characterizes the equilibrium composition must be the same for each mechanism. If, for the reaction $aA+bB\rightleftharpoons cC+dD$, the equilibrium composition for mechanism $1$ is ${\left[A\right]}_1$,$\ {\left[B\right]}_1$,$\ {\left[C\right]}_1$,$\ {\left[D\right]}_1$, and that for mechanism $2$ is ${\left[A\right]}_2$,$\ {\left[B\right]}_2$,$\ {\left[C\right]}_2$,$\ {\left[D\right]}_2$, microscopic reversibility asserts that
$K_1=\frac{{\left[C\right]}^c_1{\left[D\right]}^d_1}{{\left[A\right]}^a_1{\left[B\right]}^b_1} \nonumber$
and
$K_2=\frac{{\left[C\right]}^c_2{\left[D\right]}^d_2}{{\left[A\right]}^a_2{\left[B\right]}^b_2} \nonumber$
In and of itself, microscopic reversibility makes no assertion about the value of ${\left[A\right]}_1$ compared to that of ${\left[A\right]}_2$. While microscopic reversibility asserts that the same function characterizes the concentration relationships for parallel mechanisms, it does not assert that the numerical value of this function is necessarily the same for each of the mechanisms.
However, that these numerical values must be equal follows directly when we introduce another of our most basic observations. No matter how many mechanisms may be available to a reaction in a particular system, the concentration of any reagent can have only one value in an equilibrium state. At equilibrium, ${\left[A\right]}_1={\left[A\right]}_2$, etc.; therefore, the numerical values of the equilibrium constants must be the same: $K_1=K_2$.
The uniqueness of the equilibrium composition is a fundamental feature of our ideas about what chemical equilibrium means. Nevertheless, it is of interest to show that we can arrive at this conclusion from a different perspective: We can use an idealized machine to show that the second law of thermodynamics requires that parallel mechanisms must produce the same equilibrium composition. Our argument is a proof by contradiction.
Let us suppose that $A$, $B$, and $C$ are gases. Suppose that the reaction $A\to B+C$ occurs in the absence of a catalyst, but that reaction occurs in the opposite direction, $C+B\to A$, when a catalyst is present. More precisely, we assume that the position of equilibrium $A\rightleftharpoons B+C$ lies to the right in the absence of the catalyst and to the left in its presence, while all other reaction conditions are maintained constant. These assumptions mean that the equilibrium composition for the catalyzed mechanism is different from that of the mechanism that does not involve the catalyst.
We can show that these assumptions imply that the second law of thermodynamics is false. If we accept the validity of the second law, this violation means that the assumptions cannot in fact describe any real system. (We are getting a bit ahead of ourselves here, inasmuch as our detailed consideration of the laws of thermodynamics begins in Chapter 6.)
Given our assumptions, we can build a machine consisting of a large cylinder, closed by a frictionless piston. The cylinder contains a mixture of $A$, $B$, and $C$, and a quantity of the catalyst. We provide a container for the catalyst, and construct the device so that the catalyst container can be opened and closed from outside the cylinder. Finally, we immerse the entire cylinder in a fluid, which we maintain at a constant temperature.
When the catalyst container is sealed, so that the gaseous contents of the cylinder are not in contact with the catalyst, reaction occurs according to $A\to B+C$, and the piston moves outward, doing work on the surroundings. When the catalyst container is open, reaction occurs according to $C+B\to A$, and the piston moves inward. Figure $1$ shows these changes schematically. At the end of a cycle, the machine is in exactly the same state as it was in the beginning, and the temperature of the reaction mixture is the same at the end of a cycle as it was at the beginning. By connecting the piston to a load, we can do net work on the load as the machine goes through a cycle. For example, if we connect the piston to a mechanical device that converts the reciprocating motion of the piston into rotary motion, we can wind a rope around an axle and thereby lift an attached weight.
We can operate this machine as an engine by alternately opening and closing the catalyst container. We can make the cylinder as large as we want, so the energy we expend in opening and closing the catalyst container can be made arbitrarily small compared to the amount of work we get out of the machine in a given cycle. All of this occurs with the machine maintained at a constant temperature. If energy is conserved, the machine must absorb heat from the bath during the cycle; otherwise, the machine would be doing work with no offsetting consumption of energy. This would be a violation of the first law of thermodynamics. (See Sections 5.7-5.11.)
From experience, we know that this machine cannot function in the manner we have described. This experience is embodied in the second law of thermodynamics: we know that it is impossible to construct a machine that operates in a cycle, exchanges heat with its surroundings at only one temperature, and produces work in the surroundings (Section 9-1). Our argument assumes that two reaction mechanisms are available in a particular physical system, that they consume the same reactants, that they produce the same products, and that the equilibrium compositions are different. These assumptions imply that the second law is false. Since we are confident that it is possible for some system to satisfy the first three of these assumptions, the second law requires that the last one be false: the equilibrium compositions must be the same.
We see that there is a complementary relationship between microscopic reversibility and this statement of the second law. Microscopic reversibility asserts that a unique function of concentrations characterizes the equilibrium state for any reaction mechanism, but does not require that every mechanism reach the same state at equilibrium. This statement of the second law implies that a reaction’s equilibrium composition is unique, but it does not specify a law relating the equilibrium concentrations of the reacting species. (In Chapters 9 and 13, we see that, by augmenting this statement of the second law with some additional ideas, we are led to a more rigorous statement, from which we are eventually able to infer the same functional form for the equilibrium constant.)
G. N. Lewis gave an early statement of the principle of microscopic reversibility. He called it “the law of entire equilibrium,” and observed that it is “a law which in its general form is not deducible from thermodynamics, but proves to be compatible with the laws of thermodynamics in all cases where a comparison is possible.”
It is worth noting that we have not shown that the existence of a unique equilibrium state implies either microscopic reversibility or the second law. Also, even though the principle of microscopic reversibility is inferred from the laws of mechanics, our development of the equilibrium constant relationship—which we do view as a law of thermodynamics—depends on our equations for the rates of elementary reactions. Our rate equations are not logical consequences of the laws of motion. Rather, they follow from assumptions we make about the average behavior of systems that contain many molecules. Consequently, we should not suppose that we have deduced a thermodynamic result (the condition for chemical equilibrium) solely from the laws of mechanics. In Section 12-2, we give brief additional consideration to the relationship between the theories of mechanics and thermodynamics. Beginning in Chapter 20, we develop thermodynamic equations by applying statistical models to the distribution of molecular energy levels.
Problems
1. A dimeric molecule, $A_2$, dissociates in aqueous solution according to $A_2\to 2A$. One millimole of $A_2$ is dissolved rapidly in one liter of pure water. After 100 seconds, the concentration of $A_2$ is $8.0\times {10}^{-4\ }\underline{\mathrm{M}}$.
(a) What is the concentration of $A$?
(b) What is the average rate at which $A$ has been formed, in moles per liter per second?
(c) What is the average rate at which $A_2$ has reacted?
(d) What is the average reaction rate?
2. The initial concentration of $A$ in a solution is ${10}^{-2}\ \underline{\mathrm{M}}$. The initial concentrations of $B$ and $C$ are both ${10}^{-3}\ \underline{\mathrm{M}}$. The volume of the solution is 2 L. Reaction occurs according to the stoichiometry: $A+2B\to 3C$. After $50$ seconds, the concentration of $C$ is $1.3\times {10}^{-3\ }\underline{\mathrm{M}}$.
(a) What is the concentration of $A$?
(b) What is the concentration of $B$?
(c) What is the change in the extent of this reaction during this $50\ \mathrm{s}$ period?
(d) What is the average reaction rate?
3. When $A_2$ dissociates according to $A_2\to 2A$, the observed rate law is
$\frac{1}{V}\frac{dn_A}{dt}=\left(\frac{k}{2}\right) \left[\mathrm{\ }A_2\right]^{3/2} \nonumber$
(a) What is the order of the reaction in $\left[A_2\right]$?
(b) What is the order of the reaction overall?
4. For the reaction $A+2B\to 3C$, the observed rate law is $\frac{1}{V}\frac{d\xi }{dt}=\frac{k\left[A\right]{\left[B\right]}^2}{\left[C\right]} \nonumber$ (a) What is the order of the reaction in $\left[A\right]$?
(b) In$\left[B\right]$?
(c) In $\left[C\right]$?
(d) What is the order of the reaction overall?
5. You deposit $\1000$ in a bank that pays interest at a $5\%$ annual rate. How much will your account be worth after one year if the bank compounds interest annually? How much if it compounds interest monthly? Daily? Continuously?
6. We deduced that $\mathrm{exp}\left(rt\right)={\mathop{\mathrm{lim}}_{m\to \infty } {\left(1+\frac{r}{m}\right)}^{mt}\ }$.
Take $r=0.2$ and $t=10$. Calculate $\mathrm{exp}\left(rt\right)$. Calculate ${\left(1+\frac{r}{m}\right)}^{mt}$ for $m=1,\ 10,\ 100,\ {10}^3,{10}^4.$ Do the same for $r=-0.2$ and $t=10$.
7. Suppose that you invest $\10,000$ in the stock market and that your nest egg grows at the rate of $11\%$ per year. (This number, or something close to it, is often cited as the historical long-term average performance of equities traded on the New York Stock Exchange.) Assuming continuous compounding, what will be the value of your nest egg at the end of $30$ years?
8. Suppose instead that you “invest” your $\10,000$ in an automobile. The value of the automobile will most likely decrease with time, by, say, roughly $20\%$ per year. Assuming continuous decay at this rate, what will be the value of the automobile at the end of $5$ years? $30$ years?
9. At particular reaction conditions, a compound $C$ decays in a first-order reaction with rate constant ${10}^{-3}\ {\mathrm{s}}^{\mathrm{-1}}$. If the initial concentration of $C$, is ${\left[C\right]}_0={10}^{-2}\ \mathrm{mol\ }{\mathrm{L}}^{\mathrm{-1}}$, how much does $\left[C\right]$ change in the first second? What is $\left[C\right]$ after $100$ seconds? $1000$ seconds? $2000$ seconds? $4000$ seconds?
10. In problem 9, how long does it take for one-half of the original concentration of $C$ to disappear? How does the half-life depend on the initial concentration of C?
11. $C^{14}$ is produced continuously in the upper atmosphere. It decays with a half-life of 5715 years. Since this has been going on for a long time, the concentration of $C^{14}$ in the atmosphere has reached a steady-state value. Living things continuously exchange carbon with the atmosphere, so the concentration of $C^{14}$ (i.e., the fraction of the $C$ that is $C^{14}$) in the biosphere is the same as it is in the atmosphere. When an organism dies, it ceases to exchange carbon with the biosphere, and the concentration of $C^{14}$ in its remains begins to decrease. Charcoal found at an ancient campsite during an archeological dig has a $C^{14}$ content that is 22% of the atmospheric value.
(a) What is the rate constant, in $y^{−1}$, for $C^{14}$ decay?
(b) How old is the charcoal?
(c) Reverend Smith tells his parishioners that God created the universe about 4000 B.C. If Smith accepts that the atmospheric concentration and the decay rate of $C^{14}$ have been constant since the time of creation, what would he conclude about the $C^{14}$ content of the charcoal when God created it?
12. Mordred has introduced an exotic fungus that is growing on the surface of King Arthur’s favorite pond at Camelot. Merlin has calculated that the area covered by the fungus increases by 10% per day. That is,
$d (area)/dt = (0.10/day) \times area \nonumber$
Fortunately, a trained Knight of the Round Table can clear $100 \text{ m}^2$ of fungus per day. Arthur has six trained knights who would cheerfully perform this remediation work, but all six are committed to out-of-town dragonslaying activities for the next 10 days. Merlin says that the fungus covers $2874 \text{ m}^2$ at 8:00 a.m. this morning. (The total area of the pond is about $11,200 \text{ m}^2$.) Can Arthur wait for the dragon slayers to return, or does he need to develop an alternative effective management action plan?
13. What happens to the balance in a bank account, $P(t)$, if $P(0) < 0$? What do bankers call this sort of account?
14. Suppose that you have an account whose initial balance is −\$1000. The bank will continuously compound interest on this account at the annual rate of 11%. You make continuous payments to this account at a rate of $q$ dollars/year. What must $q$ be if you want to increase the value of the account to exactly zero at the end of 10 years?
For problems 15 – 18, prove that your conclusion is correct by making an appropriate plot. In each case, the reaction occurs at constant volume.
15. The reaction $A +B \rightarrow C$ is studied with a large excess of B. $([A]_0 = 10^{−2} \underline{ \text{ M}}$. $[B]_0 = 10^{−1} \underline{ \text{ M}}$. $[C]_0 = 0.0 \underline{ \text{ M}}$). Concentration versus time data are given in the table below. What is the order of the reaction in the concentration of A, and what is the rate constant?
$\begin{array}{|c|c|} \hline \text{Time, s} & [A], \underline{ \text{ M}} \ \hline 100 & 9.1 \times 10^{−3} \ \hline 300 & 7.4 \times 10^{−3} \ \hline 500 & 6.1 \times 10^{−3} \ \hline 800 & 4.6 \times 10^{−3} \ \hline 1000 & 3.6 \times 10^{−3} \ \hline 1500 & 2.3 \times 10^{−3} \ \hline 2000 & 1.3 \times 10^{−3} \ \hline 2500 & 8.3 \times 10^{−4} \ \hline \end{array} \nonumber$
16. The following data are collected for a reaction in which $A$ dimerizes: $2A \rightarrow A_2$. What is the order of the reaction in $[A]$, and what is the rate constant?
$\begin{array}{|c|c|} \hline \text{Time, hr} & [A], ~ \underline{ \text{M}} \ \hline 0.0 & 1.0 \times 10^{−2} \ \hline 0.28 & 9.1 \times 10^{−3} \ \hline 0.56 & 8.3 \times 10^{−3} \ \hline 1.39 & 6.7 \times 10^{−3} \ \hline 2.78 & 5.0 \times 10^{−3} \ \hline 5.56 & 3.3 \times 10^{−3} \ \hline 11.10 & 2.0 \times 10^{−3} \ \hline 16.70 & 1.4 \times 10^{−3} \ \hline \end{array} \nonumber$
17. In the reaction $A + B \rightarrow C$, the rate at which $B$ is consumed is first-order in $[B]$. In a series of experiments whose results are tabulated below, the observed first-order rate constant, $k_{obs}$ is measured for the disappearance of $B$ in the presence of large excesses of $A$. What is the order of the reaction in $[A]$? The rate law? The rate constant? ($[B]_0 = 10^{−4} ~ \underline{\text{M}}$ in all experiments.)
$\begin{array}{|c|c|} \hline [A]_0, ~ \underline{\text{M}} & k_{obs}, s^{−1} \ \hline 2.0 \times 10^{−1} & 2.6 \times 10^{−5} \ \hline 1.1 \times 10^{−1} & 1.4 \times 10^{−5} \ \hline 6.3 \times 10^{−2} & 8.2 \times 10^{−6} \ \hline 2.5 \times 10^{−2} & 3.3 × 10^{−6} \ \hline 9.1 \times 10^{−3} & 1.2 \times 10^{−6} \ \hline \end{array} \nonumber$
18. In the reaction $A + B → C$, the rate at which $B$ is consumed is first-order in $[B]$. The table below presents first-order rate constants for the disappearance of $B$ in the presence of large excesses of $A$. Plot these data to test the hypothesis that
$k_{obs} = \frac{k_1 [A]_0}{1 + k_2 [A]_0} \nonumber$
What are the values of $k_1$ and $k_2$?
$\begin{array}{|c|c|} \hline [A]_0, ~ \underline{\text{M}} & k_{obs}, ~ s^{−1} \ \hline 5.0 \times 10^{−1} & 8.3 \times 10^{−5} \ \hline 2.0 \times 10^{−1} & 6.7 \times 10^{−5} \ \hline 1.0 \times 10^{−1} & 5.0 \times 10^{−5} \ \hline 5.0 \times 10^{−2} & 3.3 \times 10^{−5} \ \hline 1.4 \times 10^{−2} & 1.2 \times 10^{−5} \ \hline 7.6 \times 10^{−3} & 7.1 \times 10^{−6} \ \hline 3.0 \times 10^{−3} & 3.0 \times 10^{−6} \ \hline \end{array} \nonumber$
19. For the reaction $A +2B \rightarrow C + D$, the rate law is
$\frac{d[C]}{dt} = k[A][B]^2 \nonumber$
The volume is constant. Suggest a mechanism for this reaction that does not include a termolecular elementary process. Show that this mechanism is consistent with the rate law.
20. For the reaction $X +2Y → W + Z$, the rate law is
$\frac{d[W]}{dt} = k[X] \nonumber$
The volume is constant. Suggest a mechanism for this reaction. Show that this mechanism is consistent with the rate law.
21. For the reaction $A_2 + 2B → 2C$, the rate law is
$\frac{d[C]}{dt} = k[A_2]^{1⁄2} [B] \nonumber$
The volume is constant. Suggest a mechanism for this reaction. Show that this mechanism is consistent with the rate law.
22. For the reaction $AB + C → A +D$, the rate law is
$\frac{d[D]}{dt} = \frac{k_u [AB][C]}{k_v [A] + k_w[C]} \nonumber$
The volume is constant. Suggest a mechanism for this reaction. Show that this mechanism is consistent with the rate law.
23. When we use the flooding technique to study a reaction rate, we often say that the concentrations of species present in great stoichiometric excess are essentially constant. This is a convenient but rather imprecise way to describe a useful approximation. Consider the reaction $A + B \rightarrow C$. Over any time interval, $\Delta t$, we have $\Delta [B] = \Delta [A]$. In absolute terms, $[B]$ is no more constant than $[A]$. Suppose that the reaction rate is described by $R = k[A][B]$ and that $[B]_0 = 100[A]_0$. Define the extent of reaction by $\xi = [A]_0 − [A] = [B]_0 − [B]$. Find
$\frac{\partial R / \partial [B]}{\partial R / \partial [A]} \nonumber$
and evaluate this relative concentration dependence at 0% conversion, $\xi = 0$ (where $[A] = [A]_0$), and at 90% conversion, $\xi = 0.9[A]_0$ (where $[A] = 0.9[A]_0$). Give a more precise statement of what we mean when we say that “the concentration of $B$ is essentially constant” in such circumstances.
24. For the reaction $aA + bB \rightleftharpoons cC + dD$, we define the extent of reaction $\xi = −(n_A − n_A^o )/a$. When the reaction reaches equilibrium (at $t = \infty$), the extent of reaction becomes $\xi_{\infty} = −(n_A^{\infty} − n_A^o)/a$. If $A$ is the limiting reagent and the reaction goes to completion, the theoretical extent of reaction is $\xi_{theoretical} = n_A^o/a$. Why? It is often useful to describe the amount of reaction that has occurred as a dimensionless fraction. If the reaction does not go to completion, $n_A^{\infty} > 0$. Use $\xi$ and $\xi_{\infty}$ to express the “extent of equilibration,” $f_{equilibrium}$, as a dimensionless fraction. Use $\xi$ and $\xi_{theoretical}$ to express the “conversion,” $f_{conversion}$, as a dimensionless fraction. How would you define the “equilibrium conversion”?
25. We often exercise a degree of poetic license in talking about “fast” and “slow” steps in reaction mechanisms. In Section 5.12, Case I, for example, we say that the step that consumes $A$ to produce intermediate $C$ is “slow” but the step that consumes $C$ to produce $D$ is “fast.” We then write $−d[A]⁄dt \approx d[D]⁄dt$. Discuss.
26. Find the rate law for the simplest-case Michaelis-Menten mechanism by applying the steady-state approximation to the concentration of the enzyme–substrate complex. Under what conditions do this treatment and the result developed in the text converge to the same rate law?
27. What is the half-life of a constant-volume second-order reaction, $2A → C$, for which
$\frac{d[A]}{dt} = − \frac{2}{V} \frac{d \xi}{dt} = −2k[A]^2 \nonumber$
28. For the reaction between oxygen and nitric oxide, $2NO + O_2 → 2NO_2$, the observed rate law, at constant volume, is
$\frac{d[NO_2]}{dt} = k[NO]^2 [O_2] \nonumber$
Show that this rate law is consistent with either of the following mechanisms:
(i) $\begin{array}{l l} 2NO \rightleftharpoons N_2O_2 & \text{ (fast equilibrium)} \ N_2O_2 + O_2 → 2NO_2 & \text{ (rate-determining step)} \end{array}$
(ii) $\begin{array}{l l} NO + O_2 \rightleftharpoons NO_3 & \text{ (fast equilibrium)} \ NO_3 + NO \rightarrow 2NO_2 & \text{ (rate-determining step)} \end{array}$
29. For the reaction between gaseous chlorine and nitric oxide, $2NO + Cl_2 → 2NOCl$, doubling the nitric oxide concentration quadruples the rate, and doubling the chlorine concentration doubles the rate.
(a) Deduce the rate law for this reaction.
(b) Keeping in mind the mechanisms in problem 28, write down two possible mechanisms that are consistent with the rate law you deduced in part (a). Show that each of these mechanisms is consistent with the rate law in part (a).
30. Nitric oxide reacts with hydrogen according to the equation $2NO + 2H_2 \rightarrow N_2 + 2H_2O$. At constant volume, the following kinetic data have been obtained for this reaction at 1099 K. [1 mm = 1 torr = (1/760) atm.] [C. N. Hinshelwood and T. Green, J. Chem. Soc., 730 (1926)]
$\begin{array}{|c|c|c|} \hline P^0 ~ (H_2), \text{ mm} & P^0 ~ (NO), \text{ mm} & \begin{array}{c c} \text{Initial reaction} \ \text{rate, mm s}^{−1} \end{array} \ \hline 289 & 400 & 0.162 \ \hline 205 & 400 & 0.110 \ \hline 147 & 400 & 0.079 \ \hline 400 & 359 & 0.150 \ \hline 400 & 300 & 0.103 \ \hline 400 & 152 & 0.025 \ \hline \end{array} \nonumber$
(a) What is the rate law for this reaction?
(b) Suggest two mechanisms for this reaction that are consistent with the rate law you deduce in part (a).
31. Review the reactions, rate laws, and mechanisms that you considered in problems 28, 29, and 30.
(a) Does comparing these three reactions and their rate laws provide any basis for preferring one set of mechanisms to the other?
(b) Which set of mechanisms do you prefer; that is, which mechanism in each of problems 28, 29, and 30 seems more likely to you? Why?
32. The rate of an enzyme-catalyzed reaction, commonly called the velocity, $v$, is measured directly as $v = d[P]/dt \approx \Delta [P]/\Delta t = -\Delta [S]/\Delta t$. For small $S_0$, a plot of $v$ versus the initial substrate concentration, $S_0$, increases with increasing $S_0$. For large values of $S_0$, $v$ reaches a constant value, $v_{max}$. The substrate concentration at which the reaction rate is equal to $v_{max}/2$ is defined to be the Michaelis constant, $K_M$. It is customary to express the equilibrium constant as the dissociation constant for the enzyme–substrate complex. Let the total enzyme concentration be $E_0$. For the mechanism
$\begin{array}{c c} ES \overset{K_S}{\rightleftharpoons} E + S & K_S = [E][S]/[ES] \ ES \overset{k}{ \rightarrow} E + P & \text{ (rate-determining step)} \end{array} \nonumber$
(a) Show that the velocity is given by
$v = kE_0 / \left( 1 + \frac{K_S}{S_0} \right) \nonumber$
(b) What is $v_{max}$?
(c) What is the Michaelis constant, $K_M$?
(d) Does a larger value of $K_M$ correspond to stronger or weaker complexation of the substrate by the enzyme?
(e) Sketch the curve of $v$ versus $S_0$ for the reaction rate described in (a). On this sketch, identify $v_{max}$, $v_{max}/2$, and $K_M$.
(f) If a second substrate, $I$, can form a complex with the enzyme, the reaction rate for substrate $S$ decreases in the presence of $I$. Such substrates, $I$, are called inhibitors. Many kinds of inhibition are observed. One common distinction is between inhibitors that are competitive and inhibitors that are not competitive. Competitive inhibition can be explained in terms of a mechanism in which the enzyme equilibrates with both substrates.
$\begin{array}{c c} ES \overset{K_S}{\rightleftharpoons} E + S & K_S = [E][S]/[ES] \ EI \overset{K_I}{\rightleftharpoons} E + I & K_I = [E][I]/[EI] \ ES \overset{k}{\rightarrow} E + P & \text{(rate-determining step)} \end{array} \nonumber$
Show that the velocity is given by
$v = kE_0/ \left[ 1 + \frac{K_S}{S_0} \left(1 + \frac{I_0}{K_I} \right) \right] \nonumber$
(g) A series of experiments is done in which $S_0$ is varied, while $I_0$ is maintained constant. The results are described by the equation in (f). What is $v_{max}$ in this series of experiments?
(h) For the series of experiments done in (g), what is the Michaelis constant, $K_M$?
33. Consider a bimolecular reaction between molecules of substances $A$ and $B$. If there are no forces of attraction or repulsion between $A$ molecules and $B$ molecules, we expect their collision rate to be $k[A][B]$, where $k$ is a constant whose value is independent of the values of $[A]$ and $[B]$. Now suppose that molecules of $A$ and $B$ experience a strong attractive force whenever their intermolecular separation becomes comparable to, say, twice the diameter of an $A$ molecule. Will the value of $k$ be different when there is a strong force of attraction than when there is no such force?
Notes
$^1$ See Fred Basolo and Ralph G. Pearson, Mechanisms of Inorganic Reactions, $2^{\text{nd}}$ Ed., John Wiley & Sons, Inc., New York, 1967, pp 177-193.
$^2$ R.C. Tolman, The Principles of Statistical Mechanics, Dover Publications, 1979, (published originally in 1938 by Oxford University Press), p 163.
$^3$ R. L. Burwell and R. G. Pearson, J. Phys. Chem., 70, 300 (1966).
$^4$ George M. Fleck, Chemical Reaction Mechanisms, Holt, Rinehart and Winston, Inc., New York, NY, 1971, pp 104-112.
$^5$ G. N. Lewis, Proc. Nat. Acad. Sci. U. S., 11, 179 (1925).
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/05%3A_Chemical_Kinetics_Reaction_Mechanisms_and_Chemical_Equilibrium/5.20%3A_Problems.txt
|
• 6.1: The Thermodynamic Perspective
Classical thermodynamics does not consider the atomic and molecular characteristics of matter. In developing it, we focus exclusively on the measurable properties of macroscopic quantities of matter. In particular, we study the relationship between the thermodynamic functions that characterize a system and the increments of heat and work that the system receives as it undergoes some change of state. In doing so, we adopt some particular perspectives.
• 6.2: Thermodynamic Systems and Variables
We characterize the system by specifying the values of enough variables so that the system can be exactly replicated. By “exactly replicated” we mean, of course, that we are not able to distinguish the system from its replicate by any experimental measurement. Any variable that can be used to characterize the system in this way is called a variable of state, a state variable, or a state function.
• 6.3: Equilibrium and Reversibility - Phase Equilibria
We call any process whose direction can be reversed by an arbitrarily small change in a thermodynamic state function a reversible process. Evidently, there is a close connection between reversible processes and equilibrium states. If a process is to occur reversibly, the system must pass continuously from one equilibrium state to another.
• 6.4: Distribution Equilibria
A system can contain more than one phase, and more than one chemical substance can be present in each phase. If one of the substances is present in two phases, we say that the substance is distributed between the two phases. We can describe the equilibrium distribution quantitatively by specifying the concentration of the substance in each phase. At constant temperature, we find experimentally that the ratio of these concentrations is approximately constant.
• 6.5: Equilibria in Chemical Reactions
Equilibria involving chemical reactions share important characteristics with phase and distribution equilibria.
• 6.6: Le Chatelier's Principle
If we start with a system that is at equilibrium, and we impose a change in conditions on it, the “initial” state of the system after the imposed change of conditions will generally not be an equilibrium state. Experience shows that the system will undergo some spontaneous change to arrive at a new equilibrium state. In these particular circumstances, Le Chatelier’s principle enables us to predict the spontaneous change that occurs.
• 6.7: The Number of Variables Required to Specify Some Familiar Systems
In the experiment or in the mathematical model, fixing two of the three intensive variables is sufficient to fix the equilibrium properties of the system. Fixing the equilibrium properties means, of course, that the state of the system is fixed to within an arbitrary factor, which can be specified either as the number of moles present or as the system volume.
• 6.8: Gibbs' Phase Rule
Gibbs found an important relationship among the number of chemical constituents, the number of phases present, and the number of intensive variables that must be specified in order to characterize an equilibrium system. This number is called the number of degrees of freedom available to the system and is given the symbol F . By specifying F intensive variables, we can specify the state of the system—except for the amount of each phase.
• 6.9: Reversible vs. Irreversible Processes
A process that is not reversible is said to be irreversible. We distinguish between two kinds of irreversible processes. A process that cannot occur under a given set of conditions is said to be an impossible process. A process that can occur, but does not do so reversibly, is called a possible process or a spontaneous process.
• 6.10: Duhem's Theorem - Specifying Reversible Change in A Closed System
• 6.11: Reversible Motion of A Mass in A Constant Gravitational Field
Let us explore our ideas about reversibility further by considering the familiar case of a bowling ball that can move vertically in the effectively constant gravitational field near the surface of the earth.
• 6.12: Equilibria and Reversible Processes
The distinction between a system at equilibrium and a system undergoing reversible change is razor-thin. What we have in mind goes to the way we choose to define the system and centers on the origin of the forces that affect its energy. For a system at equilibrium, the forces are fixed. For a system undergoing reversible change, some of the forces originate in the surroundings, and those that do are potentially variable.
• 6.13: The Laws of Thermodynamics
We usually consider that the first, second, and third laws of thermodynamics are basic postulates. One of our primary objectives is to understand the ideas that are embodied in these laws. We introduce these ideas here, using statements of the laws of thermodynamics that are immediately applicable to chemical systems. In the next three chapters, we develop some of the most important consequences of these ideas.
• 6.14: Thermodynamic Criteria for Change
When the state of an isolated system can change, we say that the system is capable of spontaneous change. When an isolated system is incapable of spontaneous change, we say that it is at equilibrium. Ultimately, this statement defines what we mean by (primitive) equilibrium.
• 6.15: State Functions in Systems Undergoing Spontaneous Change
• 6.16: Problems
Thumbnail: Illustration of a system exhibiting an irreversible process as rapid and slow particles mix together. (CC BY 2.5 Generic; Htkym and Dhollm via Wikipedia)
06: Equilibrium States and Reversible Processes
Classical thermodynamics does not consider the atomic and molecular characteristics of matter. In developing it, we focus exclusively on the measurable properties of macroscopic quantities of matter. In particular, we study the relationship between the thermodynamic functions that characterize a system and the increments of heat and work that the system receives as it undergoes some change of state. In doing so, we adopt some particular perspectives. The first is to imagine that we can segregate the macroscopic sample that we want to study from the rest of the universe. As sketched in Figure 1, we suppose that we can divide the universe into two mutually exclusive pieces: the system that we are studying and the surroundings, which we take to encompass everything else.
We imagine the system to be enclosed by a boundary, which may or may not correspond to a material barrier surrounding the collection of matter that we designate as the system. (For our purposes, a system will always contain a macroscopic quantity of matter. However, this is not necessary; thermodynamic principles can be applied to a volume that is occupied only by radiant energy.) Everything inside the boundary is part of the system. Everything outside the boundary is part of the surroundings. Every increment of energy that the system receives, as either heat or work, is passed to it from the surroundings, and conversely.
An open system can exchange both matter and energy with its surroundings. A closed system can exchange energy but not matter with its surroundings. An isolated system can exchange neither matter nor energy.
Together, system and surroundings comprise the thermodynamic universe.
If we are too literal-minded, this reference to “the universe” can start us off on unnecessary ruminations about cosmological implications. All we really have in mind is an energy-accounting scheme, much like the accountants’ system of double-entry bookkeeping, in which every debit to one account is a credit to another. When we talk about “the universe,” we are really just calling attention to the fact that our scheme involves only two accounts. One is labeled “system,” and the other is labeled “surroundings.” Since we do our bookkeeping one system at a time, the combination of system and surroundings encompasses the universe of things affected by the change.
Figure 2 schematically depicts a closed system that can exchange heat and work with its surroundings. The surroundings comprise a heat reservoir and a device that can convert potential energy in the surroundings into work exchanged with the system. The heat reservoir can exchange heat but not work with the system. In this sketch, the heat reservoir is at a constant temperature, $\hat{T}\left(=T_{surroundings}\right)$. It might comprise, for example, a large quantity of ice and water in phase equilibrium. The work-generating device cannot exchange heat, but it can exchange work with the system. A partially extended spring represents the potential energy available in the surroundings. The system can do work on the device and increase the potential energy of the spring. Alternatively, the surroundings can transfer energy to the system at the expense of the potential energy of the spring. Since nothing else in the rest of the universe is affected by these exchanges, our sketch encompasses the entire universe insofar as these changes are concerned. A system that cannot interact with anything external to itself is isolated. The combination of system and surroundings depicted in Figure 2 is itself an isolated system.
When we deal with the entropy change that accompanies some change in the state of the system, the properties of the surroundings become important. We develop the reason for this in Chapter 9. It is useful to introduce notation to distinguish properties of the surroundings from properties of the system. In Figure 2, we indicate the temperatures of system and surroundings by $T$ and $\hat{T}$, respectively. We adopt this general rule:
When a thermodynamic quantity appears with a superscripted caret, the quantity is that of the surroundings. If there is no superscripted caret, the quantity is that of the system.
Thus, $\hat{T}$, $\hat{E}$, and $\hat{S}$ are the temperature, the energy, and the entropy of the surroundings, respectively, whereas $T$, $E$, and $S$ are the corresponding quantities for the system.
We develop thermodynamics by reasoning about closed chemical systems that consist of one or more homogeneous phases. A phase can be a solid, a liquid, or a gas. A phase can consist of a single chemical substance, or it can be a homogeneous solution containing two or more chemical substances. When we say that a phase is homogeneous, we mean that the pressure, temperature, and composition of the phase are the same in every part of the phase. Since gases are always miscible, a system cannot contain two gas phases that are in contact with one another. However, multiple solid and immiscible-liquid phases can coexist.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/06%3A_Equilibrium_States_and_Reversible_Processes/6.01%3A_The_Thermodynamic_Perspective.txt
|
We characterize the system by specifying the values of enough variables so that the system can be exactly replicated. By “exactly replicated” we mean, of course, that we are not able to distinguish the system from its replicate by any experimental measurement. Any variable that can be used to characterize the system in this way is called a variable of state, a state variable, or a state function.
We can say the same thing in slightly different words by saying that the state of a system is completely specified when the values of all of its state variables are specified. If we initially have a system in some equilibrium state and change one or more of the variables that characterize it, the system will eventually reach a new equilibrium state, in which some state variables will have values different from those that characterized the original state. If we want to return the system to its original state, we must arrange matters so that the value of every state variable is the same as it was originally.
The variables that are associated with a chemical system include pressure, volume, temperature, and the number of moles of each substance present. All of these variables can be measured directly; that is, every equilibrium state of a system is associated with a specific value of each of these variables, and this value can be determined without reference to any other state of the system. Energy and entropy are also variables that are associated with a thermodynamic system. We can only measure changes in energy and entropy; that is, we can only measure the change in energy or entropy that occurs when a system passes from one state to another.
Other important thermodynamic variables are defined as functions of pressure, volume, temperature, energy and entropy. These include enthalpy, the Gibbs free energy, the Helmholtz free energy, chemical activity, and the chemical potential. Our goals in developing the subject of chemical thermodynamics are to define each of these state functions, learn how to measure each of them, and provide a theory that relates the change that occurs in any one of them to the changes that occur in the others when a chemical system changes from one state to another.
Any interaction through which a chemical system can exchange work with its surroundings can affect its behavior. Work-producing forces can involve many phenomena, including gravitational, electric, and magnetic fields; surface properties; and sound (pressure) waves. In Chapter 17, we discuss the work done when an electric current passes through an electrochemical cell. Otherwise, this book focuses on pressure–volume work and gives only passing attention to the job of incorporating other forms of work into the general theory. We include pressure–volume work because it occurs whenever the volume of a system changes. A thermodynamic theory that did not include volume as a variable would be of limited utility.
Thermodynamic variables can be sorted into two classes in another way. Consider the pressure, temperature, and volume of an equilibrium system. We can imagine inserting a barrier that divides this original system into two subsystems—without changing anything else. Each of the subsystems then has the temperature and pressure of the original system; however, the volume of each subsystem is different from the volume of the original system. We say that temperature and pressure are intensive variables, by which we mean that the temperature or pressure of an equilibrium system is independent of the size of the system and the same at any location within the system. Intensive variables stand in contrast to extensive variables. The magnitude of an extensive variable is directly proportional to the size of the system. Thus, volume is an extensive variable. Energy is an extensive variable. We shall see that entropy, enthalpy, the Helmholtz free energy, and the Gibbs free energy are extensive variables also.
For any extensive variable, we can create a companion intensive variable by dividing by the size of the system. For example, we can convert the mass of a homogeneous system into a companion variable, the density, by dividing by the system’s volume. We will discover that it is useful to define certain partial molar quantities, which have units like energy per mole. Partial molar quantities are intensive variables. We will find a partial molar quantity that is particularly important in describing chemical equilibrium. It is called the chemical potential, and since it is a partial molar quantity, the chemical potential is an intensive thermodynamic variable.
We think of a system as a specific collection of matter containing specified phases. Our goal is to develop mathematical models (equations) that relate a system’s state functions to one another. A system can be at equilibrium under a great many different circumstances. We say that the system can have many equilibrium positions. A complete description of all of these equilibrium positions requires models that can specify how much of each of the substances that make up the system is present in each phase.
However, if a system is at equilibrium, a half-size copy of it is also at equilibrium; whether a system is at equilibrium can be specified without specifying the sizes of the phases that make it up. This means that we can characterize the equilibrium states of any system that contains specified substances and phases by specifying the values of the system’s intensive variables. In general, not all of these intensive variables will be independent. The number of intensive variables that are independent is called the number of degrees of freedom available to the system. This is also the number of intensive variables that can change independently while a given system remains at equilibrium.
To completely define a particular system we must specify the size and composition of each phase. To do so, we must specify the values of some number of extensive variables. These extensive variables can change while all of the intensive variables remain constant and the system remains at equilibrium. In the next section, we review the phase equilibria of water. A system comprised of liquid and gaseous water in phase equilibrium illustrates these points. Specifying either the pressure or temperature specifies the equilibrium state to within the sizes of the two phases. For a complete description, we must specify the number of moles of water in each phase. By adding or removing heat, while maintaining the original pressure and temperature, we can change the distribution of the water between the two phases.
In 1875, J. Willard Gibbs developed an equation, called Gibbs’ phase rule, from which we can calculate the number of degrees of freedom available to any particular system. We introduce Gibbs’ phase rule in Section 6.8. The perspective and analysis that underlie Gibbs’ phase rule have a significance that transcends use of the rule to find the number of degrees of freedom available to a system. In essence, the conditions assumed in deriving Gibbs’ phase rule define what we mean by equilibrium in chemical systems. From experience, we are usually confident that we know when a system is at equilibrium and when it is not. One of our goals is to relate thermodynamic functions to our experience-based ideas about what equilibrium is and is not. To do so, we need to introduce the idea of a reversible process, in which the system undergoes a reversible change.
We will see that the states that are accessible to a system that is at equilibrium in terms of Gibbs’ phase rule are identically the states that the system can be in while undergoing a reversible change. A principal goal of the remainder of this chapter is to clarify this equivalence between the range of states accessible to the system at equilibrium and the possible paths along which the system can undergo reversible change.
The thermodynamic theory that we develop predicts quantitatively how a system’s equilibrium position changes in response to a change that we impose on one or more of its state functions. The principle of Le Chatelier makes qualitative predictions about such changes. We introduce the principle of Le Chatelier and its applications later in this chapter. In Chapter 12, we revisit this principle to understand it as a restatement, in qualitative terms, of the thermodynamic criteria for equilibrium.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/06%3A_Equilibrium_States_and_Reversible_Processes/6.02%3A_Thermodynamic_Systems_and_Variables.txt
|
To review the general characteristics of phase equilibria, let us consider a familiar system. Suppose that we have a transparent but very strong cylinder, sealed with a frictionless piston, within which we have trapped a quantity of pure liquid water at some high pressure. We can fix the pressure of the liquid water at any value we choose by applying an appropriate force to the piston. Suppose that we hold the temperature constant and force the volume to increase by withdrawing the piston in very small increments. Because pure water is not compressed easily, we find initially that the pressure of the water decreases and does so in very large increments.
However, after some small increase in the volume, we find that imposing a further volume increase changes the system’s behavior abruptly. The system undergoes a profound change. What was formerly pure liquid becomes a mixture of liquid and gas. As we impose still further volume increases, the pressure of the system remains constant, additional liquid passes from the liquid to the gas phase, and we find that we must supply substantial amounts of heat in order to keep the temperature of the system constant. If we continue to force volume increases in this manner, vaporization continues until all of the liquid evaporates.
If we impose a decrease in the volume of the two-phase system, we see the process reverse. The pressure of the system remains constant, some of the gas condenses to liquid, and the system gives up heat to the surroundings. For any given temperature, these conversions are precisely balanced at some particular pressure, and these conditions characterize a state of liquid–vapor equilibrium. At any given pressure, the equilibrium temperature is called the boiling point of the liquid. The equilibrium pressure and temperature completely specify the state of the system, except for the exact amounts of liquid and gaseous water present.
If we begin with this system in a state of liquid–vapor equilibrium, we can increase the amount of vapor by imposing a small volume increase. Conversely, we can decrease the amount of vapor by imposing a very small volume decrease. At the equilibrium temperature and pressure, changing the imposed volume by an arbitrarily small amount (from $V$ to $V\pm dV$) is sufficient to reverse the direction of the change that occurs in the system. We call any process whose direction can be reversed by an arbitrarily small change in a thermodynamic state function a reversible process. Evidently, there is a close connection between reversible processes and equilibrium states. If a process is to occur reversibly, the system must pass continuously from one equilibrium state to another.
In this description, the reversible, constant-temperature vaporization of water is driven by arbitrarily small volume changes. The system responds to these imposed volume changes so as to maintain a constant equilibrium vapor pressure at the specified temperature. We say that the reversible process “takes place at constant pressure and temperature.” We can also describe this process as being driven by arbitrarily small changes in the applied pressure: If the applied pressure exceeds the equilibrium vapor pressure by an arbitrarily small increment, $dP>0$, condensation occurs; if the applied pressure is less than the equilibrium vapor pressure by an arbitrarily small increment, $dP<0$, vaporization occurs. To describe this tersely, we introduce a figure of speech and say that the reversible process occurs “while the system pressure and the applied pressure are equal.” Literally, of course, there can be no change when these pressures are equal.
To cause water to vaporize at a constant temperature and pressure, we must add heat energy to the system. This heat is called the latent heat of vaporization or the enthalpy of vaporization, and it must be supplied from some entity in the surroundings. When water vapor condenses, this latent heat must be removed from the system and taken up by the surroundings. (The enthalpy change for vaporizing one mole of a substance is usually denoted ${\Delta }_{vap}H$. It varies with temperature and pressure. Tables usually give experimental values of the equilibrium boiling temperature at a pressure of 1 bar or 1 atm; then they give the enthalpy of vaporization at this temperature and pressure. We discuss the enthalpy function in Chapter 8.)
Four conditions are sufficient to exactly specify either the initial or the final state: the number of moles of liquid, the number of moles of gas, the pressure, and the temperature. The change is a conversion of some liquid to gas, or vice versa. We can represent this change as a transition from an initial state to a final state where $n^o_{liquid}$ and $n^o_{gas}$ are the initial numbers of moles of liquid and gas, respectively, and $\delta n$ is the incremental number of moles vaporized: $\left(P,\ T,n^o_{liquid},n^o_{gas}\right)\to \left(P,\ T,n^o_{liquid}-\delta n,n^o_{gas}+\delta n\right) \nonumber$
The initial pressure and temperature are the same as the final pressure and temperature. Effecting this change requires that a quantity of heat, $\left({\Delta }_{vap}H\right)\delta n$, be added to the system, without changing the temperature of the system.
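As a rough numerical illustration (the value of ${\Delta }_{vap}H$ used here, approximately $40.7\ \mathrm{kJ\ mol^{-1}}$ for water near its normal boiling point, is a commonly tabulated figure assumed only for the sake of the example), vaporizing $\delta n = 0.250\ \mathrm{mol}$ of liquid water under these conditions requires that the surroundings supply $q=\left({\Delta }_{vap}H\right)\delta n\approx \left(40.7\ \mathrm{kJ\ mol^{-1}}\right)\left(0.250\ \mathrm{mol}\right)\approx 10.2\ \mathrm{kJ} \nonumber$ of heat; condensing the same amount of vapor releases the same quantity of heat to the surroundings.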
This introduces another requirement that a reversible process must satisfy. If the reversibly vaporizing water is to take up an arbitrarily small amount of heat, the system must be in contact with surroundings that are hotter than the system. The temperature difference between the system and its surroundings must be arbitrarily small, because we can describe exactly the same process as being driven by contacting the system, at temperature $T$, with surroundings at temperature $\hat{T}+\delta \hat{T}$. If we keep the applied pressure constant at the temperature-$T$ equilibrium vapor pressure, the system volume increases. We can reverse the direction of change by changing the temperature of the surroundings from $\hat{T}+\delta \hat{T}$ to $\hat{T}-\delta \hat{T}$. If the process is to satisfy our criterion for reversibility, the difference between these two temperatures must be arbitrarily small. To describe this requirement tersely, we again introduce a figure of speech and say that the reversible process occurs “while the system temperature and the surroundings temperature are equal.”
If we repeat the water-in-cylinder experiment with the temperature held constant at a slightly different value, we get similar results. There is again a pressure at which the process of converting liquid to vapor is at equilibrium. At this temperature and pressure, both liquid and gaseous water can be present in the system, and, so long as no heat is added or removed from the system, the amount of each remains constant. When we hold the pressure of the system constant at the equilibrium value and supply a quantity of heat to the system, a quantity of liquid is again converted to gaseous water. (The quantity of heat required to convert one mole of liquid to gaseous water is slightly different from the quantity required in the previous experiment. This is what we mean when we say that the enthalpy of vaporization varies with temperature.)
This experiment can be repeated for many temperatures. So long as the temperature is in the range $273.16\ \mathrm{K} < T < 647.1\ \mathrm{K}$, we find a pressure at which liquid and gaseous water are in equilibrium. If we plot the results, they lie on a smooth curve, which is sketched in Figure 3. This curve represents the combinations of pressure and temperature at which liquid water and gaseous water are in equilibrium. Below $273.16\ \mathrm{K}$, an equilibrium system containing only liquid and gaseous water cannot exist. At high pressures, a two-phase equilibrium system contains solid and liquid; at sufficiently low pressures, it contains solid and gas. Above $647.1\ \mathrm{K}$, the distinction between liquid and gaseous water vanishes. The water exists as a single dense phase. This is the critical temperature. Above the critical temperature, there is a single fluid phase at any pressure.
If we keep the pressure constant and remove heat from a quantity of liquid water, the temperature decreases until we eventually reach a temperature at which the water begins to freeze to ice. At this point, water and ice are in equilibrium. Further removal of heat does not decrease the temperature of the water–ice system; rather, the temperature remains constant and additional water freezes into ice. Only when all of the liquid has frozen does further removal of heat cause a further decrease in the temperature of the system. When we repeat this experiment at a series of temperatures, we find a continuous line of pressure–temperature points that are liquid–ice equilibrium points.
As sketched in Figure 4, the liquid–ice equilibrium line intersects the liquid–vapor equilibrium line. At this intersection, liquid water, ice, and water vapor are all in equilibrium with one another. There is only one such point. It is called the triple point of water. The ice point or melting point of water is the temperature at which solid and liquid water are in equilibrium at one atmosphere in the presence of air. The water contains dissolved air. The triple point occurs in a pure-water system; it is the temperature and pressure at which gaseous, liquid, and solid water are in equilibrium. By definition, the triple point temperature is 273.16 K. Experimentally, the pressure at the triple point is 611 Pa. Experimentally, the melting point is 273.15 K.
To freeze a liquid, we must remove heat. To fuse (melt) the same amount of the solid at the same temperature and pressure, we must add the same amount of heat. This heat is called the latent heat of fusion or the enthalpy of fusion. The enthalpy of fusion for one mole of a substance is usually denoted ${\Delta }_{fus}H$. It varies slightly with temperature and pressure. Tables usually give experimental values of the equilibrium melting temperature at a pressure of 1 bar or 1 atm; then they give the enthalpy of fusion at this temperature and pressure.
At low pressures and temperatures, ice is in equilibrium with gaseous water. A continuous line of pressure–temperature points represents the conditions under which the system contains only ice and water vapor. As the temperature increases, the ice–vapor equilibrium line ends at the triple point. The conversion of a solid directly into its vapor is called sublimation. To sublime a solid to its vapor requires the addition of heat. This heat is called the latent heat of sublimation or the enthalpy of sublimation. The enthalpy of sublimation for one mole of a substance is usually denoted ${\Delta }_{sub}H$. It varies slightly with temperature and pressure.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/06%3A_Equilibrium_States_and_Reversible_Processes/6.03%3A_Equilibrium_and_Reversibility_-_Phase_Equilibria.txt
|
A system can contain more than one phase, and more than one chemical substance can be present in each phase. If one of the substances is present in two phases, we say that the substance is distributed between the two phases. We can describe the equilibrium distribution quantitatively by specifying the concentration of the substance in each phase. At constant temperature, we find experimentally that the ratio of these concentrations is approximately constant. Letting $A$ be the substance that is distributed, we find for the distribution equilibrium
$A\left(phase\ 1\right)\rightleftharpoons A\left(phase\ 2\right) \nonumber$
the equilibrium constant
$K=\frac{\left[A\right]_{phase\ 2}}{\left[A\right]_{phase\ 1}} \nonumber$
where $K$ varies with temperature and pressure.
For example, iodine is slightly soluble in water and much more soluble in chloroform. Since water and chloroform are essentially immiscible, a system containing water, chloroform, and iodine will contain two liquid phases. If there is not enough iodine present to make a saturated solution with both liquids, the system will reach equilibrium with all of the iodine dissolved in the two immiscible solvents. Experimentally, the equilibrium concentration ratio
$K=\frac{\left[I_2\right]_{water}}{\left[I_2\right]_{chloroform}} \nonumber$
is approximately constant, whatever amounts of the three substances are mixed.
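To see how a distribution constant of this form determines where the solute ends up, the short calculation below solves the mass balance for a fixed amount of iodine distributed between the two solvents. It is only a sketch: the value of $K$, the phase volumes, and the total amount of iodine are illustrative assumptions, not data taken from this text.

```python
# Sketch: partition of a fixed amount of I2 between water and chloroform,
# using the distribution-constant form K = [I2]_water / [I2]_chloroform.
# All numerical values below are illustrative assumptions.

K = 0.0088          # assumed value of [I2]_water / [I2]_chloroform
V_water = 0.100     # L of water
V_chcl3 = 0.050     # L of chloroform
n_total = 1.0e-4    # total moles of I2 in the system

# Mass balance: n_total = [I2]_w * V_water + [I2]_c * V_chcl3, with [I2]_w = K * [I2]_c
c_chcl3 = n_total / (K * V_water + V_chcl3)
c_water = K * c_chcl3

print(f"[I2] in chloroform: {c_chcl3:.3e} mol/L")
print(f"[I2] in water:      {c_water:.3e} mol/L")
print(f"fraction of I2 in the chloroform layer: {c_chcl3 * V_chcl3 / n_total:.3f}")
```

Because the assumed $K$ is much less than one, nearly all of the iodine collects in the chloroform layer, in line with the qualitative statement above that iodine is much more soluble in chloroform than in water.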
We begin our development of physical chemistry by reasoning about the effects of concentrations on the properties of chemical systems. In Chapter 5, we find that rate laws expressed using concentration variables are adequate for the analysis of reaction mechanisms. Consideration of these rate laws leads us to the equilibrium constant for a chemical reaction expressed as a function of concentrations. Eventually, however, we discover that an adequately accurate theory of chemical equilibrium must be expressed using new quantities, which we call chemical activities ${}^{2}$. We can think of a chemical activity as an “effective concentration” or a “corrected concentration,” where the correction is for the effects of intermolecular interactions. When we allow for the effects of intermolecular interactions, we find that we must replace the concentration terms by chemical activities. For the distribution equilibrium constant, we have
$K=\frac{\tilde{a}_{A,phase\ 2}}{\tilde{a}_{A,phase\ 1}} \nonumber$
where $\tilde{a}_{A,phase\ 1}$ denotes the chemical activity of species $A$, in phase 1, at equilibrium.
6.05: Equilibria in Chemical Reactions
Equilibria involving chemical reactions share important characteristics with phase and distribution equilibria. In Chapter 5, we develop the equilibrium constant expression from ideas about reaction rates. For the present comparison, let us consider the equilibrium between the gases nitrogen dioxide, $NO_2$, and dinitrogen tetroxide, $N_2O_4$:
$\ce{N_2O_4\ (g) <=> 2NO_2 (g)} \nonumber$
Suppose that we trap a quantity of pure $N_2O_4$ in a cylinder closed with a piston. If we fix the temperature and volume of this system, the dissociation reaction occurs until equilibrium is achieved at some system pressure. For present purposes, let us assume that both $N_2O_4$ and $NO_2$ behave as ideal gases. The equilibrium system pressure will be equal to the sum of the partial pressures: $P=P_{N_2O_4}+P_{NO_2}$. If we now do a series of experiments, in which we hold the volume constant while allowing the temperature to change, we find a continuous series of pressure–temperature combinations at which the system is at equilibrium. This curve is sketched in Figure 5. It is much like the curve describing the dependence of the water–water-vapor equilibrium on pressure and temperature.
If we hold the temperature constant and allow the volume to vary, we can change the force on the piston to keep the total pressure constant at a new value, $P^*_{total}$. The position of the chemical equilibrium will change. At the new equilibrium position, the new $N_2O_4$ and $NO_2$ partial pressures will satisfy the total pressure relationship. When we repeat this experiment, we find that, whatever the total pressure, the equilibrium partial pressures are related to one another as sketched in Figure 6.
We find that the experimental data fit the equation
$K_P= \dfrac{P^2_{NO_2}}{P_{N_2O_4}}, \nonumber$
where $K_P$ is the equilibrium constant for the reaction. (Pressure is a measure of gas concentration. Later, we see that the equilibrium constant can be expressed more rigorously as a ratio of fugacities—or activities.)
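A short numerical sketch shows how this equilibrium-constant expression, together with the total pressure, fixes the individual partial pressures. The value of $K_P$ used below is an assumed, illustrative number (in bar), and ideal-gas behavior is assumed, as in the discussion above.

```python
# Sketch: equilibrium partial pressures for N2O4 <=> 2 NO2 at a fixed temperature.
# With P_total = P_N2O4 + P_NO2 and K_P = P_NO2**2 / P_N2O4, eliminating P_N2O4
# gives the quadratic  P_NO2**2 + K_P * P_NO2 - K_P * P_total = 0.
import math

def partial_pressures(K_P, P_total):
    """Return (P_NO2, P_N2O4) in the same pressure units as the inputs."""
    P_NO2 = (-K_P + math.sqrt(K_P**2 + 4.0 * K_P * P_total)) / 2.0
    return P_NO2, P_total - P_NO2

K_P = 0.15  # assumed illustrative value, bar
for P_total in (1.0, 2.0, 5.0):
    P_NO2, P_N2O4 = partial_pressures(K_P, P_total)
    print(f"P_total = {P_total:.1f} bar: P_NO2 = {P_NO2:.3f} bar, "
          f"P_N2O4 = {P_N2O4:.3f} bar, mole fraction NO2 = {P_NO2 / P_total:.3f}")
```

Notice that the mole fraction of $NO_2$ computed this way falls as the total pressure rises; we return to this behavior when we discuss Le Chatelier's principle in the next section.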
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/06%3A_Equilibrium_States_and_Reversible_Processes/6.04%3A_Distribution_Equilibria.txt
|
In Chapter 1, we describe a very general goal: given that we create a system in some arbitrary initial state (by some “change of conditions” or “removal of some constraint”), we want to predict how the system will respond as it changes on its own (“spontaneously”) to some new equilibrium state. Under these circumstances, we need a lot of information about the system before we can make any useful prediction about the spontaneous change. In this case, we have said nothing about the condition of the system before we effect the change of conditions that creates the arbitrary “initial state”.
Our ability to make useful predictions is much greater if the system and the change of conditions have a particular character. If we start with a system that is at equilibrium, and we impose a change in conditions on it, the “initial” state of the system after the imposed change of conditions will generally not be an equilibrium state. Experience shows that the system will undergo some spontaneous change to arrive at a new equilibrium state. In these particular circumstances, Le Chatelier’s principle enables us to predict the spontaneous change that occurs.
Definition: Le Chatelier’s principle
If a change is imposed on the state of a system at equilibrium, the system adjusts to reach a new equilibrium state. In doing so, the system undergoes a spontaneous change that opposes the imposed change.
Le Chatelier’s principle is useful, and it is worthwhile to learn to apply it. The principle places no limitations on the nature of the imposed change or on the number of thermodynamic variables that might change as the system responds. However, since our reasoning based on the principle is qualitative, it is frequently useful to suppose that the imposed change is made in just one variable and that the opposing change involves just one other variable. That is, we ask how changing one of the variables that characterizes the equilibrated system changes a second such variable, “all else being equal.” Successful use of the principle often requires careful thinking about the variable on which change is imposed and the one whose value changes in response. Let us consider some applications.
Vapor–liquid equilibrium
Vapor–liquid equilibrium. Suppose that we have a sealed vial that contains only the liquid and vapor phases of a pure compound. We suppose that the vial and its contents are at a single temperature and that the liquid and the vapor are in equilibrium with one another at this temperature. What will happen if we now thermostat the vial at some new and greater temperature?
We see that the imposed change is an increase in temperature or, equivalently, an addition of heat to the system. The system cannot respond by decreasing its temperature, because the temperature change is the imposed change. Similarly, it cannot respond by changing its volume, because the system volume is fixed. Evidently, the observable consequence of increasing temperature—adding heat— must be a change in the pressure of the system. The principle asserts that the system will respond so as to consume heat. Converting liquid to vapor consumes the latent heat of vaporization, so the system can oppose the imposed addition of heat by converting liquid to vapor. This increases the pressure of the vapor. We can conclude from Le Chatelier’s principle that increasing the temperature of a system at liquid-vapor equilibrium increases the equilibrium vapor pressure.
Now suppose that we have the liquid and vapor phases of the same pure compound in a thermally isolated cylinder that is closed by a piston. We ask what will happen if we decrease the volume. That is, the imposed change is a step decrease in volume, accompanied by an increase in pressure. The new volume is fixed, but the pressure is free to adjust to a new value at the new equilibrium position. The principle asserts that the system will respond so as to decrease its pressure. Decreasing the system pressure is accomplished by condensing vapor to liquid, which is accompanied by the release of the latent heat of vaporization. Since we suppose that the system is thermally isolated during this process, the heat released must result in an increase in the temperature of the system. While the pressure can decrease from the initial non-equilibrium value, it cannot decrease to its original-equilibrium value; evidently, the new equilibrium pressure must be greater than the original pressure.
We again conclude that an increase in the equilibrium vapor pressure requires an increase in the temperature of the system. (If the volume decrease were imposed with the system immersed in a constant temperature bath, the heat evolved would be transferred from the system to the bath. The system would return to its original pressure and original temperature, albeit with fewer moles of the substance present in the gas phase.)
Ice–water equilibrium
Suppose that we have a closed system consisting of ice in equilibrium with liquid water at some temperature and pressure. What will happen if we impose an increase in the temperature of this system? We suppose that the system occupies a container of fixed volume. Initially it is at equilibrium with a constant-temperature bath. We impose the change by moving the container to a new bath whose temperature is higher—but not high enough to melt all of the ice. The imposed change is a temperature increase or, equivalently, an addition of heat. The principle asserts that the system will respond by consuming heat, which it can do by converting ice to liquid. Since liquid water occupies less volume than the same mass of ice, the system pressure will decrease. We conclude that the pressure at which ice and water are at equilibrium decreases when the temperature increases. That is, the melting point increases as the pressure decreases.
Again, we can imagine that the equilibrium mixture of ice and water is contained in a thermally isolated cylinder that is closed by a piston and ask how the system must respond if we impose a step decrease in its volume. We impose the volume decrease by applying additional force to the piston. The imposed step change in the volume is accompanied by an increase in the system pressure; the new volume is fixed, but the system pressure can adjust. The principle asserts that the system will respond by decreasing its pressure. The system pressure will decrease if some of the ice melts. Melting ice consumes heat. Since we are now assuming that the system is thermally isolated, this heat cannot come from outside the system, which means that the temperature of the system must decrease. While the pressure can decrease from its initial non-equilibrium value, it cannot decrease to the value that it had in the original equilibrium position. We again conclude that increasing the pressure results in a decrease in temperature; that is, the melting point of ice increases as the pressure decreases.
Chemical reaction between gases
Chemical reaction between gases. Finally, suppose that we have a chemical equilibrium involving gaseous reagents. To be specific, let us again consider the reaction
$N_2O_4\ \left(g\right)\rightleftharpoons 2\ NO_2\ \left(g\right) \nonumber$
We suppose that this system is initially at equilibrium at some temperature and that we seek to increase the pressure while maintaining the temperature constant. (We can imagine that the system is contained in a cylinder that is closed by a piston. The cylinder is immersed in a constant-temperature bath. We increase the pressure by applying additional force to the piston. As in the examples above, we view this as a step change in volume that is accompanied by an increase of the pressure to a transitory non-equilibrium value.) The principle asserts that the system will respond by undergoing a change that opposes this pressure increase. The system can reduce its pressure by decreasing the number of moles of gas present, and it can do this by converting $NO_2$ molecules to $N_2O_4$ molecules. We conclude that there will be less $NO_2$ present at equilibrium at the higher pressure.
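We can make this conclusion quantitative with a short calculation; the algebra below is a sketch that again assumes ideal-gas behavior and is not part of the main development of the principle. If a fraction $\alpha$ of the $N_2O_4$ initially present dissociates, the equilibrium mole fractions are $x_{N_2O_4}=\left(1-\alpha \right)/\left(1+\alpha \right)$ and $x_{NO_2}=2\alpha /\left(1+\alpha \right)$, so that $K_P=\frac{P^2_{NO_2}}{P_{N_2O_4}}=\frac{4{\alpha }^2}{1-{\alpha }^2}P \nonumber$ and hence $\alpha =\sqrt{\frac{K_P}{K_P+4P}} \nonumber$ Since $K_P$ depends only on temperature, $\alpha$ must decrease when the total pressure $P$ increases; the quantitative result agrees with the qualitative prediction of Le Chatelier's principle.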
When we first encounter it, Le Chatelier’s principle seems to embody a remarkable insight. As, indeed, it does. However, as we think about it, we come to see it as a logical necessity. Suppose that the response of an equilibrium system to an imposed change were to augment the change rather than oppose it. Then an imposed change would reinforce itself. The slightest perturbation of any equilibrium system would cause the system to “run away” from that original position. Since no real system can be maintained at a perfectly constant set of conditions, any real system could undergo spontaneous change. Equilibrium would be unattainable. If we assume that a system must behave oppositely to the way that is predicted by Le Chatelier’s principle, we arrive at a prediction that contradicts our experience.
Le Chatelier’s principle is inherently qualitative. We will discuss it further after we develop the thermodynamic criteria for equilibrium. We will find that the thermodynamic criteria for equilibrium tell us quantitatively how two (or more) thermodynamic variables must change in concert if a system is to remain at equilibrium while also undergoing some change of condition.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/06%3A_Equilibrium_States_and_Reversible_Processes/6.06%3A_Le_Chatelier%27s_Principle.txt
|
If we are to model a physical system mathematically, we must abstract measurable properties from it—properties that we can treat as variables in our model. In Section 6.2 we found that the size of the system does not matter when we consider the variables that specify an equilibrium state. A half-size version of an equilibrium system has the same equilibrium properties. We can say that only intensive properties are relevant to the question of whether a system is at equilibrium.
The idea that we can subdivide a system without changing its equilibrium properties is subject to an important qualification. We intend that both subsystems be qualitatively equivalent to the original. For example, if we divide a system at vapor–liquid equilibrium into subsystems, each subsystem must contain some liquid and some vapor. If we subdivide it into one subsystem that is all liquid and another that is all vapor, the subsystems are not qualitatively equivalent to the original.
We can be more precise about the criterion we have in mind: An equilibrium system consists of one or more homogenous phases. Two systems can be in the same equilibrium condition only if all of the phases present in one are also present in the other. If a process changes the number of phases present in a system, we consider that the system changes from one kind of equilibrium system to a second one. We can describe one kind of equilibrium system by specifying a sufficient number of intensive variables. This description will be complete to within a specification of the exact amount of each phase present.
If we apply these ideas to a macroscopic sample of a pure gas, we know that we need four variables to completely describe the state of the gas: the number of moles of the gas, its pressure, its volume, and its temperature. This assumes that we are not interested in the motion of the container that contains the gas. It assumes also that no other extrinsic factors—like gravitational, electric, or magnetic fields—affect the behavior that we propose to model.
When we do experiments in which the amount, pressure, volume, and temperature of a pure gas vary, we find that we can develop an equation that relates the values that we measure. We call this an equation of state, because it is a mathematical model that describes the state of the system. In Chapter 2, we reviewed the ideal gas equation, van der Waals equation, and the virial equation; however, we can devise many others. Whatever equation of state we develop, we know that it must have a particular property: At constant pressure and temperature, the volume must be directly proportional to the number of moles. This means that any equation of state can be rewritten as a function of concentration. For the case of an ideal gas, we have $P=\left({n}/{V}\right)RT$, where ${n}/{V}$, the number of moles per unit volume, is the gas concentration. We see that any equation of state can be expressed as a function of three intensive variables: pressure, temperature, and concentration.
The existence of an equation of state means that only two of the three intensive variables that describe the gas sample are independent of one another. At equilibrium, a sample of pure gas has two degrees of freedom. Viewed as a statement about the mathematical model, this is true because knowledge of the equation of state and any two of the intensive variables enables us to calculate the third variable. Viewed as a statement about our experimental observations, this is true because, so long as the changes are consistent with the system remaining a gas, we can change any two of these variables independently. That only two are independent is shown experimentally by the observation that we can start with a fixed quantity of gas at any pressure, temperature, and concentration and find, after taking the system through any sequence of changes whatsoever, that returning to the original pressure and temperature also restores the original concentration.
In the experiment or in the mathematical model, fixing two of the three intensive variables is sufficient to fix the equilibrium properties of the system. Fixing the equilibrium properties means, of course, that the state of the system is fixed to within an arbitrary factor, which can be specified either as the number of moles present or as the system volume.
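As a concrete illustration of this statement, the short sketch below uses the ideal-gas equation of state (a model assumed purely for the example, as discussed above) to show that fixing any two of the intensive variables $P$, $T$, and $c=n/V$ fixes the third.

```python
# Sketch: for an ideal gas, P = c * R * T with c = n/V, so specifying any two of
# the intensive variables (P, T, c) determines the remaining one.  The gas is
# assumed ideal only for the sake of illustration.

R = 0.083145  # L bar mol^-1 K^-1

def concentration(P, T):
    """Concentration c = n/V (mol/L) from pressure (bar) and temperature (K)."""
    return P / (R * T)

def pressure(c, T):
    """Pressure (bar) from concentration (mol/L) and temperature (K)."""
    return c * R * T

c = concentration(P=1.00, T=298.15)
print(f"c = {c:.5f} mol/L")                  # the third variable follows from the other two
print(f"P = {pressure(c, 298.15):.3f} bar")  # and the relation can be inverted
```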
Similar results are obtained when we study the pressure–volume–temperature behavior of pure substances in condensed phases. At equilibrium, a pure liquid or a pure solid has two degrees of freedom.
If we consider a homogeneous mixture of two non-reacting gases, we discover that three variables are necessary to fix the equilibrium properties of the system. We must know the pressure and temperature of the system and the concentration of each gas. Because the mixture must obey an equation of state, determination of any three of these variables is sufficient to fix the value of the fourth. Note that we can conclude that three intensive variables are sufficient to determine the equilibrium properties of the system even if we do not have a mathematical model for the equation of state.
If we experiment with a system in which the liquid and vapor of a pure substance are in phase equilibrium with one another, we find that there is only one independent intensive variable. (Figure 4 illustrates this for water.) To maintain phase equilibrium, the system pressure must be the equilibrium vapor pressure of the substance at the system temperature. If we keep the pressure and temperature constant at equilibrium values, we can increase or decrease the concentration (moles per unit system volume) by removing or adding heat. In this process, we change one variable, concentration, while maintaining phase equilibrium.
If we keep the pressure constant and impose a temperature increase, vaporization continues (the concentration decreases) until the liquid phase is completely consumed. In this process, two variables change, and phase equilibrium cannot be maintained. To reach a new equilibrium state in which both liquid and gas are present at a higher temperature, we must increase the pressure to the new equilibrium vapor pressure; the magnitude of the temperature increase completely determines the required pressure increase. Two intensive variables change, but the changes are not independent.
If we have pure gas, there are two independent intensive variables. If we have pure liquid, there are two independent intensive variables. However, if we have liquid and gas in equilibrium with one another, there is only one independent intensive variable. In the liquid region of the water phase diagram, we can vary pressure and temperature and the system remains liquid water. Along the liquid-gas equilibrium line, we can vary the temperature and remain at equilibrium only if we simultaneously vary the pressure so as to remain on the liquid-gas equilibrium line.
Similar statements apply if we contrast varying pressure and temperature for the pure solid to varying the pressure and temperature along the solid-liquid or the solid-gas equilibrium line. At the triple point, nothing is variable. For a fixed quantity of water, the requirement that the system be at equilibrium at the triple point fixes the system pressure, temperature, and concentration. Evidently, maintaining a phase equilibrium in a system imposes a constraint that reduces the number of intensive variables that we can control independently.
The equilibrium between water and ice is completely unaffected by the state of subdivision of the ice. The ice can be present in a single lump or as a large number of very small pieces; from experience, we know that the equilibrium behavior of the system is the same so long as some ice and some water are both present. A system contains as many phases as there are kinds of macroscopic, homogeneous, bounded portions that are either solid, liquid, or gas.
If we add a lump of pure aluminum to our ice-water system, the new system contains three phases: water, ice, and aluminum. The equilibrium properties of the new system are the same if the aluminum is added as a ground-up powder. The powder contains many macroscopic, homogeneous, bounded portions that are aluminum, but each of these portions has the same composition; there is only one kind of aluminum particle. (Molecules on the surface of a substance can behave differently from those in the bulk. When a substance is very finely divided, the fraction of the molecules that is on the surface can become large enough to have a significant effect on the behavior of the system. In this book, we do not consider systems whose behavior is surface-area dependent.)
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/06%3A_Equilibrium_States_and_Reversible_Processes/6.07%3A_The_Number_of_Variables_Required_to_Specify_Some_Familiar_Systems.txt
|
Gibbs found an important relationship among the number of chemical constituents, the number of phases present, and the number of intensive variables that must be specified in order to characterize an equilibrium system. This number is called the number of degrees of freedom available to the system and is given the symbol $F$. By specifying $F$ intensive variables, we can specify the state of the system—except for the amount of each phase. The number of chemical constituents is called the number of components and is given the symbol $C$. The number of components is the smallest number of pure chemical compounds that we can use to prepare the equilibrium system so that it contains an arbitrary amount of each phase. The number of phases is given the symbol $P$. The relationship that Gibbs found between $C$, $P$, and $F$ is called Gibbs’ phase rule or just the phase rule. The phase rule applies to equilibrium systems in which any component can move freely between any two phases in which that component is present.
We suppose that the state of the system is a continuous function of its state functions. If $F$ intensive, independent variables, $X_1$, $X_2$, $\dots$, $X_F$, are sufficient to specify the state of an equilibrium system, then $X_1+dX_1$, $X_2+dX_2$, $\dots$, $X_F+dX_F$ specify an incrementally different equilibrium state of the same system. This means that the number of degrees of freedom is also the number of intensive variables that can be varied independently while the system changes reversibly—subject to the condition that there is no change in either the number or kinds of phases present. Moreover, if we keep the system’s intensive variables constant, we can change the size of any phase without changing the nature of the system. This means that Gibbs’ phase rule applies to any equilibrium system, whether it is open or closed.
A system containing only liquid water contains one component and one phase. By adjusting the temperature and pressure of this system, we can arrive at a state in which both liquid and solid are present. For present purposes, we think of this as a second system. Since the second system can be prepared using only liquid water (or, for that matter, only ice) it too contains only one component. However, since it contains both liquid and solid phases, the second system contains two phases. We see that the number of components required to prepare a system in such a way that it contains an arbitrary amount of each phase is not affected by phase equilibria. However, the number of components is affected by chemical equilibria and by any other stoichiometric constraints that we impose on the system. The number of components is equal to the number of chemical substances present in the system, less the number of stoichiometric relationships among these substances.
Let us consider an aqueous system containing dissolved acetic acid, ethanol, and ethyl acetate. For this system to be at equilibrium, the esterification reaction
$\ce{CH3CO2H + CH3CH2OH <=> CH3CO2CH2CH3 + H2O} \nonumber$
must be at equilibrium. In general we can prepare a system like this by mixing any three substances chosen from the set: acetic acid, ethanol, ethyl acetate, and water. Hence, there are three components. The esterification reaction, or its reverse, then produces an equilibrium concentration of the fourth substance. However, there is a special case with only two components. Suppose that we require that the equilibrium concentrations of ethanol and acetic acid be exactly equal. In this case, we can prepare the system by mixing ethyl acetate and water. Then the stoichiometry of the reaction assures that the concentration condition will be met; indeed, this is the only way that the equal-concentration condition can be met exactly.
In this example, there are four chemical substances. The esterification reaction places one stoichiometric constraint on the amounts of these substances that can be present at equilibrium, which means that we can change only three concentrations independently. The existence of this constraint reduces the number of components from four to three. An additional stipulation that the product concentrations be equal is a second stoichiometric constraint that reduces the number of independent components to two.
If we have a one-phase system at equilibrium, we see that the pressure, the temperature, and the $C$ component-concentrations constitute a set of variables that must be related by an equation of state. If we specify all but one of these variables, the remaining variable is determined, and can be calculated from the equation of state. There are $C+2$ variables, but the existence of the equation of state means that only $C+1$ of them can be changed independently. Evidently, the number of degrees of freedom for a one-phase system is $F=C+1$.
To find the number of degrees of freedom when $P$ such phases are in equilibrium with one another requires a similar but more extensive analysis. We first consider the number of intensive variables that are required to describe completely a system that contains $C$ components and $P$ phases, if the phases are not at equilibrium with one another. (Remember that the description we seek is complete except for a specification of the absolute amount of each phase present. For the characterization of equilibrium that we seek, these amounts are arbitrary.) In this case, each phase is a subsystem in its own right. Each phase can have a pressure, a temperature, and a concentration for each component. Each of these properties can have a value that is independent of its value in any other phase. There are $C+2$ variables for each phase or $P\left(C+2\right)$ variables for all $P$ phases. Table 1 displays these variables.
| | Phase 1 | Phase 2 | $\cdots$ | Phase $P$ |
|---|---|---|---|---|
| Pressure | $P_1$ | $P_2$ | $\cdots$ | $P_P$ |
| Temperature | $T_1$ | $T_2$ | $\cdots$ | $T_P$ |
| Component 1 | ${n_{1,1}}/{V_1}$ | ${n_{1,2}}/{V_2}$ | $\cdots$ | ${n_{1,P}}/{V_P}$ |
| Component 2 | ${n_{2,1}}/{V_1}$ | ${n_{2,2}}/{V_2}$ | $\cdots$ | ${n_{2,P}}/{V_P}$ |
| $\vdots$ | $\vdots$ | $\vdots$ | | $\vdots$ |
| Component $C$ | ${n_{C,1}}/{V_1}$ | ${n_{C,2}}/{V_2}$ | $\cdots$ | ${n_{C,P}}/{V_P}$ |
If the system is at equilibrium, there are numerous relationships among these $P\left(C+2\right)$ variables. We want to know how many independent relationships there are among them. Each such relationship decreases by one the number of independent intensive variables that are needed to specify the state of the system when all of the phases are at equilibrium. Let us count these relationships.
• The pressure must be the same in each phase. That is, $P_1=P_2$, $P_1=P_3$, $\dots$, $P_1=P_P$, $P_2=P_3$, $\dots$, $P_2=P_P$, etc. Since $P_1=P_2$ and $P_1=P_3$ imply that $P_2=P_3$, etc., there are only $P-1$ independent equations that relate these pressures to one another.
• The temperature must be the same in each phase. As for the pressure, there are $P-1$ independent relationships among the temperature values.
• The concentration of species $A$ in phase 1 must be in equilibrium with the concentration of species $A$ in phase 2, and so forth. We can write an equation for phase equilibrium involving the concentration of $A$ in any two phases; for example,
$K=\frac{\left({n_{A,2}}/{V_2}\right)}{\left({n_{A,1}}/{V_1}\right)} \nonumber$
(In Chapter 14, we will find that this requirement can be stated more rigorously using a thermodynamic function that we call the chemical potential. At equilibrium, the chemical potential of species $A$ must be the same in each phase.) For the $P$ phases, there are again $P-1$ independent relationships among the component-$A$ concentration values. This is true for each of the $C$ components, so the total number of independent relationships among the concentrations is $C\left(P-1\right)$.
While every component need not be present in each phase, there must be a finite amount of each phase present. Each phase must have a non-zero volume. To express this requirement using intensive variables, we can say that the sum of the concentrations in each phase must be greater than zero. For phase 1, we must have
$\left({n_{A,1}}/{V_1}\right)+\left({n_{B,1}}/{V_1}\right)+\dots +\left({n_{Z,1}}/{V_1}\right)>0 \nonumber$
and so on for each of the $P$ phases. There are $P$ such relationships that are independent of one another.
If we subtract, from the total number of relevant relationships, the number of independent relationships that must be satisfied at equilibrium, we find Gibbs’ phase rule: There are
\begin{align*} F &=P\left(C+2\right)-\left(P-1\right)-\left(P-1\right)-C\left(P-1\right)-P \\[4pt] &=2+C-P \end{align*}
independent intensive variables, or degrees of freedom, needed to describe the equilibrium system containing $C$ components and $P$ phases.
A component may not be present in some particular phase. If this is the case, the total number of relationships is one less than the number that we used above to derive the phase rule. The number of equilibrium constraints is also one less than the number we used. Consequently, the absence of a component from any particular phase has no effect on the number of degrees of freedom available to the system at equilibrium.
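The arithmetic of the phase rule is simple enough to tabulate. The short Python sketch below is our own illustration (the function name and the example list are not from the text); it applies $F=2+C-P$ to the systems discussed in this chapter.

```python
# A minimal sketch of Gibbs' phase rule, F = 2 + C - P, applied to systems
# discussed in this chapter. The function name and example list are our own.
def degrees_of_freedom(components: int, phases: int) -> int:
    """Number of intensive variables that can be varied independently at equilibrium."""
    return 2 + components - phases

examples = [
    ("pure gas", 1, 1),                                    # F = 2
    ("liquid water in equilibrium with its vapor", 1, 2),  # F = 1
    ("water at the triple point", 1, 3),                   # F = 0
    ("acetic acid + ethanol + ethyl acetate + water, one liquid phase", 3, 1),  # F = 4
]
for name, C, P in examples:
    print(f"{name}: C = {C}, P = {P}, F = {degrees_of_freedom(C, P)}")
```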
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/06%3A_Equilibrium_States_and_Reversible_Processes/6.08%3A_Gibbs%27_Phase_Rule.txt
|
When we think about a physical system that is undergoing a reversible change, we imagine that the system passes through a series of states. In each of these states, every thermodynamic variable has a well-defined value in every phase of the system. We suppose that successive states of the changing system are arbitrarily close to one another in the sense that the successive values of every thermodynamic variable are arbitrarily close to one another. These suppositions are equivalent to assuming that the state of the system and the value of every thermodynamic variable are continuous functions of time. Then every thermodynamic variable is either constant or a continuous function of other thermodynamic variables. When we talk about a reversible process, we have in mind a physical system that behaves in this way and in which an arbitrarily small change in one of the thermodynamic variables can reverse the direction in which other thermodynamic variables change.
A process that is not reversible is said to be irreversible. We distinguish between two kinds of irreversible processes. A process that cannot occur under a given set of conditions is said to be an impossible process. A process that can occur, but does not do so reversibly, is called a possible process or a spontaneous process.
Another essential characteristic of a reversible process is that changes in the system are driven by conditions that are imposed on the system by the surroundings. In our discussion of the phase equilibria of water, we note that the surroundings can transfer heat to the system only when the temperature of the surroundings is greater than that of the system. However, if the process is to be reversible, this temperature difference must be arbitrarily small, so that heat can be made to flow from the system to the surroundings by an arbitrarily small decrease in the temperature of the surroundings.
Similar considerations apply when the process involves the exchange of work between system and surroundings. We focus on changes in which the work exchanged between system and surroundings is pressure–volume work. A process can occur reversibly only if the pressure of the system and the pressure applied to the system by the surroundings differ by an arbitrarily small amount. To abbreviate these statements, we customarily introduce a figure of speech and say that, for a reversible process, $T_{system}=T_{surroundings}$ (or $T=\hat{T}$) and that $P_{system}=P_{surroundings}$ (or $P=P_{applied}$).
Since a reversible process involves a complementary exchange of energy increments between system and surroundings, it is evident that an isolated system cannot undergo a reversible change. Any change that occurs in an isolated system must be spontaneous. By the contrapositive, an isolated system that cannot undergo change must be at equilibrium.
While $T=\hat{T}$ and $P=P_{applied}$ are necessary conditions for a reversible process, they are not sufficient. A spontaneous process can occur under conditions in which the system temperature is arbitrarily close to the temperature of the surroundings and the system pressure is arbitrarily close to the applied pressure. Consider a mixture of hydrogen and oxygen in a cylinder closed by a frictionless piston. We suppose that the surroundings are maintained at a constant temperature and that the surroundings apply a constant pressure to the piston. We suppose that the system contains a small quantity of a poorly effective catalyst. By controlling the activity of the catalyst, we can arrange for the formation of water to occur at an arbitrarily slow rate—a rate so slow that the temperature and pressure gradients that occur in the neighborhood of the catalyst are arbitrarily small. Nevertheless, the reaction is a spontaneous process, not a reversible one. If the process were reversible, an arbitrarily small increase in the applied pressure would be sufficient to reverse the direction of reaction, causing water to decompose to hydrogen and oxygen.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/06%3A_Equilibrium_States_and_Reversible_Processes/6.09%3A_Reversible_vs._Irreversible_Processes.txt
|
We view a chemical system as a collection of substances that occupies some volume. Let us consider a closed system whose volume is variable, and in which no work other than pressure–volume work is possible. If this system is undergoing a reversible change, it is at equilibrium, and it is in contact with its surroundings. Because the system is at equilibrium, all points inside the system have the same pressure and the same temperature. Since the change is reversible, the interior pressure is arbitrarily close to the pressure applied to the system by the surroundings. If the reversibly changing system can exchange heat with its surroundings, the temperature of the surroundings is arbitrarily close to the temperature of the system. (If a process takes place in a system that cannot exchange heat with its surroundings, we say that the process is adiabatic.)
We can measure the pressure, temperature, and volume of such a system without knowing anything about its composition. For a system composed of a known amount of a single phase of a pure substance, we know from experience that any cyclic change in pressure or temperature restores the initial volume. That is, for a pure phase, there is an equation of state that we can rearrange as $V=V\left(P,T\right)$, meaning that specifying $P$ and $T$ is sufficient to specify $V$ uniquely.
For other reversible systems, the function $V=V\left(P,T\right)$ may not exist. For example, consider a system that consists of a known amount of water at liquid–vapor equilibrium and whose pressure and temperature are known. For this system, the volume can have any value between that of the pure liquid and that of the pure gas. Specifying the pressure and temperature of this system is not sufficient to specify its state. However, if we specify the temperature of this system, the pressure is fixed by the equilibrium condition; and if we specify the volume of the system, we can find how much water is in each phase from the known molar volumes of the pure substances at the system pressure and temperature. For the water–water-vapor equilibrium system, we can write $P=P\left(V,T\right)$.
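As a concrete sketch of this last point, the following Python fragment uses illustrative numbers of our own choosing (approximate molar volumes of liquid water and water vapor near 100 °C and 1 atm) to find the amount of water in each phase once the total amount, the temperature, and the system volume are specified.

```python
# A sketch with illustrative numbers (approximating water near 100 C and 1 atm;
# the values are our own, not from the text): once the total amount of water,
# the temperature, and the system volume are specified, the amount in each phase
# follows from the molar volumes of the pure liquid and the pure vapor.
n_total = 1.0          # mol of water (assumed)
Vm_liq = 0.0188        # L/mol, approximate molar volume of liquid water at 100 C
Vm_vap = 30.6          # L/mol, approximate molar volume of water vapor at 100 C, 1 atm
V = 10.0               # L, the specified system volume (assumed)

# Solve n_liq + n_vap = n_total and n_liq*Vm_liq + n_vap*Vm_vap = V
n_vap = (V - n_total * Vm_liq) / (Vm_vap - Vm_liq)
n_liq = n_total - n_vap
print(f"n_liq = {n_liq:.3f} mol, n_vap = {n_vap:.3f} mol")
```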
In each of these cases, we can view one of the variables as a function of the other two and represent it as a surface in a three dimensional space. The two independent variables define a plane. Projecting the system’s location in this independent-variable plane onto the surface establishes the value of the dependent variable. The two independent-variable values determine the point on the surface that specifies the state of the system. In the liquid–vapor equilibrium system, the pressure is a surface above the volume–temperature plane.
A complete description of the state of the system must also include the number of moles of liquid and the number of moles of vapor present. Each of these quantities can also be described as a surface in a three dimensional space in which the other two dimensions are volume and temperature. Duhem’s theorem asserts that these observations are special cases of a more general truth:
Duhem’s theorem
For a closed, reversible system in which only pressure–volume work is possible, specifying how some pair of state functions changes is sufficient to specify how the state of the system changes.
Duhem’s theorem asserts that two variables are sufficient to specify the state of the system in the following sense: Given the values of the system’s thermodynamic variables in some initial state, say $\{X_1, Y_1, Z_1, W_1,\dots\}$, specifying the change in some pair of variables, say $\Delta X$ and $\Delta Y$, is sufficient to determine the change in the remaining variables, $\Delta Z$, $\Delta W$, $\dots$, so that the system’s thermodynamic variables in the final state are $\{X_2, Y_2, Z_2, W_2,\dots\}$, where $W_2=W_1+\Delta W$, etc. The theorem does not specify which pair of variables is sufficient. In fact, from the discussion above of the variables that can be used to specify the state of a system containing only water, it is evident that a particular pair may not remain sufficient if there is a change in the number of phases present.
In Chapter 10, we see that Duhem’s theorem follows from the first and second laws of thermodynamics, and we consider the particular pairs of variables that can be used. For now, let us consider a proof of Duhem’s theorem for a system in which the pressure, temperature, volume, and composition can vary. We consider systems in which only pressure–volume work is possible. Let the number of chemical species present be $C^{'}$ and the number of phases be $P$. ($C$, the number of components in the phase rule, and $C^{'}$ differ by the number of stoichiometric constraints that apply to the system: $C$ is $C^{'}$ less the number of stoichiometric constraints.) We want to know how many variables can be changed independently while the system remains at equilibrium.
This is similar to the question we answered when we developed Gibbs’ phase rule. However, there are important differences. The phase rule is independent of the size of the system; it specifies the number of intensive variables required to prescribe an equilibrium state in which specified phases are present. The size of the system is not fixed; we can add or remove matter to change the size of any phase without changing the number of degrees of freedom. In the present problem, the system cannot exchange matter with its surroundings. Moreover, the number of phases present can change. We require only that any change be reversible, and a reversible process can change the number of phases. (For example, reversible vaporization can convert a two-phase system to a gaseous, one-phase system.)
We want to impose a change on an initial state of a closed system. This initial state is an equilibrium state, and we want to impose a change that produces a new Gibbsian equilibrium state of the same system. This means that the change we impose can neither eliminate an existing chemical species nor introduce a new one. A given phase can appear or disappear, but a given chemical species cannot.
We can find the number of independent variables for this system by an argument similar to the one we used to find the phase rule. To completely specify this system, we must specify the pressure, temperature, and volume of each phase. We must also specify the number of moles of each of $C^{'}$ chemical species in each phase. This means that $P\left(C^{'}+3\right)$ variables must be specified. Every relationship that exists among these variables decreases by one the number that are independent. The following relationships exist:
1. The pressure is the same in each phase. There are $P-1$ pressure constraints.
2. The temperature is the same in each phase. There are $P-1$ temperature constraints.
3. The volume of each phase is determined by the pressure, the temperature, and the number of moles of each species present in that phase. (In Chapter 14, we find that the volume of a phase, $V$, is given rigorously by the equation $V=\sum_k{n_k{\overline{V}}_k}$, where $n_k$ and ${\overline{V}}_k$ are the number of moles and the partial molar volume of the $k^{th}$ species in that phase. The ${\overline{V}}_k$ depend only on pressure, temperature, and composition.) For $P$ phases, there are $P$ constraints, one for the volume of each phase.
4. To completely specify the system, the concentration of each species must be specified in each phase. This condition creates $C^{'}P$ constraints. (We can also reach this conclusion by a slightly different argument. To specify the concentrations of $C^{'}$ species in some one phase requires $C^{'}$ constraints. A distribution equilibrium relates the concentrations of each species in every pair of phases. There are $P-1$ independent pairs of phases. For $C^{'}$ chemical species, there are $C^{'}\left(P-1\right)$ such constraints. This is equivalent to the requirement in our phase rule analysis that there are $C\left(P-1\right)$ equilibrium relationships among $C$ components in $P$ phases. In the present problem, the total number of concentration constraints is $C^{'}+C^{'}\left(P-1\right)=C^{'}P$.)
Subtracting the number of constraints from the number of variables, we find that there are
$P\left(C^{'}+3\right)-\left(P-1\right)-\left(P-1\right)-P-C^{'}P=2 \nonumber$
independent variables for a reversible process in a closed system, if all work is pressure–volume work. The number of independent variables is constant; it is independent of the species that are present and the number of phases.
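The counting can be checked symbolically. The short sketch below is our own verification, using the sympy library, that the difference between the number of variables and the number of constraints reduces to 2 for any $C^{'}$ and $P$.

```python
# A small symbolic check (our own, not part of the text): the variable count for
# Duhem's theorem reduces to 2 for any number of species C' and phases P.
import sympy as sp

Cp, P = sp.symbols("Cprime P", positive=True)
variables = P * (Cp + 3)                      # P, T, V, and C' amounts in each phase
constraints = (P - 1) + (P - 1) + P + Cp * P  # pressure, temperature, volume, concentration
print(sp.simplify(variables - constraints))   # prints 2
```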
It is important to appreciate that there is no conflict between Duhem’s theorem and the phase-rule conclusion that $F$ degrees of freedom are required to specify an equilibrium state of a system containing specified phases. When we say that specifying some pair of variables is sufficient to specify the state of a particular closed system undergoing reversible change, we are describing a system that is continuously at equilibrium as it goes from a first equilibrium state to a second one. Because it is closed and continuously in an equilibrium state, the range of variation available to the system is circumscribed in such a way that specifying two variables is sufficient to specify its state. On the other hand, when we say that $F$ degrees of freedom are required to specify an equilibrium state of a system containing specified phases, we mean that we must know the values of $F$ intensive variables in order to establish that the state of the system is an equilibrium state.
To illustrate the compatibility of these ideas and the distinction between them, let us consider a closed system that contains nitrogen, hydrogen, and ammonia gases. In the presence of a catalyst, the reaction
$\ce{N_2 + 3H2 <=> 2NH3} \nonumber$
occurs. For simplicity, let us assume that these gases behave ideally. (If the gases do not behave ideally, the argument remains the same, but more complex equations are required to express the equilibrium constant and the system pressure as functions of the molar composition.) This system has two components and three degrees of freedom. When we say that the system is closed, we mean that the total number of moles of the elements nitrogen and hydrogen are known and constant. Let these be $n_N$ and $n_H$, respectively. Letting the moles of ammonia present be $n_{NH_3}=x$, the number of moles of dihydrogen and dinitrogen are $n_{H_2}={\left(n_H-3x\right)}/{2}$ and $n_{N_2}={\left(n_N-x\right)}/{2}$, respectively.
If we know that this system is at equilibrium, we know that the equilibrium constant relationship is satisfied. We have
$K_P=\frac{P^2_{NH_3}}{P^3_{H_2}P_{N_2}}=\frac{n^2_{NH_3}}{n^3_{H_2}n_{N_2}}{\left(\frac{RT}{V}\right)}^{-2}=\frac{16x^2}{{\left(n_H-3x\right)}^3\left(n_N-x\right)}{\left(\frac{RT}{V}\right)}^{-2} \nonumber$
where $V$ is the volume of the system. The ideal-gas equilibrium constant is a function only of temperature. We assume that we know this function; therefore, if we know the temperature, we know the value of the equilibrium constant. The pressure of the system can also be expressed as a function of $x$ and $V$. We have
$P=P_{H_2}+P_{N_2}+P_{NH_3}=\left[\frac{\left(n_H+n_N\right)}{2}-x\right]\left(\frac{RT}{V}\right) \nonumber$
If we know the system pressure and we know that the system is at equilibrium, we can solve the equations for $K$ and $P$ simultaneously to find the unknowns $x$ and $V$. From these, we can calculate the molar composition of the system and the partial pressure of each of the gases. (We discuss ideal-gas equilibrium calculations in detail in Chapter 13.) Thus, if we know that the system is at equilibrium, knowledge of the pressure and temperature is sufficient to determine its composition and all of its other properties.
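The following Python sketch illustrates this calculation numerically, solving the $K_P$ and $P$ relationships simultaneously for $x$ and $V$. Every numerical value in it, including the equilibrium-constant value, is an assumption chosen only to make the example run; none of it is data from the text.

```python
# A numerical sketch of the equilibrium calculation described above.
# All numbers (amounts, T, P, and the K_P value) are assumptions.
from scipy.optimize import fsolve

R = 0.08206              # L atm mol^-1 K^-1
n_H, n_N = 6.0, 2.0      # total moles of elemental H and N in the closed system (assumed)
T, P = 700.0, 50.0       # temperature (K) and total pressure (atm) (assumed)
K_P = 1.0e-4             # assumed value of the ideal-gas equilibrium constant (atm^-2)

def equations(unknowns):
    x, V = unknowns                       # moles of NH3 and system volume (L)
    rt_over_v = R * T / V
    # equilibrium-constant relationship from the text
    eq1 = 16 * x**2 / ((n_H - 3*x)**3 * (n_N - x)) * rt_over_v**(-2) - K_P
    # total-pressure relationship from the text
    eq2 = ((n_H + n_N) / 2 - x) * rt_over_v - P
    return [eq1, eq2]

x, V = fsolve(equations, [0.1, 4.0])      # initial guesses for x and V (assumed)
print(f"x = {x:.3f} mol NH3, V = {V:.3f} L")
print(f"n_H2 = {(n_H - 3*x)/2:.3f} mol, n_N2 = {(n_N - x)/2:.3f} mol")
```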
If we do not know that this system is at equilibrium, but instead want to collect sufficient experimental data to prove that it is, the phase rule asserts that we must find the values of some set of three intensive variables. Two are not sufficient. From the perspective provided by the equations developed above, we can no longer use the equilibrium constant relationship to find $x$ and $V$. Instead, our problem is to find the composition of the system by other means, so that we can test for equilibrium by comparing the value of the quantity
$\frac{P^2_{NH_3}}{P^3_{H_2}P_{N_2}}=\frac{16x^2}{{\left(n_H-3x\right)}^3\left(n_N-x\right)}{\left(\frac{RT}{V}\right)}^{-2} \nonumber$
to the value of the equilibrium constant. We could accomplish this goal by measuring the values of several different combinations of three intensive variables. A convenient combination is pressure, temperature, and ammonia concentration, ${x}/{V}$. When we rearrange the equation for the system pressure to
$P=\left[\frac{\left(n_H+n_N\right)}{2V}-\left(\frac{x}{V}\right)\right]RT \nonumber$
it is easy to see that knowing $P$, $T$, and ${x}/{V}$ enables us to find the volume of the system. Given the volume, we can find the molar composition of the system and the partial pressure of each of the gases. With these quantities in hand, we can determine whether the equilibrium condition is satisfied.
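A short numerical sketch of this inference, with assumed values that continue the notation above: knowing $P$, $T$, and the ammonia concentration, we recover the volume, the composition, and the reaction quotient that we compare with the equilibrium constant.

```python
# A sketch (assumed numbers, continuing the notation above) of recovering V and
# the composition from measured P, T, and the ammonia concentration c = x/V,
# for a closed system with known total elemental amounts n_H and n_N.
R = 0.08206              # L atm mol^-1 K^-1
n_H, n_N = 6.0, 2.0      # moles of elemental H and N (assumed)
P, T = 50.0, 700.0       # measured pressure (atm) and temperature (K) (assumed)
c = 0.110                # measured ammonia concentration x/V in mol/L (assumed)

V = (n_H + n_N) / (2 * (P / (R * T) + c))   # from the rearranged pressure equation
x = c * V
Q = 16 * x**2 / ((n_H - 3*x)**3 * (n_N - x)) * (R * T / V)**(-2)
print(f"V = {V:.3f} L, x = {x:.3f} mol NH3, reaction quotient Q = {Q:.3e}")
# Comparing Q with the known K_P(T) tests whether the system is at equilibrium.
```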
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/06%3A_Equilibrium_States_and_Reversible_Processes/6.10%3A_Duhem%27s_Theorem_-_Specifying_Reversible_Change_in_A_Closed_Syste.txt
|
Let us explore our ideas about reversibility further by considering the familiar case of a bowling ball that can move vertically in the effectively constant gravitational field near the surface of the earth.
We begin by observing that we develop our description by abstracting from reality. We consider idealized models because we want to develop theories that capture the most important features of real systems. We ignore less important features. In the present example, we know that the behavior of the bowling ball will be slightly influenced by its frictional interaction with the surrounding atmosphere. (We attribute these interactions to a property of air that we call viscosity.) We assume that this effect can be ignored. This causes no difficulty so long as our experiments are too insensitive to observe the effects of this atmospheric drag. If necessary, of course, we could do our experiments inside a vacuum chamber, so that the system we study experimentally better meets the assumptions we make in our analysis. Alternatively, we could expand our theory to include the effects of atmospheric drag.
To raise an initially stationary bowling ball to a greater height requires that we apply a vertical upward force that exceeds the downward gravitational force on the ball. Let height increase in the upward direction, and let $h\left(t\right)$ and $v\left(t\right)$ be the height and (vertical) velocity of the ball at time $t$. Let the mass of the ball be $m$, and let the ball be at rest at time zero. Representing the initial velocity and height as $v_0$ and $h_0$, we have $v_0=v\left(0\right)=0$ and $h_0=h\left(0\right)=0$. Letting the gravitational acceleration be $g,$ the gravitational force on the ball is $f_{gravitation}=-mg$. To raise the ball, we must apply a vertical force, $f_{applied}>0$, that makes the net force on the ball greater than zero. That is, we require
$f_{net}=f_{applied}-mg>0 \nonumber$ so that $m\frac{d^2h}{dt^2}=f_{net} \nonumber$
If $f_{applied}$ is constant, $f_{net}$ is constant; we find for the height and velocity of the ball at any later time $t$,
$v\left(t\right)=\left(\frac{f_{net}}{m}\right)t \nonumber$
and
$h\left(t\right)=\left(\frac{f_{net}}{m}\right)\frac{t^2}{2} \nonumber$
Let us consider the state of the system when the ball reaches a particular height, $h_S$. Let the corresponding time, velocity, kinetic energy, and potential energy at $h_S$, be $t_S$, $v_S$, ${\tau }_S$, and ${\textrm{ʋ}}_S$, respectively. Since
$v_S=\left(\frac{f_{net}}{m}\right)t_S \nonumber$
and
$h_S=\left(\frac{f_{net}}{m}\right)\frac{t^2_S}{2} \nonumber$
we have
$\tau \left(h_S\right)=\frac{mv^2_S}{2} =\frac{m}{2}{\left(\frac{f_{net}}{m}\right)}^2t^2_S =f_{net}h_S \nonumber$
The energy we must supply to move the ball from height zero to $h_S$ is equal to the work done by the surroundings on the ball. The increase in the energy of the ball is $-\hat{w}$. At $h_S$ this input energy is present as the kinetic and potential energy of the ball. We have
$-\hat{w}=\int^{h_S}_{h=0}{f_{applied}dh} =\int^{h_S}_{h=0}{\left(f_{net}+mg\right)dh}=f_{net}h_S+mgh_S =\tau \left(h_S\right)+\textrm{ʋ}\left(h_S\right) \nonumber$
where the kinetic and potential energies are $\tau \left(h_S\right)=f_{net}h_S$ and $\textrm{ʋ}\left(h_S\right)=mgh_S$, respectively.
The ball rises only if the net upward force is positive: $f_{net}=f_{applied}-mg>0$. Then the ball arrives at $h_S$ with a non-zero velocity and kinetic energy. If we make $f_{net}$ smaller and smaller, it takes the ball longer and longer to reach $h_S$; when it arrives, its velocity and kinetic energy are smaller and smaller. However, no matter how long it takes the ball to reach $h_S$, when it arrives, its potential energy is $\textrm{ʋ}\left(h_S\right)=mgh_S$.
Now, let us consider the energy change in a process in which the ball begins at rest at height zero and ends at rest at $h_S$. At the end, we have $\tau \left(h_S\right)=0$. To effect this change in a real system, we must apply a net upward force to the ball to get it moving; later we must apply a net downward force to slow the ball in such a way that its velocity becomes zero at exactly the time that it reaches $h_S$. There are infinitely many ways we could apply forces to meet these conditions. The net change in the ball’s energy is the same for all of them.
We find it useful to use a hypothetical process to calculate this energy change. In this hypothetical process, the upward force is always just sufficient to oppose the gravitational force on the ball. That is, $f_{net}=0$ so that $f_{applied}=mg$, and from the development above $v_S=0$ and $\tau \left(h_S\right)=0$. Of course, in this limit $t_S=\infty$. This is a hypothetical process, because the ball would not actually move under these conditions. We see that the hypothetical process is the limiting case in a series of real processes in which we make $f_{net}>0$ smaller and smaller. In all of these processes, the potential energy change is
$\textrm{ʋ}\left(h_S\right)=\int^{h_S}_{h=0}{mg\,dh}=mgh_S \nonumber$
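A short numerical illustration of this limit, with assumed values for the mass, the gravitational acceleration, and $h_S$: as $f_{net}$ is made smaller, the kinetic energy delivered to the ball at $h_S$ vanishes, while the potential-energy change remains $mgh_S$.

```python
# A numerical illustration with assumed values (m, g, h_S are our own choices):
# as f_net is made smaller, the time to reach h_S grows and the kinetic energy at
# h_S shrinks toward zero, while the potential-energy change stays fixed at m*g*h_S.
m, g, h_S = 7.0, 9.8, 1.0                    # kg, m s^-2, m
for f_net in (10.0, 1.0, 0.1, 0.01):         # net upward force in newtons
    t_S = (2 * m * h_S / f_net) ** 0.5       # from h_S = (f_net/m) t_S^2 / 2
    kinetic = f_net * h_S                    # tau(h_S) = f_net * h_S
    potential = m * g * h_S                  # independent of f_net
    print(f"f_net = {f_net:6.2f} N   t_S = {t_S:7.2f} s   "
          f"KE = {kinetic:7.3f} J   PE = {potential:.2f} J")
```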
If the ball is stationary and $f_{applied}=mg$, the ball remains at rest, whatever its height. If we make $f_{applied}>mg$, the ball rises. If we make $f_{applied}<mg$, the ball falls. If $f_{net}\approx 0$ and the ball is moving only slowly in either direction, a very small change in $f_{net}=f_{applied}-mg$ can be enough to reverse the direction of motion. These are the characteristics of a reversible process: an arbitrarily small change in the applied force changes the direction of motion.
The advantage of working with the hypothetical reversible process is that the integral of the applied force over the distance through which it acts is the change in the potential energy of the system. While we cannot actually carry out a reversible process, we can compute the work that must be done if we know the limiting force that is required in order to effect the change. This is true because the velocity and kinetic energy of the ball are zero throughout the process. When the process is reversible, the change in the potential energy of the ball is equal to the work done on the ball; we have
$-\hat{w}\left(h_S\right)=w\left(h_S\right)=\textrm{ʋ}\left(h_S\right) \nonumber$
Gravitational potential energy is an important factor in some problems of interest in chemistry. Other forms of potential energy are important much more often. Typically, our principal interest is in the potential energy change associated with a change in the chemical composition of a system. We are seldom interested in the kinetic energy associated with the motion of a macroscopic system as a whole. We can include effects that arise from gravitational forces or from the motion of the whole system in our thermodynamic models, but we seldom find a need to do so. For systems in which the motion of the whole system is important, the laws of mechanics are usually sufficient; we find out what we want to know about such systems by solving their equations of motion.
When we discuss the first law of thermodynamics, we write $\Delta E=q+w$ (or $dE=dq+dw$) for the energy change that accompanies some physical change in a system. Since chemical applications rarely require that we consider the location of the system or the speed with which it may be moving, “$w$” usually encompasses only work that changes the energy of the system itself. Then, $E$ designates the energy of the macroscopic system itself. As noted earlier, we often recognize this by calling the energy of the system its internal energy. Some writers use the symbol $U$ to represent the internal energy, intending thereby to make it explicit that the energy under discussion is independent of the system’s location and motion.
6.12: Equilibria and Reversible Processes
The distinction between a system at equilibrium and a system undergoing reversible change is razor-thin. The distinction lies in the way we choose to define the system and centers on the origin of the forces that affect its energy. For a system at equilibrium, the forces are fixed. For a system undergoing reversible change, some of the forces originate in the surroundings, and those that do are potentially variable.
To raise a bowling ball reversibly, we apply an upward force, \(+mg\), exactly equal and opposite to the downward force, \(-mg\), due to gravity. At any point in this reversible motion, the ball is stationary, which is the reason we say that a reversible process is a hypothetical change. If we were to change the system slightly, by adding a shelf to support the ball at exactly the same height, the forces on the ball would be the same; however, the forces would be fixed and we would say that the ball is at equilibrium.
We can further illustrate this distinction by returning to the water–water-vapor system. If an unchanging water–water-vapor mixture is enclosed in a container whose dimensions are fixed (like a sealed glass bulb) we say that the system is at equilibrium. If a piston encloses the same collection of matter, and the surroundings apply a force on the piston that balances the pressure exerted by the mixture, we can say that the system is changing reversibly.
In Section 1.6, we used the term “primitive equilibrium” to refer to an equilibrium state in which all of the state functions are fixed. A system that can undergo reversible change without changing the number or kinds of phases present can be in an infinite number of such states. Since the set of such primitive equilibrium states encompasses the accessible equilibrium conditions in the sense of Gibbs’ phase rule, we can call this set a Gibbsian equilibrium manifold.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/06%3A_Equilibrium_States_and_Reversible_Processes/6.11%3A_Reversible_Motion_of_A_Mass_in_A_Constant_Gravitational_Field.txt
|
We usually consider that the first, second, and third laws of thermodynamics are basic postulates. One of our primary objectives is to understand the ideas that are embodied in these laws. We introduce these ideas here, using statements of the laws of thermodynamics that are immediately applicable to chemical systems. In the next three chapters, we develop some of the most important consequences of these ideas. In the course of doing so, we examine other ways that these laws have been stated.
The first law deals with the definition and properties of energy. The second and third laws deal with the definition and properties of entropy. The laws of thermodynamics assert that energy and entropy are state functions. In the next chapter, we discuss the mathematical properties of state functions. Energy and entropy changes are defined in terms of the heat and work exchanged between a system and its surroundings. We adopt the convention that heat and work are positive if they increase the energy of the system. In a process in which a closed system accepts increments of heat, $dq$, and work, $dw$, from its surroundings, we define the changes in the energy, $dE$, and the entropy, $dS$, of the system in terms of $dq$, $dw$, and the temperature.
The meaning of the first law is intimately related to a crucial distinction between the character of energy on the one hand and that of the variables heat and work on the other. When we say that energy is a state function, we mean that the energy is a property of the system. In contrast, heat and work are not properties of the system; rather they describe a process in which the system changes. When we say that the heat exchanged in a process is $q$, we mean that $q$ units of thermal energy are transferred from the surroundings to the system. If $q>0$, the energy of the system increases by this amount, and the energy of the surroundings decreases by the same amount. $q$ has meaning only as a description of one aspect of the process.
When the process is finished, the system has an energy, but $q$ exists only as an accounting record. Like the amount on a cancelled check that records how much we paid for something, $q$ is just a datum about a past event. Likewise, $w$ is the record of the amount of non-thermal energy that is transferred. Because we can effect the same change in the energy of a system in many different ways, we have to measure $q$ and $w$ for a particular process as the process is taking place. We cannot find them by making measurements on the system after the process has gone to completion.
In Section 6.1, we introduce a superscripted caret to denote a property (state function) of the surroundings. Thus, $E$ is the energy of the system; $\hat{E}$ is the energy of the surroundings; $dE$ is an incremental change in the energy of the system; and $d\hat{E}$ is an incremental change in the energy of the surroundings. If we are careful to remember that heat and work are not state functions, it is useful to extend this notation to increments of heat and work. If $q$ units of energy are transmitted to the system as heat, we let $\hat{q}$ be the thermal energy transferred to the surroundings in the same process. Then $\hat{q}=-q$, and $\hat{q}+q=0$. Likewise, we let $w$ be the work done on the system and $\hat{w}$ be the work done on the surroundings in the same process, so that $\hat{w}=-w$, and $\hat{w}+w=0$. Unlike $E$ and $\hat{E}$, which are properties of different systems, $q$ and $\hat{q}$ (or $w$ and $\hat{w}$) are merely alternative expressions of the same thing—the quantity of energy transferred as heat (or work).
We define the incremental change in the energy of a closed system as $dE=dq+dw$. The accompanying change in the energy of the surroundings is $d\hat{E}=d\hat{q}+d\hat{w}$, so that $dE+d\hat{E}=0$. Whereas $\hat{q}+q=0$ (or $d\hat{q}+dq=0$) is a tautology, because it merely defines $\hat{q}$ as $-q$, the first law asserts that $dE_{universe}=dE+d\hat{E}$ is a fundamental property of nature. Any increase in the energy of the system is accompanied by a decrease in the energy of the surroundings, and conversely. Energy is conserved; heat is not; work is not.
The first law of thermodynamics
In a process in which a closed system accepts increments of heat, ${dq}$, and work, ${dw}$, from its surroundings, the change in the energy of the system, ${dE}$, is $dE = dq + dw$. Energy is a state function. For any process, $dE_{universe} =0.$
For a reversible process in which a system passes from state A to state B, the amount by which the energy of the system changes is the line integral of $dE$ along the path followed. Denoting an incremental energy change along this path as $d_{AB}E$, we have $\Delta_{AB}E=\int^B_A{d_{AB}E}$. (We review line integrals in the next chapter.) The energy change for the surroundings is the line integral of $d\hat{E}$ along the path followed by the surroundings during the same process:
$\Delta_{AB}\hat{E}=\int^B_A{d_{AB}\hat{E}}. \nonumber$
For any process in which energy is exchanged with the surroundings, the change in the system’s energy is $\Delta E=q+w$, where $q$ and $w$ are the amounts of thermal and non-thermal energy delivered to the system. We can compute $\Delta E$ from $q$ and $w$ whether the process is reversible or irreversible.
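A trivial numerical illustration of this bookkeeping, with assumed values of $q$ and $w$: two different processes connecting the same pair of states can exchange different amounts of heat and work, yet give the same $\Delta E$, and in each case the energy changes of system and surroundings sum to zero.

```python
# Two hypothetical processes (numbers assumed) that take the same closed system
# between the same two states: q and w differ from one path to the other, but
# Delta E = q + w is the same, and the energy of the universe is unchanged.
processes = {
    "path 1": {"q": 150.0, "w": -50.0},   # J: heat in, work done by the system
    "path 2": {"q": 60.0, "w": 40.0},     # J: less heat in, work done on the system
}
for name, p in processes.items():
    dE = p["q"] + p["w"]                  # energy change of the system
    dE_hat = -p["q"] - p["w"]             # energy change of the surroundings
    print(f"{name}: q = {p['q']:6.1f}, w = {p['w']:6.1f}, "
          f"Delta E = {dE:6.1f}, Delta E + Delta E_hat = {dE + dE_hat:.1f}")
```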
In contrast, the definition of entropy change applies only to reversible processes. In a process in which a system reversibly accepts an increment of heat, $dq^{rev}$, from its surroundings, the entropy change is defined by $dS={dq^{rev}}/{T}$. (We introduce the superscript, “rev”, to distinguish heat and work exchanged in reversible processes from heat and work exchanged in irreversible, “irrev”, or spontaneous, “spon”, processes.) When a system passes reversibly from state A to state B, the entropy change for the system is the line integral of $d_{AB}S={d_{AB}q^{rev}}/{T}$ along the path followed:
$\Delta_{AB}S=\int^B_A{d_{AB}q^{rev}/T}. \nonumber$
The entropy change for the surroundings is defined by the same relationship,
$d_{AB}\hat{S}=d_{AB}{\hat{q}^{rev}}/{T}. \nonumber$
Every system has an entropy. The entropies of the system and of its surroundings can change whenever a system undergoes a change. If the change is reversible, $\Delta S=-\Delta \hat{S}$.
The second law of thermodynamics
In a reversible process in which a closed system accepts an increment of heat, $dq^{rev}$, from its surroundings, the change in the entropy of the system, $dS$, is $dS=dq^{rev}/T$. Entropy is a state function. For any reversible process, $dS_{universe} = 0$, and conversely. For any spontaneous process, $dS_{universe}>0$, and conversely.
We define the entropy change of the universe by $dS_{universe}=dS+d\hat{S}$; it follows that $\Delta_{AB}S_{universe}=\Delta_{AB}S+\Delta_{AB}\hat{S}$ for any process in which a system passes from a state A to a state B, whether the process is reversible or not. Since $dS_{universe}=0$ for every part of a reversible process, we have $\Delta S_{universe}=0$ for any reversible process. Likewise, since $dS_{universe}>0$ for every part of a spontaneous process, we have $\Delta S_{universe}>0$ for any spontaneous process.
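As a small numerical check of the definition, the sketch below is our own example; it assumes a reversible heating with a constant heat capacity $C$, so that $dq^{rev}=C\,dT$, and compares the summed increments $dq^{rev}/T$ with the exact result $C\ln(T_2/T_1)$.

```python
# A numerical check (our own example) of dS = dq_rev / T for a reversible heating
# from T1 to T2, assuming a constant heat capacity C so that dq_rev = C dT.
# The exact value of the line integral is then C ln(T2/T1).
import math

C = 75.3                      # J mol^-1 K^-1, an assumed constant heat capacity
T1, T2 = 298.15, 350.0        # K
n_steps = 100_000
dT = (T2 - T1) / n_steps

dS_numeric = sum(C * dT / (T1 + (i + 0.5) * dT) for i in range(n_steps))
dS_exact = C * math.log(T2 / T1)
print(f"numerical: {dS_numeric:.6f} J/K   exact: {dS_exact:.6f} J/K")
```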
The third law deals with the properties of entropy at temperatures in the neighborhood of absolute zero. It is possible to view the third law as a statement about the properties of the temperature function. It is also possible to view it as a statement about the properties of heat capacities. A statement in which the third law attributes particular properties to the entropy of pure substances is directly applicable to chemical systems. This statement is that of Lewis and Randall${}^{3}$:
The third law of thermodynamics
If the entropy of each element in some crystalline state be taken as zero at the absolute zero of temperature, every substance has a positive finite entropy; but at the absolute zero of temperature the entropy may become zero, and does so become in the case of perfect crystalline substances.
The Lewis and Randall statement focuses on the role that the third law plays in our efforts to express the thermodynamic properties of pure substances in useful ways. To do so, it incorporates a matter of definition when it stipulates that “the entropy of each element be taken as zero at the absolute zero of temperature.” The third law enables us to find thermodynamic properties (“absolute entropies” and Gibbs free energies of formation) from which we can make useful predictions about the equilibrium positions of reactions. The third law can be inferred from experimental observations on macroscopic systems. It also arises in a natural way when we develop the theory of statistical thermodynamics. In both developments, the choice of zero for the entropy of “each element in some crystalline state” at absolute zero is—while arbitrary—logical, natural, and compellingly convenient.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/06%3A_Equilibrium_States_and_Reversible_Processes/6.13%3A_The_Laws_of_Thermodynamics.txt
|
When the state of an isolated system can change, we say that the system is capable of spontaneous change. When an isolated system is incapable of spontaneous change, we say that it is at equilibrium. Ultimately, this statement defines what we mean by (primitive) equilibrium. From our statement of the second law of thermodynamics, we have criteria for spontaneous change and for equilibrium in any macroscopic system:
• An isolated system can undergo any change that results in an increase in the entropy of the system. The converse is also true; an isolated system whose entropy can increase can undergo change. Any such change is said to be spontaneous. If an isolated system cannot change in such a way that its entropy increases, the system cannot change at all and is said to be at equilibrium.
• A system that is not isolated can undergo any change that results in an increase in the entropy of the universe, and conversely. Such changes are also said to be spontaneous.
• If a system that is not isolated undergoes a change, but the entropy of the universe remains constant, the change is not spontaneous. The entropy changes for the system and the surroundings are equal in magnitude and opposite in sign. The change is said to be reversible.
Although the first statement applies to isolated systems and the second applies to systems that are not isolated, we usually consider that both are statements of the same criterion, because the second statement follows from the first when we view the universe as an isolated system. We can restate these criteria for spontaneous change and equilibrium using the compact notation that we introduce in Section 6.13.
From our definitions, any change that occurs in an isolated system must be spontaneous. From our statement of the second law, the entropy of the universe must increase in any such process. To indicate this, we write $\Delta S_{universe}>0$. The surroundings must be unaffected by any change in an isolated system; hence, none of the surroundings’ state functions can change. Thus, $\Delta \hat{S}=0$, and since $\Delta S+\Delta \hat{S}=\Delta S_{universe}>0$, we have $\Delta S>0$.
For a spontaneous change in a system that is not isolated, $\Delta S$ can be greater or less than zero. However, $\Delta S$ and $\Delta \hat{S}$ must satisfy
$\Delta S+\Delta \hat{S}=\Delta S_{universe}>0. \nonumber$
In a system that is not isolated, reversible change may be possible. A system that undergoes a reversible change is at—or is arbitrarily close to—one of its equilibrium states during every part of the process. For a reversible change, it is always true that $\Delta S=-\Delta \hat{S}$, so that
$\Delta S+\Delta \hat{S}=\Delta S_{universe}=0. \nonumber$
Our criteria for change are admirably terse, but to appreciate them we need to understand precisely what is meant by “entropy”. To use the criteria to make predictions about a particular system, we need to find the entropy changes that occur when the system changes. To use these ideas to understand chemistry, we need to relate these statements about macroscopic systems to the properties of the molecules that comprise the system.
Since an isolated system does not interact with its surroundings in any way, no change in an isolated system can cause a change in its surroundings. If an isolated system is at equilibrium, no change is possible, and hence there is no system change for which the entropy of the universe can increase. Evidently, the entropy of the universe is at a maximum when the system is at equilibrium.
Typically, we are interested in what happens when the interaction between the system and surroundings serves to impose conditions on the final state of a system. A common example of such conditions is that the surroundings maintain the system at a constant pressure, while providing a constant-temperature heat reservoir, with which the system can exchange heat. In such cases, the system is not isolated. It turns out that we can use the entropy criterion to develop supplemental criteria based on other thermodynamic functions. These supplemental criteria provide the most straightforward means to discuss equilibria and spontaneous change in systems that are not isolated. Which thermodynamic function is most convenient depends upon the conditions that we impose on the system.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/06%3A_Equilibrium_States_and_Reversible_Processes/6.14%3A_Thermodynamic_Criteria_for_Change.txt
|
In this chapter, we introduce ideas that underlie classical thermodynamics. Because the development of classical thermodynamics relies on the properties of reversible processes, we have devoted considerable attention to specifying what we mean by a reversible process and to the relationship between reversible processes and the equilibrium states available to a system. In Section 7.17, we develop this theory. The development assumes that we can measure the heat and work exchanged between a system and its surroundings. It assumes that we can measure the state functions volume, pressure, temperature, and the amounts (moles) of chemical substances in a system. We define other state functions, whose values can be computed from these measurable quantities. From observations that we make on systems that are undergoing reversible change, we develop numerous relationships among these state functions.
No real system can change in exactly the manner we have in mind when we talk about a reversible process. Strictly speaking, any process that actually occurs must be spontaneous. The idea of reversible change is clearly an abstraction from reality. Nevertheless, we can determine—to a good approximation—the way in which one variable depends on another in a reversible process. We accomplish this by making measurements on a real system whose behavior approximates the ideal of reversibility as closely as possible. We express the—approximate—result of any such measurement as a number. Normally, we view the approximate character of the number to be a consequence of experimental error. When we say that we make measurements on a system that is undergoing a reversible change, we mean that we are making the measurements on a process that satisfies our definition of reversibility closely enough for the purpose at hand.
There are two reasons for the fact that reversible processes play an essential role in the development of the equations of thermodynamics. The first is that we can measure the entropy change for a process only if the process is reversible. The second and subtler reason is that an intensive variable may not have a unique value in a system that is undergoing a spontaneous change. If the temperature, the pressure, or the concentration of a component varies from point to point within the system, then that state function does not have a unique value, and we cannot use it to model the change. This occurs, for example, when gasoline explodes in a cylinder of a piston engine. The system consists of the contents of the cylinder. At any given instant, the pressure, temperature, and component concentrations vary from place to place within the cylinder. In general, no single value of any of these intensive variables is an adequate approximation for use in the thermodynamic equations that characterize the system as a whole.
To explore this idea further, let us think about measuring changes in extensive state functions during a spontaneous process. Since we are free to define the system as we please, we can choose a definition that makes the volume readily measurable. In the piston-engine example, there is no ambiguity about the volume of the system at any instant. While point-to-point variability means that the concentrations of the chemical components are not defined for the system as a whole, we are confident that there is some specific number of moles of each component present in the system at every instant. We can reach this conclusion by imagining that we can instantaneously freeze the composition by stopping all reactions. We can then find the number of moles of each component at our leisure.
If the system is not too inhomogeneous, we can devise an alternative procedure for making—in concept—such composition measurements. We imagine dividing the system into a large number of macroscopic subsystems. Each of these subsystems has a well-defined volume. We suppose also that each of them has well-defined thermodynamic functions at any given instant; that is, we assume that the pressure, temperature, and concentrations are approximately homogeneous within each of these subsystems. If this condition is satisfied, we can sum up the number of moles of a component in each of the sub-volumes to obtain the number of moles of that component in the whole system.
We can make a similar argument for any extensive thermodynamic function, so it applies to the energy, entropy, enthalpy, and the Helmholtz and Gibbs free energies. As long as the point-to-point variability within the system is small enough so that a division of the system into macroscopic subsystems produces subsystems that are approximately homogeneous, we can find the value for any extensive thermodynamic function in each individual sub-system and for the system as a whole. The measurement we propose has the character of a gedanken experiment. We can describe a procedure for making the measurement, whether we can actually perform the procedure or not.
This argument does not work for intensive thermodynamic functions. It is true that we could produce a weighted-average value for the temperature by multiplying the temperature of each subsystem by the subsystem volume, adding up the products, and dividing the sum by the volume of the whole system; however, the result would not be an intensive property of the whole system. For one thing, we could produce a different average temperature for every extensive variable by using it rather than the volume as the weighting factor in the average-temperature computation. Moreover, no such weighted-average temperature can reflect the fact that different temperatures in different subsystems result in grossly different reaction rates. No single temperature represents the state of the whole system, and we can make the same statement about any other intensive thermodynamic function. For a non-homogeneous system that can be subdivided into approximately homogeneous macroscopic subsystems, we can measure, in principle, the values of the system’s extensive state functions; however, its intensive state functions are essentially undefined.
On the other hand, we may be able to assume that an effectively homogeneous subsystem of macroscopic proportions does have well-defined extensive and intensive state functions, even if it is not in an equilibrium state. While a spontaneously changing system need not be homogenous, we commonly encounter systems that are homogeneous to within some arbitrarily small deviation. Consider a closed and well-stirred system in which some chemical reaction is occurring slowly. We immerse this system in a constant-temperature bath, and arrange for the applied pressure to be constant. From experience, we know that the temperature and pressure within such a system will be essentially constant, equal to the bath temperature and the applied pressure, respectively, and homogeneous throughout the system. In such a system, the temperature and the pressure of the system are at equilibrium with those imposed by the surroundings. The chemical process is not at equilibrium, but the component concentrations are homogeneous.
An important question now arises: Are all of the equations of equilibrium thermodynamics applicable to a system in which some processes occur spontaneously? That they are not is evident from the fact that we can calculate an entropy change from its defining equation, $dS={dq^{rev}}/{T}$, only if the behavior of the system is reversible.
Nevertheless, we will find that the relationships among state functions that we derive for reversible processes can be augmented to describe spontaneous processes that occur in homogeneous systems. The necessary augmentation consists of the addition of terms that express the effects of changing composition. (In Section 9.14, we develop the fundamental equation,
$dE=TdS-PdV+dw_{NPV} \nonumber$
which applies to any reversible process in a closed system. In Section 14.1, we infer that the fundamental equation becomes
$dE=TdS-PdV+dw_{NPV}+\sum_i{{\mu }_i{dn}_i} \nonumber$
for a spontaneous process in which ${\mu }_i$ and $dn_i$ are the chemical potential and the change in the number of moles of component $i$, respectively.)
The distinction between reversible and spontaneous processes plays a central role in our theory. We find a group of relationships that express this distinction, and we call these relationships criteria for change. (In Section 9.19, we find that ${\left(dE\right)}_{SV}=dw_{NPV}$ if and only if the process is reversible, while ${\left(dE\right)}_{SV}<dw_{NPV}$ if and only if the process is spontaneous. We find a close connection between the criteria for change and the composition-dependent terms that are needed to model the thermodynamic functions during spontaneous processes. We find that $\sum_i{{\mu }_i{dn}_i}=0$ if and only if the process is reversible, while $\sum_i{{\mu }_i{dn}_i}<0$ if and only if the process is spontaneous.)
In thinking about spontaneous processes, we should also keep in mind that the validity of our general relationships among state functions does not depend on our ability to measure the state functions of any particular state of a system. For example, we can find relationships among the molar volume and other thermodynamic properties of liquid water. Liquid water does not exist at 200 °C and 1 bar, so we cannot undertake to measure its thermodynamic properties. However, by using our relationships among state functions and properties that we measure for liquid water where it does exist, we can estimate the thermodynamic properties of liquid water at 200 °C and 1 bar. The results are two steps removed from reality; they are the estimated properties of a hypothetical substance. Nevertheless, they have predictive value; for example, we can use them to predict that liquid water at 200 °C and 1 bar will spontaneously vaporize to form gaseous water at 1 bar. The equations of thermodynamics are creatures of theory. We should not expect every circumstance that is described by the theory to exist in reality. What we require is that the theory accurately describe every circumstance that actually occurs.
To develop the equations of classical thermodynamics, we consider reversible processes. We then find general criteria for change that apply to any sort of change in any system. Later, we devise criteria based on the changes that occur in the composition of the system. In this book, we consider such composition-based criteria only for homogeneous systems. An extensive theory${}^{4,5}$ has been developed to model spontaneous processes in systems that are not necessarily homogeneous. This theory is often called irreversible thermodynamics or non-equilibrium thermodynamics. Development of this theory has led to a wide variety of useful insights about various molecular processes. However, much of what we are calling classical thermodynamics also describes irreversible processes. Even as we develop our theory of reversible thermodynamics, we use arguments that apply the equations we infer from reversible processes to describe closely related systems that are not at equilibrium.
6.16: Problems
Use Le Chatelier’s principle to answer questions 1-8.
1. One gram of iodine just dissolves in $2950$ mL of water at ambient temperatures. One gram of iodine is added to $1000$ mL of pure water and the resulting system is allowed to come to equilibrium. When equilibrium is reached, will all of the iodine have dissolved? What will happen if a small amount of water is added to the equilibrated system?
2. A saturated solution of barium sulfate is in contact with excess solid barium sulfate. $BaSO_4\left(s\right)\rightleftharpoons Ba^{2+}+SO^{2-}_4 \nonumber$ A small amount of a concentrated solution of $BaCl_2$ is added. How does the system respond?
3. In the presence of a catalyst, oxygen reacts with sulfur dioxide to produce sulfur trioxide. ${SO}_2\left(g\right)+\textrm{½}O_2\left(g\right)\rightleftharpoons SO_3\left(g\right) \nonumber$ A particular system consists of an equilibrated mixture of these three gases. A small amount of oxygen is added. How does the system respond?
4. A system containing these three gases is at equilibrium. ${SO}_2\left(g\right)+\textrm{½}O_2\left(g\right)\rightleftharpoons SO_3\left(g\right) \nonumber$ We suddenly decrease the volume of this system. How does the system respond?
5. In the presence of a catalyst, oxygen reacts with nitrogen to produce nitric oxide. $N_2\left(g\right)+O_2\left(g\right)\rightleftharpoons 2NO\left(g\right) \nonumber$ A particular system consists of an equilibrated mixture of these three gases. While keeping the temperature constant, we suddenly increase the volume of this system. How does the system respond?
6. Nitric oxide formation $N_2\left(g\right)+O_2\left(g\right)\rightleftharpoons 2NO\left(g\right) \nonumber$ is endothermic. (At constant temperature, the system absorbs heat as reaction occurs from left to right.) How does the position of equilibrium change when we increase the temperature of this system?
7. Pure water dissociates to a slight extent, producing hydronium, $H_3O^+$, and hydroxide, $OH^-$, ions. This reaction is called the autoprotolysis of water. ${2H_2O\rightleftharpoons H}_3O^++OH^- \nonumber$ Is the autoprotolysis reaction endothermic or exothermic? (What happens to the temperature when we mix an acid with a base?) How does the autoprotolysis equilibrium change when we increase the temperature of pure water?
8. At the melting point, most substances are more dense in their solid state than they are in their liquid state. Such a substance is at its melting point at a particular pressure. Suppose that we now increase the pressure on this system. Does the melting point of the substance increase or decrease?
For each of the systems 9 – 20, specify
(a) what phases are present,
(b) the number of phases, $P$,
(c) the substances that are present,
(d) the number of components*, $C$, and
(e) the number of degrees of freedom, $F$.
Assume that the temperature and pressure of each system is constant and that all relevant chemical reactions are at equilibrium.
9. Pure helium gas, $He$, sealed in a glass bulb.
10. A mixture of helium gas, $He$, and neon gas, $Ne$, sealed in a glass bulb.
11. A mixture of $N_2O_4$ gas and $NO_2$ gas sealed in a glass bulb. These compounds react according to the equation $N_2O_4\left(g\right)\rightleftharpoons 2NO_2\left(g\right) \nonumber$
12. A mixture of $N_2O_4$ gas, $NO_2$ gas, and $He$ gas sealed in a glass bulb.
13. A mixture of $PCl_5$ gas, $PCl_3$ gas, and $Cl_2$ gas, sealed in a glass bulb. The proportions of $PCl_3$ and $Cl_2$ are arbitrary. These compounds react according to the equation $PCl_5\left(g\right)\rightleftharpoons PCl_3\left(g\right)+Cl_2\left(g\right) \nonumber$
14. A mixture of $PCl_5$ gas, $PCl_3$ gas, $Cl_2$ gas, and $He$ gas sealed in a glass bulb. The proportions of $PCl_3$ and $Cl_2$ are arbitrary.
15. A mixture of $PCl_5$ gas, $PCl_3$ gas, and $Cl_2$ gas sealed in a glass bulb. In this particular system, the number of moles of$\ PCl_3$ is the same as the number of moles of $Cl_2$.
16. A saturated aqueous solution of iodine, $I_2$. The solution is in contact with a quantity of solid $I_2$.
17. An aqueous solution that contains potassium ion, $K^+$, iodide ion, $I^-$, triiodide ion, $I^-_3$, and dissolved $I_2$. The solution is in contact with a quantity of solid $I_2$. Recall that triiodide ion is formed by the reaction $I^-+I_2\rightleftharpoons I^-_3 \nonumber$
18. An aqueous solution that contains $K^+$, and $I^-$. This solution is in contact with a quantity of solid silver iodide, $AgI.$ Recall that $AgI$ is quite insoluble. Neutral molecules of $AgI$ do not exist as such in aqueous solution. The solid substance equilibrates with its dissolved ions according to the reaction $AgI\left(s\right)\rightleftharpoons Ag^++I^- \nonumber$
19. An aqueous solution that contains $K^+$, $I^-$, and chloride ions, $Cl^-$. This solution is in contact with a mixture of (pure) solid $AgI$ and (pure) solid $AgCl$.
20. An aqueous solution that contains $K^+$, $I^-$, $Cl^-$, and nitrate ions, $NO^-_3$. This solution is in contact with a mixture of (pure) solid $AgI$ and (pure) solid $AgCl$.
21. A large vat contains oil and water. The oil floats as a layer on top of the water. Orville has another tank with a reserve supply of oil. He also has pipes and pumps that enable him to pump oil between his tank and the vat. Wilbur has a third tank with a reserve supply of water. Wilbur has pipes and pumps that enable him to pump water between his tank and the vat. Their pumps are calibrated to show the volume of oil or water added to or removed from the vat. Normally, Orville and Wilbur work as a team to keep the total mass of liquid in the vat constant. The oil and water have densities of $0.80\ \mathrm{kg\ }{\mathrm{L}}^{\mathrm{-1}}$ and $1.00$ $\mathrm{kg\ }{\mathrm{L}}^{\mathrm{-1}}$, respectively. Let $M_{vat}$ be the total mass of the liquids in the vat.
(a) If Orville pumps a small volume of oil, $dV_{oil}$, into or out of the vat, while Wilbur does nothing, what is the change in the mass of liquid in the vat? (i.e., $dM_{vat}=?$)
(b) If Wilbur pumps a small volume of water, $dV_{water}$, into or out of the vat, while Orville does nothing, what is the change in the mass of liquid in the vat? (i.e., $dM_{vat}=?$)
(c) Suppose that Orville and Wilbur make adjustments, $dV_{oil}$ and $dV_{water}$ at the same time, but contrary to their customary practice, they do not coordinate their adjustments with one another. What would be the change in the mass of liquid in the vat? (i.e., $dM_{vat}=?$)
(d) If Orville and Wilbur make adjustments, $dV_{oil}$ and $dV_{water}$, at the same time, in such a way as to keep the mass of liquid in the vat constant, what value of $dM_{vat}$ results from this combination of adjustments? (i.e., $dM_{vat}=?$)
(e) From your answers to (c) and (d), what relationship between $dV_{oil}$ and $dV_{water}$ must Orville and Wilbur maintain in order to keep $M_{vat}$ constant?
(f) One day, the boss, Mr. Le Chatelier, instructs Orville to add 1.00 L of oil to the vat. Qualitatively, what change does Mr. Le Chatelier impose on the mass of the vat’s contents? (That is, what is the direction of the imposed change?)
(g) Quantitatively, what is the change in mass that Mr. Le Chatelier imposes? (That is, what is the equation for the change in the mass in the vat in kg?)
(h) Qualitatively, how must Wilbur respond?
(i) Quantitatively, how must Wilbur respond? (That is, what is the equation for the change in the mass in the vat in kg?)
22. Which of the following processes can be carried out reversibly?
(a) Melting an ice cube.
(b) Melting an ice cube at 273.15 K and 1.0 bar.
(c) Melting an ice cube at 275.00 K.
(d) Melting an ice cube at 272.00 K and 1.0 bar.
(e) Frying an egg.
(f) Riding a roller coaster.
(g) Riding a roller coaster and completing the ride in 10 minutes.
(h) Separating pure water from a salt solution at 1 bar and 280.0 K.
(i) Dissolving NaCl in an aqueous solution that is saturated with NaCl.
(j) Compressing a gas.
(k) Squeezing juice from a lemon.
(l) Growing a bacterial culture.
(m) Bending (flexing) a piece of paper.
(n) Folding (creasing) a piece of paper.
Notes
${}^{1}$ The ordinate (pressure) values for the solid–liquid and liquid–gas equilibrium lines are severely compressed. The ranges of pressure values are so different that the three equilibrium lines cannot otherwise be usefully exhibited on the same graph.
${}^{2}$ We also use closely related quantities that we call fugacities. We think of a fugacity as a “corrected pressure.” For present purposes, we can consider a fugacity to be a particular type of activity.
${}^{3}$ G. N. Lewis and M. Randall, Thermodynamics and the Free Energy of Chemical Substances, 1${}^{st}$ Ed., McGraw-Hill, Inc., New York, 1923, p. 448.
${}^{4}$ Ilya Prigogine, Introduction to the Thermodynamics of Irreversible Processes, Second Edition, Interscience Publishers, 1961.
${}^{5}$ S. R. de Groot and P. Mazur, Non-Equilibrium Thermodynamics, Dover Publications, New York, 1984. (Published originally by North Holland Publishing Company, Amsterdam, 1962.)
• 7.1: Changes in a State Function are Independent of Path
We can specify an equilibrium state of a system by giving the values of a sufficient number of the system’s measurable properties. We call any measurable property that can be used in this way a state function or a state variable. If a system undergoes a series of changes that return it to its original state, any state function must have the same value at the end as it had at the beginning. A system can return to its initial state only if all state variables return to their original values.
• 7.2: The Total Differential
The total differential of the function is the sum over all of the independent variables of the partial derivative of the function with respect to a variable times the total differential of that variable.
• 7.3: Line Integrals
The significance of the distinction between exact and inexact differential expressions comes into focus when we use the differential, df, to find how the quantity, f, changes when the system passes from the state defined by (x₁, y₁) to the state defined by (x₂, y₂).
• 7.4: Exact Differentials and State Functions
• 7.5: Determining Whether an Expression is an Exact Differential
Since exact differentials have these important characteristics, it is valuable to know whether a given differential expression is exact or not. That is, given a differential expression of the form df=M(x,y)dx+ N(x,y)dy, we would like to be able to determine whether df is exact or inexact. It turns out that there is a simple test for exactness: The differential df=M(x,y)dx+ N(x,y)dy is exact if and only if ∂M/∂y=∂N/∂x.
• 7.6: The Chain Rule and the Divide-through Rule
The divide-through rule is a convenient way to generate thermodynamic relationships.
• 7.7: Measuring Pressure-Volume Work
Pressure–volume work is done whenever a force in the surroundings applies pressure on the system while the volume of the system changes. Because chemical changes typically do involve volume changes, pressure–volume work often plays a significant role. Perhaps the most typical chemical experiment is one in which we carry out a chemical reaction at the constant pressure imposed by the earth’s atmosphere.
• 7.8: Measuring Work- Non-Pressure-Volume Work
For chemical systems, pressure–volume work is usually important. Many other kinds of work are possible. From our vector definition of work, any force that originates in the surroundings can do work on a system. The force drives a displacement in space of the system or some part of the system. Stretching a strip of rubber is a one-dimensional analog of pressure–volume work. Changing the surface area of a liquid is a two-dimensional analog of pressure–volume work.
• 7.9: Measuring Heat
When we want to measure the heat added to a system, measuring the temperature increase that occurs is often the most convenient method. If we know the temperature increase in the system, and we know the temperature increase that accompanies the addition of one unit of heat, we can calculate the heat input to the system. Evidently, it is useful to know how much the temperature increases when one unit of heat is added to various substances.
• 7.10: The First Law of Thermodynamics
While we can measure the heat and work that a system exchanges with its surroundings, neither the heat nor the work is necessarily zero when the system traverses a cycle. Heat and work are not state functions. Nevertheless, adding heat to a system increases its energy. Likewise, doing work on a system increases its energy. If the system surrenders heat to the surroundings or does work on the surroundings, the energy of the system is decreased.
• 7.11: Other Statements of the First Law
The first law has been stated in many ways. Some are intended to be humorous or evocative rather than precise statements; for example, “You can’t get something (useful work in some system) for nothing (no decrease in the energy of some other system).” Others are potentially ambiguous, because we construct them to be as terse as possible. To make them terse, we omit ideas that we deem to be implicit.
• 7.12: Notation for Changes in Thermodynamic Quantities - E vs. ∆E
From the outset of our study of energy, we recognize that we are always dealing with energy changes.
• 7.13: Heat Capacities for Gases- Cv, Cp
• 7.14: Heat Capacities of Solids- the Law of Dulong and Petit
• 7.15: Defining Enthalpy, H
Any mathematical expression that involves only state functions must itself be a state function. We can define several state functions that have the units of energy and that turn out to be particularly useful. One of them is named enthalpy and is customarily represented by the symbol H. We define enthalpy: H = E + PV .
• 7.16: Heat Transfer in Reversible Processes
• 7.17: Free Expansion of a Gas
To develop the theory of thermodynamics, we must be able to model the thermodynamic properties of gases as functions of pressure, temperature, and volume. To do so, we consider processes in which the volume of a gas changes. For the expansion (or compression) of a gas to be a reproducible process, the exchange of heat between the system and its surroundings must be controlled.
• 7.18: Reversible vs. Irreversible Pressure-Volume Work
Gas in a piston can be compressed only if the applied pressure exceeds the gas pressure. If the applied pressure equals the gas pressure, the piston remains stationary. If the applied pressure is greater than the gas pressure by any ever-so-small amount, the gas will be compressed. If the applied pressure is infinitesimally less than the gas pressure, the gas will expand. The work done under such conditions is reversible work.
• 7.19: Isothermal Expansions of An Ideal Gas
• 7.20: Adiabatic Expansions of An Ideal Gas
• 7.21: Problems
07: State Functions and The First Law
We can specify an equilibrium state of a physical system by giving the values of a sufficient number of the system’s measurable properties. We call any measurable property that can be used in this way a state function or a state variable. If a system undergoes a series of changes that return it to its original state, any state function must have the same value at the end as it had at the beginning. The relationship between our definition of a physical state and our definition of a state function is tautological. A system can return to its initial state only if every state variable returns to its original value.
It is evident that the change in a state function when the system goes from an initial state, $I$, to some other state, $II$, must always be the same. Consider the state functions $X$, $Y$, $Z$, and $W$. Suppose that functions $Y$, $Z$, and $W$ are sufficient to specify the state of a particular system. Let their values in state $I$ be $X_I$, $Y_I$, $Z_I$, and $W_I$. We can express their interdependence by saying that $X_I$ is a function of the other state functions $Y_I$, $Z_I$, and $W_I$: $X_I=f\left(Y_I,\ Z_I,W_I\right)$. In state $II$, this relationship becomes $X_{II}=f\left(Y_{II},\ Z_{II},W_{II}\right)$. The difference
$X_{II}-X_I=f\left(Y_{II},\ Z_{II},W_{II}\right)-f\left(Y_I,\ Z_I,W_I\right) \nonumber$
depends only on the states $I$ and $II$. In particular, $X_{II}-X_I$ is independent of the values of $Y$, $Z$, and $W$ in any intermediate states that the system passes through as it undergoes the change from state $I$ to state $II$. We say that the change in the value of a state function depends only on the initial and final states of the system; the change in the value of a state function does not depend on the path along which the change is effected.
We can also develop this conclusion by a more explicit argument about the path. Suppose that the system goes from state $I$ to state $II$ by path $A_{out}$ and then returns to state $I$ by path $A_{back}$, as sketched in Figure 1. Let $X$ be some state function. If the change in $X$ as the system traverses path $A_{out}$ is $X_{II}-X_I=\Delta X\left(A_{out}\right)$, and the change in $X$ as the system traverses $A_{back}$ is $X_I-X_{II}=\Delta X\left(A_{back}\right)$, we must have $\Delta X\left(A_{out}\right)+\Delta X\left(A_{back}\right)=0$, so that $\Delta X\left(A_{out}\right)=-\Delta X\left(A_{back}\right) \nonumber$
For some second path comprising $B_{out}$ followed by $B_{back}$, the same must be true:
$\Delta X\left(B_{out}\right)+\Delta X\left(B_{back}\right)=0 \nonumber$ and $\Delta X\left(B_{out}\right)=-\Delta X\left(B_{back}\right) \nonumber$
The same is true for any other path. In particular, it must be true for the path $A_{out}$ followed by $B_{back}$, so that $\Delta X\left(A_{out}\right)+\Delta X\left(B_{back}\right)=0$, and hence
$\Delta X\left(A_{out}\right)=-\Delta X\left(B_{back}\right) \nonumber$
But this means that
$\Delta X\left(A_{out}\right)=\Delta X\left(B_{out}\right) \nonumber$
Since the paths $A$ and $B$ are arbitrary, the change in $X$ in going from state $I$ to state $II$ must have the same value for any path.
Notes
${}^{1}$ Since the temperature of the water increases and the process is to be reversible, we must keep the temperature of the thermal reservoir just $dT$ greater than that of the water throughout the process. We can accomplish this by using a quantity of ideal gas as the heat reservoir. By reversibly compressing the ideal gas, we can reversibly deliver the required heat while maintaining the required temperature. We consider this operation further in §12-5.
7.02: The Total Differential
If $f\left(x,y\right)$ is a continuous function of the variables $x$ and $y$, we can think of $f\left(x,y\right)$ as a surface in a three-dimensional space. $f\left(x,y\right)$ is the height of the surface above the $xy$-plane at the point $\left(x,y\right)$ in the plane. If we consider points $\left(x_1,y_1\right)$ and $\left(x_2,y_2\right)$ in the $xy$-plane, the vertical separation between the corresponding points on the surface, $f\left(x_1,y_1\right)$ and $f\left(x_2,y_2\right)$, is
$\Delta f=f\left(x_2,y_2\right)-f\left(x_1,y_1\right) \nonumber$
We can add $f\left(x_1,y_2\right)-f\left(x_1,y_2\right)$ to $\Delta f$ without changing its value. Then
$\Delta f=\left[f\left(x_2,y_2\right)-f\left(x_1,y_2\right)\right]+\left[f\left(x_1,y_2\right)-f\left(x_1,y_1\right)\right] \nonumber$
If we consider a small change, such that $x_2=x_1+\Delta x$ and $y_2=y_1+\Delta y$, we have
$\Delta f=\frac{\left[f\left(x_1+\Delta x,y_1+\Delta y\right)-f\left(x_1,y_1+\Delta y\right)\right]\Delta x}{\Delta x} +\frac{\left[f\left(x_1,y_1+\Delta y\right)-f\left(x_1,y_1\right)\right]\Delta y}{\Delta y} \nonumber$
Letting $df={\mathop{\mathrm{lim}}_{ \begin{array}{c} \Delta x\to 0 \ \Delta y\to 0 \end{array} } \Delta f\ }$, we have
\begin{align*} df &={\mathop{\mathrm{lim}}_{ \begin{array}{c} \Delta x\to 0 \ \Delta y\to 0 \end{array} } \left\{\frac{\left[f\left(x_1+\Delta x,y_1+\Delta y\right)-f\left(x_1,y_1+\Delta y\right)\right]\Delta x}{\Delta x}\right\}\ } +\mathop{\mathrm{lim}}_{\Delta y\to 0}\left\{\frac{\left[f\left(x_1,y_1+\Delta y\right)-f\left(x_1,y_1\right)\right]\Delta y}{\Delta y}\right\} \[4pt]&={\mathop{\mathrm{lim}}_{\Delta y\to 0} \left\{{\left(\frac{\partial f\left(x_1,y_1+\Delta y\right)}{\partial x}\right)}_ydx\right\}\ }+{\left(\frac{\partial f\left(x_1,y_1\right)}{\partial y}\right)}_xdy = {\left(\frac{\partial f\left(x_1,y_1\right)}{\partial x}\right)}_ydx+{\left(\frac{\partial f\left(x_1,y_1\right)}{\partial y}\right)}_xdy \end{align*}
We call $df$ the total differential of the function $f\left(x,y\right)$:
$df=\left(\frac{\partial f}{\partial x}\right)_ydx + \left(\frac{\partial f}{\partial y}\right)_xdy \nonumber$
where $df$ is the amount by which $f\left(x,y\right)$ changes when $x$ changes by an arbitrarily small increment, $dx$, and $y$ changes by an arbitrarily small increment, $dy$. We use the notation
$f_x\left(x,y\right)={\left(\frac{\partial f}{\partial x}\right)}_y \nonumber$
and
$f_y\left(x,y\right)={\left(\frac{\partial f}{\partial y}\right)}_x \nonumber$
to represent the partial derivatives more compactly. In this notation, $df=f_x\left(x,y\right)dx+\ f_y\left(x,y\right)dy$. We indicate the partial derivative with respect to $x$ with $y$ held constant at the particular value $y=y_0$ by writing $\ f_x\left(x,y_0\right)$.
We can also write the total differential of$\ f\left(x,y\right)$ as
$df=M\left(x,y\right)dx+\ N\left(x,y\right)dy \label{total1}$
in which case $M\left(x,y\right)$ and $N\left(x,y\right)$ are merely new names for ${\left({\partial f}/{\partial x}\right)}_y$ and ${\left({\partial f}/{\partial y}\right)}_x$, respectively. To express the fact that there exists a function, $f\left(x,y\right)$, such that ${M\left(x,y\right)=\left({\partial f}/{\partial x}\right)}_y$ and ${N\left(x,y\right)=\left({\partial f}/{\partial y}\right)}_x$, we say that $df$ is an exact differential.
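As a concrete illustration of these definitions, the short sketch below uses Python with sympy (a choice of tooling not made in the text) to compute $M$ and $N$ for the function $f\left(x,y\right)=xy^2$ that reappears in the examples of Section 7.3, and to confirm that the resulting differential is exact.

```python
# A minimal sketch, assuming sympy is available; the text itself prescribes no software.
import sympy as sp

x, y = sp.symbols('x y')
f = x * y**2                 # the function used in the Section 7.3 examples

M = sp.diff(f, x)            # M(x,y) = (∂f/∂x)_y = y**2
N = sp.diff(f, y)            # N(x,y) = (∂f/∂y)_x = 2*x*y
print(M, N)

# Because f(x,y) exists, df = M dx + N dy is exact, and the mixed second
# partial derivatives must agree (anticipating Sections 7.4 and 7.5):
print(sp.diff(M, y) == sp.diff(N, x))    # True
```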
Inexact Differentials
It is important to recognize that a differential expression in Equation \ref{total1}, may not be exact. In our efforts to model physical systems, we encounter differential expressions that have this form, but for which there is no function, $\boldsymbol{f}\left(\boldsymbol{x},\boldsymbol{y}\right)$, such that ${M\left(x,y\right) = \left({\partial f}/{\partial x}\right)}_y$ and ${N\left(x,y\right) = \left({\partial f}/{\partial y}\right)}_x$. We call a differential expression, $df\left(x,y\right)$, for which there is no corresponding function, $f\left(x,y\right)$, an inexact differential. Heat and work are important examples. We will develop differential expressions that describe the amount of heat, $dq$, and work, $dw$, exchanged between a system and its surroundings. We will find that these differential expressions are not necessarily exact. (We develop examples in Section 7.17 to Section 7.20.) It follows that heat and work are not state functions.
7.03: Line Integrals
The significance of the distinction between exact and inexact differential expressions comes into focus when we use the differential, $df$, to find how the quantity,$\ f$, changes when the system passes from the state defined by $\left(x_1,y_1\right)$ to the state defined by $\left(x_2,y_2\right)$. We suppose that the system undergoes this change along some continuous path in the $xy$-plane. We can specify such a path as a function, $c=g\left(x,y\right)$, where $c$ is a constant, or as $y=h\left(x\right)$. Whether the differential is exact or inexact, we can sum up increments of change, $\Delta f$, along short segments of the path to find the change in $f$ between $\left(x_1,y_1\right)$ and $\left(x_2,y_2\right)$. Let $\left(x_i,y_i\right)$ and $\left(x_i+\Delta x,y_i+\Delta y\right)$ be two neighboring points on the curve $c=g\left(x,y\right)$. As the system traverses $c=g\left(x,y\right)$ between these points, the change in $f$ is
$\Delta f\approx M\left(x_i,y_i\right)\Delta x+\ N\left(x_i,y_i\right)\Delta y \nonumber$
If we sum up such increments of $\Delta f$, along the curve $c=g\left(x,y\right)$, from $\left(x_1,y_1\right)$ to $\left(x_2,y_2\right)$, the sum approximates the change in $f$ along this path. In the limit that all of the incremental $\Delta x$ and $\Delta y$ become arbitrarily small, the approximation becomes exact. The limit of this sum is called the line integral of $df$ along the path $c=g\left(x,y\right)$, between $\left(x_1,y_1\right)$ and $\left(x_2,y_2\right)$.
Whether $df$ is exact or inexact, the line integral of $df$ is defined along any continuous path in the $xy$-plane. If the path is $c=g\left(x,y\right)$ and it connects the points $\left(x_1,y_1\right)$ and $\left(x_2,y_2\right)$ in the $xy$-plane, we designate the value of the line integral as
$\Delta f=\int_g{df}=\int^{c=g\left(x_2,y_2\right)}_{c=g\left(x_1,y_1\right)}{df} \nonumber$
(any differential expression)
However, if $df$ is exact, we know that $\Delta f=f\left(x_2,y_2\right)-f\left(x_1,y_1\right)$. In this case, the line integral of $df$ along curve $c=g\left(x,y\right)$ between these points has the value
$\Delta f=f\left(x_2,y_2\right)-f\left(x_1,y_1\right)=\int^{c=g\left(x_2,y_2\right)}_{c=g\left(x_1,y_1\right)}{df} \nonumber$
(for exact differential $df$)
Because the value of the line integral depends only on the values of $f\left(x,y\right)$ at the end points of the integration path, the line integral of the total differential, $df$, is independent of the path, $c=g\left(x,y\right)$. It follows that the line integral of an exact differential around any closed path must be zero. A circle in the middle of the integral sign is often used to indicate that the line integral is being taken around a closed path. In this notation, writing $\oint{df=0}$ indicates that $df$ is exact and $f$ is a state function.
In concept, the evaluation of line integrals is straightforward. Since the path of integration is a line, the integrand involves only one dimension. A line integral can always be expressed using a single variable of integration. Three approaches to the evaluation of line integrals are noteworthy.
If we are free to choose an arbitrary path, we can choose the two-segment path $\left(x_1,y_1\right)\to \left(x_2,y_1\right)\to \left(x_2,y_2\right)$. Along the first segment, $y$ is constant at $y_1$, so we can evaluate the change in $f$ as
$\Delta f_I=\int^{x_2}_{x_1}{M\left(x,y_1\right)dx} \nonumber$
Along the second segment, $x$ is constant at $x_2$, so we can evaluate the change in $f$ as
$\Delta f_{II}=\int^{y_2}_{y_1}{N\left(x_2,y\right)dy} \nonumber$
Then $\Delta f=\Delta f_I+\Delta f_{II}$.
If the path, $c=g\left(x,y\right)$, is readily solved for $y$ as a function of $x$, say $y=h\left(x\right)$, substitution converts the differential expression into a function of only $x$:
$df=M\left(x,h\left(x\right)\right)dx+\ N\left(x,h\left(x\right)\right)\left(\frac{dh}{dx}\right)dx \nonumber$
Integration of this expression from $x_1$ to $x_2$ gives $\Delta f$.
The path, $c=g\left(x,y\right)$, can always be expressed as a parametric function of a dummy variable, $t.$ That is, we can always find functions $x=x\left(t\right)$ and $y=y\left(t\right)$ such that $c=g\left(x\left(t\right),y\left(t\right)\right)=g\left(t\right)$, $x_1=x\left(t_1\right)$, $y_1=y\left(t_1\right)$, $x_2=x\left(t_2\right)$, and $y_2=y\left(t_2\right)$. Then substitution converts the differential expression into a function of $t$:
$df=M\left(x\left(t\right),y\left(t\right)\right)\left(\frac{dx}{dt}\right)dt+\ N\left(x\left(t\right),y\left(t\right)\right)\left(\frac{dy}{dt}\right)dt \nonumber$
Integration of this expression from $t_1$ to $t_2$ gives $\Delta f$.
While the line integral of an exact differential between two points is independent of the path of integration, this is not the case for an inexact differential. For an inexact differential, the integral between two points depends on the path of integration. To illustrate these ideas, let us consider some examples. These examples illustrate methods for finding the integral of a differential along a particular path. They illustrate also the path-independence of the integral of an exact differential and the path-dependence of the integral of an inexact differential.
Example $1$: An exact Differential
We begin by considering the function
$f\left(x,y\right)=xy^2 \nonumber$
for which $df=y^2dx+2xy\ dy$. Since $f\left(x,y\right)$ exists, $df$ must be exact. Let us integrate $df$ between the points $\left(1,\ 1\right)$ and $\left(2,\ 2\right)$ along four different paths, sketched in Figure 2, that we denote as paths a, b, c, and d.
• Path a has two linear segments. The first segment is the portion of the line $y=1$ from $x=1$ to $x=2$. Along this segment, $dy=0$. The second segment is the portion of the line $x=2$ from $y=1$ to $y=2$. Along the second segment, $dx=0$.
• Path b has two linear segments also. The first segment is the portion of the line $x=1$ from $y=1$ to $y=2$. Along the first segment, $dx=0$. The second segment is the portion of the line $y=2$ from $x=1$ to $x=2$. Along the second segment, $dy=0$.
• Path c is the line $y=x$, from $x=1$ to $x=2$, and for which $dy=dx$.
• Path d is the curve $y=x^2-2x+2$, which we can express in parametric form as $y=t^2+1$ and $x=t+1$. At $\left(1,\ 1\right)$, $t=0$. At $\left(2,\ 2\right)$, $t=1$. Also, $dx=dt$ and $dy=2t\ dt$.
The integrals along these paths are
• Path a: \begin{align*} \int_a {df} &=\int^{x=2}_{x=1} 1^2dx+\int^{y=2}_{y=1} \left(2\right)\left(2\right)y \, dy \[4pt] &=\left. x\right|^2_1+\left.2y^2\right|^2_1 \[4pt] &=7 \end{align*}
• Path b: \begin{align*} \int_b{df}&=\int^{x=2}_{x=1} 2^2dx+\int^{y=2}_{y=1} \left(2\right)\left(1\right) y\, dy \[4pt] &=\left.4x\right|^2_1+\left.y^2\right|^2_1 \[4pt] &=7\end{align*}
• Path c: \begin{align*} \int_c{df} &= \int^{x=2}_{x=1}{3x^2\ dx} \[4pt] &= \left.x^3\right|^2_1 \[4pt]&=7 \end{align*}
• Path d: \begin{align*} \int_d {df}&=\int^{t=1}_{t=0} \left\{(t^2+1)^2+2 (t+1) (t^2+1) ( 2t )\right\} dt \[4pt] &=\int^{t=1}_{t=0} \left\{5t^4 + 4t^3 + 6t^2 + 4t + 1 \right\}dt \[4pt] &= \left.t^5+t^4+2t^3+2t^2+t\right|^1_0 =7 \end{align*}
The integrals along all four paths are the same. The value is 7, which, as required, is the difference $f\left(2,2\right)-f\left(1,1\right)=7$.
Example $2$: An inexact Differential
Now, let us consider the differential expression
$dh=y\ dx+2xy\ dy. \nonumber$
This expression has the form of a total differential, but we will see that there is no function, $h\left(x,y\right)$, for which this expression is the total differential. That is, $dh$ is an inexact differential. If we integrate $dh$ over the same four paths, we find
• Path a: \begin{align*} \int_a{dh}&=\int^{x=2}_{x=1} \left(1\right)dx+\int^{y=2}_{y=1}(2)(2) y\ dy \[4pt]& =\left.x\right|^2_1+\left.2y^2\right|^2_1 \[4pt]& =7 \end{align*}
• Path b: \begin{align*} \int_b{dh}&=\int^{x=2}_{x=1}\left(2\right)dx+\int^{y=2}_{y=1}\left(2\right)\left(1\right)y\ dy \[4pt] &= \left.2x\right|^2_1+\left.y^2\right|^2_1 \[4pt]&=5 \end{align*}
• Path c: \begin{align*} \int_c{dh}&=\int^{x=2}_{x=1}{\left(x+2x^2\right)\ dx} \[4pt]&={\left[\frac{x^2}{2}+\frac{{2x}^3}{3}\right]}^2_1 \[4pt]&=6\frac{1}{6} \end{align*}
• Path d: \begin{align*} \int_d{dh}&=\int^{t=1}_{t=0} \left\{(t^2+1)+2(t+1) (t^2+1)(2t)\right\}dt \[4pt]&=\int^{t=1}_{t=0}\left\{4t^4 + 4t^3 + 5t^2 + 4t + 1\right\} dt \[4pt]&= \left[4t^5/5 + t^4 + 5t^3/3 + 2t^2 + t\right]^1_0 \[4pt]&=6 \frac{7}{15} \end{align*}
For $dh\left(x,y\right)$, the value of the integral depends on the path of integration, confirming that $dh\left(x,y\right)$ is an inexact differential: Since the value of the integral depends on path, there can be no $h\left(x,y\right)$ for which
$\Delta h=h\left(x_2,y_2\right)-h\left(x_1,y_1\right)=\int^{\left(x_2,y_2\right)}_{\left(x_1,y_1\right)}{dh} \nonumber$
That is, $h\left(x_2,y_2\right)-h\left(x_1,y_1\right)$ cannot have four different values.
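The path dependence and path independence exhibited in Examples 1 and 2 are easy to confirm numerically. The following sketch is an illustration only; the segment parameterizations and the use of scipy are my own choices, not the text's.

```python
# Numerical line integrals of df = y^2 dx + 2xy dy (exact) and dh = y dx + 2xy dy
# (inexact) along paths a, b, c, and d of Examples 1 and 2.
from scipy.integrate import quad

def df(x, y, dxdt, dydt):
    return y**2 * dxdt + 2*x*y * dydt

def dh(x, y, dxdt, dydt):
    return y * dxdt + 2*x*y * dydt

# Each path is a list of segments (x(t), y(t), dx/dt, dy/dt) with t running 0 -> 1.
paths = {
    'a': [(lambda t: 1 + t, lambda t: 1.0,      lambda t: 1.0, lambda t: 0.0),
          (lambda t: 2.0,   lambda t: 1 + t,    lambda t: 0.0, lambda t: 1.0)],
    'b': [(lambda t: 1.0,   lambda t: 1 + t,    lambda t: 0.0, lambda t: 1.0),
          (lambda t: 1 + t, lambda t: 2.0,      lambda t: 1.0, lambda t: 0.0)],
    'c': [(lambda t: 1 + t, lambda t: 1 + t,    lambda t: 1.0, lambda t: 1.0)],
    'd': [(lambda t: 1 + t, lambda t: 1 + t**2, lambda t: 1.0, lambda t: 2*t)],
}

def line_integral(form, segments):
    total = 0.0
    for xf, yf, dxdt, dydt in segments:
        total += quad(lambda t: form(xf(t), yf(t), dxdt(t), dydt(t)), 0, 1)[0]
    return total

for name, segs in paths.items():
    print(name, round(line_integral(df, segs), 4), round(line_integral(dh, segs), 4))
# df gives 7 on every path; dh gives 7, 5, 6.1667, and 6.4667 (= 6 7/15), respectively.
```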
7.04: Exact Differentials and State Functions
Now, let us consider the general case of a continuous function $f\left(x,y\right)$, for which the exact differential is
$df=f_x\left(x,y\right)dx+f_y\left(x,y\right)dy. \nonumber$
We want to integrate the exact differential over very short paths like paths a and b in Section 7.3. Let us evaluate the integral between $\left(x_0,y_0\right)$ and $\left(x_0+\Delta x,y_0+\Delta y\right)$ over the paths a* and b* sketched in Figure 3.
• Path a* has two linear segments. The first segment is the portion of the line ${y=y}_0$ as $x$ goes from $x_0$ to $x_0+\Delta x$. Along the first segment $\Delta y=0$. The second segment is the portion of the line ${x=x}_0+\Delta x$ as $y$ goes from $y_0$ to $y_0+\Delta y$. Along the second segment, $\Delta x=0$.
• Path b* has two linear segments also. The first segment is the portion of the line $x=x_0$ as $y$ goes from $y_0$ to $y_0+\Delta y$. Along the first segment, $\Delta x=0$. The second segment is the portion of the line ${y=y}_0+\Delta y$ as $x$ goes from $x_0$ to $x_0+\Delta x$. Along the second segment, $\Delta y=0$.
Along path a*, we have
${\Delta }_{a^*}f=f_x\left(x_0,y_0\right)\Delta x+f_y\left(x_0+\Delta x,y_0\right)\Delta y\nonumber$
Along path b*,
${\Delta }_{b^*}f=f_x\left(x_0,y_0+\Delta y\right)\Delta x+f_y\left(x_0,y_0\right)\Delta y\nonumber$
In the limit as $\Delta x$ and $\Delta y$ become arbitrarily small, we must have ${\Delta }_{a^*}f={\Delta }_{b^*}f$, so that
$f_x\left(x_0,y_0\right)\Delta x+f_y\left(x_0+\Delta x,y_0\right)\Delta y=f_x\left(x_0,y_0+\Delta y\right)\Delta x+f_y\left(x_0,y_0\right)\Delta y\nonumber$
Rearranging this equation so that terms in $f_x$ are on one side and terms in $f_y$ are on the other side, dividing both sides by $\Delta x\Delta y$, and taking the limit as $\Delta x\to 0$ and $\Delta y\to 0$, we have
${\mathop{\mathrm{lim}}_{\Delta x\to 0} \left[\frac{f_y\left(x_0+\Delta x,y_0\right)-f_y\left(x_0,y_0\right)}{\Delta x}\right]\ }={\mathop{\mathrm{lim}}_{\Delta y\to 0} \left[\frac{f_x\left(x_0,y_0+\Delta y\right)-f_x\left(x_0,y_0\right)}{\Delta y}\right]\ }\nonumber$
These limits are the partial derivative of $f_y\left(x_0,y_0\right)$ with respect to $x$ and of $f_x\left(x_0,y_0\right)$ with respect to $y$. That is
${\left[{\frac{\partial }{\partial x}f}_y\left(x_0,y_0\right)\right]}_y={\left[\frac{\partial }{\partial x}\left(\frac{\partial f\left(x_0,y_0\right)}{\partial y}\right)\right]}_y=\frac{{\partial }^2f\left(x_0,y_0\right)}{\partial y\partial x}\nonumber$ and ${\left[{\frac{\partial }{\partial y}f}_x\left(x_0,y_0\right)\right]}_x={\left[\frac{\partial }{\partial y}\left(\frac{\partial f\left(x_0,y_0\right)}{\partial x}\right)\right]}_x=\frac{{\partial }^2f\left(x_0,y_0\right)}{\partial x\partial y}\nonumber$
This shows that, if $f\left(x,y\right)$ is a continuous function of $x$ and $y$ whose partial derivatives exist, then
$\frac{{\partial }^2f\left(x_0,y_0\right)}{\partial y\partial x}=\frac{{\partial }^2f\left(x_0,y_0\right)}{\partial x\partial y}\nonumber$
The mixed second partial derivative of $f\left(x,y\right)$ is independent of the order of differentiation. We also write these second partial derivatives as $f_{xy}\left(x_0,y_0\right)$ and $f_{yx}\left(x_0,y_0\right)$.
To summarize these points, if $f\left(x,y\right)$ is a continuous function of $x$ and $y$, all of the following are true:
1. $f\left(x,y\right)$ represents a surface in a three-dimensional space.
2. $f\left(x,y\right)$ is a state function.
3. The total differential is $df={\left({\partial f}/{\partial x}\right)}_ydx+{\left({\partial f}/{\partial y}\right)}_xdy.\nonumber$
4. The total differential is exact.
5. The line integral of $df$ between two points is independent of the path of integration.
6. The line integral of $df$ around any closed path is zero: $\oint{df=0}$.
7. The mixed second-partial derivatives are equal; that is, $\frac{{\partial }^2f}{\partial y\partial x}=\frac{{\partial }^2f}{\partial x\partial y}\nonumber$
7.05: Determining Whether an Expression is an Exact Differential
Since exact differentials have these important characteristics, it is valuable to know whether a given differential expression is exact or not. That is, given a differential expression of the form
$df=M\left(x,y\right)dx+\ N\left(x,y\right)dy, \label{eq1}$
we would like to be able to determine whether $df$ is exact or inexact. It turns out that there is a simple test for exactness:
test for exactness
The differential in the form of Equation \ref{eq1} is exact if and only if
$\dfrac{\partial M}{ \partial y} = \dfrac{\partial N}{ \partial x}. \label{eq2}$
That is, this condition is necessary and sufficient for the existence of a function, $f\left(x,y\right)$, for which $M\left(x,y\right)=f_x\left(x,y\right)$ and $N\left(x,y\right)=f_y\left(x,y\right)$.
In Section 7.4 we demonstrate that the condition is necessary. Now we want to show that it is sufficient. That is, we want to demonstrate: if Equation \ref{eq2} holds, then there exists a function $f\left(x,y\right)$ such that $M\left(x,y\right)=f_x\left(x,y\right)$ and $N\left(x,y\right)=f_y\left(x,y\right)$. To do this, we show how to find a function, $f\left(x,y\right)$, that satisfies the given differential relationship. If we integrate $M\left(x,y\right)$ with respect to $x$, we have $f\left(x,y\right)=\int{M\left(x,y\right)dx+h\left(y\right)} \nonumber$
where $h\left(y\right)$ is a function only of $y$; it is the arbitrary constant in the integration with respect to $x$, which we carry out with $y$ held constant.
To complete the proof, we must find a function $h\left(y\right)$ such that this $f\left(x,y\right)$ satisfies the conditions:
\begin{align} \label{GrindEQ_1} M\left(x,y\right) =f_x\left(x,y\right)\Leftrightarrow M\left(x,y\right)=\frac{\partial }{\partial x}\left[\int{M\left(x,y\right)dx+h\left(y\right)}\right] \end{align}
\begin{align}\label{GrindEQ_2} N\left(x,y\right) =f_y\left(x,y\right)\Leftrightarrow N\left(x,y\right)=\frac{\partial }{\partial y}\left[\int{M\left(x,y\right)dx+h\left(y\right)}\right] \end{align}
The validity of condition in Equation \ref{GrindEQ_1} follows immediately from the facts that the order of differentiation and integration can be interchanged for a continuous function and that $h\left(y\right)$ is a function only of $y$, so that ${\partial h}/{\partial x=0}$.
To find $h\left(y\right)$ such that condition in Equation \ref{GrindEQ_2} is satisfied, we observe that
$\frac{\partial }{\partial y}\left[\int{M\left(x,y\right)dx+h\left(y\right)}\right]=\int{\left(\frac{\partial M\left(x,y\right)}{\partial y}\right)}dx+\frac{dh\left(y\right)}{dy} \nonumber$
But since $\frac{\partial M\left(x,y\right)}{\partial y} = \frac{\partial N\left(x,y\right)}{\partial x} \nonumber$
this becomes
$\frac{\partial }{\partial y}\left[\int{M\left(x,y\right)dx+h\left(y\right)}\right] = \int{\left(\frac{\partial N\left(x,y\right)}{\partial x}\right)}dx+\frac{dh\left(y\right)}{dy} =N\left(x,y\right)+\frac{dh\left(y\right)}{dy} \nonumber$
Hence, condition in Equation \ref{GrindEQ_2} is satisfied if and only if ${dh\left(y\right)}/{dy}=0$, so that $h\left(y\right)$ is simply an arbitrary constant.
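The test for exactness, and the construction of $f\left(x,y\right)$ used in the proof, can both be automated. The sketch below is a sympy-based illustration of my own, not part of the text: it checks $\partial M/\partial y=\partial N/\partial x$ and, when the test succeeds, recovers $f$ by integrating $M$ with respect to $x$ and choosing $h\left(y\right)$ to reproduce $N$.

```python
# Exactness test and reconstruction of f(x,y); a sketch assuming sympy is available.
import sympy as sp

x, y = sp.symbols('x y')

def is_exact(M, N):
    """df = M dx + N dy is exact iff ∂M/∂y = ∂N/∂x."""
    return sp.simplify(sp.diff(M, y) - sp.diff(N, x)) == 0

def reconstruct_f(M, N):
    """Return f with f_x = M and f_y = N, via f = ∫M dx + h(y)."""
    g = sp.integrate(M, x)                    # ∫ M dx, with y held constant
    dh_dy = sp.simplify(N - sp.diff(g, y))    # whatever remains depends only on y
    return g + sp.integrate(dh_dy, y)

print(is_exact(y**2, 2*x*y))          # True  (the df of Example 1)
print(is_exact(y,    2*x*y))          # False (the dh of Example 2)
print(reconstruct_f(y**2, 2*x*y))     # x*y**2, up to an additive constant
```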
7.06: The Chain Rule and the Divide-through Rule
If we have $f\left(x,y\right)$ while $x$ and $y$ are functions of another variable, $u$, the chain rule states that
$\frac{df}{du}={\left(\frac{\partial f}{\partial x}\right)}_y\frac{dx}{du}+{\left(\frac{\partial f}{\partial y}\right)}_x\frac{dy}{du} \nonumber$
If $x$ and $y$ are functions of variables $u$ and $v$; that is, $x=x\left(u,v\right)$ and $y=y\left(u,v\right)$, the chain rule for partial derivatives is
${\left(\frac{\partial f}{\partial u}\right)}_v={\left(\frac{\partial f}{\partial x}\right)}_y{\left(\frac{\partial x}{\partial u}\right)}_v+{\left(\frac{\partial f}{\partial y}\right)}_x{\left(\frac{\partial y}{\partial u}\right)}_v \nonumber$
A useful mnemonic recognizes that these equations can be generated from the total differential by “dividing through” by $du$. We must specify that the “new” partial derivatives are taken with $v$ held constant. This is sometimes called the divide-through rule.
The divide-through rule is a reliable expedient for generating new relationships among partial derivatives. As a further example, dividing by $dx$ and specifying that any other variable is to be held constant produces a valid equation. Letting $w$ be the variable held constant, we obtain
\begin{align*} \left(\frac{\partial f}{\partial x}\right)_w &=\left(\frac{\partial f}{\partial x}\right)_y\left(\frac{\partial x}{\partial x}\right)_w + \left(\frac{\partial f}{\partial y}\right)_x \left(\frac{\partial y}{\partial x}\right)_w \[4pt] &= \left(\frac{\partial f}{\partial x}\right)_y + \left(\frac{\partial f}{\partial y}\right)_x \left(\frac{\partial y}{\partial x}\right)_w \end{align*}
where we recognize that ${\left({\partial x}/{\partial x}\right)}_w=1$. The result is just the chain rule for${\left({\partial f}/{\partial x}\right)}_w$ when $f=f\left(x,y\right)$ and $y=y\left(x,w\right)$; that is, when $\ f=f\left(x,y\left(x,w\right)\right)$.
If we require that $f\left(x,y\right)$ remain constant while $x$ and $y$ vary, we can use the divide-through rule to obtain another useful relationship from the total differential. If $f\left(x,y\right)$ is constant, $df\left(x,y\right)=0$. This can only be true if there is a relationship between $x$ and $y$. To find this relationship we use the divide-through rule to find ${\left({\partial f}/{\partial y}\right)}_f$ when $f=f\left(x\left(y\right),y\right)$. Dividing
${ df=\left(\frac{\partial f}{\partial x}\right)}_ydx+{\left(\frac{\partial f}{\partial y}\right)}_xdy \nonumber$
by $dy$, and stipulating that $f$ is constant, we find
${\left(\frac{\partial f}{\partial y}\right)}_f={\left(\frac{\partial f}{\partial x}\right)}_y{\left(\frac{\partial x}{\partial y}\right)}_f+{\left(\frac{\partial f}{\partial y}\right)}_x{\left(\frac{\partial y}{\partial y}\right)}_f \nonumber$
Since ${\left({\partial f}/{\partial y}\right)}_f=0$ and ${\left({\partial y}/{\partial y}\right)}_f=1$, we have
${\left(\frac{\partial f}{\partial y}\right)}_x=-{\left(\frac{\partial f}{\partial x}\right)}_y{\left(\frac{\partial x}{\partial y}\right)}_f \nonumber$
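This relationship is easily checked for a specific function. The sketch below uses sympy with the illustrative choice $f\left(x,y\right)=xy$ (neither the tooling nor the function is taken from the text): it solves $f\left(x,y\right)=c$ for $x$ along a level curve and verifies that ${\left({\partial f}/{\partial y}\right)}_x=-{\left({\partial f}/{\partial x}\right)}_y{\left({\partial x}/{\partial y}\right)}_f$.

```python
# Check of (∂f/∂y)_x = -(∂f/∂x)_y (∂x/∂y)_f for f(x,y) = x*y; a sympy sketch.
import sympy as sp

x, y, c = sp.symbols('x y c', positive=True)
f = x * y

lhs = sp.diff(f, y)                           # (∂f/∂y)_x = x

x_on_level_curve = c / y                      # solve f(x,y) = c for x, with f held constant
dxdy_const_f = sp.diff(x_on_level_curve, y)   # (∂x/∂y)_f = -c/y**2
rhs = -sp.diff(f, x) * dxdy_const_f           # -(∂f/∂x)_y (∂x/∂y)_f = c/y

# On the level curve c = x*y, so rhs reduces to x and matches lhs:
print(sp.simplify(rhs.subs(c, x*y) - lhs))    # 0
```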
In Chapter 10, we find that the divide-through rule is a convenient way to generate thermodynamic relationships.
7.07: Measuring Pressure-Volume Work
By definition, the energy of a system can be exploited to produce a mechanical change in the surroundings. The energy of the surroundings increases; the energy of the system decreases. Raising a weight against the earth’s gravitational force is the classical example of a mechanical change in the surroundings. When we say that work is done on a system, we mean that the energy of the system increases because of some non-thermal interaction between the system and its surroundings. The amount of work done on a system is determined by the non-thermal energy change in its surroundings. We define work as the scalar product of a vector representing an applied force, ${\mathop{F}\limits^{\rightharpoonup}}_{applied}$, and a second vector, $\mathop{r}\limits^{\rightharpoonup}$, representing the displacement of the object to which the force is applied. The definition is independent of whether the process is reversible or not. If the force is a function of the displacement, we have
$dw={\mathop{F}\limits^{\rightharpoonup}}_{applied}\left(\mathop{r}\limits^{\rightharpoonup}\right)\cdot d\mathop{r}\limits^{\rightharpoonup} \nonumber$
Pressure–volume work is done whenever a force in the surroundings applies pressure on the system while the volume of the system changes. Because chemical changes typically do involve volume changes, pressure–volume work often plays a significant role. Perhaps the most typical chemical experiment is one in which we carry out a chemical reaction at the constant pressure imposed by the earth’s atmosphere. When the volume of such a system increases, the system pushes aside the surrounding atmosphere and thereby does work on the surroundings.
When a pressure, $P_{applied}$, is applied to a surface of area $A$, the force normal to the area is $F=P_{applied}A$. For a displacement, $dx$, normal to the area, the work is $Fdx=dw=P_{applied}A\ dx$. We can find the general relationship between work and the change in the volume of a system by supposing that the system is confined within a cylinder closed by a piston. (See Figure 4.) The surroundings apply pressure to the system by applying force to the piston. We suppose that the motion of the piston is frictionless.
The system occupies the volume enclosed by the piston. If the cross-sectional area of the cylinder is $A$, and the system occupies a length $x,$ the magnitude of the system’s volume is $V=Ax$. If an applied pressure moves the piston a distance $dx$, the volume of the system changes by $dV_{system}=A\ dx$. The magnitude of the work done in this process is therefore
\begin{align*} \left|dw_{system}\right| &=\left|P_{applied}A\,dx\right| \[4pt] &=\left|P_{applied}dV_{system}\right| \end{align*}
work is positive if it is done on the system
We are using the convention that work is positive if it is done on the system. This means that a compression of the system, for which $dx<0$ and ${dV}_{system}<0$, does a positive quantity of work on the system. Therefore, the work done on the system is $dw_{system}=-P_{applied}dV_{system}$ or, using our convention that unlabeled variables always characterize the system,
$dw=-P_{applied}dV \nonumber$
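As a simple numerical illustration (the pressure and volumes below are arbitrary choices, not values from the text), the sketch evaluates $w=-P_{applied}\Delta V$ for a compression at constant applied pressure and shows that the work done on the system is positive.

```python
# Work done on the system for a constant-applied-pressure compression: w = -P_applied * ΔV.
# Illustrative values only.
P_applied = 1.00e5          # Pa (about 1 bar)
V_initial = 2.0e-3          # m^3
V_final   = 1.0e-3          # m^3

w = -P_applied * (V_final - V_initial)
print(f"w = {w:.1f} J")     # +100.0 J; ΔV < 0, so the surroundings do positive work on the gas
```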
7.08: Measuring Work- Non-Pressure-Volume Work
For chemical systems, pressure–volume work is usually important. Many other kinds of work are possible. From our vector definition of work, any force that originates in the surroundings can do work on a system. The force drives a displacement in space of the system or some part of the system. Stretching a strip of rubber is a one-dimensional analog of pressure–volume work. Changing the surface area of a liquid is a two-dimensional analog of pressure–volume work. When only internal forces act, a liquid system minimizes its surface area. We can model this property by attributing a surface-area minimizing force, which we call the surface tension, to the surface of the liquid. We can think of the layer of molecules at the surface as a film that separates the bulk liquid from its surroundings. To increase the area of a liquid system requires an expenditure of work by the surroundings against the surface tension of the film. Gravitational, electrical, and magnetic forces can all do work on particular systems.
In this book, we give little attention to the details of the various kinds of non-pressure–volume work that can be important. (There are two exceptions: Electrical work is important in electrochemistry, which we discuss in Chapter 17. We discuss gravitational work in examples that illustrate reversible processes and some aspects of the criteria for change.) Nevertheless, no development of the basic concepts can be complete without including the effects of non-pressure–volume work. For this reason, we include non-pressure–volume work in our discussions frequently. For the most part, however, we do so in a generalized or abstract way. To do so, we must identify some essential features of any process that does work on a system.
Whenever a particular kind of work is done on a system, some change occurs in a thermodynamic variable that is characteristic of that kind of work. For pressure–volume work this is the volume change. For stretching a strip of rubber, it is the change in length. For gravitational work, it is the displacement of a mass in a gravitational field. For changing the shape of a liquid, it is the change in surface area. For electrical work, it is the displacement of a charge in an electrical field. For magnetic work, it is the displacement of a magnetic moment in a magnetic field. For an arbitrary form of non-pressure–volume work, let us use $\theta$ to represent this variable. We can think of $\theta$ as a generalized displacement. When there is an incremental change, $d\theta$, in this variable, there is a corresponding change, $dE$, in the energy of the system.
For a displacement, $d\theta$, let the increase in the energy of the system be $dw_{\theta }$. The energy increase also depends on the magnitude of the force that must be applied to the system, parallel to the displacement $d\theta$. Let this force be $f_{\theta }$. Then, for this arbitrary abstract process, we have $dw_{\theta }=f_{\theta }d\theta$, or $f_{\theta }={dw_{\theta }}/{d\theta }$. Since $dw_{\theta }$ is the contribution to the incremental change in the energy of the system associated with the displacement $d\theta$, we can also write this as
$f_{\theta }=\frac{dw_{\theta }}{d\theta }=\frac{\partial E}{\partial \theta } \nonumber$
We can generalize this perspective. $\theta$ need not be a vector, and ${\partial E}/{\partial \theta }$ need not be a mechanical force. So long as $d\theta$ determines the energy change, $dE$, we have
$dw_{\theta }=\left(\frac{\partial E}{\partial \theta }\right)d\theta \nonumber$
We call ${\partial E}/{\partial \theta }$ a potential. If we let
${\mathit{\Phi}}_{\theta }=\left(\frac{\partial E}{\partial \theta }\right) \nonumber$
the energy increment becomes $dw_{\theta }={\mathit{\Phi}}_{\theta }d\theta$. If multiple forms of work are possible, we can distinguish them by their characteristic variables, which we label ${\theta }_1$, ${\theta }_2$,, ${\theta }_k$,, ${\theta }_{\omega }$. For each of these characteristic variables, there is a corresponding potential, ${\mathit{\Phi}}_1$, ${\mathit{\Phi}}_2$,, ${\mathit{\Phi}}_k$,, ${\mathit{\Phi}}_{\omega }$. The total energy increment, which we also call the non-pressure–volume work, $dw_{NPV}$, becomes $dw_{NPV}=\sum^{\omega }_{k=1}{{\mathit{\Phi}}_kd{\theta }_k} \nonumber$
For pressure–volume work, $dw_{PV}=-PdV$. The characteristic variable is volume, $d\theta =dV,$ and the potential is the negative of the pressure, ${\mathit{\Phi}}_V=-P$. For gravitational work, the characteristic variable is elevation, $d\theta =dh$; for a given system, the potential depends on the gravitational acceleration, $g$, and the mass of the system: ${\mathit{\Phi}}_h=mg$.
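To make this bookkeeping concrete, the sketch below evaluates the generalized-work sum for two of the work modes mentioned above, gravitational and surface work. It is our own illustration with made-up numbers; the function name and the values chosen are not part of the text.

```python
# Minimal sketch (our illustration, not from the text): evaluate the
# generalized non-pressure-volume work sum  dw_NPV = sum_k Phi_k * dtheta_k
# for two small, hypothetical work increments.

def dw_npv(potentials, displacements):
    """Total incremental non-PV work for paired generalized potentials
    Phi_k and incremental displacements dtheta_k."""
    return sum(phi * dtheta for phi, dtheta in zip(potentials, displacements))

# Gravitational work: Phi_h = m*g, dtheta_h = dh (raise 0.5 kg by 0.02 m)
phi_grav = 0.5 * 9.81        # N
dh = 0.02                    # m

# Surface work: Phi_A = surface tension, dtheta_A = dA (stretch the surface)
gamma = 0.072                # N m^-1, roughly water near room temperature
dA = 1.0e-4                  # m^2

print(dw_npv([phi_grav, gamma], [dh, dA]))   # about 0.098 J, dominated by the gravitational term
```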
When a process changes the composition of a system, it is often important to relate the work done on the system to the composition change. Formally, we express the incremental work resulting from the $k$-th generalized displacement as
$dw_k=\left(\frac{\partial E}{\partial {\theta }_k}\right)\left(\frac{\partial {\theta }_k}{\partial n}\right)dn \nonumber$
where $dw_k$ and $dn$ are the incremental changes in the work done on the system and the number of moles of the substance in the system. To see how this works out in practice, let us consider the particular case of electrical work. The electrodes of an electrochemical cell can be at different electric potentials. We usually designate the potential difference between the electrodes as ${\mathcal{E}}_{cell}$. (We can also write ${\mathit{\Phi}}_{cell}={\mathcal{E}}_{cell}$ when we want to keep our notation uniform. The unit of electrical potential is the volt, V. One volt is one joule per coulomb, $1\ \mathrm{V}=1\ \mathrm{J\ }{\mathrm{C}}^{\mathrm{-1}}$.) We are usually interested in cases in which we can assume that ${\mathcal{E}}_{cell}$ is constant.
Whenever a current flows in an electrochemical cell, electrons flow through an external circuit from one electrode to the other. By our definition of electrical potential, the energy change that occurs when a charge $dq$ passes through a potential difference, ${\mathcal{E}}_{cell}$, is
$dE=\ {\mathcal{E}}_{cell}dq=dw_{elect} \nonumber$
We have ${\mathcal{E}}_{cell}=\left({\partial E}/{\partial q}\right)$. Evidently, charge is the characteristic variable for electrical work; we have ${\theta }_{elect}=q$, and $\left(\frac{\partial E}{\partial {\theta }_{elect}}\right)=\left(\frac{\partial E}{\partial q}\right)={\mathcal{E}}_{cell} \nonumber$
Letting the magnitude of the electron charge be $e$, $dN$ electrons carry charge $dq=-e\ dN$. Then, $dq=\left(-e\overline{N}\right)\ \left({dN}/{\overline{N}}\right)$. The magnitude of the charge carried by one mole of electrons is the faraday, $\mathcal{F}$. That is, $1\ \mathcal{F}=\left|e\overline{N}\right|=96,485\ \mathrm{C\ }{\mathrm{mol}}^{\mathrm{-1}}$. (See §17-8.) Letting $dn$ be the number of moles of electrons, we have $dn={dN}/{\overline{N}}$ and
$dq=-\mathcal{F}dn$, so that
$\left(\frac{\partial {\theta }_{elect}}{\partial n}\right)=\left(\frac{\partial q}{\partial n}\right)=-\mathcal{F} \nonumber$
The work done when $dn$ moles of electrons pass through the potential difference ${\mathcal{E}}_{cell}$ becomes
\begin{align*} dw_{elect}&=\left(\frac{\partial E}{\partial {\theta }_{elect}}\right)\left(\frac{\partial {\theta }_{elect}}{\partial n}\right)dn \\[4pt] &=\left(\frac{\partial E}{\partial q}\right)\left(\frac{\partial q}{\partial n}\right)dn=-\mathcal{F}{\mathcal{E}}_{cell}\ dn \end{align*}
We find the work done when ions pass through a potential difference $\mathcal{E}$ by essentially the same argument. If ions of species $j$ carry charge $z_je$, then $dN_j$ ions carry charge $dq=z_je\ dN_j=z_j\mathcal{F}dn_j$, and the electrical work is ${\left(dw_{elect}\right)}_j=z_j\mathcal{F}\mathcal{E}dn_j$. If $\omega$ different species pass through the potential difference, the total electrical work becomes $dw_{elect}=\sum^{\omega }_{j=1}{z_j\mathcal{F}\mathcal{E}dn_j} \nonumber$
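As an illustrative order-of-magnitude check (our example, not part of the original text), suppose 2 moles of electrons pass through a constant cell potential of $1.10\ \mathrm{V}$. Integrating at constant ${\mathcal{E}}_{cell}$ gives

$w_{elect}=-\mathcal{F}{\mathcal{E}}_{cell}\ n=-\left(96{,}485\ \mathrm{C\ }{\mathrm{mol}}^{\mathrm{-1}}\right)\left(1.10\ \mathrm{V}\right)\left(2\ \mathrm{mol}\right)\approx -2.12\times {10}^{5}\ \mathrm{J} \nonumber$

The negative sign indicates that, when the electrons flow in this direction, the system does work on the surroundings.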
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/07%3A_State_Functions_and_The_First_Law/7.07%3A_Measuring_Pressure-Volume_Work.txt
|
As the idea of heat as a form of energy transfer was first being developed, a unit amount of heat was taken to be the amount that was needed to increase the temperature of a reference material by one degree. Water was the reference material of choice, and the calorie was defined as the quantity of heat that raised the temperature of one gram of water by one degree kelvin. The amount of heat exchanged by a known amount of water could then be calculated from the amount by which the temperature of the water changed. If, for example, introducing 6355 g (100 moles) of copper metal, initially at 274.0 K, into 100 g of water, initially at 373.0 K, resulted in thermal equilibrium at 288.5 K, the water surrendered
$\mathrm{100\ \ g}\ \times \mathrm{1\ cal\ }{\mathrm{g}}^{\mathrm{-1}}\mathrm{\ }{\mathrm{K}}^{\mathrm{-1}}\times 84.5\ \mathrm{K}=8450\mathrm{\ cal} \nonumber$
This amount of heat was taken up by the copper, so that 0.092 cal was required to increase the temperature of one gram of copper by one degree K. Given this information, the amount of heat gained or lost by a known mass of copper in any subsequent experiment can be calculated from the change in its temperature.
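Written out (a restatement of the numbers above), the copper value follows from dividing the heat surrendered by the water by the mass of copper and its temperature rise:

$c_{Cu}=\frac{8450\ \mathrm{cal}}{6355\ \mathrm{g}\times \left(288.5\ \mathrm{K}-274.0\ \mathrm{K}\right)}=\frac{8450\ \mathrm{cal}}{6355\ \mathrm{g}\times 14.5\ \mathrm{K}}\approx 0.092\ \mathrm{cal\ }{\mathrm{g}}^{\mathrm{-1}}\ {\mathrm{K}}^{\mathrm{-1}} \nonumber$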
Joule developed the idea that mechanical work can be converted entirely into heat. The quantity of heat that could be produced from one unit of mechanical work was called the mechanical equivalent of heat. Today we define the unit of heat in mechanical units. That is, we define the unit of energy, the joule ($\mathrm{J}$), in terms of the mechanical units mass ($\mathrm{kg}$), distance ($\mathrm{m}$), and time ($\mathrm{s}$). One joule is one newton-meter or one $\mathrm{kg\ }{\mathrm{m}}^{\mathrm{2}}\mathrm{\ }{\mathrm{s}}^{\mathrm{-2}}$. One calorie is now defined as $4.184\ \mathrm{J}$, exactly. This definition assumes that heat and work are both forms of energy. This assumption is an intrinsic element of the first law of thermodynamics. This aspect of the first law is, of course, just a restatement of Joule’s original idea.
When we want to measure the heat added to a system, measuring the temperature increase that occurs is often the most convenient method. If we know the temperature increase in the system, and we know the temperature increase that accompanies the addition of one unit of heat, we can calculate the heat input to the system. Evidently, it is useful to know how much the temperature increases when one unit of heat is added to various substances. Let us consider a general procedure for accumulating such information.
First, we need to choose some standard amount of the substance in question. After all, if we double the amount, it takes twice as much heat to effect the same temperature change. One mole is a natural choice for this standard amount. If we add small increments of heat to one mole of a pure substance, we can measure the temperature after each addition and plot heat versus temperature. Figure 5 shows such a plot. (In experiments like this, it is often convenient to introduce the heat by passing a known electrical current, $I$, through a known resistance, $R$, immersed in the substance. The rate at which heat is produced is $I^2R$. Except for the usually negligible amount that goes into warming the resistor, all of it is transferred to the substance.) At any particular temperature, the slope of the graph is the increment of heat input divided by the incremental temperature increase. This slope is so useful that it is given a name; it is the molar heat capacity of the substance, $C$. Since this slope is also the derivative of the $q$-versus-$T$ curve, we have
$C=\frac{dq}{dT} \nonumber$
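In practice, $C$ is obtained as the local slope of the measured $q$-versus-$T$ data. The following sketch (our own, with hypothetical data points) estimates that slope by finite differences:

```python
# Minimal sketch (hypothetical data, not from the text): estimate the molar
# heat capacity C = dq/dT as the local slope of measured q-versus-T points.
import numpy as np

T = np.array([298.0, 300.0, 302.0, 304.0, 306.0])   # K
q = np.array([0.0, 150.4, 301.1, 452.0, 603.2])     # J of heat added to one mole

C = np.gradient(q, T)   # finite-difference slope at each point, J mol^-1 K^-1
print(C)                # roughly 75 J mol^-1 K^-1 throughout, of the order of liquid water
```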
The temperature increase accompanying a given heat input varies with the particular conditions under which the experiment is done. In particular, the temperature increase will be less if some of the added heat is converted to work, as is the case if the volume of the system increases. If the volume increases, the system does work on the surroundings. For a given $q$, $\Delta T$ will be less when the system is allowed to expand, which means that ${q}/{\Delta T}$ will be greater. Heat capacity measurements are most conveniently done with the system at a constant pressure. However, the heat capacity at constant volume plays an important role in our theoretical development. The heat capacity is denoted $C_P$ when the pressure is constant and $C_V$ when the volume is constant. We have the important definitions
$C_P={\left(\frac{\partial q}{\partial T}\right)}_P \nonumber$ and $C_V={\left(\frac{\partial q}{\partial T}\right)}_V \nonumber$
Since no pressure–volume work can be done when the volume is constant, less heat is required to effect a given temperature change, and we have $C_P>C_V$, as a general result. (In §14, we consider this point further.) If the system contains a gas, the effect of the volume increase can be substantial. For a monatomic ideal gas, the temperature increase at constant pressure is only $\mathrm{60\%}$ of the temperature increase at constant volume for the same input of heat.
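The 60% figure can be checked from the monatomic ideal gas heat capacities, $C_V=\frac{3}{2}R$ and $C_P=\frac{5}{2}R$ (results developed later in the book). For the same heat input, $q$,

$\frac{{\Delta T}_P}{{\Delta T}_V}=\frac{q/C_P}{q/C_V}=\frac{C_V}{C_P}=\frac{\tfrac{3}{2}R}{\tfrac{5}{2}R}=0.60 \nonumber$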
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/07%3A_State_Functions_and_The_First_Law/7.09%3A_Measuring_Heat.txt
|
A state function must return to its original value if a system is taken through a series of changes and finally returned to its original state. We say that the change in a state function must be zero if the system is taken through a cyclic process, or somewhat more picturesquely, if the system traverses a cyclic path. While we can measure the heat and work that a system exchanges with its surroundings, neither the heat nor the work is necessarily zero when the system traverses a cycle. Heat and work are not state functions. Nevertheless, adding heat to a system increases its energy. Likewise, doing work on a system increases its energy. If the system surrenders heat to the surroundings or does work on the surroundings, the energy of the system is decreased. In any change that a closed system undergoes, the total energy change is $\Delta E=q+w$, where $q$ and $w$ can be either positive or negative. For very small changes, we write
$dE=dq+dw. \nonumber$
Anything we do to increase the energy of a closed system can be classified as either adding heat to the system or doing work on the system.
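A small numerical sketch (made-up numbers, ours) illustrates the distinction: over a cycle, the individual heat and work totals need not vanish, but their sum, the energy change, must.

```python
# Minimal sketch with made-up numbers: heat and work over a two-step cycle.
# Neither q nor w is zero for the cycle, but the energy change is.
steps = [
    {"q": 500.0, "w": -300.0},   # step 1: heat in, system does work on surroundings
    {"q": -350.0, "w": 150.0},   # step 2: heat out, surroundings do work on system
]

q_cycle = sum(step["q"] for step in steps)   # 150.0 J  (not zero: q is not a state function)
w_cycle = sum(step["w"] for step in steps)   # -150.0 J (not zero: w is not a state function)
dE_cycle = q_cycle + w_cycle                 # 0.0 J    (zero: E is a state function)

print(q_cycle, w_cycle, dE_cycle)
```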
Heat, work, and energy are all extensive variables. They are additive. If a system acquires an increment of heat $q_1$ from one source and an increment $q_2$ from another source, the total heat acquired by the system is $q_1+q_2$. If work, $w_1$, of one kind, and work, $w_2$, of a second kind are done on the system, the total work is $w_1+w_2$.
In keeping with the thermodynamic perspective that we can partition the universe into system and surroundings, we assume that any energy lost by the system is taken up by the surroundings, and vice versa. By definition, we have $q=-\hat{q}$, $w=-\hat{w}$, and $\Delta E=-\Delta \hat{E}$; for any process, $\Delta E_{universe}=\Delta E+\Delta \hat{E}=0$. This is the principle of conservation of energy, which is usually stated:
For any change in any system, the energy of the universe remains constant.
So, conservation of energy is built into our energy accounting scheme. It is a consequence of the thermodynamic perspective and our rules for keeping track of exchanges of heat and work between system and surroundings. Conservation of energy is an “accounting convention,” but it is not arbitrary. That is, we are not free to choose another convention for this “energy accounting.” Ample experimental evidence supports our assumption that energy conservation is a fundamental property of nature.
In summary, we postulate that for any change whatsoever that a closed system may undergo, we can identify energy inputs either as heat or as one or more forms of work such that
$\Delta E=q+w \nonumber$
and, if a system undergoes a series of changes that ultimately return it to its original state, the energy change for the entire series of changes will be zero. There are two components to this postulate. The first component is an operational definition of energy. An important aspect of this definition is that the principle of conservation of energy is embedded in it. The second is an assertion that energy is a state function. Our operational definition of energy is open-ended. An essential element of the postulate is that we can always identify work inputs that make the energy, whose changes we compute as $q+w$, a state function. The facts that energy is conserved and that energy is a state function are related properties of a single aspect (energy) of nature. The relationship between these facts is a characteristic property of physical reality; it is not a matter of logic in the sense that one fact implies the other.
All of these ideas are essential components of the concept of energy. We roll them all together and assert them as a postulate that we call the first law of thermodynamics. We introduce the first law in Chapter 6. We repeat it here.
The first law of thermodynamics
In a process in which a closed system accepts increments of heat, $d q$, and work, $dw$, from its surroundings, the change in the energy of the system, $dE$, is $dE = dq + dw$. Energy is a state function. For any process, $dE_{universe} = 0$.
This statement of the first law does not deal explicitly with the mechanical energy of the system as a whole or with the energy effects of a transport of matter across the boundary of an open system. Because $dw$ can include work that changes the position or motion of a system relative to an external reference frame, increments of mechanical energy can be included in $dE$. In chemical applications, we seldom need to consider the mechanical energy of the system as a whole; we can assume that the system has no kinetic or potential energy associated with the movement or location of its mass. When this is the case, the total incremental energy change, $dE$, is the same thing as the incremental change in the internal energy of the system. When we need to distinguish the internal energy of a system from its total energy, we write $U$ for the internal energy and $E$ for the total energy. Letting incremental changes in the kinetic and potential energy of the whole system be $d\tau$ and $d\textrm{ʋ}$, respectively, we have
$dU=dq+\ dw \nonumber$
and
$dE=dU+d\tau +\ d\textrm{ʋ}. \nonumber$
For processes of interest in chemical systems, we normally have $d\tau =\ d\textrm{ʋ}=0$. Then the total energy and the internal energy are the same thing: $dE=dU$.
In Section 7.8, we introduce characteristic variables, ${\theta }_k$, to represent changes in the system that result from various forms of non-pressure–volume work done on the system. We let ${\mathit{\Phi}}_k=\left({\partial E}/{\partial {\theta }_k}\right)$, so that we can represent the incremental energy change that results from the non-pressure–volume work of all kinds as
$dw_{NPV}=\sum^{\omega }_{k=1}{{\mathit{\Phi}}_kd{\theta }_k} \nonumber$
When both pressure–volume and non-pressure–volume work occur, we have
\begin{align*} dw &=dw_{PV}+dw_{NPV} \\[4pt] &=-PdV+\sum^{\omega }_{k=1} \mathit{\Phi}_kd{\theta }_k \end{align*}
When a non-thermal process changes the energy of a closed, constant-volume system, we have $dE=dw_{NPV}$.
We state the first law for a closed system. Extending the first law to open systems is straightforward. The energy of a system depends on the substances that are present, their amounts, and their states. At any specified conditions, a given amount of a particular substance makes a fixed contribution to the energy of the system. If we transfer matter across the boundary of a system, we change the energy of the system. We can always alter the original system to include the matter that is to be transferred. The altered system is closed; and so, by the first law, its energy is the same after the transfer as it was before. In Section 14-2 we develop an explicit mathematical function to model the contribution made to the energy of a system by a specified quantity of matter in a specified state. If matter crosses the boundary of a system, the total energy computed for the separate collections of substances before the transfer must equal the energy computed for the combined system after the transfer.
Finally, we make a further simple but important observation: We imagine that we can always identify an energy increment that crosses a system boundary as work, $dw$, or heat, $dq$. However, the essence of the first law is that these increments lose their identities—so to speak—in the system. The effect of a work input, $dw$, doesn't necessarily appear as an increase in the mechanical energy of the system; a heat input, $dq$, doesn't necessarily appear as an increase in the thermal energy of the system.
To illustrate this point, let us consider a reversible process and an irreversible process, each of which increases the temperature of one gram of water by one degree K. The initial and final states of the system are the same for both processes. In the reversible process, we bring the water, whose temperature is $T$, into contact with a thermal reservoir at an incrementally higher temperature $T+dT$ and allow $\mathrm{4.184\ }\mathrm{J}$ of heat to transfer to the system by convection${}^{1}$. No work is done. We have
• $q=-\hat{q}=\mathrm{4.184\ }\mathrm{J}$,
• $w=-\hat{w}=0$, and
• $\Delta E=q=\mathrm{4.184\ }\mathrm{J}$.
In this case, there is no inter-conversion of heat and work.
In the irreversible process, we stir one gram of water that is thermally isolated. The stirring generates heat in the system. We supply the energy to drive the stirrer from the surroundings, perhaps by allowing a spring to uncoil. When $\mathrm{4.184\ J}$ of work from the surroundings has been frictionally dissipated in the system, the state of the one-gram system is the same as it was at the end of the reversible process. In this irreversible process, all of the energy traverses the system boundary as work (non-thermal energy). No heat traverses the system boundary. We have
• $w=-\hat{w}=\mathrm{4.184\ J}$,
• $q=-\hat{q}=0$, and
• $\Delta E=w=\mathrm{4.184\ }\mathrm{J}$.
Uncoiling the spring generates bulk motion within the water. Within a short time, the energy of this bulk motion is completely dissipated as molecular-level kinetic energy, or heat.
Beginning in §18, we consider the reversible isothermal expansion of an ideal gas. This simple process provides a further illustration of the inter-conversion of heat and work within a system. For this process, we find that thermal energy, $q>0$, crosses the system boundary and is converted entirely into work that appears in the surroundings, $\hat{w}=-w>0$. The energy of the system is unchanged. The energy of an ideal gas depends only on temperature. In an isothermal expansion, the temperature is constant, so the transport of heat across the system boundary has no effect on the energy of the system.
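Anticipating that discussion (the expressions quoted here are the standard ideal-gas results; the derivation itself comes in §18), a reversible isothermal expansion of $n$ moles of an ideal gas from $V_1$ to $V_2>V_1$ gives

$\Delta E=0 \qquad \text{and} \qquad q=-w=nRT\ \mathrm{ln}\frac{V_2}{V_1}>0 \nonumber$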
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/07%3A_State_Functions_and_The_First_Law/7.10%3A_The_First_Law_of_Thermodynamics.txt
|
The first law has been stated in many ways. Some are intended to be humorous or evocative rather than precise statements; for example, “You can’t get something (useful work in some system) for nothing (no decrease in the energy of some other system).” Others are potentially ambiguous, because we construct them to be as terse as possible. To make them terse, we omit ideas that we deem to be implicit.
A compact and often used statement is, “$\Delta E=q+w$, and $E$ is a state function.” In this statement, the fact that energy is conserved is taken to be implicit in the operational definition, $\Delta E=q+w$. We can give an equally valid statement by saying, “Energy is conserved ($\Delta E_{universe}=\Delta E+\Delta \hat{E}=0$) in all processes.” In making this statement, we assume that the definition of energy ($\Delta E=q+w$) is understood and that the state-function postulate is implicit in this definition.
To see that the postulate that energy is conserved and the postulate that energy is a state function are logically independent, let us consider a system that undergoes a particular cyclic process, which we call “Cycle A.” In Cycle A, the final state of the system is the same as its initial state; the postulate that energy is a state function is then equivalent to the statement that $\Delta E_{Cycle\ A}=0$. The postulate that energy is conserved is equivalent to the statement that $\Delta E_{Cycle\ A}+{\Delta \hat{E}}_{Cycle\ A}=0$. Now, what can we say about ${\Delta \hat{E}}_{Cycle\ A}$? Obviously, if we combine the information from the two postulates, it follows that ${\Delta \hat{E}}_{Cycle\ A}=0$. The essential point, however, is that ${\Delta \hat{E}}_{Cycle\ A}=0$ is not required by either postulate alone.
• ${\Delta \hat{E}}_{Cycle\ A}=0$ is not required by the postulate that energy is a state function, because the surroundings do not necessarily traverse a cycle whenever the system does.
• ${\Delta \hat{E}}_{Cycle\ A}=0$ is not required by conservation of energy, which merely requires ${\Delta \hat{E}}_{Cycle\ A}=-\Delta E_{Cycle\ A}$, and absent the requirement that $E$ be a state function, $\Delta E_{Cycle\ A}$ could be anything.
In Chapter 9, we explore a statement of the second law that denies the possibility of constructing a “perpetual motion machine of the second kind.” Such a perpetual motion machine converts heat from a constant-temperature reservoir into work. This statement is: “It is impossible to construct a machine that operates in a cycle, exchanges heat with its surroundings at only one temperature, and produces work in the surroundings.”
A parallel statement is sometimes taken as a statement of the first law. This statement denies the possibility of constructing a “perpetual motion machine of the first kind.” This statement is, “It is impossible to construct a machine that operates in a cycle and converts heat into a greater amount of work.” The shared perspective and phrasing of these statements is esthetically pleasing. Let us consider the relationship between this statement of the first law and the statement given in §10. (For brevity, let us denote this impossibility statement as the “machine-based” statement of the first law and refer to it as proposition “MFL.” We refer to the statement of the first law given in §10 as proposition “FL.”)
In the machine-based statement (MFL), we mean by “a machine” a system that accepts heat from its surroundings and produces a greater amount of work, which appears in the surroundings. If such a machine exists, the machine-based statement of the first law is false, and proposition $\mathrm{\sim}$MFL is true. For one cycle of this first-law-violating machine, we have $\hat{w}>q>0$. Since $q=-\hat{q}$, we have $\hat{w}>-\hat{q}>0$. It follows that $\Delta \hat{E}=\hat{q}+\hat{w}>0$. Our statement of the principle of conservation of energy ($\Delta E+\Delta \hat{E}=0$) then requires that, for one cycle of this perpetual motion machine, $\Delta E<0$. Our statement of the first law, FL, requires that, since energy is a state function, $\Delta E=0$. Since this is a contradiction, the existence of a perpetual motion machine of the first kind (proposition $\mathrm{\sim}$MFL) implies that the first law (energy is a state function, proposition FL) is false ($\mathrm{\sim}$MFL $\mathrm{\Rightarrow }$ $\mathrm{\sim}$FL).
From this result, we can validly conclude: If the first law is true, the existence of a perpetual motion machine of the first kind is impossible:
($\mathrm{\sim}$MFL $\mathrm{\Rightarrow }$ $\mathrm{\sim}$FL) $\mathrm{\Rightarrow }$ ($\mathrm{\sim}\mathrm{\sim}$FL $\mathrm{\Rightarrow }$ $\mathrm{\sim}\mathrm{\sim}$MFL) $\mathrm{\Rightarrow }$ (FL $\mathrm{\Rightarrow }$ MFL)
We cannot conclude that the impossibility of perpetual motion of the first kind implies that energy is a state function:
($\mathrm{\sim}$MFL $\mathrm{\Rightarrow }$ $\mathrm{\sim}$FL) does not imply (MFL $\mathrm{\Rightarrow }$ FL)
That is, the impossibility of perpetual motion of the first kind, as we have interpreted it, is not shown (by this argument) to be equivalent to the first law, as we have stated it. (It remains possible, of course, that this equivalence could be proved by some other argument.)
Evidently, when we take the impossibility of constructing a perpetual motion machine of the first kind as a statement of the first law, we have a different interpretation in mind. The difference is this: When we specify a machine that operates in a cycle, we intend that everything about the machine shall be the same at the end of the cycle as at the beginning—including its energy. That is, we intend the statement to be understood as requiring that, for one cycle of the perpetual motion machine, $\Delta E=0$. Equivalently, we intend the statement to be understood to include the stipulation that energy is a state function.
Now, for one cycle of the perpetual motion machine, we have $\Delta E=0$ and $\Delta \hat{E}>0$. Given the basic idea that energy is additive, so that $E_{universe}=E+\hat{E}$, we have that $\Delta E_{universe}>0$. The impossibility statement asserts that this is false; equivalently, the impossibility statement asserts that energy cannot be created. This conclusion is a weak form of the principle of conservation of energy; it says less than we want the first law to say. We postulate that energy can be neither created nor destroyed. That is, $\Delta E_{universe}>0$ and $\Delta E_{universe}<0$ are both false. When we consider the impossibility statement to assert the principle of energy conservation, we implicitly stipulate that the machine can also be run in reverse. (See problem 10.)
We intend the first law to assert the existence of energy and to summarize its properties. However we express the first law, we recognize that the concept of energy encompasses several closely interrelated ideas.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/07%3A_State_Functions_and_The_First_Law/7.11%3A_Other_Statements_of_the_First_Law.txt
|